id
stringlengths 10
10
| title
stringlengths 7
231
| abstract
stringlengths 3
2.43k
| authors
stringlengths 5
21.5k
| published_date
stringlengths 20
20
| link
stringlengths 33
34
| markdown
stringlengths 133
1.92M
|
---|---|---|---|---|---|---|
2304.11420 | On K-stability of $\mathbb{P}^3$ blown up along a (2,3) complete
intersection | We prove K-stability of every smooth member of the family 2.15 of the
Mukai-Mori classification. | Luca Giovenzana, Tiago Duarte Guerreiro, Nivedita Viswanathan | 2023-04-22T14:36:00Z | http://arxiv.org/abs/2304.11420v1 | # On K-stability of \(\mathbb{P}^{3}\) blown up along a \((2,3)\) complete intersection
###### Abstract.
We prove K-stability of every smooth member of the family 2.15 of the Mukai-Mori classification.
Key words and phrases:K-stability, Fano threefolds 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 202 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 20202 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 20202 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 20202 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 20202 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 20220 2020 2020 20202 2020 2020 2020 2020 2020 2020 2020 2202 2020 2022 2020 2020 2020 2020 20220 202 2020 2020 2020 2020 2020 2020 2020 2022 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 20220 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 202 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2020 2202 2020 2020 2202 2020 202 2020 2020 2020 2020 2020 22020 2020 22020 2020 2202 2020 2020 2020 2202 2020 2020 2020 220 2020 2202 2020 2020 2020 2020 2022 2020 2202 2020 220 2020 2202 202 2020 2020 2020 220 2020 2202 2020 202 2020 20 220 2202 2020 202 2020 220 2022 2020 2202 202 2020 220 220 2202 202 202 202 202 2022 202 2022 202 2022 202 2022 2022 2022 2022 2022 202 222 2022 222 222 222 222 2222 222 222 222 2222 2222 222 222 222 2222 2222 222 2222 2222 222 2222 2222 2222 2222 2222 222 2222 2222 22222 2222 2222 22222 22222 222222 22222 22222 2222222 222222 222222 2222222222 222222222222222222222222222222222222222222222
Introduction
Let \(X\) be a smooth smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(X\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. 
Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. 
Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) be a smooth manifold. Let \(\mathbb{Q}\) be a smooth manifold and \(\mathbb{Q}\) a smooth manifold.
In the following we study K-(semi)stability of certain Fano 3-folds \(X\). We do this by employing the Abban-Zhuang theory developed in [1] to estimate the local stability threshold \(\delta_{p}\) for every point in \(X\). We recall the main results we need by referring to the book [1].
Given a smooth Fano threefold \(X\), so that, in particular \(\mathrm{Nef}(X)\)=\(\mathrm{Mov}(X)\), and a point \(p\in X\) we consider flags \(p\in Z\subset Y\subset X\) where:
* \(Y\) is an irreducible surface with at most Du Val singularities;
* \(Z\) is a non-singular curve such that \((Y,Z)\) is plt.
We denote by \(\Delta_{Z}\) the different of the log pair \((Y,Z)\).
For \(u\in\mathbb{R}\), we consider the divisor class \(-K_{X}-uY\) and we denote by \(\tau=\tau(u)\) its pseudoeffective threshold, i.e. the largest number for which \(-K_{X}-uY\) is pseudoffective. For \(u\in[0,\tau]\), let \(P(u)\) (respectively \(N(u)\)) be the positive (respectively negative) part of its Zariski decomposition. Since \(Y\not\subset\mathrm{Supp}(N(u))\) we can consider the restriction \(N(u)|_{Y}\) and define \(N^{\prime}_{Y}(u)\) to be its part not supported on \(Z\), i.e. \(N^{\prime}_{Y}(u)\) is the effective \(\mathbb{R}\)-divisor such that \(Z\not\subset\mathrm{Supp}(N^{\prime}_{Y}(u))\) defined by:
\[N_{Y}(u)=d(u)Z+N^{\prime}_{Y}(u)\]
where \(d(u):=\)ord\({}_{Z}(N(u)|_{Y})\).
We consider then for every \(u\in[0,\tau]\) the restriction \(P(u)|_{Y}\) and denote by \(t(u)\) the pseudoeffective threshold of the divisor \(P(u)|_{Y}-vZ\), by \(P(u,v)\) and \(N(u,v)\) the positive and negative part of its Zariski decomposition. Let \(V^{Y}_{\bullet,\bullet}\) and \(W^{Y,Z}_{\bullet,\bullet,\bullet}\) be the multigraded linear series defined in [1, Page 57].
Finally we can state the main tool we use to estimate the local \(\delta_{p}\)-invariant:
**Theorem 2.5**.: _[_1_, Theorem 1.112]___
\[\delta_{p}(X)\geq\min\left\{\frac{1-\mathrm{ord}_{p}\Delta_{Z}}{S(W^{Y,Z}_{ \bullet,\bullet,\bullet};p)},\ \frac{1}{S(V^{Y}_{\bullet,\bullet};Z)},\ \frac{1}{S_{X}(Y)}\right\}\]
_where_
\[S(V^{Y}_{\bullet,\bullet};Z)=\frac{3}{(-K_{X})^{3}}\int_{0}^{\tau}(P(u)^{2} \cdot Y)\cdot\mathrm{ord}_{Z}(N(u)|_{Y})du+\frac{3}{(-K_{X})^{3}}\int_{0}^{ \tau}\int_{0}^{\infty}\mathrm{vol}(P(u)|_{Y}-vZ)dvdu, \tag{1}\]
_and_
\[S(W^{Y,Z}_{\bullet,\bullet,\bullet};p)=\frac{3}{(-K_{X})^{3}}\int_{0}^{\tau} \int_{0}^{t(u)}(P(u,v)\cdot Z)^{2}dvdu+F_{p}(W^{Y,Z}_{\bullet,\bullet,\bullet}), \tag{2}\]
_with_
\[F_{p}(W^{Y,Z}_{\bullet,\bullet})=\frac{6}{(-K_{X})^{3}}\int_{0}^{\tau}\int_{0} ^{t(u)}(P(u,v)\cdot Z)\cdot\mathrm{ord}_{p}(N^{\prime}_{Y}(u)|_{Z}+N(u,v)|_{Z })dvdu. \tag{3}\]
The theorem above admits a slight generalization which allows to consider not only flags of varieties in \(X\), but also over \(X\). In particular, let \(X\) and \(Y\) be as above, in order to estimate \(\delta_{p}\) for \(p\in Y\) it turns out to be useful to consider curves over \(Y\). For this, let \(\sigma:\widetilde{Y}\to Y\) be a plt blow-up of \(Y\) in \(p\) and denote by \(\widetilde{Z}\) its exceptional divisor. We consider the linear system \(\sigma^{*}(P(u)|_{Y})-v\widetilde{Z}\) and denote by \(\tilde{t}(u)\) its pseudoeffective threshold, i.e.
\[\tilde{t}(u)=\max\{v\in\mathbb{R}_{\geq 0}\ :\ \sigma^{*}(P(u)|_{Y})-v\widetilde{Z} \text{ is pseudoeffective}\}.\]
For every \(v\in[0,\tilde{t}(u)]\) we denote by \(\widetilde{P}(u,v)\) and \(\widetilde{N}(u,v)\) the positive and negative part of its Zariski decomposition. We also denote by \(N^{\prime}|_{\widetilde{V}}(u)\) the strict transform of the divisor \(N(u)|_{Y}\).
**Theorem 2.6**.: _[_1_, Remark 1.113]___
\[\delta_{p}(X)\geq\min\left\{\min_{q\in\tilde{Z}}\frac{1-\mathrm{ord}_{q} \Delta_{\tilde{Z}}}{S(W^{Y,\tilde{Z}}_{\bullet,\bullet,\bullet};q)},\ \frac{A_{Y}(\tilde{Z})}{S(V^{Y}_{\bullet,\bullet};\tilde{Z})},\ \frac{1}{S_{X}(Y)}\right\}\]
_where:_
\[\begin{split} S(V^{Y}_{\bullet,\bullet};\widetilde{Z})=& \frac{3}{(-K_{X})^{3}}\int_{0}^{\tau}(P(u)^{2}\cdot Y)\cdot \mathrm{ord}_{\widetilde{Z}}(\sigma^{*}(N(u)|_{Y}))du\ +\\ &\frac{3}{(-K_{X})^{3}}\int_{0}^{\tau}\int_{0}^{\infty}\mathrm{ vol}\left(\sigma^{*}\left(P(u)|_{Y}\right)-v\widetilde{Z}\right)dvdu,\end{split} \tag{4}\]
_and_
\[S(W^{Y,\tilde{Z}}_{\bullet,\bullet,\bullet};q)=\frac{3}{(-K_{X})^{3}}\int_{0} ^{\tau}\int_{0}^{\tilde{t}(u)}(\tilde{P}(u,v)\cdot\tilde{Z})^{2}dvdu+F_{q}(W^ {Y,\tilde{Z}}_{\bullet,\bullet,\bullet}), \tag{5}\]
_with_
\[F_{q}(W^{Y,\tilde{Z}}_{\bullet,\bullet,\bullet})=\frac{6}{(-K_{X})^{3}}\int_{0 }^{\tau}\int_{0}^{\tilde{t}(u)}(\tilde{P}(u,v)\cdot\tilde{Z})\cdot\mathrm{ord }_{q}(N^{\prime}_{\widetilde{Y}}(u)|_{\tilde{Z}}+\tilde{N}(u,v)|_{\tilde{Z}} )dvdu. \tag{6}\]
## 3. K-stability of the family 2.15
We briefly review the geometry of a smooth Fano threefold in the family 2.15. Each smooth member is a threefold of Picard number 2 obtained as the blow-up of \(\mathbb{P}^{3}\) in a (2,3)-complete intersection, see [1, Section 4.4] and references therein.
Let \(\mathscr{C}\subset\mathbb{P}^{3}\) be the complete intersection of a quadric \(Q=(f_{2}=0)\) and a cubic \(S_{3}=(f_{3}=0)\). We are interested in the K-stability of the blow-up \(X:=\mathrm{Bl}_{\mathscr{C}}\,\mathbb{P}^{3}\). We stress the fact that the quadric \(Q\) can be either smooth or a quadric cone. Let \(\alpha:X\to\mathbb{P}^{3}\) be the projection, \(E\) the exceptional divisor and \(\widetilde{Q}\) the strict transform of \(Q\). The linear system of cubics vanishing along \(\mathscr{C}\) gives a rational map:
\[\varphi\colon\mathbb{P}^{3}\dashrightarrow\mathbb{P}^{4}\] \[[x:y:z:w]\mapsto[xf_{2}:yf_{2}:zf_{2}:wf_{2}:f_{3}].\]
with indeterminacy locus \(\mathscr{C}\). The blow-up \(X\) is a resolution of indeterminacy of \(\varphi\) fitting in the diagram
where \(\beta\) contracts \(\widetilde{Q}\) to a point and maps \(X\) to a cubic threefold \(V_{3}\) singular at the point \(\beta(\widetilde{Q})=[0:0:0:0:1]\).
We denote by \(H\in\operatorname{NS}(X)\) the pullback of the line bundle \(\mathcal{O}_{\mathbb{P}^{3}}(1)\) along \(\alpha\). The Neron-Severi group of \(X\) is generated by \(H\) and \(E\) and its anti-canonical divisor is given by
\[-K_{X}=4H-E=2H+\widetilde{Q}=2\widetilde{Q}+E,\]
where we used the equality \(\widetilde{Q}=2H-E\). We denote by \(f_{1}\in N_{1}(X)\) the class of the fiber of the restriction \(\alpha|_{E}\colon E\to\mathscr{C}\) and by \(f_{2}\in N_{1}(X)\) the class of a ruling of \(\widetilde{Q}\) so that the Mori cone is \(\overline{NE}(X)=\mathbb{R}_{\geq 0}f_{1}+\mathbb{R}_{\geq 0}f_{2}\). The intersection numbers are as follows:
\[E\cdot f_{1}=\widetilde{Q}\cdot f_{2}=-1,\quad E\cdot f_{2}=3\] \[H\cdot f_{2}=\widetilde{Q}\cdot f_{1}=1,\quad H\cdot f_{1}=0,\] \[H^{3}=1,\quad H\cdot E^{2}=-6,\quad H^{2}\cdot E=0,\quad\text{ and}\] \[E^{3}=-\deg N_{\mathscr{C}\mid\mathbb{P}^{3}}=-2g+2+K_{\mathbb{P }^{3}}\cdot\mathscr{C}=-30.\]
### Estimate of \(\delta_{p}\) for \(p\) in \(\widetilde{Q}\) when \(Q\) is a smooth quadric
In this section we estimate the K-stability threshold \(\delta_{p}\) for a point \(p\in\widetilde{Q}\) by applying Theorem 2.5 to a specific flag.
**Proposition 3.2**.: _If \(p\) is a point in \(\widetilde{Q}\) and not in \(E\), then_
\[\delta_{p}(X)=\frac{44}{37}\]
_and it is computed by the divisor \(\tilde{Q}\) in \(X\). If \(p\in E\cap\widetilde{Q}\), then_
\[\delta_{p}(X)\geq\frac{8}{7}.\]
Proof.: Given a point \(p\in\widetilde{Q}\), we consider the flag
\[p\in L\subset\widetilde{Q}\subset X\]
where \(L\) is a line of \(\widetilde{Q}\) through \(p\) which is not tangent to the curve \(E\cap\widetilde{Q}\) at \(p\), or equivalently, whose image under the map \(\alpha\) is not tangent to \(\mathscr{C}\) at \(\alpha(p)\).
We start by computing \(S_{X}(\widetilde{Q})\). For this, we consider the linear system \(K_{X}-u\widetilde{Q}=E+(2-u)\widetilde{Q}\) for \(u\in\mathbb{R}\). Clearly its pseudoeffective threshold is \(\tau=2\). The Zariski decomposition is given by: 1
Footnote 1: Since the Zariski decomposition is defined to be \(P(u)\) and \(N(u)\), here it is confusing to use \(P_{\widetilde{Q}}(u)\). Would suggest sticking to \(P(u)\).
\[P(u)=\begin{cases}(4-2u)H+(u-1)E&\text{if }u\in[0,1],\\ (4-2u)H&\text{if }u\in[1,2],\end{cases}\quad\text{and}\quad N(u)=\begin{cases}0& \text{if }u\in[0,1],\\ (u-1)E&\text{if }u\in[1,2].\end{cases}\]
Therefore the volume can be computed to be:
\[\operatorname{vol}(-K_{X}-u\tilde{Q})=(P(u))^{3}=\begin{cases}22-6u-6u^{2}-2u ^{3}&\text{if }u\in[0,1],\\ 64-96u+48u^{2}-8u^{3}&\text{if }u\in[1,2].\end{cases}\]
Hence we get:
\[S_{X}(\widetilde{Q})=\frac{1}{(-K_{X})^{3}}\int_{0}^{\tau(\tilde{Q})} \operatorname{vol}(-K_{X}-u\tilde{Q})du=\frac{37}{44}. \tag{7}\]
We move on to compute the value \(S(V^{\widetilde{Q}}_{\bullet,\bullet};L)\). For this, let \(\ell_{1},\ell_{2}\) the classes of the rulings of \(\widetilde{Q}\) so that the class of \(L\) is \(\ell_{1}\), we consider for \(v\in\mathbb{R}_{\geq 0}\) the linear system:
\[P(u)|_{\widetilde{Q}}-vL=\begin{cases}(1+u-v)\ell_{1}+(1+u)\ell_{2}&\text{if $u \in[0,1]$},\\ (4-2u-v)\ell_{1}+(4-2u)\ell_{2}&\text{if $u\in[1,2]$}.\end{cases}\]
The nefness and bigness of the above linear system is readily checked and its Zariski decomposition is given by:
\[P(u,v)=\begin{cases}(1+u-v)\ell_{1}+(1+u)\ell_{2}&\text{if $u\in[0,1]$},\ v\in[0,1+u]\\ (4-2u-v)\ell_{1}+(4-2u)\ell_{2}&\text{if $u\in[1,2]$},\ v\in[0,4-2u],\end{cases}N(u,v)= \begin{cases}0\\ 0.\end{cases}\]
Hence
\[\operatorname{vol}(P(u)|_{\widetilde{Q}}-vL)=\begin{cases}2(1+u-v)(1+u)& \text{if $u\in[0,1]$},\ v\in[0,1+u]\\ 4(4-2u-v)(2-u)&\text{if $u\in[1,2]$},\ v\in[0,4-2u].\end{cases}\]
We note that the restriction of the divisor \(E\) to \(\widetilde{Q}\) consists of an irreducible curve which is isomorphically mapped to \(\mathscr{C}\) by the blow-up morphism \(\alpha\). In particular, we see that \(E|_{\widetilde{Q}}\) has no support on \(L\) and the negative part \(N(u)\) does not contribute in the formula (1) and we get:
\[S(V^{\widetilde{Q}}_{\bullet,\bullet};L)=\frac{69}{88}. \tag{8}\]
We move on now to compute \(S(W^{\widetilde{Q},L}_{\bullet,\bullet};p)\).
If the point \(p\in\widetilde{Q}\setminus E\), then the order of \(E|_{\widetilde{Q}}\) at \(p\) is trivial, hence the value \(F_{p}(W^{\widetilde{Q},L}_{\bullet,\bullet,\bullet})\) of (3) is zero. A direct computation gives the value of (2):
\[S(W^{\widetilde{Q},L}_{\bullet,\bullet,\bullet};p)=\frac{69}{88}. \tag{9}\]
On the other hand, if the point \(p\) is in \(\widetilde{Q}\cap E\) the value \(F_{p}\) in (3) is not trivial. First of all we notice that \(L\) is not contained in \(E|_{\widetilde{Q}}\) so we have \(N(u)=N^{\prime}_{\widetilde{Q}}(u)\). Secondly, since in the choice of the flag we assumed that \(L\) intersects \(E\cap\widetilde{Q}\) transversely we have \(\operatorname{ord}_{p}(N^{\prime}_{\widetilde{Q}}(u)|_{L})=u-1\) if \(u\in[1,2].\) For the value in (3) we therefore get:
\[F_{p}=\frac{1}{11}. \tag{10}\]
If \(p\not\in E\), the values \(S_{X}(\widetilde{Q})\), \(S(V^{\widetilde{Q}}_{\bullet,\bullet};L)\) and \(S(W^{\widetilde{Q},L}_{\bullet,\bullet,\bullet};p)\) are computed in the formulas (7), (8) and (9), so that:
\[\frac{44}{37}=\frac{1}{S_{X}(\widetilde{Q})}\geq\delta_{p}(X)\geq\min\biggl{\{} \frac{44}{37},\ \frac{88}{69},\ \frac{88}{69}\biggr{\}}=\frac{44}{37}.\]
If the point \(p\) is in \(E\), the value \(S(W^{\widetilde{Q},L}_{\bullet,\bullet,\bullet};p)\) is obtained by summing up also \(F_{p}\), which is computed in (10) and one gets:
\[\delta_{p}(X)\geq\min\biggl{\{}\frac{44}{37},\ \frac{88}{69},\ \frac{8}{7} \biggr{\}}=\frac{8}{7}.\]
This concludes the proof.
### Estimate of \(\delta_{p}\) for \(p\) in \(\widetilde{Q}\) when \(Q\) is a quadric cone
We divide the computations in two separate cases: These are when \(p\) is the vertex of the quadric cone or \(p\) is away from it.
#### 3.3.1. \(p\) is the vertex of the quadric cone
Let \(\pi\colon\hat{X}\to X\) be the blowup of \(X\) at \(p\) with exceptional divisor \(G\simeq\mathbb{P}^{2}\). Let \(\hat{Q}\) be the strict transform of \(\tilde{Q}\) in \(\hat{X}\). Since \(\hat{Q}=\pi^{*}\tilde{Q}-2G\) and \(-K_{X}=2\tilde{Q}+E\), we have
\[\pi^{*}(-K_{X})-uG=2\hat{Q}+\hat{E}+(4-u)G \tag{11}\]
where \(\hat{E}\simeq E\) is the strict transform of \(E\) in \(\hat{X}\).
**Lemma 3.4**.: _The pseudo-effective threshold \(\tau\) of the linear system \(\pi^{*}(-K_{X})-uG\) is \(\tau=4\)._
Proof.: From Equation (11) we clearly we have that \(\tau\geq 4\). In order to prove the equality it is enough to show that the divisor \(2\hat{Q}+\hat{E}\) is not big. For this, let \(\gamma\colon\hat{X}\to\operatorname{Bl}_{\alpha(p)}\mathbb{P}^{3}\) be the divisorial contraction of \(\hat{E}\). Since the pushforward of a big divisor along a birational morphism is big, in order to show the claim it is enough to show that \(\gamma_{*}\hat{Q}\) is not big. For this, notice that \(\operatorname{Bl}_{\alpha(p)}\mathbb{P}^{3}\) is the resolution of indeterminacy of the projection from \(\alpha(p)\) and is a conic bundle \(h\colon\operatorname{Bl}_{\alpha(p)}\mathbb{P}^{3}\to\mathbb{P}^{2}\) which contracts \(\gamma(\hat{Q})\) to a conic. In particular \(\gamma(\hat{Q})\equiv h^{*}\mathcal{O}_{\mathbb{P}^{2}}(2)\) is not big. The claim is proven.
Let \(l\), \(f_{G}\) and \(f_{E}\) be the ruling of \(\hat{Q}\), the class in \(\operatorname{Pic}(G)\) of a line of \(G\) and a fibre of \(E\), respectively. We have the following intersection numbers
\[\begin{array}{c|cccc}&l&f_{G}&f_{E}\\ \hline\hat{Q}&-3&2&1\\ G&1&-1&0\\ \hat{E}&3&0&-1\end{array}\]
Moreover,
\[\hat{Q}^{2}\cdot\hat{E}=-6,\quad\hat{Q}\cdot G^{2}=-2,\quad\hat{Q}^{2}\cdot G =4,\quad G^{2}\cdot\hat{E}=G\cdot\hat{E}^{2}=0,\]
\[\hat{E}^{3}=-30\quad\hat{Q}\cdot\hat{E}^{2}=18,\quad\hat{Q}^{3}=-6,\quad\hat {Q}\cdot\hat{E}\cdot G=0,\quad G^{3}=1.\]
**Proposition 3.5**.: _If \(p\) is the vertex of the quadric cone \(\tilde{Q}\), then_
\[\delta_{p}(X)=\frac{11}{10},\]
_and it is computed by the exceptional divisor \(G\) corresponding to the ordinary blowup of \(X\) at \(p\)._
Proof.: By [23, Corollary 4.18 (2)], we have
\[\frac{A_{X}(G)}{S_{X}(G)}\geq\delta_{p}(X)\geq\min\bigg{\{}\frac{A_{X}(G)}{S_{X }(G)},\inf_{q\in G}\delta_{q}(G,\Delta_{G};V^{G}_{\bullet,\bullet})\bigg{\}}. \tag{12}\]
We compute first \(\frac{A_{X}(G)}{S_{X}(G)}\) and then show this is the bound given by the right hand side of the second inequality of (12). Let \(P(u)\) and \(N(u)\) be the positive and negative part of \(\pi^{*}(-K_{X})-uG\). We have:
\[P(u)=\begin{cases}2\hat{Q}+\hat{E}+(4-u)G&\text{if }u\in[0,1],\\ \frac{7-u}{3}\hat{Q}+\hat{E}+(4-u)G&\text{if }u\in[1,4],\end{cases}\text{ and }N(u)= \begin{cases}0&\text{if }u\in[0,1],\\ \frac{(u-1)}{3}\hat{Q}&\text{if }u\in[1,4],\end{cases}\]
A direct computation gives:
\[\frac{A_{X}(G)}{S_{X}(G)}=\frac{11}{10}.\]
We now compute \(\inf_{q\in G}\delta_{q}(G,\Delta_{G};V^{G}_{\bullet,\bullet})\).
* Suppose \(q\not\in\hat{Q}|_{G}\).
For every such point we choose a flag \(q\in L\subset G\), where \(L\) is a line in \(G\). Then, by [1, Theorem 3.2]
\[\delta_{q}(G,\Delta_{G};W^{G}_{\bullet,\bullet})\geq\min\left\{\frac{1}{S(W^{G }_{\bullet,\bullet};L)},\frac{1-\operatorname{ord}_{q}\!\Delta_{L}}{S(W^{G,L}_ {\bullet,\bullet};q)}\right\}.\]
Let \(P(u,v)\) and \(N(u,v)\) be the positive and negative part of \(P(u)|_{G}-vL\). These are given by
\[P(u,v)=\begin{cases}(u-v)L&\text{if }u\in[0,1],\ v\in[0,u],\\ \Big{(}\frac{2+u}{3}-v\Big{)}L&\text{if }u\in[1,4],\ v\in[0,\frac{2+u}{3}], \end{cases}\quad\text{ and }\quad N(u,v)=0.\]
Notice that \(\operatorname{ord}_{L}(N(u)|_{G})=0\) since \(\hat{Q}|_{G}\) is not supported on \(L\) and \(\operatorname{ord}_{q}(N^{\prime}_{G}(u)|_{L}+N(u,v)|_{L})=0\) since \(q\not\in\hat{Q}|_{G}\). Hence,
\[\frac{1}{S(W^{G}_{\bullet,\bullet};L)}=\frac{1-\operatorname{ord}_{q}\!\Delta _{L}}{S(W^{G,L}_{\bullet,\bullet};q)}=\frac{44}{23}.\]
* Suppose \(q\in\hat{Q}|_{G}\).
We denote by \(\eta:\hat{G}\to G\) the \((1,2)\)-weighted blowup of \(q\) with exceptional divisor \(F\simeq\mathbb{P}(1,2)\). By [13, Corollary 4.18 (1)], we have
\[\delta_{q}(G,\Delta_{G};W^{\hat{G}}_{\bullet,\bullet})\geq\min\bigg{\{}\frac{ A_{G}(F)}{S(V^{\hat{G}}_{\bullet,\bullet};F)},\inf_{\begin{subarray}{c}q^{ \prime}\in F\\ \eta(q^{\prime})=q\end{subarray}}\frac{A_{F,\Delta_{F}}(q^{\prime})}{S(W^{ \hat{G},F}_{\bullet,\bullet};q^{\prime})}\bigg{\}}. \tag{13}\]
The surface \(\hat{G}\) has an \(A_{1}\) singular point \(q_{0}\) lying on \(F\). Denote by \(C\) the conic \(\hat{Q}|_{G}\) and by \(\ell_{T}\) the line tangent to \(C\) at \(q\). Their strict transforms \(\widetilde{C}\) and \(\widetilde{\ell_{T}}\) intersect \(F\) at a regular point of \(\hat{G}\). We have
\[\widetilde{C}=\eta^{*}C-2F,\qquad\widetilde{\ell_{T}}=\eta^{*} \ell_{T}-2F,\ \text{ and }\] \[\widetilde{\ell_{T}}^{2}=-1,\quad\widetilde{C}^{2}=2,\quad F^{2 }=-\frac{1}{2},\quad\widetilde{\ell_{T}}\cdot F=1.\]
We consider the linear system
\[\eta^{*}(P(u)|_{G})-vF=\begin{cases}u\widetilde{\ell_{T}}+(2u-v)F&\text{if }u \in[0,1],\\ \frac{2+u}{3}\widetilde{\ell_{T}}+\Big{(}\frac{2}{3}(2+u)-v\Big{)}F&\text{if }u \in[1,4].\end{cases}\]
Then, its Zariski decomposition has positive part
\[\tilde{P}(u,v)=\begin{cases}u\widetilde{\ell_{T}}+(2u-v)F&\text{if }u\in[0,1] \ v\in[0,u]\\ (2u-v)(\widetilde{\ell_{T}}+F)&\text{if }u\in[0,1]\ v\in[u,2u]\\ \frac{2+u}{3}\widetilde{\ell_{T}}+\Big{(}\frac{4+2u}{3}-v\Big{)}F&\text{if }u \in[1,4]\ v\in[0,\frac{2+u}{3}]\\ \frac{4+2u}{3}(\widetilde{\ell_{T}}+F)&\text{if }u\in[1,4]\ v\in[\frac{2+u}{3}, \frac{4+2u}{3}].\end{cases}\]
and negative part
\[\tilde{N}(u,v)=\begin{cases}0&\text{if }u\in[0,1]v\in[0,u]\\ (v-u)\widetilde{\ell_{T}}&\text{if }u\in[0,1]v\in[u,2u]\\ 0&\text{if }u\in[1,4]\ v\in[0,\frac{2+u}{3}]\\ (v-\frac{2+u}{3})\widetilde{\ell_{T}}&\text{if }u\in[1,4]\ v\in[\frac{2+u}{3}, \frac{4+2u}{3}].\end{cases}\]
Notice that
\[\operatorname{ord}_{F}(\eta^{*}N(u)|_{G})=\begin{cases}0&\text{if }u\in[0,1] \\ \operatorname{ord}_{F}\Bigl{(}\frac{u-1}{3}\eta^{*}C\Bigr{)}&\text{if }u\in[1,4] \end{cases}=\begin{cases}0&\text{if }u\in[0,1],\\ \frac{2}{3}(u-1)&\text{if }u\in[1,4].\end{cases}\]
A direct computation gives
\[\frac{A_{G}(F)}{S(V^{G}_{\bullet,\bullet};F)}=\frac{11}{10}.\]
We now compute the second term in formula (13). For \(u\in[0,1]\),
\[\operatorname{ord}_{q^{\prime}}(\eta^{*}(N^{\prime}_{\tilde{G}}( u)|_{F}+N(u,v)|_{F})) =\operatorname{ord}_{q^{\prime}}(\eta^{*}N(u,v)|_{F})\] \[=\operatorname{ord}_{q^{\prime}}((v-u)\widetilde{\ell_{T}}|_{F})\] \[=\begin{cases}0&\text{if }q^{\prime}\not\in\widetilde{\ell_{T}}, \\ v-u&\text{otherwise}.\end{cases}\]
On the other hand, for \(u\in[1,4]\),
\[\operatorname{ord}_{q^{\prime}}(\eta^{*}(N^{\prime}_{\tilde{G}}( u)|_{F}+N(u,v)|_{F})) =\operatorname{ord}_{q^{\prime}}\Biggl{(}\frac{u-1}{3}\widetilde{C} |_{F}+\Bigl{(}v-\frac{2+u}{3}\Bigr{)}\widetilde{\ell_{T}}|_{F}\Biggr{)}\] \[=\begin{cases}0&\text{if }q^{\prime}\not\in\widetilde{\ell_{T}} \cup\widetilde{C},\\ \frac{u-1}{3}&\text{if }q^{\prime}\in\widetilde{C},\\ v-\frac{2+u}{3}&\text{if }q^{\prime}\in\widetilde{\ell_{T}}.\end{cases}\]
Then,
\[S(W^{\tilde{G},F}_{\bullet,\bullet,\bullet};q^{\prime})=\begin{cases}\frac{2 3}{88}&\text{if }q^{\prime}\not\in\widetilde{\ell_{T}}\cup\widetilde{C},\\ \frac{37}{44}&\text{if }q^{\prime}\in\widetilde{C},\\ \frac{23}{44}&\text{if }q^{\prime}\in\widetilde{\ell_{T}}.\end{cases}\]
Moreover, \(A_{F,\Delta_{F}}(q^{\prime})=1\) for every \(q^{\prime}\in\tilde{F}\) except when \(q^{\prime}\) is the \(A_{1}\) singularity introduced by \(\eta\), in which case it is \(\frac{1}{2}\). Hence,
\[\inf_{\begin{subarray}{c}q^{\prime}\in F\\ q(q^{\prime})=q\end{subarray}}\frac{A_{F,\Delta_{F}}(q^{\prime})}{S(W^{\tilde {G},F}_{\bullet,\bullet};q^{\prime})} =\min\Big{\{}\frac{1}{23/88},\frac{1/2}{23/88},\frac{1}{23/44}, \frac{1}{37/44}\Big{\}}\] \[=\min\left\{\frac{88}{23},\frac{88}{46},\frac{44}{23},\frac{37}{4 4}\right\}\] \[=\frac{44}{37}.\]
Therefore,
\[\delta_{q}(G,\Delta_{G};W^{\tilde{G}}_{\bullet,\bullet})\geq\min\left\{\frac{11}{ 10},\frac{44}{37}\right\}=\frac{11}{10}\]
for \(q\in C\).
Putting together the cases, \(q\not\in C\) and \(q\in C\), we have indeed,
\[\delta_{q}(G,\Delta_{G};W^{\tilde{G}}_{\bullet,\bullet})\geq\min\left\{\frac{1 1}{10},\frac{44}{23}\right\}=\frac{11}{10}.\]
Hence,
\[\delta_{p}(X)\geq\frac{11}{10}\]
and the claim follows.
#### 3.5.1. The point \(p\) is away from the vertex of the quadric cone
Let \(p\) be any point in \(\tilde{Q}\) such that \(\alpha(p)\) is not the vertex of \(Q\). We consider the general hyperplane section \(H\) of \(\mathbb{P}^{3}\) containing \(\alpha(p)\) and its strict transform \(S\) in \(X\). Then \(S\) is isomorphic to the blow-up of \(H\) in the six points \(p_{1},...,p_{6}\) given by \(Q\cap\mathscr{C}\), which lie on the conic \(C=Q\cap H\).
We consider the blow-up \(\sigma\colon\widetilde{S}\to S\) in the point \(p\) with exceptional divisor \(F\). We denote by \(\widetilde{C}\) the strict transform of \(C\) in \(\widetilde{S}\), by \(E_{1},...,E_{6}\) the curves lying over the points \(p_{1},...,p_{6}\) and by \(L_{j}\) the strict transform of the line through the points \(\alpha(p)\) and \(p_{j}\) for \(j=1,...,6\).
**Proposition 3.6**.: _Assume that \(Q\) is a quadric cone. Let \(p\in X\) be a point such that \(\alpha(p)\in Q\) is away from the vertex. Then:_
\[\delta_{p}(X)\geq\frac{44}{43}\]
Proof.: The result follows from applying Theorem 2.6 to the flag consisting of the strict transform \(S\) of a hyperplane in \(\mathbb{P}^{3}\), the exceptional curve \(F\) in \(\widetilde{S}\).
We consider the linear system \(-K_{X}-uS\). Its Zariski decomposition is then given by
\[P(u)=\begin{cases}(4-u)H-E&\text{if }u\in[0,1],\\ (6-3u)H+(u-2)E&\text{if }u\in[1,2],\end{cases}\text{ and }N(u)=\begin{cases}0&\text{if }u\in[0,1],\\ (u-1)\widetilde{Q}&\text{if }u\in[1,2],\end{cases}\]
A direct computation gives
\[\frac{A_{X}(S)}{S_{X}(S)}=\frac{44}{23}. \tag{14}\]
We consider then the linear system
\[D=\sigma^{*}(P(u)|_{S})-vF=\begin{cases}(4-u)h-\sum_{i=1}^{6}E_{i}-vF&\text{ if }u\in[0,1],\\ (6-3u)h-(2-u)\sum_{i=1}^{6}E_{i}-vF&\text{if }u\in[1,2].\end{cases}\]
Its Zariski decomposition for \(u\in[0,1]\) is given by
\[P=\begin{cases}D&\text{if }v\in[0,2-2u],\\ D-a\widetilde{C}&\text{if }v\in[2-2u,3-u],\\ D-a\widetilde{C}-b\sum_{i=1}^{6}L_{j}&\text{if }v\in[3-u,\frac{1}{4}(14-5u)]. \end{cases}\text{ and }N=\begin{cases}0&\text{if }v\in[0,2-2u],\\ a\widetilde{C}&\text{if }v\in[2-2u,3-u],\\ a\widetilde{C}+b\sum_{i=1}^{6}L_{j}&\text{if }v\in[3-u,\frac{1}{4}(14-5u)].\end{cases}\]
where \(a=\frac{1}{3}(v+2u-2)\) and \(b=v-3+u\). For \(u\in[1,2]\) it is given by
\[P=\begin{cases}D-a\widetilde{C}&\text{if }v\in[0,4-2u],\\ D-a\widetilde{C}-b\sum_{j=1}^{6}L_{j}&\text{if }v\in[4-2u,\frac{1}{4}(18-9u)]. \end{cases}\text{ and }N=\begin{cases}a\widetilde{C}&\text{if }v\in[0,4-2u],\\ a\widetilde{C}+b\sum_{j=1}^{6}L_{j}&\text{if }v\in[4-2u,\frac{1}{4}(18-9u)].\end{cases}\]
where \(a=\frac{v}{3}\) and \(b=v-4+2u\). Hence, for \(u\in[0,1]\) the volume of the divisor \(D\) is
\[\operatorname{vol}(D)=P^{2}=\begin{cases}u^{2}-v^{2}-8u+10&\text{if }v\in[0,2-2u],\\ \frac{1}{3}(7u^{2}+4uv-2v^{2}-32u-4v+34)&\text{if }v\in[2-2u,3-u],\\ \frac{1}{3}(5u+4v-14)^{2}&\text{if }v\in[3-u,\frac{1}{4}(14-5u)]\end{cases}\]
and for \(u\in[1,2]\)
\[\operatorname{vol}(D)=P^{2}=\begin{cases}u^{2}-v^{2}-8u+10&\text{if }v\in[0,4-2u],\\ \frac{1}{3}(7u^{2}+4uv-2v^{2}-32u-4v+34)&\text{if }v\in[4-2u,\frac{1}{4}(18-9u)]. \end{cases}\]
We note that for \(u\in[1,2]\) the contribution of the negative part in (4) is \(\operatorname{ord}_{F}(\sigma^{*}N(u)|_{S})=\operatorname{ord}_{F}((u-1)( \widetilde{C}+F))=u-1\). So the value can be computed
\[\frac{A_{S}(F)}{S(V_{\bullet,\bullet}^{S};F)}=\frac{8}{7}. \tag{15}\]
We now compute \(S(W_{\bullet,\bullet,\bullet}^{S,F};q)\). Since \(\widetilde{S}\) is smooth, the different \(\Delta_{q}\) is trivial for any point \(q\), while the value of \(F_{q}(W_{\bullet,\bullet,\bullet}^{S,F})\) depends on the position of \(q\) in \(F\). We split thus into following three cases:
* \(q\notin\widetilde{C}\cup\bigcup_{j=1}^{6}L_{j}\), so that \(\operatorname{ord}_{q}(N_{\tilde{S}}^{\prime}(u)|_{F}+\tilde{N}(u,v)|_{F})=0\) and \(F_{q}=0\). And one has: \[\frac{1-\operatorname{ord}_{q}\Delta_{\tilde{Z}}}{S(W_{\bullet,\bullet, \bullet}^{Y,\tilde{Z}};q)}=\frac{22}{15}.\]
* \(q=\widetilde{C}\cap F\) so that \[\operatorname{ord}_{q}(N_{\tilde{S}}^{\prime}(u)|_{F}+\tilde{N}(u,v)|_{F})= \begin{cases}\frac{1}{3}(v+2u-2)&\text{if }u\in[0,1]\text{ and }v\in[2-2u,\frac{1}{4}(14-5u)],\\ u-1+\frac{v}{3}&\text{if }u\in[1,2]\text{ and }v\in[0,\frac{1}{4}(18-9u)],\\ 0&\text{otherwise}.\end{cases}\] From which, one can compute: \[F_{q}(W_{\bullet,\bullet,\bullet}^{S,F})=\frac{13}{44}\quad\text{ and }\quad\frac{1-\operatorname{ord}_{q}\Delta_{\tilde{Z}}}{S(W_{\bullet,\bullet, \bullet}^{Y,\tilde{Z}};q)}=\frac{44}{43}.\]
* \(q=F\cap L_{j}\) for some \(j=1,...,6\) so that \[\operatorname{ord}_{q}(N_{\tilde{S}}^{\prime}(u)|_{F}+\tilde{N}(u,v)|_{F})= \begin{cases}v+u-3&\text{if }u\in[0,1]\text{ and }v\in[3-u,\frac{1}{4}(14-5u)],\\ v-4+2u&\text{if }u\in[1,2]\text{ and }v\in[4-2u,\frac{1}{4}(18-9u)],\\ 0&\text{otherwise}.\end{cases}\] From which, one can compute: \[F_{q}(W_{\bullet,\bullet,\bullet}^{S,F})=\frac{1}{66}\quad\text{ and }\quad\frac{1-\operatorname{ord}_{q}\Delta_{\tilde{Z}}}{S(W_{\bullet,\bullet, \bullet}^{Y,\tilde{Z}};q)}=\frac{33}{23}.\]
Therefore,
\[\min_{q\in F}\frac{1-\operatorname{ord}_{q}\Delta_{F}}{S(W^{S,F}_{\bullet,\bullet, \bullet};q)}=\min\left\{\frac{22}{15},\frac{44}{43},\frac{33}{23}\right\}=\frac {44}{43}. \tag{16}\]
Finally, by combining the Equations (14), (15), and (16), we get
\[\delta_{p}(X)\geq\min\left\{\frac{44}{37},\frac{8}{7},\frac{44}{43}\right\}= \frac{44}{43}.\]
### Estimate of \(\delta_{p}\) for a point \(p\) off \(E\) and \(\tilde{Q}\)
In this section, we estimate \(\delta_{p}(X)\) for a point \(p\in X\setminus(E\cup\tilde{Q})\). Roughly speaking, we consider the flag given by the general hyperplane section of \(V_{3}\) containing \(\beta(p)\) and the curve given by its tangent hyperplane section. The precise flag depends though on the singularity of the latter.
**Lemma 3.8**.: _Let \(S\) be the strict transform of a hyperplane section of \(V_{3}\) not containing the singular point of \(\beta(\widetilde{Q})\). Then_
\[S_{X}(S)=\frac{14}{33}.\]
Proof.: The linear system \(-K_{X}-uS\) can be written as
\[-K_{X}-uS=\Big{(}2-\frac{3}{2}u\Big{)}\tilde{Q}+\Big{(}1-\frac{u}{2}\Big{)}E= (4-3u)H+(u-1)E.\]
Thus its pseudoeffective threshold is \(\tau(u)=\frac{4}{3}\) and its Zariski decomposition is given by:
\[P(u)=\begin{cases}(4-3u)H+(u-1)E&\text{if }u\in[0,1],\\ (4-3u)H&\text{if }u\in[1,\frac{4}{3}].\end{cases}\text{ and }N(u)=\begin{cases}0&\text{if }u\in[0,1],\\ (u-1)E&\text{if }u\in[1,\frac{4}{3}].\end{cases}\]
Therefore, \(\operatorname{vol}(-K_{X}-uS)=\begin{cases}22-36u+18u^{2}-3u^{3}&\text{if }u \in[0,1],\\ 64-144u+108u^{2}-27u^{3}&\text{if }u\in[1,\frac{4}{3}].\end{cases}\)
Hence,
\[S_{X}(S)=\frac{1}{(-K_{X})^{3}}\int_{0}^{\tau(S)}\operatorname{vol}(-K_{X}-uS )du=\frac{14}{33}. \tag{17}\]
We consider a hyperplane section \(S\) of \(V_{3}\) containing the point \(\beta(p)\) and not containing the point \(\beta(\widetilde{Q})\), so that \(S\) is a smooth cubic surface. We study the singularities of its tangent hyperplane section, because the relevant flag we use to estimate \(\delta_{p}\) depends on them.
For an appropriate choice of coordinates \(\beta(p)=(0,0,0,0)\in\mathbb{C}^{4}_{x,y,z,t}\) in a chart of \(\mathbb{P}^{4}\) and the surface \(S\) is given by:
\[S=\begin{cases}x+f_{2}(x,y,z,t)+f_{3}(x,y,z,t)=0,\\ y=0.\end{cases}\]
where \(f_{2}\) (respectively \(f_{3}\)) is a homogeneous polynomial of degree \(2\) (respectively of degree \(3\)). By considering a suitable change of variables we might assume that no monomials containing \(x\) appear in the expression of \(f_{2}\) and so we have:
\[\operatorname{rk}(f_{2}|_{(y=0)})\in\{0,1,2\}.\]
The tangent hyperplane section of \(S\) is the curve \(C\) given by:
\[(x=0)\cap S=\begin{cases}x=y=0,\\ f_{2}(0,z,t)+f_{3}(0,0,z,t)=0.\end{cases}\]
Therefore, the curve \(C\) consists of
* a rational curve with a node at \(\beta(p)\) if \(\operatorname{rk}(f_{2})=2\);
* a rational curve with a cusp at \(\beta(p)\) if \(\operatorname{rk}(f_{2})=1\);
* three lines intersecting at \(\beta(p)\) if \(\operatorname{rk}(f_{2})=0\).
In each of these cases we use a different flag.
Since we are assuming that \(S\) does not contain the point \(\beta(\widetilde{Q})\), the surface \(S\) is isomorphic to its strict transform in \(X\), and so is \(C\). In what follows we slightly abuse notation and use the symbols \(S\) and \(C\) for the strict transforms as well.
#### 3.8.1. Nodal curve
Suppose the point \(p\) on \(X\) is such that the curve \(C\) on \(V_{3}\) is a curve with a node at \(\beta(p)\). In order to estimate \(\delta_{p}\), we make use of Theorem 2.6. Let \(\sigma\colon\widehat{S}\to S\) be the blow-up of \(S\) in \(p\) with exceptional curve \(G\). We denote by \(\widehat{C}\) the strict transform of \(C\) in \(\widehat{S}\). We have the following intersection numbers:
\[G^{2}=-1,\quad G\cdot\widehat{C}=2,\quad\widehat{C}^{2}=-1.\]
**Proposition 3.9**.: _Suppose that \(p\in X\backslash(\tilde{Q}\cup E)\) is such that \(\beta(p)\) is the node of the tangent hyperplane section to the general hyperplane section of \(V_{3}\) containing \(\beta(p)\), then_
\[\delta_{p}(X)\geq\frac{176}{161}.\]
Proof.: We apply Theorem 2.6 to the flag consisting of \(p\), the exceptional curve \(G\) and the strict transform of the general hyperplane section of \(V_{3}\) through \(\beta(p)\). For this, we consider the linear system
\[\sigma^{*}(P(u)|_{S})-vG=\begin{cases}(2-u)\widehat{C}+(4-2u-v)G&\text{if }u \in[0,1],\\ (4-3u)\widehat{C}+(8-6u-v)G&\text{if }u\in[1,\frac{4}{3}].\end{cases}\]
Its Zariski decomposition is given by
\[\tilde{P}(u,v)=\begin{cases}(2-u)\widehat{C}+(4-2u-v)G&\text{if }u\in[0,1]\ v \in[0,3-\frac{3u}{2}];\\ (4-2u-v)(2\widehat{C}+G)&\text{if }u\in[0,1]\ v\in[3-\frac{3u}{2},4-2u]\\ (4-3u)\widehat{C}+(8-6u-v)G&\text{if }u\in[1,\frac{4}{3}]\ v\in[0,6-\frac{9u}{2}] \\ (8-6u-v)(2\widehat{C}+G)&\text{if }u\in[1,\frac{4}{3}]\ v\in[6-\frac{9u}{2},8-6u]. \end{cases}\]
and by
\[\tilde{N}(u,v)=\begin{cases}0&\text{if }u\in[0,1]\ v\in[0,3-\frac{3u}{2}];\\ (2v+3u-6)\widehat{C}&\text{if }u\in[0,1]\ v\in[3-\frac{3u}{2},4-2u]\\ 0&\text{if }u\in[1,\frac{4}{3}]\ v\in[0,6-\frac{9u}{2}]\\ (2v+9u-12)\widehat{C}&\text{if }u\in[1,\frac{4}{3}]\ v\in[6-\frac{9u}{2},8-6u]. \end{cases}\]
Its volume can be directly computed to be
\[\operatorname{vol}(\sigma^{*}(P(u)|_{S})-vG)=\begin{cases}3u^{2}-v^{2}-12u+12& \text{if }u\in[0,1],\ v\in[0,3-\frac{3u}{2}],\\ 12u^{2}+12uv+3v^{2}-48u-24v+48&\text{if }u\in[0,1],\ v\in[3-\frac{3u}{2},4-2u],\\ 27u^{2}-v^{2}-72u+48&\text{if }u\in[1,\frac{4}{3}],\ v\in[0,6-\frac{9u}{2}],\\ 108u^{2}+36uv+3v^{2}-288u-48v+192&\text{if }u\in[1,\frac{4}{3}],\ v\in[6-\frac{9u}{2},8-6u]. \end{cases} \tag{18}\]
We note that
\[\operatorname{ord}_{p}N(u)|_{S}=\begin{cases}0&\text{if }u\in[0,1],\\ \operatorname{ord}_{p}(u-1)E|_{S}&\text{if }u\in[1,\frac{4}{3}],\end{cases}\]
and therefore \(\operatorname{ord}_{p}N(u)|_{S}=0\) since \(p\) is not in \(E\) by assumption. Thus,
\[S(V^{S}_{\bullet,\bullet};G)=\frac{161}{88}. \tag{19}\]
Since \(A_{S}(G)=1+\operatorname{ord}_{G}(K_{\widehat{S}}-\sigma^{*}(K_{S}))=2\), we have that \(\frac{A_{S}(G)}{S(V^{S}_{\bullet,\bullet};G)}=\frac{176}{161}\).
Next, we compute \(S(W^{S,G}_{\bullet,\bullet,\bullet};q)\). Straightforward computations using the intersection numbers gives us the first summand in (5)
\[\frac{3}{(-K_{X})^{3}}\int_{0}^{\tau}\int_{0}^{\tilde{t}(u)}(\tilde{P}(u,v) \cdot G)^{2}dvdu=\begin{cases}\frac{135}{176}&\text{if}u\in[0,1],\\ \frac{3}{176}&\text{if}u\in[1,\frac{4}{3}].\end{cases}\]
For \(u\in[0,1]\) since \(N_{S}(u)=0\), we have that \(N^{\prime}_{\widehat{S}}(u)=0\). When \(u\in[1,\frac{4}{3}]\), \(N_{\widehat{S}}(u)=(u-1)\widetilde{E|_{S}}\), where \(\widetilde{E|_{S}}\) is the strict transform of the curve \(E|_{S}\) on \(\widehat{S}\). Since by assumption \(p\notin E\), we have \(N_{\widehat{S}}(u)|_{G}=0\). We have different cases depending on the position of the point \(q\).
If \(q\in G\cap\widehat{C}\)
\[F_{q}(W^{S,G}_{\bullet,\bullet,\bullet})=\begin{cases}0&\text{if}u\in[0,1],v \in[0,3-\frac{3u}{2}],\\ \frac{45}{352}&\text{if}u\in[0,1],v\in[3-\frac{3u}{2},4-2u],\\ 0&\text{if}u\in[1,\frac{4}{3}],v\in[0,6-\frac{9u}{2}],\\ \frac{1}{352}&\text{if}u\in[1,\frac{4}{3}],v\in[6-\frac{9u}{2},8-6u].\end{cases}\]
and
If \(q\in G\backslash\widehat{C}\)
\[F_{q}(W^{S,G}_{\bullet,\bullet})=0.\]
The value in (5) is then given by:
\[S(W^{S,G}_{\bullet,\bullet,\bullet};q)=\frac{161}{176}\text{ when }q\in G\cap \widehat{C}\text{ and }\]
\[S(W^{S,G}_{\bullet,\bullet,\bullet};q)=\frac{69}{88}\text{ when }q\in G\backslash \widehat{C}\]
Since the surface \(\widehat{S}\) is smooth, the different \(\Delta_{G}\) is trivial and we get:
\[\min_{q\in G}\frac{1-\operatorname{ord}_{q}\Delta_{G}}{S(W^{S,G}_{\bullet, \bullet};P)}=\frac{176}{161}. \tag{20}\]
In conclusion, combining Lemma 3.8 and Equations (19) and (20) we get
\[\delta_{p}(X)\geq\min\left\{\frac{176}{161},\frac{176}{161},\frac{33}{14} \right\}=\frac{176}{161}.\]
#### 3.9.1. Cuspidal Curve
Suppose the point \(p\) on \(X\) is such that \(C\subset S\) is cuspidal at the point \(\beta(p)\). Similar to the previous subsection, we use Theorem 2.6 to obtain an estimate to \(\delta_{p}(X)\).
Let \(\sigma:\widehat{S}\to S\) be the \((2,3)\)-weighted blow up of \(S\) at the point \(p\) with exceptional divisor \(G\). The strict transform \(\widehat{C}\) of \(C\) in \(\widehat{S}\) intersects the exceptional curve \(G\) in one regular point. The following hold:
\[\widehat{C}=\sigma^{*}(C)-6G,\qquad K_{\widehat{S}}=\sigma^{*}(K_{S})+4G,\text { and }\]
\[G^{2}=-\frac{1}{6},\qquad\widehat{C}\cdot G=1,\qquad\widehat{C}^{2}=-3.\]
We note that \(G\) has two singular points, we denote by \(p_{0}\) the one of type \(\frac{1}{2}(1,1)\) and by \(p_{1}\) the one of type \(\frac{1}{3}(1,1)\). In particular, the different \(\Delta_{G}\) defined by:
\[(K_{\widehat{S}}+G)|_{G}=K_{G}+\Delta_{G}\quad\text{ is given by }\quad\Delta_{G}=\frac{1}{2}p_{0}+\frac{2}{3}p_{1}.\]
**Proposition 3.10**.: _Suppose the point \(p\in X\backslash(\tilde{Q}\cup E)\) is a cusp of the tangent hyperplane section to the general hyperplane section of \(V_{3}\) containing \(\beta(p)\), then_
\[\delta_{p}(X)\geq\frac{220}{207}.\]
Proof.: We apply Theorem 2.6 to the flag consisting of \(p\), the exceptional curve \(G\) and the strict transform of the general hyperplane section of \(V_{3}\) through \(\beta(p)\).
We start by computing \(S(V^{S}_{\bullet,\bullet},G)\). We consider the linear system
\[\sigma^{*}(P(u)|_{S})-vG=\begin{cases}(2-u)\widehat{C}+(12-6u-v)G&\text{if }u \ \in[0,1],\\ (4-3u)\widehat{C}+(24-18u-v)G&\text{if }u\ \in[1,\frac{4}{3}].\end{cases}\]
Its Zariski decomposition has positive part given by
\[\tilde{P}(u,v)=\begin{cases}(2-u)\widehat{C}+(12-6u-v)G&\text{if }u\in[0,1],v \in[0,6-3u];\\ (12-6u-v)(\frac{1}{3}\widehat{C}+G)&\text{if }u\in[0,1],v\in[6-3u,12-6u];\\ (4-3u)\widehat{C}+(24-18u-v)G&\text{if }u\in[1,\frac{4}{3}],v\in[0,12-9u];\\ (8-6u-\frac{v}{3})(\widehat{C}+3G)&\text{if }u\in[1,\frac{4}{3}],v\in[12-9u,24-18u]. \end{cases}\]
and negative given by:
\[\tilde{N}(u,v)=\begin{cases}0&\text{ if }u\in[0,1],v\in[0,6-3u];\\ (u-2+\frac{v}{3})\widehat{C}&\text{ if }u\in[0,1],v\in[6-3u,12-6u];\\ 0&\text{ if }u\in[1,\frac{4}{3}],v\in[0,12-9u];\\ (\frac{v}{3}+3u-4)\widehat{C}&\text{ if }u\in[1,\frac{4}{3}],v\in[12-9u,24-18u]. \end{cases}\]
Note that \(N_{S}(u)|_{S}=0\) for \(u\in[0,1]\) and \(\operatorname{ord}_{G}(N_{S}(u)|_{S})=\operatorname{ord}_{G}((u-1)E|_{S})=0\) for \(u\in[1,\frac{4}{3}]\) since by assumption \(p\notin E\). Therefore the value in Equation (4)
\[S(V^{S}_{\bullet,\bullet},G)=\frac{207}{44},\quad\text{ and thus }\quad\frac{A_{S}(G)}{S(V^{S}_{\bullet,\bullet},G)}=\frac{220}{207}, \tag{21}\]
since \(A_{S}(G)=1+\operatorname{ord}_{G}(K_{S}-\sigma^{*}(K_{S}))=5\).
We now compute \(S(W^{S,G}_{\bullet,\bullet};q)\) for various points \(q\in G\). To compute the value in formula (5) we notice that the first term is independent of the position of \(q\in G\), while \(F_{q}:=F_{q}(W^{S,G}_{\bullet,\bullet})\) varies, so we split in cases. We notice that \(\operatorname{ord}_{q}(N^{\prime}_{S}(u)|_{G})=0\) for any point \(q\in G\), since \(N_{S}(u)\) is a multiple of \(E\) and \(p\not\in E\) by assumption. Also, \(\tilde{N}(u,v)\) is a multiple of \(\widehat{C}\), hence \(F_{q}\neq 0\) only for \(q=G\cap\widehat{C}\). We have \(S(W^{S,G}_{\bullet,\bullet};q)=\frac{23}{88}+F_{q}\) and we get the cases:
* \(q=p_{0}\), so \(\operatorname{ord}_{p_{0}}(\Delta_{G})=\frac{1}{2}\) and \[\frac{1-\operatorname{ord}_{q}(\Delta_{G})}{S(W^{S,G}_{\bullet,\bullet};q)}= \left(1-\frac{1}{2}\right)\cdot\frac{88}{23}=\frac{44}{23};\]
* \(q=p_{1}\), so \(\operatorname{ord}_{p_{1}}(\Delta_{G})=\frac{2}{3}\) and \[\frac{1-\operatorname{ord}_{q}(\Delta_{G})}{S(W^{S,G}_{\bullet,\bullet};q)}= \left(1-\frac{2}{3}\right)\cdot\frac{88}{23}=\frac{88}{69};\]
* \(q=\widehat{C}\cap G\), so \(\operatorname{ord}_{q}(\Delta_{G})=0,\ F_{q}=\frac{23}{88}\) and \[\frac{1-\operatorname{ord}_{q}(\Delta_{G})}{S(W^{S,G}_{\bullet,\bullet};q)}= \frac{1}{\frac{23}{88}+\frac{23}{88}}=\frac{44}{23};\]
* \(q\not\in\{p_{0},p_{1},\widehat{C}\cap G\}\), so \(\operatorname{ord}_{p_{q}}(\Delta_{G})=0\) and \[\frac{1-\operatorname{ord}_{q}(\Delta_{G})}{S(W^{S,G}_{\bullet,\bullet};q)}= \frac{88}{23}.\]
Therefore,
\[\min_{q\in G}\frac{1-\operatorname{ord}_{q}\Delta_{G}}{S(W^{S,G}_{\bullet, \bullet};q)}=\min\left\{\frac{88}{23},\frac{44}{23},\frac{88}{69},\frac{44}{ 23}\right\}=\frac{88}{69}. \tag{22}\]
In conclusion, by Lemma 3.8 and Equations (21) and (22) we have:
\[\delta_{p}(X)\geq\min\left\{\frac{33}{14},\frac{220}{207},\frac{88}{69}\right\} =\frac{220}{207}.\]
#### 3.10.1. Three lines
Suppose the point \(p\in X\) is such that the curve \(C\subset S\) containing \(\beta(p)\) is a union of \(3\) lines that intersect at \(\beta(p)\). Then, unlike the previous \(2\) cases, blowing up the surface \(S\) in \(X\) does not prove useful in giving a good estimate to \(\delta_{p}(X)\) and therefore, we will use the notion of infinitesimal flags over \(X\).
Let \(\pi:\dot{X}\to X\) be the blow up of the \(3\)-fold \(X\) at the point \(p\), with the exceptional divisor given by \(G\) and the strict transform of the surface \(S\) given by \(\hat{S}\). Since \(-K_{X}=\beta^{*}(2S)-\tilde{Q}\) and \(-K_{\hat{X}}=\pi^{*}(-K_{X})-2G\), the divisor
\[\pi^{*}(-K_{X})-uG=\frac{4}{3}\hat{S}+\frac{1}{3}\hat{E}+(4-u)G \tag{23}\]
where we also use \(\hat{S}=\pi^{*}(S)-3G\) and \(\tilde{Q}=\frac{2}{3}S-\frac{1}{3}E\).
**Lemma 3.11**.: _The pseudo-effective threshold \(\tau\) of the linear system \(\pi^{*}(-K_{X})-uG\) is \(\tau=4\)._
Proof.: From Equation (23) we clearly have that \(\tau\geq 4\). In order to prove the equality we show that the divisor \(4\hat{S}+\hat{E}\) is not big. For this, let \(\gamma\colon\hat{X}\to\operatorname{Bl}_{\alpha(p)}\mathbb{P}^{3}\) be the divisorial contraction of \(\hat{E}\). Since the pushforward of a big divisor along a birational morphism is big, in order to show the claim it is enough to show that \(\gamma_{*}\hat{S}\) is not big. For this, notice that \(\operatorname{Bl}_{\alpha(p)}\mathbb{P}^{3}\) is the resolution of indeterminacy of the projection from \(\alpha(p)\) and is a conic bundle \(h\colon\operatorname{Bl}_{\alpha(p)}\mathbb{P}^{3}\to\mathbb{P}^{2}\) which contracts \(\gamma(\hat{S})\) to an elliptic curve. In particular \(\gamma(\hat{S})=h^{*}\mathcal{O}_{\mathbb{P}^{2}}(3)\) is not big. The claim is proven.
**Proposition 3.12**.: _Suppose \(p\in X\backslash(\tilde{Q}\cup E)\) is such that \(p\) is the Eckardt point of curve \(C\) given by the tangent hyperplane section to the general hyperplane section of \(V_{3}\) containing \(\beta(p)\). Then_
\[\delta_{p}(X)=\frac{22}{17}\]
_and it is computed by the exceptional divisor \(G\) corresponding to the ordinary blowup of \(X\) at \(p\)._
Proof.: By [22, Corollary 4.18 (2)], we have
\[\frac{A_{X}(G)}{S_{X}(G)}\geq\delta_{p}(X)\geq\min\bigg{\{}\frac{A_{X}(G)}{S_{ X}(G)},\inf_{q\in G}\delta_{q}(G,\Delta_{G};V_{\bullet,\bullet}^{G})\bigg{\}}, \tag{24}\]
where the infimum runs over all points \(q\in G\).
We first compute the left hand side of inequality 24 and prove that the right hand side is bounded below by \(\frac{A_{X}(G)}{S_{X}(G)}\). From the proof of Lemma 3.11, we know that \(\hat{S}\) is a cone over an elliptic curve. Let \(L\) be the class of a line in \(\hat{S}\), then:
\[G\cdot L=1\quad\hat{E}\cdot L=2\quad\text{and}\quad\hat{S}\cdot L=-2.\]
Moreover,
\[\hat{S}^{2}\cdot E=6,\quad\hat{S}\cdot G^{2}=-3,\quad\hat{S}^{2} \cdot G=9,\quad G^{2}\cdot\hat{E}=G\cdot\hat{E}^{2}=0,\] \[\hat{E}^{3}=-30\quad\hat{S}\cdot E^{2}=12,\quad\hat{S}_{x}^{3}=-2 4,\quad\hat{S}\cdot\hat{E}\cdot G=0,\quad G^{3}=1.\]
Let \(P(u)\) and \(N(u)\) be the positive and negative part of \(\pi^{*}(-K_{X})-uG\). We have:
\[P(u)=\begin{cases}\frac{1}{3}\hat{E}+\frac{4}{3}\hat{S}+(4-u)G&\text{if }u\in[0,2],\\ \frac{1}{3}\hat{E}+\left(\frac{7}{3}-\frac{u}{2}\right)\hat{S}+(4-u)G&\text{if }u\in[2,4], \end{cases}\text{ and }N(u)=\begin{cases}0&\text{if }u\in[0,2],\\ \frac{(u-2)}{2}\hat{S}&\text{if }u\in[2,4],\end{cases}\]
and since \(-K_{\hat{X}}=\pi^{*}(-K_{X})-2G\), we have that \(A_{X}(G)=3\) and \(S_{X}(G)=\frac{51}{22}\), so that
\[\frac{A_{X}(G)}{S_{X}(G)}=\frac{22}{17}.\]
We now estimate \(\inf_{q\in G}\delta_{q}(G,\Delta_{G};V^{G}_{\bullet,\bullet})\). For every point \(q\in G\), we choose the flag \(q\in L\subset G\), where \(L\) is a line in \(G\) intersecting \(\hat{S}|_{G}\) transversely. Then, by [2, Theorem 3.2]
\[\delta_{q}(G,\Delta_{G};W^{G}_{\bullet,\bullet})\geq\min\left\{\frac{1}{S(W^{G }_{\bullet,\bullet};L)},\frac{1-\operatorname{ord}_{q}\Delta_{L}}{S(W^{G,L}_ {\bullet,\bullet};q)}\right\}.\]
Let \(P(u,v)\) and \(N(u,v)\) be the positive and negative part of \(P(u)|_{G}-vL\). These are given by
\[P(u,v)=\begin{cases}(u-v)L&\text{if }u\in[0,2],\ v\in[0,u],\\ \Big{(}\frac{6-u}{2}-v\Big{)}L&\text{if }u\in[2,4],\ v\in[0,\frac{6-u}{2}], \end{cases}\quad\text{ and }\quad N(u,v)=0.\]
Notice that \(\operatorname{ord}_{L}(N(u)|_{G})=0\) since \(\hat{S}|_{G}\) is not supported on \(L\). Then,
\[\frac{1}{S(W^{G}_{\bullet,\bullet};L)}=\frac{44}{23}.\]
Let \(Z\) be the elliptic curve \(\hat{S}|_{G}\). Then,
\[\operatorname{ord}_{q}(N^{\prime}_{G}(u)|_{L}+N(u,v)|_{L}) =\operatorname{ord}_{q}(N^{\prime}_{G}(u)|_{L})\] \[=\operatorname{ord}_{q}\biggl{(}\frac{u-2}{2}Z|_{L}\biggr{)}\] \[=\begin{cases}0&\text{if }q\not\in Z|_{L},\\ \frac{u-2}{2}&\text{otherwise}.\end{cases}\]
Then,
\[S(W^{G,L}_{\bullet,\bullet};q)=\begin{cases}\frac{23}{44}&\text{if }q\not\in Z |_{L},\\ \frac{17}{22}&\text{if }q\in Z|_{L}.\end{cases}\]
Hence,
\[\inf_{q\in G}\delta_{q}(G,\Delta_{G};V^{G}_{\bullet,\bullet})\geq\min\left\{ \frac{44}{23},\min\left\{\frac{44}{23},\frac{22}{17}\right\}\right\}=\frac{2 2}{17}.\]
The claim follows.
### Estimate of \(\delta_{p}\) for a point \(p\) in \(E\)
We now estimate \(\delta_{p}(X)\) where \(p\in E\). Let \(H\subseteq\mathbb{P}^{3}\) be a general hyperplane containing \(\alpha(p)\). Then, recall that \(H\) intersects the curve \(\mathscr{C}\) in six points, which we denote \(p_{1}:=\alpha(p),p_{2},\ldots,p_{6}\), lying on the conic \(C:=Q\cap H\). Let \(S\) be the strict transform of \(H\). The morphism \(S\to H\) is the blow-up of \(H\simeq\mathbb{P}^{2}\) in the six points \(p_{1},p_{2},\ldots,p_{6}\) and we denote by \(E_{i}\) the associated exceptional divisors. Let \(l\) be the strict transform of a line in \(H\).
**Proposition 3.14**.: _If \(p\in E\cap\tilde{Q}\), then_
\[\delta_{p}(X)\geq\frac{132}{131}.\]
_If \(p\in E\setminus\tilde{Q}\), then_
\[\delta_{p}(X)\geq\frac{66}{65}.\]
Proof.: We apply Theorem 2.5 to the flag:
\[p\in E_{1}\subset S\subset X.\]
To compute \(S_{X}(S)\) we consider the linear system \(-K_{X}-uS\) for \(u\in\mathbb{R}_{\geq 0}\) which, in terms of the generators of \(\mathrm{Eff}(X)\), is given by
\[\bigg{(}2-\frac{u}{2}\bigg{)}\tilde{Q}+\bigg{(}1-\frac{u}{2}\bigg{)}E.\]
Hence its pseudo-effective threshold is \(\tau=2\). Consider the Zariski decomposition of \(-K_{X}-uS\):
\[P(u)=\begin{cases}(4-u)H-E&\text{if $u\in[0,1]$},\\ (6-3u)H+(u-2)E&\text{if $u\in[1,2]$},\end{cases}\text{ and }N(u)=\begin{cases}0&\text{if $u\in[0,1]$},\\ (u-1)\tilde{Q}&\text{if $u\in[1,2]$}.\end{cases}\]
Recall that this is the same as in Proposition 3.6, from which we have that \(S_{X}(S)=\frac{23}{44}\) (see (14)). We now compute the value \(S(V_{\bullet,\bullet}^{S};E_{1})\). Consider the linear system
\[D=P(u)|_{S}-vE_{1}=\begin{cases}(4-u)l-\sum_{i=1}^{6}E_{i}-vE_{1}&\text{if $u\in[0,1]$},\\ (6-3u)l-(2-u)\sum_{i=1}^{6}E_{i}-vE_{1}&\text{if $u\in[1,2]$}.\end{cases}\]
We denote by \(L_{i,j}\) the strict transform of the line through the points \(p_{i},p_{j}\). The Zariski decomposition of \(D\) for \(u\in[0,1]\) is
\[P=\begin{cases}D&\text{if $v\in[0,2-2u]$},\\ D-a\tilde{C}&\text{if $v\in[2-2u,2-u]$},\\ D-a\tilde{C}-b\sum_{j=2}^{6}L_{1,j}&\text{if $v\in[2-u,\frac{8-4u}{3}]$},\end{cases}\quad\text{ and }\quad N=\begin{cases}0&\text{if $v\in[0,2-2u]$},\\ a\tilde{C}&\text{if $v\in[2-2u,2-u]$},\\ a\tilde{C}+b\sum_{j=2}^{6}L_{1,j}&\text{if $v\in[2-u,\frac{8-4u}{3}]$},\end{cases}\]
where \(a=\frac{v}{2}+u-1\) and \(b=v+u-2\), and for \(u\in[1,2]\) it is
\[P=\begin{cases}D-a\tilde{C}&\text{if $v\in[0,2-u]$},\\ D-a\tilde{C}-b\sum_{j=2}^{6}L_{1,j}&\text{if $v\in[2-u,\frac{8-4u}{3}]$}\end{cases} \text{ and }N=\begin{cases}a\tilde{C}&\text{if $v\in[0,2-u]$},\\ a\tilde{C}+b\sum_{j=2}^{6}L_{1,j}&\text{if $v\in[2-u,\frac{8-4u}{3}]$} \end{cases}\]
where \(a=\frac{v}{2}\) and \(b=v+u-2\). Hence, the volume of the divisor \(D\) for \(u\in[0,1]\) is
\[\mathrm{vol}(D)=P^{2}=\begin{cases}u^{2}-v^{2}-8u-2v+10&\text{if $v\in[0,2-2u]$},\\ 12-4v-12u-\frac{1}{2}v^{2}+2vu+3u^{2}&\text{if $v\in[2-2u,2-u]$},\\ \frac{1}{2}(4u+3v-8)^{2}&\text{if $v\in[2-u,\frac{8-4u}{3}]$},\end{cases}\]
and for \(u\in[1,2]\) is
\[\mathrm{vol}(D)=P^{2}=\begin{cases}12-4v-12u-\frac{1}{2}v^{2}+2vu+3u^{2}& \text{if $v\in[0,2-u]$},\\ \frac{1}{2}(4u+3v-8)^{2}&\text{if $v\in[2-u,\frac{8-4u}{3}]$}.\end{cases}\]
We note that \(\tilde{Q}|_{S}=\tilde{C}\) which has no support on \(E_{1}\). Hence the negative part \(N_{S}(u)\) does not contribute in the formula (1) and we get:
\[S(V_{\bullet,\bullet}^{S};E_{1})=\frac{65}{66}. \tag{25}\]
We now compute \(S(W^{S,E_{1}}_{\bullet,\bullet,\bullet};p)\). Notice that the hyperplane \(H\) can be chosen so that \(p\) does not lie in any of the \(L_{1,j}\). Then
\[\mathrm{ord}_{p}(N^{\prime}_{S}(u)|_{E_{1}}+N|_{E_{1}})=\mathrm{ord}_{p}\bigg{(} \Big{(}u-1+\frac{v}{2}\Big{)}\tilde{C}|_{E_{1}}\bigg{)}=\begin{cases}u-1+\frac{ v}{2}&\text{if }p\in\tilde{C},\\ 0&\text{if }p\not\in\tilde{C},\end{cases}\]
since \(E_{1}\) is transversal to \(\tilde{C}\). By (3), we have,
\[F_{p}(W^{S,E_{1}}_{\bullet,\bullet,\bullet})=\begin{cases}\frac{5}{33}&\text{if }p \in\tilde{C},\\ 0&\text{if }p\not\in\tilde{C}.\end{cases}\]
By direct application of (2) we have
\[S(W^{\widetilde{H},E_{1}}_{\bullet,\bullet,\bullet};p)=\begin{cases}\frac{131 }{132}&\text{if }p\in\tilde{C},\\ \frac{37}{44}&\text{if }p\not\in\tilde{C}.\end{cases} \tag{26}\]
By Theorem 2.5 we have,
\[\delta_{p}(X)\geq\begin{cases}\min\left\{\frac{44}{23},\frac{66}{65},\frac{132 }{131}\right\}=\frac{132}{131}&\text{if }p\in\tilde{C},\\ \min\left\{\frac{44}{23},\frac{66}{65},\frac{44}{37}\right\}=\frac{66}{65}& \text{if }p\not\in\tilde{C}.\end{cases} \tag{27}\]
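Here \(\frac{132}{131}<\frac{66}{65}\) since \(132\cdot 65=8580<8646=131\cdot 66\), and \(\frac{66}{65}<\frac{44}{37}<\frac{44}{23}\), so the two minima are indeed attained at \(\frac{132}{131}\) and \(\frac{66}{65}\), respectively.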
We can finally prove our main theorem:
**Theorem 3.15**.: _Every smooth member of the Fano family 2.15, which is the blow-up of \(\mathbb{P}^{3}\) in a curve given by the intersection of a quadric and a cubic, is K-stable. In particular,_
\[\delta(X)\geq\frac{132}{131}.\]
Proof.: The local stability threshold is estimated for every point \(p\in X\). In particular by Propositions 3.2, 3.5, 3.6, 3.9, 3.10, 3.12, 3.14 one has:
\[\delta(X)\geq\min\left\{\frac{8}{7},\frac{11}{10},\frac{8}{7},\frac{176}{161}, \frac{220}{207},\frac{22}{17},\frac{132}{131}\right\}=\frac{132}{131}.\] |
2303.04934 | Parallel Strong Connectivity Based on Faster Reachability | Computing strongly connected components (SCC) is a fundamental problem in
graph processing. As today's real-world graphs are getting larger and larger,
parallel SCC is increasingly important. SCC is challenging in the parallel
setting and is particularly hard on large-diameter graphs. Many existing
parallel SCC implementations can be even slower than Tarjan's sequential
algorithm on large-diameter graphs.
To tackle this challenge, we propose an efficient parallel SCC implementation
using a new parallel reachability algorithm. Our solution is based on a novel
idea referred to as vertical granularity control (VGC). It breaks the
synchronization barriers to increase parallelism and hide scheduling overhead.
To use VGC in our SCC algorithm, we also design an efficient data structure
called the \emph{parallel hash bag}. It uses parallel dynamic resizing to avoid
redundant work in maintaining frontiers (vertices processed in a round).
We implement the parallel SCC algorithm by Blelloch et al.\ (J.\ ACM, 2020)
using our new parallel reachability algorithm. We compare our implementation to
the state-of-the-art systems, including GBBS, iSpan, Multi-step, and our highly
optimized Tarjan's (sequential) algorithm, on 18 graphs, including social, web,
$k$-NN, and lattice graphs. On a machine with 96 cores, our implementation is
the fastest on 16 out of 18 graphs. On average (geometric means) over all
graphs, our SCC is 6.0$\times$ faster than the best previous parallel code
(GBBS), 12.8$\times$ faster than Tarjan's sequential algorithms, and
2.7$\times$ faster than the \emph{best existing implementation on each graph}.
We believe that our techniques are of independent interest. We also apply our
parallel hash bag and VGC scheme to other graph problems, including
connectivity and least-element lists (LE-lists). | Letong Wang, Xiaojun Dong, Yan Gu, Yihan Sun | 2023-03-08T23:08:54Z | http://arxiv.org/abs/2303.04934v2 | # Parallel Strong Connectivity Based on Faster Reachability
###### Abstract.
Computing strongly connected components (SCC) is among the most fundamental problems in graph processing. As today's real-world graphs are getting larger and larger, parallel SCC is increasingly important. SCC is challenging in the parallel setting and is particularly hard on large-diameter graphs. Many existing parallel SCC implementations can be even slower than Tarjan's sequential algorithm on large-diameter graphs.
To tackle this challenge, we propose an efficient parallel SCC implementation using a new parallel reachability algorithm. Our solution is based on a novel idea referred to as vertical granularity control (VGC). It breaks the synchronization barriers to increase parallelism and hide scheduling overhead. To use VGC in our SCC algorithm, we also design an efficient data structure called the _parallel hash bag_. It uses parallel dynamic resizing to avoid redundant work in maintaining frontiers (vertices processed in a round).
We implement the parallel SCC algorithm by Blelloch et al. (J. ACM, 2020) using our new parallel reachability algorithm. We compare our implementation to the state-of-the-art systems, including GBBS, iSpan, Multi-step, and our highly optimized Tarjan's (sequential) algorithm, on 18 graphs, including social, web, \(k\)-NN, and lattice graphs. On a machine with 96 cores, our implementation is the fastest on 16 out of 18 graphs. On average (geometric means) over all graphs, our SCC is 6.0x faster than the best previous parallel code (GBBS), 12.8x faster than Tarjan's sequential algorithms, and 2.7x faster than the _best existing implementation on each graph_.
We believe that our techniques are of independent interest. We also apply our parallel hash bag and VGC scheme to other graph problems, including connectivity and least-element lists (LE-lists). Our implementations improve the performance of the state-of-the-art parallel implementations for these two problems.
Parallel Algorithms, Graph Algorithms, Strong Connectivity, Reachability, Graph Analytics
and lattice graphs, all existing parallel SCC algorithms on a 96-core machine are slower than the sequential Tarjan's algorithm.
In this paper, we propose _an efficient SCC implementation with high parallelism on a wide range of graphs_. We also use the BGSS algorithm to bound the work. The core of our idea is to improve parallelism by avoiding \(O(D)\) rounds of synchronization in reachability searches and thus reducing the scheduling overhead. To do this, we propose a novel idea referred to as the _vertical granularity control_ (VGC) optimization. The high-level idea of VGC is to break the synchronization barriers and increase parallelism.
Particularly, for the reachability queries, unlike parallel BFS that only visits the neighbors of the vertices in the frontier, we want to visit a much larger set of vertices that can be multiple hops away. This is achieved by a "local search" algorithm that allows each vertex in the frontier to visit more than direct neighbors in one round. We will discuss more details of VGC and local search in Sec. 3.1 and 3.2. This approach saves most synchronization rounds and improves the performance, especially on large-diameter graphs.
We note that one technical difficulty in VGC and local search is to handle the non-determinism in generating the next frontier--all vertices in the frontier explore their proximity in parallel, in a random order decided by the runtime scheduling. This disables the "edge-revisit" approach used in existing BFS algorithms.
**The BGSS Algorithm.** Our parallel SCC solution uses the BGSS SCC algorithm (Krishnam et al., 2017) based on reachability searches, which is shown in Alg. 1. To achieve good parallelism while bounding the work, the BGSS algorithm uses \(\log n\) batches of reachability searches. The algorithm first randomly permutes the vertices and groups them into batches of sizes \(1,2,4,8,...\) in a prefix-doubling manner (the multiplier need not be \(2\), but can be any constant \(\beta>1\)). In the \(i\)-th round, the algorithm uses batch \(i\) with \(2^{i-1}\) vertices as the sources to run (forward and backward) multi-reachability searches, marks SCCs, and removes cross edges. In this way, the BGSS algorithm takes \(O(W_{R}(n,m)\log n)\) expected work and \(O((D_{R}(n,m)+\log n)\log n)\) span _whp_, where \(W_{R}(n,m)\) and \(D_{R}(n,m)\) are the work and span of a reachability search on a graph with \(n\) vertices and \(m\) edges. The BGSS algorithm was implemented by Dhulipala et al. as part of the GBBS library (Dhulipala et al., 2017; Dhulipala et al., 2017), which uses (multi-)BFS for reachability searches (see more details later). There are two SCC algorithms in GBBS, and we refer to the _RandomGreedy_ version, as it is faster in most of our tests.
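To make the batching concrete, the following minimal C++ sketch (our own illustration, not the paper's code; the function name make_batches and the default parameters are hypothetical) generates a random pivot order and the prefix-doubling batch boundaries for a multiplier \(\beta\):

```
// Sketch: random permutation + prefix-doubling batches for a BGSS-style SCC
// driver. Batch i consists of perm[offsets[i-1] .. offsets[i]) (offsets[-1]=0),
// and the batch sizes grow roughly geometrically with multiplier beta, so there
// are O(log n) batches in total.
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

std::pair<std::vector<uint32_t>, std::vector<size_t>>
make_batches(size_t n, double beta = 1.5, uint64_t seed = 42) {
  std::vector<uint32_t> perm(n);
  std::iota(perm.begin(), perm.end(), 0u);
  std::mt19937_64 rng(seed);
  std::shuffle(perm.begin(), perm.end(), rng);  // random pivot order

  std::vector<size_t> offsets;
  double batch_size = 1.0;
  size_t taken = 0;
  while (taken < n) {
    taken = std::min(n, taken + static_cast<size_t>(batch_size));
    offsets.push_back(taken);  // end offset of this batch
    batch_size *= beta;        // grow the next batch geometrically
  }
  return {perm, offsets};
}
```

Round \(i\) of the SCC algorithm would then take the still-unfinished vertices of batch \(i\) as the sources of its forward and backward multi-reachability searches.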
**Parallel BFS.** We briefly review parallel BFS, because it is used in previous work for reachability search, and some of the concepts are also used in our algorithm. There are many BFS algorithms (e.g., (Krishnam et al., 2017; Krishnam et al., 2017; Krishnam et al., 2017)). We review the version in Ligra (Ligra, 2017), as it is widely used in other graph libraries (Dhulipala et al., 2017; Dhulipala et al., 2017; Dhulipala et al., 2017), and more importantly, later extended to _multi-BFS_, which can be used in the multi-reachability searches needed by our SCC algorithm. We start with BFS from a _single_ source \(s\in V\) (high-level idea in Alg. 2). The algorithm maintains a _frontier_ of vertices to explore in each round, starting from the source, and finishes in \(D\) rounds. In round \(i\), the algorithm _processes_ (visits their neighbors) the current frontier \(\mathcal{F}_{i}\), and puts all their (unvisited) neighbors into the next frontier \(\mathcal{F}_{i+1}\). If multiple vertices in \(\mathcal{F}_{i}\) attempt to add the same vertex to \(\mathcal{F}_{i+1}\), a compare_and_swap is used to guarantee that only one will _succeed_. In existing libraries (Dhulipala et al., 2017; Ligra, 2017), processing \(\mathcal{F}_{i}\) involves visiting all incident edges of \(\mathcal{F}_{i}\) twice. The first visit decides the _successfully_ visited neighbors for each \(v\in\mathcal{F}_{i}\), and assigns the right size of memory in \(\mathcal{F}_{i+1}\) for each of them. The second visit lets each \(v\in\mathcal{F}_{i}\) write these neighbors to \(\mathcal{F}_{i+1}\). We call this the _edge-revisit_ scheme, and we will show how our _hash bag_ avoids the second visit and improves the performance.
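As a concrete illustration, one round of the edge-revisit scheme could be implemented as in the following C++ sketch (our own simplification assuming a CSR graph with no duplicate edges; not the actual Ligra/GBBS code). Here a parent array records which frontier vertex won each CAS, so that the second pass can write out exactly the neighbors counted in the first pass:

```
// Sketch of one BFS round with the edge-revisit scheme (assumes a CSR graph
// and no duplicate edges; OpenMP pragmas mark the parallel loops).
// Pass 1 claims unvisited neighbors with a CAS and counts the successes of each
// frontier vertex; pass 2 revisits the same edges and writes the winners into
// the next frontier at offsets obtained from a prefix sum.
#include <atomic>
#include <cstdint>
#include <limits>
#include <vector>

struct CSRGraph {
  std::vector<size_t> offset;    // out-edges of v are target[offset[v]..offset[v+1])
  std::vector<uint32_t> target;
};

constexpr uint32_t kUnvisited = std::numeric_limits<uint32_t>::max();

// parent[u] == kUnvisited means u is not visited yet; otherwise it stores the
// id of the frontier vertex that successfully claimed u.
std::vector<uint32_t> bfs_round(const CSRGraph& g,
                                const std::vector<uint32_t>& frontier,
                                std::vector<std::atomic<uint32_t>>& parent) {
  std::vector<size_t> cnt(frontier.size(), 0);
  #pragma omp parallel for
  for (size_t i = 0; i < frontier.size(); i++) {       // first visit of the edges
    uint32_t v = frontier[i];
    for (size_t e = g.offset[v]; e < g.offset[v + 1]; e++) {
      uint32_t expected = kUnvisited;
      if (parent[g.target[e]].compare_exchange_strong(expected, v)) cnt[i]++;
    }
  }
  std::vector<size_t> pos(frontier.size() + 1, 0);      // exclusive prefix sum
  for (size_t i = 0; i < frontier.size(); i++) pos[i + 1] = pos[i] + cnt[i];
  std::vector<uint32_t> next(pos.back());
  #pragma omp parallel for
  for (size_t i = 0; i < frontier.size(); i++) {       // second visit of the edges
    uint32_t v = frontier[i];
    size_t k = pos[i];
    for (size_t e = g.offset[v]; e < g.offset[v + 1]; e++) {
      uint32_t u = g.target[e];
      if (parent[u].load() == v) next[k++] = u;         // v won the claim in pass 1
    }
  }
  return next;
}
```

The hash bag introduced in Sec. 3.3 removes the need for the second pass and the prefix sum.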
This idea has been extended to _multiple_ sources \(S\subseteq V\) with two changes (Dhulipala et al., 2017). First, a parallel hash table \(T\) (Dhulipala et al., 2017) is used to maintain the _reachability pairs_ \((v,s)\), where \(v\in V\) and \(s\in S\). Second, for each \(v\in\mathcal{F}_{i}\) and its successfully visited neighbor \(u\), we find all pairs \((v,s)\) in the hash table \(T\), and add \((u,s)\) to \(T\) if \((u,s)\notin T\). The number of reachability pairs generated by the BGSS algorithm is proved to be \(O(n\log n)\) _whp_ (Dhulipala et al., 2017).
One challenge of using parallel BFS for reachability queries is the large cost to create and synchronize threads between rounds, which is especially expensive for large-diameter graphs (more rounds are needed). In this paper, we will show how our new techniques reduce the scheduling overhead to achieve better parallelism.
Figure 1. The heatmap of relative speedup for parallel SCC algorithms over the sequential algorithm using 96 cores (192 hyperthreads). Larger/white background means better. "Seq" = Tarjan's algorithm (Krishnam et al., 2017). The numbers indicate how many times a parallel algorithm is faster than Tarjan's sequential algorithm (\(<1\) means slower). The three baseline algorithms (Dhulipala et al., 2017; Dhulipala et al., 2017) are introduced in Sec. 6. "t" = timeout (\(>5\) hours). "c" = crash.
```
Input: A directed graph \(G=(V,E)\) and a source \(s\in V\)
\(\mathcal{F}_{0}=\{s\}\)
\(t\gets 0\)
while \(\mathcal{F}_{t}\neq\emptyset\) do
    Process all \(v\in\mathcal{F}_{t}\) and their edges in parallel, and put all their unvisited neighbors (avoiding duplicates) into \(\mathcal{F}_{t+1}\)
    \(t\gets t+1\)
```
**Algorithm 2** Framework of Parallel BFS

**General notations and vertical granularity control:**
* \(n\): the number of vertices in a graph.
* \(m\): the number of edges in a graph.
* \(P\): the number of processors available.
* \(\beta\): the multiplier of prefix-doubling for the SCC, LDD, and LE-List algorithms. Usually \(\beta\in(1,2]\). We use \(\beta=1.5\) in our system.
* \(\tau\): the threshold for vertical granularity control, which is the upper bound of the visited neighborhood size per node. We use \(\tau=512\) as the default value.
**Hash Bag:**
* \(\lambda\): the first chunk size of the hash bag. Theoretically, \(\lambda=\Omega((P+\log n)\log n)\). We set \(\lambda=2^{10}\) in our system.
* \(\sigma\): the threshold on the number of samples to trigger hash bag resizing. Theoretically, \(\sigma=\Omega(\log n)\). We use \(\sigma=50\) in our system.

Table 1. Notations used in this paper.
## 3. Fast Parallel Algorithm for Reachability
To implement an efficient parallel SCC algorithm, we use the BGSS algorithm to bound the work, and present novel ideas for fast reachability search to enable high parallelism. In this section, we present the two main techniques in this paper: the _vertical granularity control (VGC)_ and the _hash bag_ data structure. Our VGC optimization is designed to address the challenge of low parallelism in computing SCC on sparse and large-diameter graphs. The goal is to enable a proper size for each parallel task to hide the scheduling overhead. Our idea is to let each vertex search out multiple hops in each parallel task, and thus the number of needed rounds in reachability searches is reduced. The details of the local search are in Sec. 3.1. While the high-level idea sounds simple, this brings up the challenge of non-determinism--each vertex may explore multiple hops and the explored neighborhood depends on runtime scheduling, which results in some complication in generating the frontier by the edge-revisit scheme (see Line 14). Therefore, we propose a data structure called the _parallel hash bag_ to maintain the frontier more efficiently. Our hash bag is theoretically efficient and fast in practice, and more details are given in Sec. 3.3.
Combining both techniques, we achieve fast single- and multi-reachability searches. Plugging them into Lines 6 and 7 in Alg. 1 gives a high-performance parallel SCC algorithm. We believe that these techniques are general and useful in many graph algorithms. As proofs-of-concept, in Sec. 5, we apply the proposed ideas to two more algorithms: connected components (CC) and least-element lists (LE-lists), and show new algorithms with better performance. We present notation and parameters used in this paper in Tab. 1.
### 3.1 Vertical Granularity Control
In this section, we present our vertical granularity control (VGC) optimization. As mentioned, previous work (Vigo et al., 2018) uses parallel BFS for reachability searches, where the number of rounds is proportional to the diameter of the graph. On many real-world sparse graphs with large diameters, both the frontier size and the average degree are small, which leads to two challenges. First, every parallel task (roughly processing one vertex in the frontier) is small, and the cost of distributing the tasks to the processor can be much more than the actual computation. Second, the number of rounds is large, resulting in many rounds of distributing and synchronizing threads.
Readers familiar with parallel programming will know the concept of _granularity control_ (aka. coarsening), which aims to avoid the overhead caused by generating unnecessary parallel tasks. For computations with sufficient parallelism, e.g., a parallel for-loop of size \(n\gg p\) where \(p\) is the number of processors, most existing parallel software (e.g., (Bouquet et al., 2018; Goyal et al., 2018; Goyal et al., 2018)) will automatically stop recursively creating parallel tasks at a certain subproblem size and switch to a sequential execution (see Fig. 3(a)) to hide the scheduling overhead. We refer to this classic approach as the horizontal granularity control (HGC) since it merges the computation on the same level (sibling leaf nodes in Fig. 3(a)).
Unfortunately, this idea does not directly apply to reachability searches or similar problems on sparse graphs. HGC is used when there is excessive parallelism to saturate all processors. However, when processing sparse graphs, the issue becomes that we have insufficient computation (frontiers with small sizes) to saturate all processors for good parallelism, and grouping sibling (horizontal) computation in the same round only makes it worse. To tackle this challenge, we propose a novel and very different approach, referred to as _vertical granularity control_ (VGC). The high-level idea is still to increase each task size to hide the scheduling overhead, but we merge the computation across _different levels_ to acquire more work and saturate all processors in each round (an example in Fig. 3(b)). In this way, we break the synchronization points and reduce scheduling overhead. Note that this also means that VGC is unlikely to be automatic (unlike HGC)--breaking the synchronization structures may significantly change the computation and needs a careful redesign of the algorithm (in our case, we need the new data structure _hash bag_ to deal with non-determinism, see Sec. 3.3).
Figure 3. An illustration of (a) the classic horizontal granularity control (HGC) and (b) our new vertical granularity control (VGC). We consider work-stealing schedulers. (a): The computation structure of parallel-for or divide-and-conquer algorithms. HGC groups computation in the same level and run sequentially, in order to reduce scheduling overhead. (b): The computation structure of several rounds of parallel-for, each starting from one thread and forking parallel tasks in a nested fashion. Each of them can be considered as a round (processing a frontier) in a parallel graph algorithm. VGC groups computations in different rounds and run sequentially, in order to reduce scheduling overhead. It breaks the synchronization points and reduces the number of rounds.
In the rest of this section, we will show how to apply VGC to reachability searches and achieve good parallelism. Our motivation comes from theoretical work on parallel reachability algorithms.
**Motivations from the Theory Work and Our Solution**. To reduce the number of rounds in BFS-like algorithms, many theoretical results use _shortcuts_(Kalouts and Kalouts, 2000; Kalouts and Kalouts, 2001; Kalouts and Kalouts, 2002; Kalouts and Kalouts, 2003; Kalouts and Kalouts, 2004; Kalouts and Kalouts, 2005; Kalouts and Kalouts, 2006) to reduce the diameter of the graph. Unfortunately, these approaches can be impractical because they incur high overhead for storing the shortcuts, increased memory footprint, and a significant preprocessing cost (e.g., Fineman's algorithm (Fineman, 1992) has \(O(m\log^{6}n+n\log^{10}n)\) preprocessing work). Hence, these algorithms are unlikely to beat the \(O(m)\) Tarjan's sequential algorithm using modern computers with tens to hundreds of processors.
To perform VGC without much overhead, we wish to add shortcuts but avoid explicitly generating them. Instead of shortcutting _every_ vertex (Kalouts and Kalouts, 2001; Kalouts and Kalouts, 2002), which can be costly, we only shortcut those in the current frontier, so that a vertex in the frontier can perform some work of the next several rounds "in advance" to enable VGC. Without shortcutting, visiting the same vertices may take multiple rounds to finish. We wish to process the shortcuts locally and sequentially to avoid space overhead. For each vertex \(v\) being processed, we shortcut it to some (nearest) vertices reachable from \(v\) on the fly by a _local search_ from \(v\), such that we do not need to store the shortcuts. Our local search is similar to the sequential BFS algorithm. We maintain a _local queue_ starting from \(v\) and visit each of its neighbors \(u\). We mark every source that reaches \(v\) as also reaching \(u\). If any of these sources \(s\) is new to \(u\), we add \((u,s)\) as a reachability pair, and add \(u\) to the tail of the local queue. We then move on to process the next vertex in the local queue. This process terminates when we have performed "sufficient work" in the local search, and we discuss how to control the granularity in Sec. 3.2.
We call it the "local" queue since we allocate it in the stack memory that is not visible to other processors. This avoids allocating arrays from the global memory (the heap space), which can be costly (and complicated) in parallel. The local searches from different vertices in the frontier are independent and in parallel with each other. The only information that one search needs is whether a vertex has been visited or not, which can be maintained using a boolean array (for single-reachability) or a parallel hash table (for multi-reachability) that support atomic updates. As such, all searches on the same processor can reuse the same stack space for different local queues.
Using VGC, we can reduce the number of rounds and thus the scheduling overhead since vertices in multiple hops can be visited in one round. Fig. 10 shows that VGC reduces the number of rounds in reachability searches by \(3\)-\(200\times\) and greatly improves performance.
Although our idea of using (on-the-fly) shortcuts for VGC is intuitive, two technical challenges remain. The first is load balancing: vertices can have various degrees and neighborhood patterns, and one vertex may explore a large neighborhood region sequentially. We discuss the control of granularity (the local search size) in Sec. 3.2. The second is non-determinism. In VGC, each vertex can search out several hops, and the explored region depends on runtime scheduling. Hence, we cannot use the edge-revisit scheme in previous work (Line 14) since the second visit may not perform the same computation as the first one. To tackle this, we propose the _hash bag_ data structure to efficiently maintain the frontier.
### 3.2 Control of Granularity
The goal of granularity control is to make each task large enough to hide the cost of scheduling it. However, we cannot let the tasks be arbitrarily large since they are executed sequentially and may cause load imbalance. We wish to let all threads perform (roughly) a similar amount of work. In our implementation, we control the number of visited vertices in each local search, including both successful and unsuccessful visits, by a parameter \(\tau\). This number provides an estimate of the workload of each local search.
In particular, when processing a vertex \(v\) in the frontier, we first check the number of \(v\)'s outgoing neighbors. If it is more than \(\tau\), we process all its neighbors in parallel as in the standard way, since we have sufficient work to do and no more shortcuts are needed. Otherwise, we start the local search and maintain a counter \(t\) starting from zero. When processing a vertex \(v\) in the local queue, we increment \(t\) for every neighbor visited, successfully or unsuccessfully. Note that since the local search is performed sequentially, there is no race condition in maintaining the counters. We stop the local search either when the queue becomes empty (all possible vertices have been visited), or when the counter reaches \(\tau\) (this task is reasonably large). For all remaining vertices in the local queue, we directly add them to the next frontier. Conceptually, we shortcut \(v\) to the \(\tau\) nearest vertices that otherwise may need multiple rounds to reach. We present an illustration in Fig. 4. The desired granularity can be controlled by the parameter \(\tau\).
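The following C++ sketch shows one possible shape of this local search for single-reachability (our own illustration rather than the released code; emit stands for inserting a vertex into the next frontier, e.g., into a hash bag, and the budget check is simplified to vertex granularity, so it may overshoot by at most one vertex's degree):

```
// Sketch of a VGC local search with threshold tau (= kTau) for single-source
// reachability. The local queue lives in stack memory; every inspected neighbor
// is counted, and whatever is still unprocessed when the budget runs out is
// flushed to the next frontier. High-degree frontier vertices (deg >= kTau) are
// assumed to be handled by the standard parallel neighbor loop instead.
#include <atomic>
#include <cstdint>
#include <functional>
#include <vector>

struct CSRGraph {
  std::vector<size_t> offset;    // out-edges of v are target[offset[v]..offset[v+1])
  std::vector<uint32_t> target;
};

constexpr size_t kTau = 512;     // granularity threshold

void local_search(const CSRGraph& g, uint32_t v,
                  std::vector<std::atomic<bool>>& visited,
                  const std::function<void(uint32_t)>& emit) {
  uint32_t queue[kTau];          // local queue, reused stack space
  size_t head = 0, tail = 0, inspected = 0;
  queue[tail++] = v;
  while (head < tail && inspected < kTau) {
    uint32_t w = queue[head++];  // fully process one local vertex at a time
    for (size_t e = g.offset[w]; e < g.offset[w + 1]; e++) {
      inspected++;               // counted whether or not the visit succeeds
      uint32_t u = g.target[e];
      bool expected = false;
      if (visited[u].compare_exchange_strong(expected, true)) {
        if (tail < kTau) queue[tail++] = u;  // keep exploring u locally
        else emit(u);            // local queue is full: defer u to the next frontier
      }
    }
  }
  // Budget exhausted (or nothing left): flush the unprocessed local vertices.
  for (size_t i = head; i < tail; i++) emit(queue[i]);
}
```

Each frontier vertex runs one such call as its own parallel task, so a task now covers up to roughly \(\tau\) edge inspections instead of a single vertex.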
Intuitively, we can choose the parameter \(\tau\) empirically based on the traditional HGC base-case size: a base case of around \(1000\) operations is usually sufficient to hide scheduling overheads. We also experimentally study the value of \(\tau\) in Sec. 6.3. Compared to plain BFS (no VGC used), we noticed that the performance on all but three graphs (both large- and low-diameter ones) improved for any \(1<\tau\leq 2^{16}\). Overall, the performance is not sensitive in a large parameter space \(2^{6}\leq\tau\leq 2^{12}\) on almost all graphs. We simply set \(\tau=2^{9}\) as the default value, which is similar to a typical HGC threshold. One can control granularity using other measures, such as the number of generated reachability pairs or successfully visited vertices. We believe that the measure of granularity is independent of the idea of VGC. We plan to explore more criteria to control granularity for VGC in future work.
Figure 4. A possible execution of a local search. The initial frontier is \(\{A,B,C\}\) and \(\tau=4\). \(A\) successfully visits one neighbor \(D\). As its local queue is not full, it then visits \(D\)'s neighbors \(B\) (skipped) and \(J\). \(J\) visits \(E\) but fails. No more vertices are left in \(A\)'s local queue. \(B\) visits \(E\), and \(E\) visits \(K\) and \(M\). Then, \(K\) visits \(L\) and adds it to the queue. Now \(B\)'s queue is full. The unfinished vertices \(M\) and \(L\) will be flushed to the next frontier. \(C\) has four (\(\geq\tau\)) neighbors, so we directly check all neighbors and add the successful ones (\(F\), \(G\), and \(H\)) to the next frontier. We process \(2\)-hop neighbors from the frontier in one round.
### 3.3 Parallel Hash Bag
As mentioned, VGC brings up the challenge of maintaining the frontier efficiently. Recall that in parallel BFS, the "edge-revisit" scheme first visits all edges incident to the frontier to decide the successfully visited vertices, and then revisits all edges to output them to a consecutive array as the next frontier. Since the candidates for the next frontier \(\mathcal{F}_{i+1}\) are all neighbors of \(\mathcal{F}_{i}\), we can use a boolean flag to record the success information and let the second visit do the same computation as the first one. However, with VGC, each vertex can search out several hops, and the order of the searches is decided by the runtime scheduling. Note that the local queue is stored in the stack space and discarded after the search. If we wanted to borrow the "edge-revisit" scheme from BFS, we would need to explicitly store the information of the local queues, which can be very costly. To tackle this challenge, we propose a new data structure called the _parallel hash bag_ to maintain the frontier efficiently, such that the next frontier can be generated by visiting the edges only once. Our hash bag supports Insert, ExtractAll, and ForAll efficiently both in theory (work, span, and I/O) and in practice. We start by defining the interface of hash bags:
* Insert(\(v\)): add the element \(v\) into the bag (resize if needed). It can be called concurrently by different threads.
* ExtractAll(): extract all elements in the bag into an array and remove them from the bag.
* ForAll(): apply a function to all elements in the bag in parallel.
For hash bags, we need to know an upper bound \(n\) on the total size, which holds for most applications of hash bags (e.g., \(n=|V|\) for maintaining frontiers). We pre-allocate \(O(n)\) slots as an array _bag_ to hold the elements to be inserted. However, instead of directly using all the slots, we only use a prefix of them of size \(O(s)\) in expectation, where \(s\) is the current number of elements in the bag. This guarantees the efficiency of ExtractAll and ForAll since we only need to touch \(O(s)\) space to process \(s\) elements. The problem then boils down to maintaining the right size of the used prefix and how to "resize" efficiently.
We show (part of) the pseudocode of hash bags in Fig. 5 and an illustration in Fig. 6. The size of the _bag_ is preset as \(\Theta(n/\alpha)\), where \(\alpha\) is the desired load factor and \(n\) is the upper bound of total size as mentioned before. We conceptually divide the _bag_ into chunks and use them one by one. A resizing means moving to the next chunk for use. The chunks have doubling sizes of \(\lambda,2\lambda,4\lambda\)..., where \(\lambda\) is a parameter for the first chunk size. At initialization, we set up an array _tail_[\(\cdot\)], where _tail_[\(i\)] is the end index of the \(i\)-th chunk. We use a variable \(r\) to indicate the current in-use chunk id, starting from 0. Elements are always inserted into the \(r\)-th chunk (indices from _tail_[\(r-1\)] to _tail_[\(r\)] for \(r\geq 1\)).
An Insert randomly selects an empty slot in this chunk (Line 22), attempts to put the element in this slot using CAS, and linear probes if the CAS fails (Lines 23-24). Note that different from the hash table, the Insert on hash bags does not check duplicates, but all applications in our paper (maintaining frontiers) can ensure that no duplicates will be added to the bag. For example, duplicates can be checked before calling Insert, e.g., using a boolean flag for each vertex to indicate if it is in the frontier (the array _visit_ in Alg. 3, details explained below).
To efficiently decide when resizing is needed, we use a sampling strategy to estimate the size of the hash bag. We use _sample_[\(\cdot\)] to count the number of samples in chunk \(i\), and resize when the number of samples hits \(\sigma\). We fix \(\sigma\) for all chunks, but set sample rate accordingly for each chunk as \(\sigma/\alpha\) divided by the chunk size. Conceptually, this means to trigger a resize once the load factor goes beyond \(\alpha\). The larger the chunk is, the smaller the sampling rate is. Theoretically, getting accurate estimations requires \(\sigma=\Omega(\log n)\).
In Insert, we sample the element with the current rate. If sampled successfully, we increment _sample_[\(r\)] (where \(r\) is the current chunk) by 1 atomically with a CAS (conceptually, an atomic fetch_and_add operation). When _sample_[\(r\)] hits \(\sigma\), a constant fraction of this chunk is full _whp_, so a resizing attempt is triggered (try_resize). In addition, when we linear probe for more than a certain number of times, we also trigger a resizing (although this should be rare). In both cases, we resize by increasing \(r\) by \(1\) using a CAS, and call Insert again to add this element to the new chunk.
Figure 5. Pseudocode for the hash bag.
Figure 6. Parallel Hash Bag. The hash bag is a preallocated array of size \(O(n)\), split into chunks of exponentially growing sizes, starting from \(\lambda\). _tail_[\(i\)] is the last index to use for chunk \(i\). The current chunk id is \(r\). An Insert puts the element at a random position in the current chunk (with linear probing on conflicts/collisions). Each element is sampled at a certain rate. _sample_[\(i\)] is the number of samples in chunk \(i\). When _sample_[\(r\)] reaches a threshold (\(\sigma=50\) in this example), we resize by CASing \(r\) to \(r+1\).
ExtractAll and ForAll are applied to all elements in \(bag[\cdot]\) up to the current chunk \(r\) (indices from \(0\) to \(tail[r]\)). ExtractAll uses a standard parallel pack (Sang et al., 2017) to output all (non-empty) elements in an array and remove them from the bag in parallel. ForAll calls a parallel for-loop to apply the function on all elements (skip the empty slots).
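To make the data structure concrete, here is a simplified, self-contained C++ sketch of a hash bag following the description above (our own code, not the released implementation). It assumes phase-concurrent use, i.e., extract_all is not called concurrently with insert, a target load factor of about one half (so the capacity is roughly \(2n\)), and the defaults \(\lambda=2^{10}\) and \(\sigma=50\):

```
// A simplified parallel hash bag: elements go into a random slot of the current
// chunk (linear probing on collisions); a sampled counter per chunk triggers
// "resizing", i.e., advancing to the next, twice-as-large chunk with a CAS.
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <memory>
#include <random>
#include <vector>

class HashBag {
 public:
  static constexpr uint32_t kEmpty = 0xFFFFFFFFu;

  explicit HashBag(size_t n, size_t lambda = 1 << 10, size_t sigma = 50)
      : sigma_(sigma), cur_chunk_(0) {
    // Chunks of sizes lambda, 2*lambda, 4*lambda, ... until the total capacity
    // is at least 2n + lambda; the last (largest) chunk can hold all n elements.
    size_t end = 0;
    for (size_t sz = lambda; end < 2 * n + lambda; sz *= 2) {
      end += sz;
      tail_.push_back(end);
    }
    cap_ = end;
    slots_ = std::make_unique<std::atomic<uint32_t>[]>(cap_);
    for (size_t i = 0; i < cap_; i++) slots_[i].store(kEmpty);
    samples_ = std::make_unique<std::atomic<size_t>[]>(tail_.size());
    for (size_t i = 0; i < tail_.size(); i++) samples_[i].store(0);
  }

  void insert(uint32_t x) {
    size_t r = cur_chunk_.load();
    size_t lo = (r == 0) ? 0 : tail_[r - 1], hi = tail_[r], len = hi - lo;
    thread_local std::mt19937 rng(std::random_device{}());
    // Sample at rate ~ 2*sigma/len, i.e., ~sigma samples once the chunk is half full.
    if (rng() % len < 2 * sigma_ &&
        samples_[r].fetch_add(1) + 1 >= sigma_ && r + 1 < tail_.size()) {
      cur_chunk_.compare_exchange_strong(r, r + 1);  // move on to the next chunk
      insert(x);
      return;
    }
    size_t i = lo + rng() % len;
    for (size_t probes = 0; probes < len; probes++, i = (i + 1 == hi ? lo : i + 1)) {
      uint32_t expected = kEmpty;
      if (slots_[i].compare_exchange_strong(expected, x)) return;
    }
    // The chunk is unexpectedly full: force a move to the next chunk and retry.
    if (r + 1 < tail_.size()) {
      size_t expect = r;
      cur_chunk_.compare_exchange_strong(expect, r + 1);
      insert(x);
    }
    // Under the capacity assumption (at most n distinct elements), the last
    // chunk never fills up, so the fall-through above is unreachable in practice.
  }

  // Phase boundary: collect all elements up to the current chunk and reset.
  std::vector<uint32_t> extract_all() {
    std::vector<uint32_t> out;
    size_t hi = tail_[cur_chunk_.load()];
    for (size_t i = 0; i < hi; i++) {            // a parallel pack in practice
      uint32_t x = slots_[i].exchange(kEmpty);
      if (x != kEmpty) out.push_back(x);
    }
    for (size_t i = 0; i < tail_.size(); i++) samples_[i].store(0);
    cur_chunk_.store(0);
    return out;
  }

 private:
  size_t sigma_, cap_ = 0;
  std::vector<size_t> tail_;
  std::unique_ptr<std::atomic<uint32_t>[]> slots_;
  std::unique_ptr<std::atomic<size_t>[]> samples_;
  std::atomic<size_t> cur_chunk_;
};
```

Compared to the full implementation, this sketch hard-codes the resize policy, omits ForAll, and performs extract_all serially, but the chunked layout, the random placement with linear probing, and the CAS on the chunk pointer follow the description above.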
We present the framework on using a hash bag \(H\) for reachability query in Alg. 3. Given the current frontier \(\mathcal{F}_{i}\), we will visit all vertices in \(\mathcal{F}_{i}\) in parallel and perform local searches from them. We use an array of boolean flags \(visit[\cdot]\) to record whether each vertex has been visited. When a vertex \(v\in\mathcal{F}_{i}\) visits a vertex \(u\), we will use CAS to set \(visit[u]\) as \(true\) (Line 9). As mentioned, CAS guarantees that only one concurrent visit to \(u\) will succeed. Note that if \(visit[u]\) is already \(true\), this if-condition will also fail, which guarantees no duplicates in the hash bag. If the CAS succeeds, we call Insert\((u)\) to add \(u\) to the hash bag. Note that if local search is enabled, vertices visited within the local search are not added to the next frontier (see details in Sec. 3.1). We omit such cases in the pseudocode for simplicity. Finally, when we finish exploring all the vertices in \(\mathcal{F}_{i}\), we extract (emit and clean) all vertices from the hash bag to form the next frontier (Line 11).
**Theoretical Analysis of Hash Bags.** We now show the cost bounds of the hash bag.
Theorem 3.1 ().: _For a parallel hash bag of total size \(n\) and first chunk size \(\lambda=\Omega((P+\log n)\log n)\), inserting \(s\) elements using \(P\) processors costs \(O(s)\) expected work and \(O(\log s\log n)\) span whp, and listing or packing \(s\) elements uses \(O(s+\lambda)\) work and \(O(\log s)\) span, both whp, with mild assumptions (see below)._
We provide the formal proof in Appendix A of the full version of the paper. In the analysis, we assume the threads are loosely synchronized, where between two consecutive executions of Line 18, other processors can execute at most a constant number of instructions. This assumption is reasonable in practice and is used in analyzing other parallel algorithms, such as the analysis of the work-stealing scheduler (Brandt et al., 2017; Goyal et al., 2017). Note that the value \(P\) is usually a small number (up to hundreds) in practice, and can generally be considered polylogarithmic in the input size \(n\). In practice, we set \(\lambda=2^{10}\) and \(\sigma=50\). We pick \(\sigma=50\) since it is close to \(\log n\). We use \(\lambda=2^{10}\) since our analysis indicates \(\lambda\) should roughly be \(\log^{2}n\). We tested \(\lambda\) over a large range and it affected the running time minimally for \(2^{8}\leq\lambda\leq 2^{16}\), so we simply use a single value for all tests.
Our experiments show that hash bags are fast in practice due to the space efficiency and fewer memory accesses. Although we design hash bags for VGC, our experiments show that hash bag itself also improves the algorithms' performance because it avoids scanning the frontier twice. When applying it to LE-lists (see Sec. 5.2), where we can use hash bags but not VGC, we also achieve up to \(10\times\) speedup over existing implementations.
```
Input: A directed graph \(G=(V,E)\) and a set of sources \(S\subseteq V\)
1  \(\mathcal{F}_{0}\gets S\)
2  \(visit[v]\leftarrow false\) for all \(v\in V\), except \(visit[s]\gets true\) for all \(s\in S\)
3  \(i\gets 0\)
4  \(H\leftarrow HashBag()\)   \(\triangleright\) initialize \(H\) as an empty hash bag
5  while \(\mathcal{F}_{i}\neq\emptyset\) do
6      parallel_for_each \(v\in\mathcal{F}_{i}\) do
7          Visit \(v\)'s neighborhood, use local search if applicable
8          foreach \(u\) visited by \(v\) do   \(\triangleright\) processing a reachability pair \((v,u)\)
9              if \(\text{compare\_and\_swap}(\&visit[u],false,true)\) then   (*)
10                 \(H\).Insert(\(u\))
11     \(\mathcal{F}_{i+1}\gets H\).ExtractAll()   \(\triangleright\) pack elements and clean the bag
12     \(i\gets i+1\)
(*) Note: vertices visited within local searches will not be added
```
**Algorithm 3** Parallel Single-Reachability Using Hash Bags
## 4. Implementation Details
We use the techniques in Sec. 3 (VGC with hash bags) to implement reachability searches in the BGSS algorithm for SCC (Alg. 1). This section further presents some details in the implementation. Many of these ideas are also adopted in other recent parallel SCC implementations or graph libraries (Sang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). We summarize the cost of our implementation in five categories: _trimming_, _first SCC_, _multi-search_, _labeling_, and _hash table resizing_. In Sec. 6.2, we show a running time breakdown based on these five categories.
**4.1 Trimming**. The algorithm first filters out all vertices with zero in- or out-degrees, since they must be in isolated SCCs. It is used in almost all existing SCC implementations.
**4.2 Finding the first SCC**. As the first reachability search in BGSS only contains one source, we use single-reachability to find the first SCC, and use the standard _dense-backward_ (Grover and Leskovec, 2010; Wang et al., 2019) optimization. This optimization is designed for single-BFS when the frontier is large. Instead of checking all the _out-edges_ from \(\mathcal{F}_{i}\), the dense mode checks each unvisited vertex \(u\) and its in-neighbors. If any of \(u\)'s in-neighbors is in the previous frontier \(\mathcal{F}_{i}\), \(u\) must be reachable from the source, and we can skip the rest of the edges incident to \(u\) to save work. We refer to this optimization as _dense mode_, and the aforementioned approach as _sparse mode_. We note that dense mode does not work in multi-reachability searches--even if we find a neighbor of \(u\) in \(\mathcal{F}_{i}\), we cannot skip the rest of the neighbors since they may contribute different sources to \(u\). Therefore, we only use dense mode in single-reachability searches.
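A minimal sketch of such a dense round for single-source reachability (our own illustration; the array names are hypothetical) looks as follows:

```
// Sketch of one dense ("backward") round: every still-unvisited vertex scans its
// in-neighbors and stops at the first one found in the previous frontier.
// Plain byte arrays are used instead of std::vector<bool> so that concurrent
// writes to different entries are safe.
#include <cstdint>
#include <vector>

struct InCSR {
  std::vector<size_t> offset;    // in-edges of v are source[offset[v]..offset[v+1])
  std::vector<uint32_t> source;
};

void dense_round(const InCSR& in_edges, const std::vector<uint8_t>& in_frontier,
                 std::vector<uint8_t>& visited, std::vector<uint8_t>& next_frontier) {
  const size_t n = visited.size();
  #pragma omp parallel for
  for (size_t v = 0; v < n; v++) {
    if (visited[v]) continue;
    for (size_t e = in_edges.offset[v]; e < in_edges.offset[v + 1]; e++) {
      if (in_frontier[in_edges.source[e]]) {
        visited[v] = 1;          // v is reachable from the source
        next_frontier[v] = 1;
        break;                   // skip the remaining in-edges to save work
      }
    }
  }
}
```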
**4.3 Multi-reachability search.** Next, we start \((\log n)-1\) batches of multi-reachability searches in both forward and backward directions, where round \(i\) uses \(2^{i}\) sources (Lines 6-7). During the multi-reachability search, we need a hash table to identify the duplicate reachability pairs. We use the phase-concurrent hash table (Sang et al., 2017). To avoid high overhead in hash table resizing, we use a heuristic to estimate the hash table size, which is discussed below in Sec. 4.5.
**4.4 Labeling.** After finding all reachability pairs, we mark all vertices strongly connected with any source as _finished_, and label them using the largest vertex id in this SCC (Line 11). For the other vertices, we need to compute their "signatures" of reachability to determine cross edges. We do this also by setting a label for them (Line 12), which is a hash value of the set of vertices reachable from and to \(v\) (combining \(R_{1}\), \(R_{2}\) with its current label). In this way, two vertices with different labels are in different SCCs. We also set the hash value as the largest vertex id among all vertices reachable from or to \(v\). To avoid the cost of explicitly removing the cross edges, we simply skip cross edges in later reachability searches if the endpoints have different labels.
**Input:** A graph \(G=(V,E)\) with \(V=\{v_{1},\ldots,v_{n}\}\)
**Output:** The connectivity labels \(L(\cdot)\) of \(V\)
```
 1  \(L\leftarrow\textsc{LDD}(G)\)
 2  parallel_for_each \((v,u)\in E\) do
 3      if \(\textsc{Find}(L(v))\neq\textsc{Find}(L(u))\) then Union(\(L(v),L(u)\))
 4  return \(L(\cdot)\)
 5  Function \(\textsc{LDD}(G=(V,E))\)
 6      Set \(\textit{visit}[v]\leftarrow\textit{false}\) and \(L(v)\gets v\) for all \(v\in V\)
 7      \(B\leftarrow\) Permute \(V\) and group vertices into \(O(\log n)\) batches of exponentially increasing sizes
 8      \(F\gets B_{1}\)
 9      Set \(\textit{visit}[v]\leftarrow\textit{true}\) for all \(v\in B_{1}\)
10      for \(i\gets 2,\ldots,|B|\) do
11          \(F^{\prime}\leftarrow\emptyset\)
12          parallel_for_each \(v\in F\) do
13              parallel_for_each \(u:(u,v)\in E\), \(\textit{visit}[u]=\textit{false}\) do
14                  \(\textit{visit}[u]\leftarrow\textit{true}\)
15                  \(L(u)\gets L(v)\)
16                  Add \(u\) to \(F^{\prime}\)
17          \(F\gets F^{\prime}\cup\{v\mid v\in B_{i}\text{ and }\textit{visit}[v]=\textit{false}\}\)
18          Set \(\textit{visit}[v]\leftarrow\textit{true}\) for all \(v\in B_{i}\)
19      return \(L(\cdot)\)
```
**Algorithm 4** LDD-UF-JTB Algorithm for Connectivity [96]
### Heuristic for hash table resizing
The phase-concurrent hash table [95] requires knowing an upper bound of the size before concurrent insertions. With VGC, we do not know a tight upper bound on the number of reachability pairs \((v,s)\), since \(v\) can be several hops away from \(s\), and the number of possible pairs can be large. Instead, we compute the number of pairs \(a\) in the previous frontier and the number of unfinished vertices \(b\), and use \(\max(0.3b,1.5a)\), rounded up to the next power of \(2\), as the next hash table size. We resize the hash table once an insertion probes too many times. This heuristic is inspired by some recent analyses of the BGSS algorithm in [15]. As shown in Fig. 9, on many graphs, resizing hash tables can be costly, and our heuristic effectively reduces this cost.
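The sizing rule itself is a one-liner; below is a C++ sketch of how it can be computed. The function name is illustrative, and the resize trigger (an insertion probing too many times) is handled separately inside the hash table.

```
#include <algorithm>
#include <cstdint>

// Estimate the next hash table size from the previous round.
// prev_pairs (a): number of reachability pairs found in the previous frontier
// unfinished (b): number of vertices not yet assigned to an SCC
// Returns max(0.3*b, 1.5*a), rounded up to the next power of two.
uint64_t next_table_size(uint64_t prev_pairs, uint64_t unfinished) {
  double estimate = std::max(0.3 * unfinished, 1.5 * prev_pairs);
  uint64_t size = 1;
  while (size < estimate) size <<= 1;   // round up to a power of two
  return size;
}
```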
## 5. Other Relevant Algorithms
The two techniques introduced in Sec. 3 (VGC and the parallel hash bag) are general. In this section, we use them to accelerate two other graph algorithms. In particular, in Sec. 5.1 we show how to apply these techniques in a parallel graph connectivity algorithm, and in Sec. 5.2 we show that using the parallel hash bag in the algorithm for least-element lists can lead to significantly faster performance.
### Connected Components (CC)
Computing the connected components is one of the most widely-studied graph problems. A recent framework ConnectIt [36] implemented over 232 shared-memory parallel algorithms, based on numerous previous studies both theoretically [7; 16; 23; 53; 93; 94; 96; 99] and practically [13; 16; 35; 94; 96].
Since connectivity is not the main focus of this paper, we picked one of the algorithms from ConnectIt, referred to as "LDD-UF-JTB", as a proof-of-concept to show the effectiveness and generality of the new techniques in this paper. We note that none of the algorithms in ConnectIt has overwhelming advantages on all graphs. LDD-UF-JTB is one of the fastest algorithms, and our new version accelerates it by up to 3.2x compared to the original version in ConnectIt.
LDD-UF-JTB has two major components: the first step uses _low-diameter decomposition_ (LDD) [75], and the finishing step uses the _union-find structure_ by Jayanti et al. [56]. We apply our new techniques to the LDD step. An LDD of a graph is a decomposition (partition of the vertices) in which each component has a low diameter and the number of edges crossing components is small. The LDD step first randomly permutes all vertices, and then starts with a single source and searches out using BFS. In later rounds, new sources are added to the frontier (Line 17) in exponentially increasing batches (Line 7) along with the execution of BFS (Line 12 to Line 16). In our implementation, we increase the batch sizes by \(1.2\times\) in each round.
Our implementation replaces the BFS in ConnectIt with the more efficient reachability algorithm with VGC optimization and the parallel hash bag. Similar to SCC, we do not need the BFS ordering in computing connectivity, so replacing BFS with (undirected) reachability searches is still correct. In this case, our algorithm can explore more vertices in one round, which leads to fewer rounds and better parallelism. LDD has only \(O(\log n)\) rounds (Line 10), so it is already reasonably fast. By using local search and parallel hash bag, we further improve its performance by \(1.67\times\) (geometric mean on all graphs). We present the experimental details in Sec. 6.4.
```
Input: A graph \(G=(V,E)\) with \(V=\{v_{1},\ldots,v_{n}\}\)
Output: The LE-lists \(L(\cdot)\) of \(G\)
1  Set \(\delta(v)\leftarrow\infty\) and \(L(v)\leftarrow\emptyset\) for all \(v\in V\)
2  Partition \(V\) into \(\log n\) batches \(P_{1..\log n}\), where \(|P_{i}|=2^{i-1}\)
3  for \(i\gets 1,\ldots,\log n\) do
4      Apply multi-BFS from vertices in \(P_{i}\), and let \(S=\{\langle u,v,d(v,u)\rangle\mid v\in P_{i},\ d(v,u)<\delta(u)\}\)
5      parallel_for_each \(\langle u,v,d\rangle\in S\) do \(\delta(u)\leftarrow\min\{\delta(u),d\}\)
6      \(L^{\prime}(u)\leftarrow\{\langle u,v,d\rangle\in S\}\)
7      Sort \(L^{\prime}(u)\) based on the distances in decreasing order, filter out triples that violate constraints, and append \(v\) (the second element) to \(L(u)\)
   return \(L(\cdot)\)
```
**Algorithm 5** BGSS Algorithm for LE-Lists [18]
### Algorithm on Least-Element Lists (LE-Lists)
Given an undirected graph \(G=(V,E)\) with \(V=\{v_{1},\ldots,v_{n}\}\) in a given random total order, a vertex \(u\) is in vertex \(v\)'s _least-element list (LE-list)_ if and only if there is no vertex earlier than \(u\) in \(V\) that is closer to \(v\)[27]. More formally, for \(d(u,v)\) being the shortest distance between \(u\) and \(v\), the LE-list of \(v_{i}\) is:
\[L(v_{i})=\left\{v_{j}\in V\mid d(v_{i},v_{j})<\min_{1\leq k<j}d(v_{i},v_{k})\right\}\]
sorted by \(d(v_{i},v_{j})\).
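To illustrate the definition only (not the parallel algorithm used in our implementation), below is a small sequential C++ sketch that computes the LE-list of one vertex from precomputed BFS distances, assuming vertices are already indexed in the given random priority order; the distance matrix and function name are illustrative.

```
#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

// dist[i][j] = shortest (BFS) distance between v_i and v_j.
// Returns the LE-list of v as (vertex, distance) pairs sorted by distance.
std::vector<std::pair<int,int>> le_list(int v,
                                        const std::vector<std::vector<int>>& dist) {
  std::vector<std::pair<int,int>> result;
  int best = std::numeric_limits<int>::max();   // closest distance among earlier vertices
  for (int j = 0; j < (int)dist.size(); j++) {  // scan vertices in priority order
    if (dist[v][j] < best) {                    // strictly closer than all earlier vertices
      result.push_back({j, dist[v][j]});
      best = dist[v][j];
    }
  }
  // Entries are generated in priority order with strictly decreasing distances;
  // reverse them to sort by distance.
  std::reverse(result.begin(), result.end());
  return result;
}
```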
LE-lists have applications in estimating the influence of vertices in a network [25; 39; 29], estimating reachability set size [60; 84], and probabilistic tree embeddings of a graph [19; 59], which further have numerous applications. In this paper, we focus on the unweighted LE-lists algorithm, so the distances can be computed by BFS.
The state-of-the-art parallel algorithm to compute LE-list is the BGSS algorithm (in the same paper as the BGSS SCC algorithm [18]). A pseudocode is given in Alg. 5. It first permutes the vertices \(V\)
and then divides \(V\) into \(\log_{2}n\) batches of size 1, 2, 4, 8,..., and processes one batch at a time. A tentative distance \(\delta(\cdot)\) is maintained for each vertex, initialized as \(+\infty\). In each batch, it runs multiple BFSs from all vertices in the batch simultaneously, based on \(\delta(\cdot)\) from the previous batch. For a vertex \(v\in V\), if its search reaches \(u\) using a distance smaller than \(\delta(u)\), the algorithm adds \(\langle u,v,d(v,u)\rangle\) to a set \(S\). Finally, we use \(S\) to update the tentative distance \(\delta(\cdot)\) in this round (Line 5), and the LE-list of each vertex (Lines 6 and 7). Blelloch et al. showed that running multi-BFS in \(\log_{2}n\) batches enables parallelism, while the work is asymptotically the same. Each LE-list has size \(O(\log n)\)_whp_, and the entire algorithm runs in \(O(m\log n)\) time. A preliminary implementation is given in ParlayLib (Blelloch et al., 2017), using the multi-BFS discussed in Sec. 2.
We note that we can use the parallel hash bag introduced in Sec. 3.3 to maintain the frontier in the multi-BFS, which avoids the second visit in multi-BFS. VGC is not directly applicable here because we need to preserve the BFS order. In addition, we use a parallel hash table (Kumar et al., 2019) to check if a source-target pair has already been visited. In round \(i\), if a source vertex \(v\) in the current batch reaches \(u\) by a distance shorter than \(\delta[u]\), and if \((v,u)\) is not in the hash table, we insert \((v,u)\) into the hash table and the hash bag, and pack the hash bag as the next BFS frontier. We insert all such triples \((u,v,i)\) into the set \(S\) (Line 4), where \(v\) and \(u\) are as described above, and \(i\) is the current round (which is also the distance between \(u\) and \(v\)). Our parallel LE-lists implementation outperforms the existing implementation from ParlayLib (from the authors of the BGSS LE-lists algorithm) by \(4.34\times\) on average.
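Below is a simplified, sequential C++ sketch of how one visit is processed under this scheme. The standard containers stand in for the phase-concurrent hash table and the parallel hash bag, the names are illustrative, and the real implementation performs these steps concurrently with atomics; \(\delta\) itself is only updated at the end of the batch (Alg. 5, Line 5).

```
#include <cstdint>
#include <unordered_set>
#include <utility>
#include <vector>

struct Triple { uint32_t u, v, dist; };

// Source v reaches u in BFS round `round` of the current batch
// (round equals d(v,u) since the graph is unweighted).
void process_visit(uint32_t v, uint32_t u, uint32_t round,
                   const std::vector<uint32_t>& delta,              // tentative distances
                   std::unordered_set<uint64_t>& seen,              // dedup of (v,u) pairs
                   std::vector<std::pair<uint32_t,uint32_t>>& bag,  // next frontier of pairs
                   std::vector<Triple>& S) {
  if (round >= delta[u]) return;                // not shorter than the tentative distance
  uint64_t key = (uint64_t)v << 32 | u;         // encode the (source, target) pair
  if (seen.insert(key).second) {                // first time this pair is reached
    bag.push_back({v, u});                      // v's search continues from u next round
    S.push_back({u, v, round});                 // collected for the end-of-batch update
  }
}
```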
## 6. Experiments
**Setup.** We run our experiments on a 96-core (192 hyperthreads) machine with four Intel Xeon Gold 6252 CPUs and 1.5 TB of main memory. We implemented all algorithms in C++ using ParlayLib (Blelloch et al., 2017) for fork-join parallelism and parallel primitives (e.g., sorting). We use numactl -i all for parallel tests to interleave the memory pages across CPUs in a round-robin fashion. All reported numbers are the average running time of the last five out of six runs.
We use \(\tau=2^{9}\) in all tests except for those in Fig. 11, which studies the choice of \(\tau\). We tested 18 directed graphs, including social networks, web graphs, \(k\)-NN graphs, and lattice graphs. All social, web, and \(k\)-NN graphs are real-world graphs, with up to 3.6 billion vertices and up to 128 billion edges. The lattice graphs are generated by a model similar to that in (Kumar et al., 2019), which uses SCC to study percolation on isotropically directed lattices. Basic information on the graphs is given in Tab. 2. For social graphs, we use _LiveJournal_ (LJ) (Kumar et al., 2019) and _Twitter_ (TW) (Kumar et al., 2019). For web graphs (Kumar et al., 2019), we use _sd-arc_ (SD), _ClueWeb_ (CW), _Hyperlink12_ (HL12) and _Hyperlink14_ (HL14). \(k\)-NN graphs are widely used in machine learning algorithms (Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019). In \(k\)-NN graphs, each vertex is a multi-dimensional data point and has \(k\) edges pointing to its \(k\)-nearest neighbors (excluding itself). We use _Household_ with \(k=5\) (HH5) (Huffmann et al., 2019; Kumar et al., 2019), _Chemical_ with \(k=5\) (CH5) (Huffmann et al., 2019; Kumar et al., 2019), _GeoLife_ with \(k=2,5,10,15,20\) (GL2, GL5, GL10, GL15, GL20) (Kumar et al., 2019; Kumar et al., 2019), and _Cosmo50_ with \(k=5\) (COS5) (Kumar et al., 2019; Kumar et al., 2019). We also created four lattice graphs (Kumar et al., 2019), including two \(10^{4}\times 10^{4}\) 2D-lattices (SQR and SQR'), and two \(10^{3}\times 10^{4}\) 2D-lattices (REC and REC'). Each row and column in the lattice graphs is circular. In SQR and REC, for each vertex \(u\) and each of its adjacent vertices \(v\), we add a directed edge from \(u\) to \(v\) with probability 0.5, and from \(v\) to \(u\) otherwise, then remove duplicate edges. In SQR' and REC', for each vertex \(u\) and each of its adjacent vertices \(v\), we create an edge from \(u\) to \(v\) with probability 0.3, from \(v\) to \(u\) with probability 0.3, and no edge with probability 0.4, then remove duplicate edges.
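As a minimal sketch of the edge-direction rule used for the lattice graphs (duplicate-edge removal and the circular wiring of rows and columns are omitted), assuming a standard random number generator:

```
#include <random>

// For two adjacent lattice vertices u and v, decide the edge direction.
// Returns +1 for u->v, -1 for v->u, 0 for no edge.
int draw_edge(std::mt19937& rng, bool primed_model) {
  std::uniform_real_distribution<double> coin(0.0, 1.0);
  double r = coin(rng);
  if (!primed_model)                 // SQR / REC: u->v w.p. 0.5, otherwise v->u
    return (r < 0.5) ? +1 : -1;
  if (r < 0.3) return +1;            // SQR' / REC': u->v w.p. 0.3,
  if (r < 0.6) return -1;            //              v->u w.p. 0.3,
  return 0;                          //              no edge w.p. 0.4
}
```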
To test our connectivity and LE-lists algorithms, we symmetrize all 18 directed graphs and use 4 more real-world undirected graphs, com-orkut (OK) (Kumar et al., 2019), Friendster (FT) (Kumar et al., 2019), RoadUSA (USA) (Blelloch et al., 2017), and Germany (GE) (Blelloch et al., 2017). Graph details are given in Tab. 3.
We call the social and web graphs _low-diameter graphs_ as they usually have low diameters (roughly polylogarithmic in \(n\)). We call the \(k\)-NN and lattice graphs _large-diameter graphs_ as their diameters are large (roughly \(\Theta(\sqrt{n})\)). When comparing the _average_ running times across multiple graphs, we always use the _geometric mean_.
**Baseline Algorithms.** We call all existing algorithms that we compare to the _baselines_. We compare the number of SCCs and the largest SCC size reported by each algorithm with SEQ to verify correctness. For SCC, we compare to GBBS (Kumar et al., 2019; Kumar et al., 2019), iSpan (Kumar et al., 2019), and Multi-step (Kumar et al., 2019). GBBS also implements the BGSS algorithm, so we also compare our breakdown and sequential running times with GBBS. We also implemented and compared to Tarjan's sequential SCC algorithm (Tarjan, 2019), which we call SEQ. On six graphs, iSpan's results are off by 1, noted with "?" in Tab. 2. (We communicated with the authors but could not correct it.) Multi-step and iSpan do not support CW, HL12, and HL14 because they have more than \(2^{32}\) edges. For connectivity, we only apply our techniques to the LDD-UF-JTB algorithm in ConnectIt (Kumar et al., 2019; Kumar et al., 2019) and compare it to the original implementation in ConnectIt. For LE-lists, we compare to ParlayLib, which is the only public parallel LE-lists implementation to the best of our knowledge.
We first summarize the overall performance of the algorithms and scalability tests in Sec. 6.1. Next, we show some experimental studies on performance breakdown in Sec. 6.2, and an in-depth study of VGC in Sec. 6.3. Finally, we provide a brief summary of our experimental results for connectivity and LE-lists in Sec. 6.4.
### Overall Performance
We show the running times in Tab. 2 and a heatmap in Fig. 1. We mark the parallel running times _slower than the sequential algorithm_ in red in Tab. 2. Our implementation is almost always the fastest, except on SD and CH5. On CH5, we are 23% slower than SEQ. CH5 has a very large diameter (4000+) compared to its small size (4M vertices), and none of the parallel implementations outperform SEQ. On SD, we are only 4% slower than Multi-step with \(\tau=2^{9}\). SD is one of the graphs that are dense and potentially have good parallelism, and thus may prefer a smaller \(\tau\). As we will show in Fig. 11, using \(\tau\leq 2^{8}\) achieves better performance than all existing implementations on SD, but we keep all results in Tab. 2 at \(\tau=2^{9}=512\) for simplicity. The highlighted columns in Tab. 2 show the speedup of our algorithms over the _best_ baseline (including SEQ) on each graph. Compared to the best baseline, we are up to \(10.2\times\) faster and \(3.1\times\) faster on average.
All the implementations perform favorably on all low-diameter graphs (\(5\)-\(317\times\) faster than SEQ). Conceptually, all parallel implementations first use BFS-like algorithms to find the largest SCC. On all but one low-diameter graph, the largest SCC contains more than 50% of the vertices. Therefore, using a parallel BFS (with optimizations
such as dense modes) gives decent performance. Even so, using hash bags and VGC still gives good performance on low-diameter graphs, and we are faster (up to 3.8x) than the best baseline on all but one graph. One interesting finding is that on TW, our implementation, GBBS, and Multi-step are faster than SEQ even when running sequentially. Similar trends (running the parallel algorithm sequentially is faster than the classic sequential algorithm) have been observed in other BFS-like graph algorithms [94]. This is mostly due to the dense-mode optimization described in Sec. 4.2. When the frontier size is large, triggering the dense mode can skip many edges, so the number of visited edges can be fewer than \(\Theta(m)\) as in the standard sequential solution. Another reason is that our implementation (and GBBS's) using BFS is more I/O-friendly than Tarjan's DFS-based algorithm.
Our algorithm has dominating advantages on large-diameter graphs. On \(k\)-NN and lattice graphs, existing parallel implementations are slower than the sequential algorithms in 24 out of 36 tests. If we take the average time of the baseline parallel algorithms for \(k\)-NN and lattice graphs, all of them are slower than SEQ (see the "MEAN" columns in Fig. 1). In comparison, our implementation is 5.3\(\times\) better than SEQ on \(k\)-NN graphs and 9.1\(\times\) better on lattice graphs. We believe the high performance is from good parallelism. We study the scalability of our algorithm in the next paragraph.
**Scalability Tests.** We show the speedup of four algorithms (ours, GBBS, iSpan, Multi-step) over Tarjan's sequential algorithm on six representative graphs in Fig. 7. We vary the number of processors from 1 to 96h (192 hyperthreads). The red horizontal dotted lines represent the running time of Tarjan's algorithm (SEQ); being above the line means faster than Tarjan's algorithm.
On low-diameter graphs (TW, SD, and CW), all algorithms show reasonably good speedup. On large-diameter graphs (SQR', GL5, and COS5), our algorithm achieves significantly better scalability than the baselines. Our algorithm is the only one that achieves almost linear speedup on all the six graphs. For all the other algorithms, their performance stops increasing (dropped or flattened) with more than 24 threads on one or more graphs. Multi-step shows good performance on SD, and has better performance than our algorithm
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c|c} \hline \multicolumn{1}{c}{} & \multicolumn{6}{c|}{**Graph Information**} & \multicolumn{3}{c|}{**Ours**} & \multicolumn{3}{c|}{**GBBS**} & \multicolumn{3}{c|}{**Other Benchmarks**} & \multicolumn{1}{c|}{**Tbest**} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{\(n\)} & \multicolumn{1}{c}{\(m\)} & \multicolumn{1}{c}{\(D\)} & \(|SCC_{1}|\) & \(|SCC_{1}|\) & \(\%\) & \(\sigma\)SCC & \(\sigma\)**par.** & **seq.** & **spd.** & **par.** & **seq.** & **spd.** & **iSpan** & **MS** & **SEQ** & **/ ours** \\ \hline \multirow{6}{*}{**SQR**} & **IJ** & 4.85M & 69.0M & 16 & 3,828,682 & 78.985 & 971,232 & 0.038 & 1.06 & 27.7 & 0.118 & 1.44 & 12.1 & 0.050\({}^{\circ}\) & 0.141 & 2.90 & 1.30 \\ & TW & 41.7M & 1.47B & 65 & 33,479,734 & 80.385 & 8,044,729 & 0.226 & 14.3 & 63.2 & 0.387 & 19.7 & 50.9 & c & 1.32 & 71.7 & 1.71 \\ \cline{2-15} & SD & 89.2M & 2.04B & 241 & 47,965,727 & 53.745 & 39,205,039 & 1.96 & 104 & 46.6 & 5.25 & 110 & 21.0 & 4.78\({}^{\circ}\) & 1.86 & 104 & 0.95 \\ & CW & 978M & 42.6B & 666 & 774,373,029 & 79.155 & 135,223,661 & 17.6 & 1189 & 67.4 & 40.4 & 1,166 & 28.9 & n & n & 589 & 2.29 \\ & HL14 & 1.72B & 64.4B & 793 & 320,754,363 & 18.60\% & 1,290,550,195 & 20.6 & 1,622 & 78.8 & 67.3 & 2,041 & 30.3 & n & n & 620 & 3.27 \\ & HL12 & 3.56B & 128B & 5,275 & 1,827,543,757 & 51.28\% & 1,279,696,892 & 95.5 & 852 & 89.3 & 361 & 7,022 & 19.5 & n & n & 1822 & 3.78 \\ \cline{2-15} & **HH5** & 2.05M & 10.2M & 980 & 257,914 & 12.59\% & 94,010 & 0.208 & 3.10 & 14.9 & 3.95 & 3.51 & 0.89 & 0.791 & 2.21 & 0.449 & 2.16 \\ & CH5 & 4.21M & 21.0M & 4.550 & 497,331 & 11.82\% & 248,227 & 0.557 & 5.83 & 10.5 & 8.39 & 5.84 & 0.70 & 2.15 & 17.6 & 0.427 & 0.77 \\ & GL2 & 24.9M & 49.8M & 4,142 & 5,368 & 0.02\% & 9,705,931 & 0.598 & 39.1 & 65.3 & 3.00 & 82.4 & 27.5 & t & 8.36 & 3.39 & 5.01 \\ & GL5 & 24.9M & 124M & 12,059 & 860,403 & 3.46\% & 3,198,626 & 0.86 & 45.8 & 53.0 & 10.5 & 91.0 & 8.69 & t & 1.91 & 4.83 & 5.58 \\ & GL10 & 24.9M & 249M & 4,531 & 3,042,330 & 12.23\% & 326,811 & 1.49 & 61.6 & 41.4 & 12.3 & 76.5 & 6.24 & 35.2 & 7.14 & 9.30 & 4.79 \\ & GL15 & 24.9M & 373M & 5,491 & 3,239,156 & 13.02\% & 187,646 & 2.09 & 75.5 & 36.1 & 13.7 & 84.5 & 6.15 & 29.4 & 10.6 & 11.3 & 5.06 \\ & GL20 & 24.9M & 498M & 5,275 & 3,336,963 & 13.41\% & 128,021 & 2.38 & 86.0 & 36.1 & 14.5 & 96.6 & 6.68 & 27.3 & 12.3 & 13.3 & 5.18 \\ & COS5 & 321M & 1.61B & 1,148 & 301,413,787 & 93.88\% & 2,273,690 & 3.22 & 284 & 88.2 & 12.0 & 367 & 30.7 & t & 57.4 & 189 & 3.72 \\ \cline{2-15} & **SQR** & 100M & 300M & 10,002 & 99,101,606 & 99.10\% & 829,495 & 0.577 & 24.7 & 42.8 & 11.1 & 28.5 & 2.57 & 4.45\({}^{\circ}\) & 12.6 & 15.5 & 7.72 \\ & REC & 10M & 30M & 5,946 & 9,890,647 & 98.91\% & 101,059 & 0.117 & 2.08 & 17.8 & 3.82 & 2.14 & 0.56 & 1.19\({}^{\circ}\) & 5.24 & 1.57 & 10.2 \\ & SQR\({}^{\circ}\) & 100M & 120M & 51 & 58 & 0.00\% & 78,052,793 & 1.38 & 105 & 76.3 & 4.76 & 243 & 51.0 & 26.4\({}^{\circ}\) & 3.19 & 6.90 & 2.31 \\ & REC & 10M & 12M & 80 & 42 & 0.00\% & 7,819,050 & 0.159 & 9.38 & 59.0 & 1.00 & 18.7 & 18.8 & 0.851\({}^{\circ}\) & 0.645 & 0.60 & 3.75 \\ \hline \end{tabular}
\end{table}
Table 2. The running times (in seconds) of all tested algorithms on SCC. \(n\) = number of vertices. \(m\) = number of edges. \(D\) = estimated diameter (a lower bound of the actual value). \(|SCC_{1}|\) = largest strongly connected component (SCC) size. \(|SCC_{1}|\%=|SCC_{1}|/n\) = ratio of the largest SCC. \(\sigma\)SCC = number of SCCs. “iSpan” = iSpan algorithm [57]. “MS” = Multi-step algorithm.
especially on a small number of threads. However, Multi-step does not scale well to more processors on most of the graphs.
We also show the self-speedup of our algorithms on six graphs in Fig. 8. We vary the number of processors from 1 to 96h (192 hyperthreads). Due to space limitations, we do not show the curves for all graphs, but the self-speedup on all graphs on 96h (192 hyperthreads) is given in Tab. 2. Our self-speedup is more than 35 except for some very small graphs. This indicates that high parallelism is a crucial factor contributing to the high performance of our code. Compared to GBBS, the fastest previous parallel SCC implementation, our self-speedup is 1.2-32x better. With limited parallelism, GBBS can be slower than SEQ on 8 out of 14 large-diameter graphs--the BGSS SCC algorithm has \(O(m\log n)\) work compared to the \(O(m)\) of Tarjan's sequential algorithm, so with poor self-speedup, the parallelism cannot make up for the factor of \(O(\log n)\) loss in total work.
We believe that our good performance comes from using hash bags (saving work on processing sparse frontiers) and VGC (reducing the number of rounds in reachability searches and improving parallelism). We will discuss more details by comparing the performance breakdown with GBBS in Sec. 6.2, and by studying the benefit brought by VGC in Sec. 6.3.
### Performance Breakdown
To better understand the performance of our algorithm, we compare the performance breakdown with GBBS in Fig. 9, since GBBS is also based on the BGSS algorithm and uses a similar framework. We compare the running time in five parts (see Sec. 4): 1) _Trimming_: trimming vertices with no in- or out-degrees; 2) _First SCC_: finding the first SCC using two single-reachability searches; 3) _Multi-search_: all multi-reachability searches; 4) _Hash Table Resizing_: resizing the hash table storing reachability pairs; 5) _Labeling and Others_: assigning labels to vertices and other costs. We show the breakdown figure for all graphs in Fig. 9. We tested three versions of our algorithm: the _plain_ version uses parallel hash bags without VGC, the "_VGC1_" version enables VGC in single-reachability to find the first SCC, and the "_final_" version fully enables VGC in both single- and multi-reachability searches. We note that some graphs require more time on _First-SCC_ while others spend more time on _Multi-search_ because of different graph patterns, which is indicated by the value of \(|SCC_{1}|\%\) as shown in Tab. 2.
One straightforward improvement of our algorithm is from our better heuristic to estimate the hash table size (see details in Sec. 4), which avoids frequent size predicting and hash table resizing. This can be seen by comparing the time of "hash table resizing" (green bars) for GBBS and our versions. This optimization saves much time on almost all graphs. In the following, we use the breakdown results to illustrate the performance improvement from our two main techniques: the hash bag and VGC.
**Evaluating hash bags.** Parallel hash bags improve the performance by maintaining the frontiers without the edge-revisiting scheme. Note that both our algorithm and GBBS use the BGSS algorithm and perform the same computation in each round, but GBBS uses edge-revisiting while our algorithm avoids it by using the hash bag. Therefore, we compare our _plain_ version (i.e., disabling VGC) with GBBS to evaluate the improvement from hash bags, because the major difference between them is the use of hash bags. We also exclude the hash table resizing time (the green bars) for a fair comparison. On all but one graph, using hash bags greatly improves the performance in single- and/or multi-reachability searches. Comparing the total running time of reachability searches (red and blue bars), our algorithm is up to 4x faster than GBBS (2x faster on average), and the major improvement is from hash bags.
**Evaluating VGC.** On top of our _plain_ version, VGC improves the performance on almost all graphs. Note that for low-diameter graphs, since the number of needed rounds is small, there is sufficient parallelism to explore. Therefore, using VGC does not improve the performance much. As mentioned, on SD, the performance drops slightly using local search with \(\tau=2^{9}\), but using smaller values of \(\tau\) can still improve the performance (see Sec. 6.3). To keep the parameter setting simple, we still report the numbers with \(\tau=2^{9}\) in Tab. 2 and Fig. 9. The large-diameter graphs whose largest SCC covers more than 50% of the graph (e.g., COS5, REC, and SQR) greatly benefit from _VGC1_ (using VGC in the single-reachability search to find the first SCC). Comparing the "first-SCC" time of _plain_ and _VGC1_, VGC makes the single-reachability search 2.2-17x faster on COS5, SQR, and REC. All the other large-diameter graphs get significant improvement from _VGC1_ to _final_ (using VGC also in multi-reachability searches). For all large-diameter graphs, the "multi-search" time in _final_ is smaller than that in _VGC1_ (1.43-14.7x improvement). As we will show in Sec. 6.3, this is because VGC reduces the number of rounds in reachability searches by 3-200x.
In summary, comparing our _plain_ version with GBBS, we can see that hash bag and our heuristic on hash table resizing improves the performance over GBBS by about 1.5-4.3x. Comparing _plain_ with _VGC1_ and _final_, we can see that VGC improves the performance in both single- and multi-reachability queries by up to 14.7x.
### In-depth Performance Study of VGC
**Reduced Number of Rounds.** We study the improvement of VGC by reporting the number of rounds in the reachability searches with or without VGC (see Fig. 10). In a given graph, for all single- and multi-reachability searches in the SCC algorithm, we record the number of rounds \(y\) needed in plain BFS and the number of rounds \(x\) with VGC enabled. We then plot all such points \((x,y)\) on a 2D plane to illustrate the effectiveness of local search, shown in Fig. 10. We also report the average ratio of \(y/x\) on the top of each figure. The conceptual "slope" indicated by the points illustrates the factor in the reduction of the number of rounds by using local search. For most of the graphs, especially the \(k\)-NN graphs, thousands of rounds were needed in each multi-reachability search using BFS. With VGC, the number of rounds is mostly within 100. Even for the cases where BFS only needs a few (10-100) rounds, VGC still reduces the number of rounds to be within 10 rounds (e.g., LJ, TW, COS5, SQR', REC'). On all graphs, the number of rounds is reduced by 3-200x. As a result, the scheduling and synchronization overhead is greatly reduced.
**Choice of Parameter \(\tau\).** To understand the impact of \(\tau\) values on performance, we record the speedup over our _plain_ version (i.e., no VGC) with different values of \(\tau\) from 1 to \(2^{17}\), and under different numbers of processors from 96h (192 hyperthreads) down to 4. Due to space limitations, we show six graphs in Fig. 11 (at least one of each graph type). All the other graphs show similar trends to one of the six examples. We start from the curves for 192 hyperthreads
on different graphs. On all graphs except LJ, TW, and SD, we observe improvement as long as VGC is used (compared to plain BFS where \(\tau=1\)) for any \(1<\tau\leq 2^{16}\). Overall, the performance is not sensitive (and is always better than \(\tau=1\)) in a large parameter space \(2^{6}\leq\tau\leq 2^{12}\) on almost all graphs. Based on the results, we set \(\tau=2^{9}\) as it gives the best overall performance across all graphs. Using \(\tau=2^{9}\), SD is the only graph that has worse performance than \(\tau=1\). We note that SD still benefits from VGC with \(\tau\leq 2^{8}\). Note that using a larger \(\tau\) suppresses parallelism, and for dense graphs with sufficient parallelism, a smaller \(\tau\) may perform better. Although we choose the best parameter based on experiments on 96h, we also test how different numbers of processors \(P\) affect the choice of \(\tau\). Interestingly, the trends are usually similar regardless of the number of threads used. With a smaller value of \(P\), the performance is less sensitive to the \(\tau\) value. This is because \(\tau\) trades off between scheduling overhead and load balancing, and both affect the performance more when \(P\) is large.
We believe that an interesting future work is to set \(\tau\) dynamically to achieve the best benefit from VGC, possibly based on the sparseness of the graph and the potential parallelism, e.g., the edge-vertex ratio \(m/n\), the number of processors \(P\), or the frontier size.
### Experiments on Connectivity and LE-Lists
**Experiments on Connectivity.** We implement the LDD-UF-JTB algorithm for graph connectivity in ConnectIt using our parallel
Figure 11. Relative running time to \(\tau=1\) on six graphs with \(\tau\) ranging from \(2^{0}\) to \(2^{17}\), 4 to 192 hyperthreads (96h). LJ has similar trends to TW. HL12 and HL14 show similar trends to CW. All \(k\)-NN and lattice graphs show similar trends to GL5, COS5, and SQR’.
Figure 10. Number of rounds with and without local search for each batch. All settings are the same as Tab. 2. Each data point (\(x\), \(y\)) means that in one reachability search, \(x\) rounds are needed using local search, and \(y\) rounds are needed without local search. The number “avg” for each graph is the average of \(k=y/x\) over all data points, which means that, on average, using local search reduces the number of rounds needed by a factor of \(k\).
Figure 9. SCC breakdown time (in seconds). \(y\)-axis is the running time in seconds. All settings are the same as Tab. 2. “Plain”= our implementation with hash bags but not local search. “VGC1”= adding local search to the single-reachability search. “Final”= our final implementation with local search enabled on both single- and multi-reachability searches. The numbers on the top show the speedup of our implementations over GBBS (the first bar).
hash bags (Sec. 3.3) to maintain the frontiers and the local search optimization (Sec. 3.1). Both optimizations are applied to the sparse rounds in LDD. In Tab. 3, we compare our algorithm to the same algorithm in ConnectIt.
On social networks with low diameters, our algorithm is slightly slower than ConnectIt, but is generally comparable. This is because most of the vertices are visited in the dense mode, which is implemented similarly in both algorithms. The slowdown of our algorithm on social networks seems to be because VGC brings more work to the first several sparse rounds, which reduces the benefit of using dense modes. For other graph instances where dense modes do not significantly dominate the cost, our algorithm generally performs well. On web graphs, our code is 1.21\(\times\) faster than ConnectIt on average. On the large-diameter graphs, our implementation is 1.98\(\times\) faster than ConnectIt on average. Since parallel hash bags and VGC only apply to sparse rounds, the speedup of ours compared to ConnectIt correlates with the diameter of the graph. Note that LDD is guaranteed to finish in \(O(\log n)\) rounds, as opposed to \(O(D)\) for diameter \(D\) in SCC. Therefore, the improvement of our implementation over ConnectIt is not as significant as the improvement of our SCC over existing work. However, our implementation still outperforms ConnectIt on 16 out of 20 instances, and is 1.67\(\times\) faster than ConnectIt on average. We believe that the experiments on connectivity provide additional evidence that our hash bags and VGC are general and practical.
**Experiments on LE-lists.** We compare our LE-lists implementation with ParlayLib (Kumar et al., 2017) in Tab. 3. Their implementation is the state of the art and was released in 2022. Note that, unlike CC and SCC, here we can only use parallel hash bags for LE-lists but not VGC, since the BFS traversal order needs to be preserved.
On low-diameter graphs, our LE-list algorithm is 1.20\(-\)3.91\(\times\) faster (2.73\(\times\) on average) than ParlayLib's implementation. On large-diameter graphs, the speedup increases to 2.49-10.1\(\times\) (5.36\(\times\) on average). We believe this is because hash bags maintain the frontier more efficiently, and processing large-diameter graphs involves more rounds (frontiers). Both our and ParlayLib's implementations are unable to compute the LE-lists of the three largest graphs CW, HL14, and HL12, because the output size of LE-lists is \(O(n\log n)\), which is larger than the memory of our machine. We also report the size of the LE-lists on each graph, and compare it to both ParlayLib's implementation and Cohen's sequential algorithm (Kumar et al., 2017). ParlayLib's implementation does not report the correct numbers on REC, SQR', and REC', which is probably why it has poor performance on these graphs.
Overall, our algorithm is faster than ParlayLib's implementation (the state-of-the-art implementation) on all graphs. On average, our version is 4.34\(\times\) faster than ParlayLib's implementation on graphs with correct answers. We note that it remains an interesting question on how to apply a similar local search to LE-lists. We plan to study it in future work.
## 7. Related Work
Parallel SCC has been widely studied. Prior to the BGSS algorithm based on (multi-)reachability searches, there had been other approaches. The first type of approach is based on parallelizing DFS (Krishnan et al., 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016). However, since DFS is inherently sequential (Krishnan and Krishnan, 2016) and hard to parallelize, these algorithms are shown to be slower than existing reachability-based solutions (Krishnan and Krishnan, 2016). Another widely-adopted approach is based on single-reachability search (aka. the _forward-backward search_, or _Fw-Bw_) (Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016). However, _Fw-Bw_ does not provide sufficient parallelism to find all SCCs. Hence, these systems only use _Fw-Bw_ to find large SCCs and use other techniques such as coloring and trimming to find small SCCs, which do not have good theoretical guarantees. For this type of approach, we compared against the two newest ones with released code: Multi-step (Krishnan and Krishnan, 2016) and iSpan (Krishnan and Krishnan, 2016). They perform well on graphs with a small diameter and a large \(SCC_{1}\) (\(SCC_{1}\) is the largest SCC in the graph), but do not work well on graphs with a large diameter or a small \(SCC_{1}\) (e.g., the \(k\)-NN and lattice graphs in our tests).
Parallel SCC has also been studied on other platforms such as GPUs (Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016) and distributed systems (Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016). Comparing the wall-clock running times reported in the papers, it seems that shared-memory algorithms are much faster, but we note that different platforms have their own use cases.
**Related Work of Parallel Hash Bag.** There exist other variants of hash tables designed for parallel algorithms (Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016; Krishnan and Krishnan, 2016). The _parallel bag_(Krishnan and Krishnan, 2016) supports similar interfaces as our hash bag, but uses a very different design. Parallel bags are organized using pointers, causing additional cache misses in practice. Our hash bag uses flat arrays and is practical and I/O-friendly. The \(k\)-level hash table designed for NVRAMs (Krishnan and Krishnan, 2016) requires allocating memory when resizing, while one of the goals of hash bags is to avoid explicit resizing. Our work is also the first to formalize the interface of maintaining frontiers in
\begin{table}
\begin{tabular}{c c|c c c|c c c|c} & & \multicolumn{3}{c|}{**Connectivity**} & \multicolumn{3}{c|}{**LE-Lists**} & \multicolumn{3}{c}{**New**} \\ & & **Ours** & **DHS’21 Spd.** & & **Ours** & **Parlay** & **Spd.** & **Graphs** \\ \hline \multirow{5}{*}{**O**} & OK & 0.010 & 0.008 & 0.76 & 0.577 & 1.52 & 2.63 & **OK** [110] \\ & IJ & 0.013 & 0.012 & 0.91 & 0.502 & 1.96 & 3.91 \\ & TW & 0.093 & 0.099 & 1.05 & 4.88 & 16.6 & 3.41 \\ & FT & 0.197 & 0.150 & 0.76 & 24.9 & 30.0 & 1.20 \\ \hline \multirow{5}{*}{**F**} & SD & 0.222 & 0.271 & 1.22 & 13.9 & 49.3 & 3.56 \\ & CW & 2.425 & 2.844 & 1.17 & out of memory & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & Friendster \\ & HL14 & 3.694 & 4.463 & 1.21 & out of memory & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } \\ & HL12 & 8.446 & 10.39 & 1.23 & out of memory & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } \\ \cline{1-1} \cline{5-8} & HH5 & 0.017 & 0.041 & 2.40 & 2.06 & 15.5 & 7.55 \\ & CH5 & 0.035 & 0.026 & 0.76 & 5.38 & 54.2 & 10.1 \\ & GL2 & 0.045 & 0.132 & 2.97 & 2.89 & 14.1 & 4.87 \\ & GL & 0.074 & 0.177 & 2.40 & 12.4 & 6.81 & 5.49 \\ & GL10 & 0.123 & 0.210 & 1.71 & 11.0 & 56.6 & 5.15 \\ & GL15 & 0.147 & 0.236 & 1.61 & 11.7 & 59.2 & 5.06 \\ & GL20 & 0.166 & 0.242 & 1.46 & 11.9 & 63.7 & 5.37 \\ & COS5 & 1.310 & 2.697 & 2.06 & 132 & 329 & 2.49 \\ \hline \multirow{5}{*}{**G**} & USA & 0.045 & 0.092 & 2.07 & 14.9 & 101 & 6.74 \\ & GE & 0.040 & 0.129 & 3.23 & 5.98 & 32.4 & 5.42 \\ \cline{1-1} \cline{5-8} & SQR & 0.161 & 0.290 & 1.80 & 45.4 & 184 & 4.05 \\ \cline{1-1} & REC & 0.023 & 0.037 & 1.66 & 7.28 & 520\({}^{2}\) & 71.4 \\ \cline{1-1} & SQR’ & 0.134 & 0.275 & 2.06 & 46.8 & 202\({}^{2}\) & 4.32 \\ \cline{1-1} & REC’ & 0.021 & 0.043 & 2.08 & 8.57 & 648\({}^{2}\) & 75.7 \\ \end{tabular}
\end{table}
Table 3. Running time (in seconds of connectivity and LE-lists implementations): DH5’21-the LDD-UF-JTB connectivity implementation in ConnectIt (Krishnan and Krishnan, 2016). Parlay=the LE-lists implementation in ParlayLib (Krishnan and Krishnan, 2016). Spd=Baseline_time / our time. “\(\tau\)”=results different from our parallel and sequential version, and the running time may not be accurate.
parallel reachability search and proposes a practical data structure (the hash bag) with theoretical analysis.
**Parallel BFS.** There exist other implementations of parallel BFS, and some of them also consider reducing synchronization costs (Han et al., 2016; Wang et al., 2017; Wang et al., 2018). However, these implementations only consider a single source, and we are unaware of how to directly apply them to the multi-reachability search needed in SCC.
## 8. Discussions and Future Work
In this paper, we show that using faster algorithms for reachability queries can significantly accelerate SCC and related algorithms, especially on large-diameter graphs. We tested our SCC algorithm on large-scale graphs with up to hundreds of billions of edges. On average, our SCC algorithm is \(6.0\times\) faster than the best previous parallel implementation (GBBS), \(8.1\times\) faster than Multi-step, and \(12.9\times\) faster than Tarjan's sequential algorithm.
We believe that the two key techniques in this paper, the hash bag and vertical granularity control, are general and of independent interest. In this paper, we apply them to graph connectivity and LE-lists. The experimental results show that they lead to improved performance over prior work. We believe that they also apply to many other applications.
Hash bags are used to maintain frontiers (a subset of vertices) in graph algorithms. Many state-of-the-art graph libraries (e.g., GBBS (Han et al., 2016) and Ligra (2017)) use the abstract data type (ADT) called VertexSubset to maintain frontiers on many graph algorithms. Hash bags can be used to implement this ADT by replacing the current data structure (fixed-size array). With careful engineering, we believe hash bags can potentially improve the performance of these implementations. We leave this as future work.
The high-level idea of VGC applies to traversal-based graph algorithms, such as BFS, algorithms for connectivity, biconnectivity, single source shortest paths (SSSP), and some others in (Han et al., 2016; Wang et al., 2017; Wang et al., 2018). VGC can potentially accelerate them on large-diameter graphs. Our specific "local-search" idea does not directly apply as is. When the traversing order does not matter (e.g., reachability-based algorithms), local search can be applied directly. In a recent paper, we apply local search to graph biconnectivity (Han et al., 2016), which improved the overall performance by up to \(4\times\) on a variety of graphs. For some distance-based algorithms, we need additional designs on top of local-search, such as supporting revisiting certain vertices (e.g., in BFS, SSSP, LE-lists) for relaxation, or some wake-up strategies to find the next frontier (e.g., in \(k\)-core). We believe that this is an interesting research direction, and plan to explore it in the future.
## Acknowledgement
This work is supported by NSF grants CCF-2103483, CCF-2238358, IIS-2227669, and UCR Regents Faculty Fellowships. We thank the anonymous reviewers for their useful feedback.
## Appendix A Proof of Thm. 3.1
Proof.: (sketch) We first show the algorithm is correct (i.e., each chunk will not be full), and then analyze the cost bounds. We consider the sampling in insertions as a coin-tossing process. Let \(k\) be the chunk size, and let the sample rate be \(\sigma/\alpha k\). We first consider the sequential case (\(P=1\)). We are interested in the number of insertions performed before the number of samples reaches \(\sigma\). This is equivalent to the number of coins we toss before we see \(\sigma\) heads (sampled insertions). This can be analyzed using the Chernoff bound--assuming \(\sigma=\Theta(\log n)\) and \(k=\Omega(\sigma/\epsilon^{2})\), the number of tossed coins \(e\) satisfies \((1-\epsilon)\alpha k\leq e\leq(1+\epsilon)\alpha k\) for \(0<\epsilon<1\)_whp_. If we set \(\alpha=\epsilon=0.5\), the load factor of this chunk when resizing is between \(1/4\) and \(3/4\)_whp_.
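To make the Chernoff step explicit, here is a minimal sketch of the bound (the constants hidden in the \(\Omega(\cdot)\) notation are illustrative rather than tight). Each insertion is sampled independently with probability \(p=\sigma/(\alpha k)\), so among \(e\) insertions the number of samples is \(X_{e}\sim\mathrm{Bin}(e,p)\) with \(\mathbb{E}[X_{e}]=e\sigma/(\alpha k)\). If resizing were triggered within \(e=(1-\epsilon)\alpha k\) insertions, we would need \(X_{e}\geq\sigma=\mathbb{E}[X_{e}]/(1-\epsilon)\); if it were not triggered after \(e=(1+\epsilon)\alpha k\) insertions, we would need \(X_{e}<\sigma\leq\mathbb{E}[X_{e}]/(1+\epsilon)\). The multiplicative Chernoff bounds give

\[\Pr\big[X_{(1-\epsilon)\alpha k}\geq\sigma\big]\leq e^{-\Omega(\epsilon^{2}\sigma)}\quad\text{and}\quad\Pr\big[X_{(1+\epsilon)\alpha k}<\sigma\big]\leq e^{-\Omega(\epsilon^{2}\sigma)},\]

and with \(\sigma=\Theta(\log n)\) both probabilities are \(n^{-\Omega(1)}\), i.e., \((1-\epsilon)\alpha k\leq e\leq(1+\epsilon)\alpha k\) holds _whp_.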
Now consider the parallel case, where \(P\) processors execute insertions asynchronously. In addition, when an insertion is sampled, it first increments the counter using compare_and_swap, which can be delayed by other processors. Based on the assumption given in Sec. 3.3, between two consecutive compare_and_swaps, each of the other processors can only execute a constant number of instructions, so a total of \(O(P)\) insertions can be done. Since compare_and_swap is atomic, one processor has to win and proceed, so an unsuccessful compare_and_swap can happen at most \(\sigma\) times before resizing. Compared to the sequential setting, \(O(P\sigma)\) more elements can be inserted. When \(k=\Omega(P\sigma+\sigma/\epsilon^{2})\), each chunk of the hash bag will not be overfull _whp_.
We now show that for \(s\) insertions, all elements in the hash bag will be in the first \(O(s+\lambda)\) positions in the _bag_ array _whp_, which bounds the work and span for listing and packing. As discussed above, the "wasted" space due to parallelism is upper bounded by \(O(P\sigma)\), which is asymptotically bounded even for the first chunk with \(\lambda=\Omega(\sigma(P+\log n))\). Since the load factor for each chunk is a constant when resizing, the total size of all chunks in use is \(O(s+\lambda)\).
Finally, we show the costs for insertions. An insertion into a hash table with linear probing and a constant load factor uses \(O(1)\) expected work and \(O(\log n)\) span _whp_. For updating the sample count, there can be at most \(O(\log s\log n)\) samples _whp_, which is also the longest possible dependence chain in the algorithm. If we assume \(\lambda=\Omega(\sigma(P+\log n))\), for each insertion, the probability that it is picked as a sample in the first chunk is \(O(1/(P+\log n))=O(1/\log n)\), and in the worst case we need to wait for \(\sigma=\Theta(\log n)\) retries until the counter is incremented. Hence, the amortized work is \(O(1)\). The sample rate then halves for each of the next chunks, so taking the sum of a geometric series, the total amortized work to maintain the counters is \(O(1)\) per insertion.
Guided by the theory, we set \(\lambda=2^{14}\) and \(\sigma=50\). We note that if the frontier size is smaller than \(\lambda\), this may cause \(O(\operatorname{diam}(G)\cdot\lambda)\) work (which can be more than \(m\)) for a reachability search, since the work in each round is \(\Omega(\lambda)\). To maintain the work bound of reachability searches, we want to guarantee the frontier size is \(\Omega(\lambda)\) except for possibly the last frontier. This can be guaranteed by setting the local queue size (see Sec. 3.1) to be \(\Omega(\lambda)\).
|
2307.07712 | Visual Analytics For Machine Learning: A Data Perspective Survey | The past decade has witnessed a plethora of works that leverage the power of
visualization (VIS) to interpret machine learning (ML) models. The
corresponding research topic, VIS4ML, keeps growing at a fast pace. To better
organize the enormous works and shed light on the developing trend of VIS4ML,
we provide a systematic review of these works through this survey. Since data
quality greatly impacts the performance of ML models, our survey focuses
specifically on summarizing VIS4ML works from the data perspective. First, we
categorize the common data handled by ML models into five types, explain the
unique features of each type, and highlight the corresponding ML models that
are good at learning from them. Second, from the large number of VIS4ML works,
we tease out six tasks that operate on these types of data (i.e., data-centric
tasks) at different stages of the ML pipeline to understand, diagnose, and
refine ML models. Lastly, by studying the distribution of 143 surveyed papers
across the five data types, six data-centric tasks, and their intersections, we
analyze the prospective research directions and envision future research
trends. | Junpeng Wang, Shixia Liu, Wei Zhang | 2023-07-15T05:13:06Z | http://arxiv.org/abs/2307.07712v1 | # Visual Analytics For Machine Learning: A Data Perspective Survey
###### Abstract
The past decade has witnessed a plethora of works that leverage the power of visualization (VIS) to interpret machine learning (ML) models. The corresponding research topic, VIS4ML, keeps growing at a fast pace. To better organize the enormous works and shed light on the developing trend of VIS4ML, we provide a systematic review of these works through this survey. Since data quality greatly impacts the performance of ML models, our survey focuses specifically on summarizing VIS4ML works from the **data perspective**. First, we categorize the common data handled by ML models into five types, explain the unique features of each type, and highlight the corresponding ML models that are good at learning from them. Second, from the large number of VIS4ML works, we tease out six tasks that operate on these types of data (i.e., data-centric tasks) at different stages of the ML pipeline to understand, diagnose, and refine ML models. Lastly, by studying the distribution of 143 surveyed papers across the five data types, six data-centric tasks, and their intersections, we analyze the prospective research directions and envision future research trends.
Machine learning, explainable AI, VIS4ML, visualization, visual analytics, taxonomy.
## 1 Introduction
The recent success of machine learning (ML) [1], especially deep learning (DL) [2, 3], has received significant interest from researchers. ML has witnessed a general trend towards increasingly powerful models, however, often at the cost of being less and less interpretable. With growing concerns about the safety and reliability of ML models, their poor interpretability has started to prevent them from being adopted in many safety-critical applications, such as medical diagnosis [4, 5] and autonomous driving [6, 7]. To mitigate this problem, enormous visualization (VIS) efforts have been devoted to explainable artificial intelligence (XAI [8]) recently, e.g., perturbing data instances to probe ML models' decision boundary [9, 10], training interpretable surrogates to mimic ML models' behavior [11, 12], externalizing intermediate data from ML models to open the _black-boxes_[13, 14], etc. These works constitute a new research field, i.e., VIS4ML, and an increasing number of papers are being published every year in this booming field. This survey targets to structurally review them and shed light on their growing trend.
In the meantime, there is a rising tendency to shift ML model development from model-centric to data-centric [15]. Although we live in the era of big data, there are many quality issues rooted in the data, such as noisy labels [16], missing items [17], and imbalanced data distributions [18]. As the modeling techniques become more and more mature, it becomes increasingly obvious to ML developers that more performance gains could be achieved from improving the data rather than the models. So, along with the fast and steady evolution of ML models, improving data quality for ML models has attracted more research attention recently [15]. This also echoes the famous proverb "Garbage In, Garbage Out", i.e., we can never get a satisfactory ML model without quality input data. The shift towards data-centric modeling in the ML field has also inspired many pioneering VIS works on inspecting and improving data quality through data curation, correction, and purification [16, 19, 20]. To promote this emerging and prospective direction, we revisit and structurally review existing VIS4ML works from a **data perspective** to disclose what efforts have been conducted and what opportunities remain open. Such a review will help to inspire more VIS4ML ideas and drive more data-oriented innovations.
Our data-centric survey aims to systematically review the latest VIS4ML works by disclosing **what** data they have focused on and **how** the data have been operated on to interpret, diagnose, and refine ML models. It is carried out from the following three aspects. First, we identify the most common _data types_ processed by ML models, their unique features, and how ML models have been tailored to better learn from them (Sec. 4). Second, focusing on the operations applied to the identified data types, we elicit _data-centric VIS4ML tasks_ serving the general goal of model understanding, diagnosis, and refinement [21, 22] (Sec. 5). Third, by studying the distribution of the surveyed papers across different data types, VIS4ML tasks, and their intersections, we summarize the ongoing research trend and disclose prospective VIS4ML research directions (Sec. 6).
In essence, the contributions of our survey are twofold. First, we provide a data-centric taxonomy for VIS4ML and comprehensively review the latest works following the taxonomy. The taxonomy and review help researchers better understand the fast-growing number of VIS4ML works, re-examine them from a new angle, and propose more data-centric VIS4ML works. Second, from the coverage of the surveyed papers across different taxonomy sub-categories, we reveal what data types, VIS4ML
tasks, or data-task combinations have not been sufficiently explored, pointing the way to promising research directions and nourishing new ideas in this flourishing field. An interactive webpage for the survey has been developed using SurVis [23], which is available at: [https://vis4ml.github.io/](https://vis4ml.github.io/).
## 2 Related Works
**Existing VIS4ML Surveys.** As the number of VIS4ML works keeps growing at a fast pace, there have been multiple surveys [21, 22, 24, 25, 26] and conceptual frameworks [27, 28] trying to organize and review them. We discuss these works and highlight the unique perspective that we have taken to differentiate our survey from them.
_Task-Centric._ Based on the tasks that VIS works try to accomplish when serving ML, researchers have categorized VIS4ML works into understanding, diagnosing, and refining ML models [21, 22, 27]. These three tasks have been well-recognized by the VIS community and referred to in many recent papers [29, 30]. We also advocate this categorization and consider these tasks as three high-level goals of VIS4ML. Our survey further distills six low-level tasks that are often performed when accomplishing these goals (Fig. 2(d)). For example, refining a model can be achieved by generating new data or improving existing data (two low-level tasks). Moreover, although model refinement can be conducted from both the model side (e.g., architecture pruning [31, 32]) and the data side, we limit ourselves to the data side to present unique data-level insights.
_Procedure-Centric._ Following the building process of ML models and existing ML pipelines, Yuan et al. [26] separated VIS works into groups that interpret ML models before, during, and after their building process. Likewise, by tracing ML models' execution, Chatzimparpas et al. [24] reviewed how VIS enhances the trust-level in five key ML pipeline stages. The predictive visual analytics framework followed similar stages to review VIS works for predictive models [25]. Our survey also considers the ML construction pipelines. Following our unique focus of ML data, we identify the "operational data" from ML pipelines as input, intermediate, and output data (Fig. 2(b)) and explain what data-centric tasks are often conducted on each of them.
_Human/User-Centric._ By analyzing human involvements in different ML model-building stages, Sacha et al. [28] introduced a human-centric VIS4ML ontology, where VIS assists humans to prepare-data, prepare-learning, model-learning, and evaluate-model. Similarly, there are multiple attempts trying to exploit the role of _users_ in exploratory model analysis [33] and active learning [34]. This user-centric viewpoint diverges significantly from our data-centric perspective, resulting in distinct paper categorizations and unique insights from respective standpoints.
There is a great overlap of the covered papers between our survey and the earlier VIS4ML surveys. However, the reviewing perspective and paper categorizations of our survey are very different from those of the earlier ones, and so are the disclosed insights and identified research opportunities. For example, Yuan et al. [26] took a procedure-centric perspective and discussed the input data-quality issues in their before model-building category. In our survey, the same issue is discussed in the assess task (Sec. 5.4.1). Due to this overlap, both surveys identify the research opportunity of improving data quality, echoing its importance. On the other hand, our assess task also covers the output data assessment (Sec. 5.4.2), which (partially) corresponds to the after model-building category of Yuan et al. [26]. Although the studied VIS4ML papers may largely overlap, different perspectives organize them into different groups, disclosing unique insights from respective perspectives. For example, the research opportunities identified by Yuan et al. [26] are what can be further improved before/after model-building. In contrast, our survey will provide insights into _what_ data types have been under-explored and _how_ the data can be further assessed.
**Existing VIS Task Taxonomies.** As we introduce a taxonomy for data-centric VIS4ML tasks, the existing VIS task taxonomies are also related to our work. Amar et al. [35] summarized 10 low-level tasks to accomplish the high-level goal of data understanding. In analogy to their rationale, we summarize six low-level tasks to accomplish the three high-level VIS4ML goals of understanding, diagnosing, and refining ML models [21]. Brehmer and Munzner [36] organized VIS tasks into a multi-level typology, which answers _why_ a task is performed, _how_ it is performed, and _what_ the task's input and output are. As emphasized by the authors, their tasks are abstract with no target applications, so that they can compare them across applications. In contrast, our tasks here are specific to VIS4ML and they all focus on ML operational data. The tasks introduced by Shneiderman [37] are designated for data exploration (e.g., zoom/filter) but cannot cover the diverse aspects of ML model analysis, such as data assessment and data improvement. Given these differences, we cannot directly reuse existing taxonomies. With iterative explorations and progressive refinements (detailed in Sec. 3.2), we arrived at the methodology of deriving data-centric VIS4ML tasks by carefully examining the requirement/task analysis section of individual VIS4ML papers, and finally elicited six tasks (Sec. 5). Note that there are definitely overlaps between our tasks and the tasks from the existing VIS literature, as VIS4ML is a subdomain of VIS. For example, the essence of our present task is similar to the present task in [36] and the overview task in [37]. Nevertheless, our identified tasks are always data-centric and specific to the VIS4ML domain.
## 3 Survey Landscape and Taxonomy
We have seen an increasing number of VIS4ML works since 2016, and thus, set the temporal coverage of our survey to be 2016-2022. Within this temporal range, we identify related VIS4ML works by screening research papers from major VIS conferences and journals, including:
* _IEEE Visualization & Visual Analytics Conference (VIS),_
* _Eurographics Conference on Visualization (EuroVis),_
* _IEEE Pacific Visualization Symposium (PacificVis),_
* _IEEE Trans. on Visualization and Computer Graphics (TVCG),_
* _Computer Graphics Forum (CGF),_
* _Computer Graphics & Applications (CG&A)._
### _Paper Selection_
From the covered venues and specified temporal range, we extract the related VIS4ML papers in four steps (Fig. 1).
**First,** an initial screening of all papers in the range is conducted, focusing mainly on the papers' titles to decide if they are related to VIS+ML or not. This screening yields 591 candidate papers, all of which are included in our Supplementary Material. **Second**, for a more careful screening of the filtered papers, we read their Abstract and Introduction to exclude papers that are actually not related to ML (though their title includes some related words, such as "Learning" or "Deep"). This step reduces the number of papers down to 555. **Third**, we read the methodology sections of the papers and check their included figures to exclude ML4VIS works. These works use ML to solve traditional VIS problems or facilitate data analysis, but present less model interpretation effort. A large number of papers belong to this category and excluding them reduces the number of papers down to 180. **Lastly**, for the remaining papers, we further exclude works that (1) focus solely on the interpretation of ML models' architectures or hyperparameters where data is not their focus, or (2) introduce conceptual frameworks (or position papers) that do not perform any data operations. For (1), Net2Vis [38] introduces a grammar to easily extract CNN architectures and visualize them as publication-tailored figures. DNN Genealogy [39] summarizes the evolution trend of DNN architectures and conducts visual analytics on the trend. Both papers present great VIS4ML contributions. However, since they focus solely on models' architecture and no data is involved, we exclude them from this data-centric survey. The papers in (2) organize and review existing VIS4ML works from different angles, e.g., [24, 27, 28]. However, since they do not conduct concrete data operations, they have also been excluded.
Finally, 143 closely related VIS4ML papers have been identified. Among them, 81 focus specifically on the interpretation of DL models, whereas the remaining 62 interpret classic ML models (e.g., decision trees and SVMs) or their proposed solution is general enough for any ML models. The papers' distribution across years is shown in Fig. 2. An increasing trend is clearly observed (for both _DL_ and _classic ML_).
### _Categorization Rationales and Iterations_
Our data-centric review was conducted from two aspects: (1) _what_ types of data the VIS4ML works focus on; and (2) _how_ those data have been operated to interpret, diagnose, or refine ML models. The categorizations of these two aspects have undergone many iterations. We briefly summarize some key iterations here to explain our survey rationales.
For the _"what"_ part, we first identified the operational data of ML models as _input_, _intermediate_, and _output_ data [40] following the ML execution pipeline (Fig. 3(a, b)). Then, we tried to label VIS4ML papers based on their interpretation focus across the three data types. However, with some initial labeling, we found that almost all VIS4ML papers covered the _input_ and _output_ data, some of them used the _intermediate_ data while others did not. This categorization quickly degenerated into two categories that essentially reflect if a work is model-specific (using _intermediate_ data) or model-agnostic (not using _intermediate_ data). As this taxonomy has been introduced in earlier surveys, we did not continue this attempt. Later, we tried to borrow the data categorization from the database field and classified data into _structured_ and _unstructured_. With some labeling practices, however, we noticed that most data in VIS4ML works are _unstructured_ (e.g., images, texts, and graphs). Using this categorization could not disclose the unique features (e.g., spatial or sequential) of each data type and resulted in a very unbalanced data type distribution. After more explorations and inspired by the underlying data features that ML models are tailored to handle (e.g., CNNs/RNNs are good at processing spatial/sequential data), we eventually came up with our current data categorization (detailed later in Sec. 4).
For the "how" part, our initial categorization was to group papers based on the VIS techniques they have adopted (e.g., node-link diagrams and scatterplots). This seemed to be the most straightforward choice. However, we soon realized that the identified VIS techniques would be general to any data analysis topics and could not reflect the uniqueness of VIS4ML, nor did they align with our data-centric perspective. Inspired by Munzner's nested model [41], we then shifted our focus to the requirement analysis section of VIS4ML papers. Here, we found that the requirements were mostly task-oriented. Therefore, we turned to examine existing VIS task taxonomies, as summarized in Sec. 2. Nevertheless, most of those task taxonomies are not specific to VIS4ML but rather general to any data analysis applications. After several more categorization iterations, we realized that the sentences describing the requirements in individual VIS4ML papers revealed how VIS should serve ML. From those sentences, we extracted the verbs, i.e., operations applied to ML data, and merged similar operations to identify the most representative ones. In the end, we derived six tasks that are specific to VIS4ML (detailed later in Sec. 5). Moreover, these tasks are also data-centric, as the objects of the requirement analysis sentences always pertain to the three types of ML operational data. To explicitly establish the connections between the identified data and tasks, we connect them with green, orange, and blue arrows between Fig 3(b) and Fig 3(d).
Fig. 1: Four-step paper selection process. The covered 143 papers include 81 DL-specific ones and 62 for the interpretation of classic ML.
Fig. 2: The surveyed 143 papers (81/62 for DL/classic ML) over years.
### _Survey Taxonomy and Overview_
Our data-centric taxonomy reviews VIS4ML papers based on _what_ types of data the corresponding ML models focus on and _how_ the data have been operated (i.e., VIS4ML tasks) to understand, diagnose, and refine ML models, i.e.,
* **Data Types** (Sec. 4). We identify the common types of data fed into ML models, describe their unique characteristics, and explain how ML models have been tailored to better learn from them. These data types include: tabular, sequential, multi-dimensional array, graph, and multi-modality data (Fig. 3(c)).
* **Data-Centric Tasks** (Sec. 5). Focusing on the operations applied to the five data types, we elicit six data-centric VIS4ML tasks: present, explore, assess, compare, generate, and improve data. The first five are commonly used for model understanding / diagnosis. The generate task, together with the improve task, is also used for model refinement (Fig. 3(d)).
**Overview.** Sec. 4/Sec. 5 illustrate our data/task taxonomy in detail, with each sub-category being exemplified by one or multiple representative VIS works. As it is impossible to exemplify all the 143 papers, we summarize them in Tabs. 1 and 2. Sec. 6 presents the distributions of the papers across data types, data-centric tasks, and their intersections, disclosing the current research trend and prospective future directions. Finally, we discuss some inherent limitations of our survey in Sec. 7 before concluding it in Sec. 8.
## 4 Data Types
This section categorizes ML operational data from the input side, as the input data preserve the original characteristics and modality of the data. While we have also considered data categorization using the intermediate or output data, it is important to note that the format of intermediate data is predominantly influenced by the specific ML models employed. For example, in DNNs, the intermediate data are activations and weights, whereas in tree-based models, they become feature-splitting criteria and decision rules. On the other hand, the format of the output data is primarily determined by the addressed applications. For instance, classification and clustering models consistently produce class labels and cluster IDs as outputs, regardless of the input data modalities. Categorizing VIS4ML papers using the focused ML models/applications has also been covered in early surveys [22, 42]. Our data-centric survey tries to minimize the overlap with them and categorizes data from the input side. Moreover, depending on the specific models and applications, the intermediate and output data can be more complex and diverse compared to the input data. Categorizing papers based on them will require defining and distinguishing numerous subcategories, leading to increased complexity and potential ambiguity in the categorization.
Note that the input data here are the _direct_ input to ML models but they may not be the raw data generated from different applications. For example, Tam et al. [43] studied facial dynamics data to analyze the difference between four facial emotions, anger, surprise, sadness, and smile. The raw data are videos captured from the face of different people, but these videos cannot be directly used to train ML models. The authors pre-processed individual video frames first to extract 14 numerical measurements for different facial features, e.g., the vertical displacement of the chin. Tracking the values of these measurements across frames forms a time-series that can be fed into ML models. In this case, the input data are the time-series rather than the raw videos.
Based on a comprehensive review of the 143 papers, we categorize the input of ML models into the following five types: tabular, sequential, multi-dimensional array, graph, and multi-modality data. All data types come with a collection of instances, and each instance may have some annotation information associated with it. Mathematically, a dataset \(\mathcal{D}\) can be described as:
\[\mathcal{D}=\langle X,Y\rangle,\ \text{where}\ X=\{x_{1},x_{2},...,x_{n}\}\ \text{and}\ Y=\{y_{1},y_{2},...,y_{n}\}. \tag{1}\]
\(X\) is the feature part of \(\mathcal{D}\), which is the input of ML models. The term "feature" has the same meaning as in ML, i.e., it denotes "an individual measurable property" [44], e.g., the _age_, _gender_, or _annual income_ of an individual. \(Y\) (if it exists) is the annotation part that supervised ML models target during training, e.g., the class labels of image data. As \(X\) and \(Y\) have a one-to-one correspondence, a single instance of \(\mathcal{D}\) can be denoted as \((x_{i},y_{i})\). In cases where \(\mathcal{D}\) does not contain \(Y\), ML models will have to learn from \(X\) in an unsupervised or semi-supervised manner.
Fig. 3: The data-centric taxonomy. The _input data_ of ML models consists of five different types (c). The six _data-centric tasks_ (d) are applied to the three types of operational data (b) from different stages of the ML pipeline (a) to help people _understand_, _diagnose_, and _refine_ ML models.
The differences among the five data types reside in the \(X\) part. We explain them in the following subsections by (1) providing their definition, (2) listing some typical examples, and (3) discussing the challenges when learning from them.
### _Tabular Data_
_Definition._Tabular data comes as a data table, where each row is a data instance and each column is an attribute of the instance. Mathematically, a row (i.e., an instance) can be denoted as:
\[x_{i}=(v^{D_{1}},v^{D_{2}},...,v^{D_{m}}), \tag{2}\]
where \(v^{D_{j}}\) (\(j\in[1,m]\)) is a possible value of the \(j\)th attribute defined in the corresponding domain \(D_{j}\), and the value can be either categorical or numerical. The annotation information \(Y\), if it exists, usually appears as a column in the table.
_Examples._ The U.S. Census Income dataset used in [45] is a typical tabular dataset. Each row of the dataset is a person and each column reflects the value of one feature, e.g., _age, gender_, and _capital-gain_. Similar examples include the Bank Marketing dataset used in [11], and the Criminal Recidivism dataset used in [46]. Individual features of these tabular data usually represent human-understandable semantics, e.g., _age, race_, and _income_, which contribute significantly to the interpretation of the corresponding ML models. Moreover, new features can also be generated through feature engineering to horizontally extend the table.
_Challenges._ The key challenge for ML models in handling tabular data is to manage the large number of features and learn information out of their complicated collaborative effects, i.e., feature interactions [8]. Both traditional ML models (e.g., SVMs, logistic regressions, decision trees) and DL models (e.g., multi-layer perceptrons) have been applied to this type of data. VIS4ML works have covered all these models' interpretations [47, 48, 10, 45, 46] with varying visualization focuses, such as interpreting these models by better presenting individual instances [5], more intuitively disclosing the importance of features [10], and steering the feature engineering process to refine these models [49].
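To make the notation of Eq. 2 concrete, the following minimal Python sketch builds a tiny synthetic census-like table (the column names merely echo the examples above and are not taken from any cited dataset) and fits a logistic regression whose coefficients give a rough, model-specific view of feature importance:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# A tiny synthetic table: each row is an instance x_i = (v^{D_1}, ..., v^{D_m}),
# each column an attribute; "income" plays the role of the annotation Y.
df = pd.DataFrame({
    "age":          [25, 38, 52, 46, 29, 61, 33, 44],
    "gender":       ["F", "M", "M", "F", "M", "F", "F", "M"],
    "capital_gain": [0, 1500, 0, 300, 0, 5000, 200, 0],
    "income":       [0, 1, 1, 0, 0, 1, 0, 1],       # 1 = ">50K", 0 = "<=50K"
})

X = pd.get_dummies(df.drop(columns="income"))       # one-hot encode categorical values
y = df["income"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Coefficient magnitudes give a crude, model-specific notion of feature importance.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name:>15s}: {coef:+.3f}")
```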
### _Sequential Data_
_Definition._Sequential data comes with a collection of sequences that may have varying lengths. Each sequence \(x_{i}\) is composed of \(k\) tokens organized in order. For example, a sentence with \(k\) words is a sequence of \(k\) tokens. Each token \(t_{i}\) is a feature vector, e.g., the embedding vector of a word. Mathematically,
\[x_{i}=(t_{1},t_{2},...,t_{k}). \tag{3}\]
Note that we used \(x_{i}\) in both Eq. 2, Eq. 3, and later equations, to denote a single instance of the dataset \(X\). However, it has different representations when the data type is different.
_Examples._ The two most common sequential data are text data (each word/character is a token) and time-series data (each time step is a token). For example, the Penn TreeBank [50] dataset used in [51, 52] is a famous English corpus of sentences. Each sentence is a sequential instance and the parts of speech for individual words/tokens have been well-annotated in the dataset. Weather forecasting data [53], sleep signals [54], and musical chord progression sequences [52] are examples of time-series data, in which, tokens are ordered into sequences chronologically.
_Challenges._ The main challenge of learning from sequential data is to capture the sequential information propagation inside a sequence and find how preceding and succeeding tokens influence each other. RNNs and their variants (e.g., LSTMs and GRUs [51]) that maintain multiple hidden states to recursively pass on the sequential information from token to token demonstrate superior performance on this data type. Recently, Transformers [55] have also been introduced for sequential data learning. Instead of processing the tokens sequentially one-by-one, Transformers consume all tokens at once and use the self-attention mechanism to learn pair-wise attentions between all tokens. Most VIS4ML works for this data type focus on presenting the sequential data and relating them with their latent representations inside ML models to reveal what the models have captured, e.g., RNN hidden state interpretations [52, 56]. Explaining how Transformers' self-attentions work so well on sequential data has also been extensively conducted [57, 58].
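As a minimal illustration (not drawn from any of the cited systems), the PyTorch sketch below uses random vectors in place of real token embeddings to show how a sequence per Eq. 3 is consumed by an LSTM, and where the per-token hidden states interpreted by works such as [52, 56] originate:

```python
import torch
import torch.nn as nn

# One sequential instance x_i = (t_1, ..., t_k): k = 6 tokens,
# each token a 16-dimensional feature vector (e.g., a word embedding).
k, token_dim, hidden_dim = 6, 16, 32
x = torch.randn(1, k, token_dim)            # shape: (batch, sequence, features)

lstm = nn.LSTM(input_size=token_dim, hidden_size=hidden_dim, batch_first=True)
outputs, (h_n, c_n) = lstm(x)

# 'outputs' holds one hidden state per token -- the intermediate data that
# hidden-state interpretation works visualize and relate back to the tokens.
print(outputs.shape)   # torch.Size([1, 6, 32])
print(h_n.shape)       # torch.Size([1, 1, 32])
```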
### _Multi-Dimensional (MD) Array Data_
_Definition._Multi-dimensional array data is composed of a set of instances, each is an array of scalar values organized spatially into a regular grid structure. For example, a gray-scale image can be considered as a 2D array storing the image's pixels along the width and height dimensions. Using _multi-dimensional (MD) array_ to name this type of data follows the terminology from the ML domain, i.e., LeCun et al. [2] and Goodfellow et al. [3] referred to this type of data as "multiple arrays" and "multidimensional arrays," respectively. Mathematically, each instance can be denoted as (assuming a 2D case),
\[x_{i}=\begin{pmatrix}s_{1,1}&s_{2,1}&\cdots&s_{w,1}\\ s_{1,2}&s_{2,2}&\cdots&s_{w,2}\\ \vdots&\vdots&\ddots&\vdots\\ s_{1,h}&s_{2,h}&\cdots&s_{w,h}\end{pmatrix}.\]
_Examples._ Image and volume data are representative examples for this data type. For instance, the MNIST dataset [59] used in [60, 61] is a famous benchmark, consisting of 70,000 gray-scale images of hand-written digits. Each image/instance is a 2D array with individual scalar values (pixels) ranging from 0 to 255. The CIFAR10 [62] used in [14, 63] and the ImageNet [64] used in [65, 66] are RGB image datasets with higher-resolution images in more classes (each image is a 3D array of scalar values).
_Challenges._ Preserving spatial continuity and extracting localized features are the essential challenges for ML models when learning from MD-array data. CNNs [67] are often
the ideal choices in handling MD-array data, as they can chain layers of convolutional filters to extract varying features hierarchically (e.g., the basic shape/color features from lower CNN layers and the complicated objects/concepts from higher layers). Lately, vision Transformers [68] and their combinations with CNNs have also demonstrated outstanding performance on this type of data. VIS4ML works strive to better demonstrate the spatial features of MD-array data [69], highlight important features impacting ML models' behaviors (e.g., saliency map visualizations [70]), and externalize the internal representation of the data inside ML models (e.g., feature map visualizations [71]).
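As a minimal, self-contained sketch of the gradient-based saliency idea mentioned above, the snippet below uses a randomly initialized toy CNN and a random image; real systems apply the same mechanics to trained models and real inputs:

```python
import torch
import torch.nn as nn

# A toy CNN classifier and one random "image" standing in for an MD-array instance.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
image = torch.randn(1, 1, 28, 28, requires_grad=True)   # (batch, channel, h, w)

logits = model(image)
score = logits[0, logits.argmax()]       # logit of the predicted class
score.backward()                         # gradient of that score w.r.t. every pixel

# The absolute input gradient is a basic saliency map: pixels with larger values
# influenced the prediction more and can be overlaid on the original image.
saliency = image.grad.abs().squeeze()    # shape: (28, 28)
print(saliency.shape, saliency.max().item())
```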
### _Graph Data_
_Definition._ A graph is usually represented by a set of nodes and a set of edges. The nodes contain feature information and the edges record the connections between nodes. Formally, a graph can be denoted as,
\[\mathcal{G}=\langle\mathcal{N},\mathcal{E}\rangle,\ \text{where}\ \mathcal{N}=\{n_{1},n_{2},...,n_{n}\},\ \mathcal{E}=\{e_{i,j}\mid i\leq n,\ j\leq n\}. \tag{4}\]
Each node is further represented by a feature vector, i.e.,
\[n_{i}=(f_{1},f_{2},...,f_{k}). \tag{5}\]
In general, graph data are often categorized into _homogeneous_ and _heterogeneous_ graphs. For the former, all graph nodes represent instances of the same type and all graph edges denote the same relationship between nodes. For the latter, however, the graph nodes have varying types and the graph edges could represent multiple relationships.
_Examples._ A social network is a typical homogeneous graph, where each node is a person and each edge reflects the friendship between persons. Each person will also have multiple features, e.g., _gender, age, number of friends_, etc., constituting the feature vector of the corresponding graph node. More homogeneous graph examples include publication citation graphs [72] and molecular compound structure graphs [73]. For heterogeneous graphs, the User-Movie data used in [74] is a good example, where a graph node could either be a user or a movie, and an edge between two nodes represents that the user has watched the corresponding movie.
_Challenges._ ML models can be trained to learn from both the node-related features and the edge-related structures of graphs. Often, the training instances fed into ML models are individual nodes, each is represented by a feature vector (Eq. 5). The ML models learn from these nodes' features, as well as the features from their neighboring nodes through edge connections, to predict the properties of certain nodes or the existence of specific edges. Accordingly, the challenge in handling graph data is to not only learn from the features of individual nodes, but also leverage their neighbors' features that can be propagated to them through connected edges (i.e., learning from both the feature and structure information). GNNs [75] are introduced to take care of the message passing between nodes, as well as the aggregation of information received from a node's neighbors. Their power has been demonstrated across all types of graph-related learning tasks, e.g., node classification, node ranking, edge prediction, and community detection. The difficulties that VIS4ML faces with this type of data are to effectively present the multivariate features of graph nodes (e.g., glyph visualization [76]), disclose the sophisticated connections between nodes, and more importantly, address the scalability issues when the graphs become large.
Note that there are also ML models designed to learn from multiple graphs. In this case, each training instance is a graph (rather than a graph node). Individual graphs have their independent sets of nodes and edges. For example, a chemical compound can be represented as a graph (node: atom, edge: bond). Researchers have developed many DL models (i.e., binary classifiers) to predict if a compound is cancer-related or not [77]. Uniformly handling the varying graph sizes and efficiently extracting information out of individual graphs are the key learning challenges.
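The core challenge of combining node features with neighborhood structure can be sketched with a single mean-aggregation step, a drastically simplified stand-in for the message passing performed by GNN layers:

```python
import numpy as np

# G = <N, E>: 4 nodes with 3-dimensional feature vectors and undirected edges.
features = np.array([[1.0, 0.0, 2.0],
                     [0.0, 1.0, 1.0],
                     [3.0, 1.0, 0.0],
                     [0.5, 0.5, 0.5]])
edges = [(0, 1), (0, 2), (1, 3)]

# Adjacency matrix with self-loops, so every node also keeps its own features.
n = features.shape[0]
A = np.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# One mean-aggregation step: each node's new representation averages the features
# of itself and its neighbors -- the basic operation GNN layers build on.
aggregated = (A @ features) / A.sum(axis=1, keepdims=True)
print(aggregated.round(2))
```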
### _Multi-Modality Data_
_Definition._ Several of the aforementioned data types can be learned together by ML models. These data may come from different data sources, be in different formats, and present different modalities. We call them multi-modality data, and their modalities could be nested or interwoven.
_Examples._ Video data can be considered as a hybrid of MD-array and sequential data. Each frame of the video is an image encoding spatial features. A consecutive sequence of these frames constitutes a sequential data instance. The spatial modality is nested inside the sequential modality. Most of the deep reinforcement learning (DRL) agents trained to play video games use this type of multi-modality data as training instances (i.e., game episodes) [70, 78]. Dynamic graphs combine sequential data with graph data, and the graph modality is nested under the sequential modality, e.g., an evolving social network with varying numbers of nodes (users) and edges (users' relationships) over time. Different modalities can also be interwoven at the same level. For example, the data used in \(M^{2}\)Lens [79] include three types of sequential data with different modalities, (1) facial expressions (video data), (2) voices of speakers (acoustic data), and (3) verbal transcripts (text data). Different ML models can be trained to take care of the respective modalities and their outcomes can be fused together for comprehensive learning.
_Challenges._ The challenges of learning from this type of data come from choosing the best ML models to handle individual data modalities and effectively fusing the learned outcomes. Different ML models are good at handling different data types. For example, tree-based models take good care of the feature interactions of tabular data; CNNs are good at extracting spatial features from MD-array data; RNNs show superior performance in managing data with sequential structures; GNNs demonstrate advantages in capturing the structure-level information of graphs. How to integrate these ML models and maximally leverage their respective advantages to process the multi-modality data is a challenging problem and of paramount importance. VIS4ML strives to better visualize individual modalities of the data and effectively reveal the underlying connections between modalities. Furthermore, the complicated relationship between varying modalities also challenges VIS4ML works to take advantage of the hidden information between modalities to refine and improve ML models [80, 81].
## 5 Data-Centric VIS4ML Tasks
As summarized in earlier works [21, 29], VIS has served ML in model understanding, diagnosis, and refinement. To analyze how these goals are achieved from the data side, we investigate the concrete VIS tasks that have been conducted on the _input_, _intermediate_, and _output_ data (Fig. 3(b)). The six elicited tasks are: present, explore, compare, assess, generate, and improve data. Their relationship with model understanding, diagnosis, and refinement is reflected in Fig. 3(d). Note that some of the tasks have been covered in earlier surveys, e.g., present and compare. Here, we focus on illustrating how they have been applied to the operational data in the VIS4ML context. There are also tasks that are not well-covered in earlier surveys, e.g., generate and improve. These are specific tasks identified from our data-centric review of the literature.
### _Present Data_
Presenting data means mapping the operational data to different visual channels to externalize the information in the data. It is a fundamental VIS operation that every VIS4ML work conducts, but different works may focus on the data from different ML pipeline stages. As the data to ML models is a collection of instances (Eq. 1), the visual mappings focus either on individual data instances or on the aggregation of a group of instances (instance/group-level). We thus explain the present task from these two levels. Inside each, we use some typical VIS4ML works to explain how individual _input_, _intermediate_, and _output_ data instances/groups have been presented. For a full list, please refer to Tabs. 1 and 2.
#### 5.1.1 Instance-Level Data Presentation
Instance-level presentation visually encodes the information of individual data instances. Users can directly interact with each instance (if needed) to examine ML models' behaviors.
_Presenting Input Data._ Individual input instances carry data features/semantics that are important to understand the behavior of ML models. Presenting input instances of interest is therefore the starting point of many VIS4ML works. For example, DeepVID [9] presents the MD-array input of a classification model as a grid of images (Fig. 4(a\({}_{2}\))). From the visual appearance of the images, users can select the ones that are more likely to confuse the classifier to diagnose the model. As directly visualizing all input images in the grid will have a severe scalability issue, the authors use the images' extracted features to present an overview of them first before the grid layout. Specifically, a pre-trained CNN is used as a feature-extractor to extract the essential features of the input images. These high-dimensional features are then reduced to 2D through dimensionality reduction [84] and visualized as a scatterplot (Fig. 4(a\({}_{1}\))). Each point in the plot represents one input image and it is colored by its class label. From such an overview, images that are similar to both digit 4 and 9 can be easily selected to probe the classifier's decision boundary between these two classes. Note that the extracted features of the input images (from the third-party CNN) are not the intermediate data of the interpreted classifier and DeepVID is a model-agnostic interpretation method.
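A minimal sketch of this overview-by-projection idea is shown below; for brevity it projects the raw pixel features of the scikit-learn digits data with t-SNE instead of CNN-extracted features, which is a simplification of the DeepVID pipeline described above:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Overview of input instances: project each image to 2D and color it by class.
digits = load_digits()
embedding = TSNE(n_components=2, random_state=0).fit_transform(digits.data)

plt.figure(figsize=(6, 5))
scatter = plt.scatter(embedding[:, 0], embedding[:, 1],
                      c=digits.target, cmap="tab10", s=8)
plt.colorbar(scatter, label="class label")
plt.title("t-SNE overview of input instances")
plt.show()   # points sitting between two class clusters are candidates for inspection
```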
_Presenting Intermediate Data._ Intermediate data is the key to opening ML black-boxes [85, 86]. Heatmap is commonly used for its visualization, which presents data through a 2D matrix and uses the color of each matrix cell to encode the information. For example, DynamicsExplorer [82] adopts a heatmap (Fig. 4(b)) to investigate an LSTM-based DRL agent trained for the "ball-in-maze" game (Fig. 4(b\({}_{1}\))). To better handle the high-dimensionality of the intermediate hidden states, PCA is applied to the hidden states first. In Fig. 4(b\({}_{2}\)), the horizontal and vertical axes of the heatmap represent time and individual principal components, respectively. Users can brush horizontally to select the temporal range of interest and examine the hidden states (Fig. 4(b\({}_{3}\))).
_Presenting Output Data._ Parallel coordinates plots (PCPs) have been used widely to present the output of ML models. For example, Ren et al. [83] employ a PCP to visualize the prediction probabilities from a classification model. As shown in Fig. 4(c), each parallel axis denotes one class and the values on it show the predicted probabilities for the corresponding class. A polyline connecting the probabilities across classes shows the entire output probability distribution for an instance. Multiple instances are presented as multiple superimposed polylines, and their collective behaviors reveal the model's performance over classes. In Fig. 4(c), four MNIST images with similar probabilities to be digit '3' and '5' are shown as four polylines in the PCP.
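The sketch below illustrates the same idea with a stock pandas PCP over the predicted probabilities of a simple classifier on the scikit-learn digits data; it is a simplification of the purpose-built design in [83]:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Train a simple classifier and plot each instance's output probability
# distribution as one polyline across per-class axes.
digits = load_digits()
clf = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)
proba = clf.predict_proba(digits.data[:20])             # 20 instances, 10 classes

frame = pd.DataFrame(proba, columns=[f"class {c}" for c in range(10)])
frame["true label"] = digits.target[:20].astype(str)

plt.figure(figsize=(8, 4))
parallel_coordinates(frame, "true label", alpha=0.6)    # one polyline per instance
plt.ylabel("predicted probability")
plt.show()
```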
#### 5.1.2 Group-Level Data Presentation
Group-level presentation first aggregates data instances into groups and then visually encodes them. It focuses more on revealing group-level data patterns, instead of disclosing individual instances' local behaviors.
Fig. 4: Instance-level presentation. (a) The _input_ images are presented through a scatterplot, one point for one image. Image courtesy of Wang et al. [9] © 2019 IEEE. (b) The _intermediate_ hidden states are externalized through a heatmap, each row is an instance (principal component) and each column is a time step. Image courtesy of He et al. [82] © 2020 IEEE. (c) The _output_ probabilities are presented through a PCP, one polyline per instance. Image courtesy of Ren et al. [83] © 2016 IEEE.
_Presenting Input Data._ Histograms are a popular VIS technique for presenting data distributions across input feature values. For example, DECE [10] uses a big table of histograms to present the tabular data fed into ML models. As shown in Fig. 5(a), each row of the table is a subgroup of instances and each column is a data feature. The upward histogram in each table cell presents the distribution of the corresponding feature values (for the subgroup of instances). Based on the binary prediction results of the instances, the upward histogram is further divided into two juxtaposed ones, colored in blue and orange. Moreover, the counterfactual examples for the subgroup of instances are also generated and their feature value distributions are presented as a symmetric but downward histogram. The side-by-side comparison helps users formulate/verify hypotheses on different features.
_Presenting Intermediate Data._ Matrix visualization aggregates data instances into a 2D matrix and uses colors, sizes, or glyphs to encode the aggregated data inside each cell. For example, ActiVis [13] enables users to flexibly define data subgroups, e.g., by class labels. Aggregating the instances' activations from a DNN inside individual subgroups and comparing the aggregated activations across subgroups disclose the functionality of different DNN neurons. As shown in Fig. 5(b), each row/column of the matrix represents a subgroup/a neuron, and the circle inside a cell represents the aggregated response-level of the corresponding neuron (darker colors indicate stronger aggregated responses).
_Presenting Output Data._ A Sankey diagram can effectively illustrate how data instances are divided or merged into groups (often over time) and is a common technique for group-level data presentation. For example, VISTB [86] employs a Sankey-diagram to disclose the evolution of predictions over the training of a tree-boosting model. As shown in Fig. 5(c), each column of nodes presents the confusion matrix of the model at a time step. The color and filling pattern denote the predicted class and prediction correctness (solid: true positive (TP); strip: false positive (FP)), respectively. The bands between neighboring columns illustrate the flowing of instance groups between time steps. Their color reflects if the predictions of the corresponding groups are improved (green: from a FP to a TP cell), degenerated (red: from a TP to a FP cell), or not changed (gray). Such a visualization effectively monitors the model's performance evolution.
### _Explore Data_
Visual data exploration is "an undirected search for relevant information within the data" [87], in which users may not have a clear goal while playing with the data but rely on highly interactive interfaces and intermediate insights to drive the exploration. In VIS4ML, when data gets too large and/or contains multiple facets, explorations will have to come into the picture. Based on the exploration directions, we organize works into _vertical_ and _horizontal_ explorations.
#### 5.2.1 Vertical Exploration
Vertical exploration refers to the process of exploring data by following the order of either global-to-local (top-down) or local-to-global (bottom-up). The former starts by providing users with a succinct data overview, from which, users can drill down to low-level data details on-demand. In contrast, the latter first investigates part of the data locally with sufficient details. Based on the knowledge obtained from some representative data instances/features, the users then expand the exploration to the entire dataset.
The _top-down_ exploration follows Shneiderman's information seeking mantra [88] to present data through _overview +details_. For example, DeepVID [9] diagnoses incorrect predictions of image classifiers by first laying out all images using tSNE+scatterplot. The layout provides an overview of all images, guiding users to drill down to individual images of interest for detailed diagnosis. As shown in Fig. 4(a\({}_{1}\)), the user selects the instances between the purple and brown clusters (e.g., images with similar probabilities to be digit '4' and '9') through a lasso selection. Fig. 4(a\({}_{2}\)) presents the details of these images and enables the user to further investigate individual ones. Similarly, VATLD [6] lays out all images through a performance landscape, i.e., _TileSave_, for an overview. Each tile aggregates similar images and uses the instance with the median score to represent the tile. Interactive zooming empowers users to explore the space and drill down to finer data granularities on-demand.
Fig. 5: Group-level presentation. (a) The tabular _input_ data in DECE [10] are divided into subgroups and presented as histograms. Image courtesy of Cheng et al. [10] © 2020 IEEE. (b) The _intermediate_ DNN activations from subgroups of instances are aggregated in ActiVis [13] and presented as circles, whose color denotes the active level. Image courtesy of Kahng et al. [13] © 2017 IEEE. (c) A Sankey-diagram based temporal confusion matrix is used to present the _output_ prediction over the training of a tree-boosting model. Image courtesy of Wang et al. [86] © 2021 IEEE.
The _bottom-up_ exploration inspects individual instances first, and then expands the inspections to all instances to augment the findings. For example, LSTMVis [52] allows users to interactively define the active pattern of different LSTM hidden states through an on-off curve defined over a single instance. The pattern is then used as a template to match with all instances. From the semantics augmented by all matched instances, the authors confidently interpret what has been captured by different hidden states. DQN-Viz [89] closely examines how a DRL agent plays an Atari game in one game episode and uses a regular expression to define its playing strategy. The regular expression is then applied to all game episodes to search for when and where the same strategy was used to understand the agent's behaviors.
#### 5.2.2 Horizontal Exploration
Horizontal exploration explores data across multiple stages of the ML pipeline, multiple temporal iterations, or multiple data spaces to relate data and derive insights. For example, Rauber et al. [61] employ tSNE+scatterplot to visualize the activations of all data instances from early and later layers of a DNN, as shown in Fig. 6(a\({}_{1}\), a\({}_{2}\)). The two layouts clearly disclose how the forward-propagation separates data instances into different classes. Similarly, Fig. 6(b\({}_{1}\), b\({}_{2}\)) show the layouts for the DNN's last-layer activations from two training stages. Exploring these visualizations helps to understand the model's temporal evolution. DGMTracker [63] explores deep generative models layer-by-layer through statistics presented by line-chart snapshots to diagnose the model training process. The exploration traces data across neural network layers sequentially, which is considered a horizontal exploration. EmbeddingVis [90] simultaneously explores multiple graph embedding spaces generated for the same set of graph nodes by using different embedding algorithms. As shown in Fig. 7(c), the original graph space and three embedding spaces are presented as four juxtaposed scatterplots. Explicit links are used to connect the same graph nodes across spaces for coordinated explorations, which facilitates the comparison of the underlying embedding algorithms. Specifically, the DeepWalk and Node2vec algorithms perform similarly well in separating the selected nodes (in the red dashed line) into two subgroups, whereas the Struc2vec algorithm disperses them.
### _Compare Data_
Data comparisons in VIS4ML identify the similarity and difference of the operational data to support model understanding or diagnosis. They focus either on individual data _instances_ or _groups_ of instances, and the comparisons are often conducted either _within_ or _between_ instance(s)/group(s).
#### 5.3.1 Intra-Instance Comparison
The intra-instance comparison compares the same data instance before and after some modifications applied to either the data instance or the studied ML model.
For the first case (fix model, modify data), researchers modify a single data instance and examine how the modification impacts the ML model to probe its behavior. For example, SCANViz [91] uses a PCP to present the latent dimensions of a \(\beta\)VAE trained on images (Fig. 7(a)). By perturbing the value of a latent dimension and interactively decoding the perturbed latent representations back as images, users can conclude what the dimension has encoded. Specifically, the six images in Fig. 7(a) show six reconstructions of the same input image, but with different values on dimension 20. By comparing them, we can see this latent dimension mainly controls the _floor color_ of the 3D scene. More intra-instance comparisons include the works built upon what-if analyses and counterfactual examples [10, 45], which often perturb the input features of tabular data.
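The mechanics of such a latent-dimension perturbation can be sketched as follows; an untrained toy encoder/decoder pair stands in for the trained \(\beta\)VAE, so only the workflow, not the semantics, is illustrated:

```python
import torch
import torch.nn as nn

# Toy (untrained) encoder/decoder standing in for a trained beta-VAE; the point
# is only the mechanics: encode, perturb one latent dimension, decode, compare.
latent_dim = 20
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 28 * 28), nn.Sigmoid())

with torch.no_grad():
    image = torch.rand(1, 1, 28, 28)            # one input instance
    z = encoder(image)                          # its latent representation

    reconstructions = []
    for value in torch.linspace(-3.0, 3.0, steps=6):
        z_perturbed = z.clone()
        z_perturbed[0, 5] = value               # sweep a single latent dimension
        reconstructions.append(decoder(z_perturbed).view(28, 28))

# Comparing the reconstructions side by side reveals what dimension 5 encodes.
print(torch.stack(reconstructions).shape)       # torch.Size([6, 28, 28])
```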
For the second case (fix data, modify model), the data instance is intact but its intermediate/output representations become different due to model modifications. Comparing the instance's intermediate/output representations reveals the corresponding model's evolution. For example, Attention Flows [58] introduces a radial layout to compare the self-attention of a Transformer model on a sentence (a sequential data instance) before and after the model's fine-tuning. The comparison helps to understand how the fine-tuning process adapts the model to the data.
#### 5.3.2 Inter-Instance Comparison
Inter-instance comparison compares two or more instances, generating model insights based on the model's dissimilar behaviors on them. For example, AEVis [92, 93] interprets how an adversarially generated _panda_ image was incorrectly predicted as a _monkey_ by comparing the datapaths of a normal _panda_ image, its adversarial counterpart, and a normal _monkey_ image. As shown in Fig. 7(b), the three colors, blue, orange, and purple, correspond to the neurons that are activated by the three images, respectively. Connecting neurons of the same color across layers forms the datapath for the corresponding image. The authors also design a new visualization to effectively present these datapaths and their evolution patterns over time (Fig. 7(b), bottom). Comparing the datapaths of the three images, especially where the datapath of the adversarial image diverges from the _panda_ and merges into the _monkey_, helps to locate where the adversarial attack happens. Similarly, GANViz [60] compares a pair of real and generated images from a GAN model to study how its discriminator works in the adversarial settings.
Note that some interpretation methods may fall into both intra-instance and inter-instance comparison based on how the comparison was conducted. For example, when interpreting ML models with counterfactual examples, the examples could be generated by perturbing a single data instance of interest. Only one instance is involved in this case and the work belongs to our "intra-instance" comparison category. Nevertheless, there are also works generating counterfactual examples by searching from the existing data instances. In this case, two or more instances will be involved and it falls into our "inter-instance" comparison category.
Fig. 6: Explore data across different DNN layers (a1, a2) or training iterations (b1, b2). Image courtesy of Rauber et al. [61] © 2016 IEEE.
#### 5.3.3 Intra-Group Comparison
The intra-group comparison in VIS4ML either (1) compares different models' performance using the same group of instances for a fair evaluation; or (2) compares the same group of instances at different stages of a model to understand its evolution. For case (1), EmbeddingVis [90] compares different embeddings of the same set of graph nodes generated from different embedding algorithms. As shown in Fig. 7(c), each scatterplot shows the dimensionality reduction result for the embedding generated by one algorithm. Embeddings from different algorithms are comparable since they are for the same set of instances. Also, there are one-to-one correspondences across the embeddings, as reflected by the curves connecting the instances across plots. For case (2), Xiang et al. [16] propose DataDebugger to interactively correct input data with incorrect labels over multiple iterations. In each iteration, the distribution of data instances and their prediction statistics are presented through the proposed incremental tSNE. Comparing the distributions and statistics for the same group of instances across iterations discloses the data quality improvement over time.
#### 5.3.4 Inter-Group Comparison
Inter-group comparison divides data into subgroups and compares the behavior discrepancy among the subgroups to interpret ML models. For example, ActiVis [13] interprets DNNs by allowing users to flexibly define instance groups (e.g., misclassified instances with common features) and aggregate the activations of the same group for cross-group comparisons (explained in Fig. 5(b)). FairVis [48] compares the performance across subgroups of instances with different features to disclose the biases hidden in predictive models. As demonstrated in Fig. 7(d), each row of strip plots presents the studied model's performance with one metric (e.g., accuracy, precision, and recall), and each strip bar (inside a row) represents one subgroup. In the top row, the red _Female_ group has 10% higher accuracy than the blue _Male_ group, indicating potential gender discrimination. To investigate how adversarial attacks work in CNNs, Bluff [66] divides the input images into three groups: images of the original class, images of the target class, and original class images that have been successfully attacked. By comparing the active neurons from the three groups and their pathways across neural layers, the authors disclose what alternative pathways were exploited to make the attacks successful.
### _Assess Data_
The VIS4ML efforts in data assessment come from three major directions: (1) monitor the quality of input data to detect data deficiencies; (2) assess the output from ML models for their evaluations; (3) diagnose ML models' input and output to disclose biases rooted in both data and models.
#### 5.4.1 Assess Input Data - Data Quality
As input data define the performance upper bound of ML models [95, 15], it is crucial to guarantee their quality before training. VIS can help to expose data deficiencies or reveal the drift of data distributions, and thus, has been adopted widely in input data assessment [96, 20].
Fig. 7: Comparison: (a) Intra-instance: SCANViz compares the reconstructions of the same image. Image courtesy of Wang et al. [91] © 2020 IEEE. (b) Inter-instance: AEVis compares the datapaths of three images to diagnose adversarial attacks. Image courtesy of Cao et al. [92] © 2020 IEEE. (c) Intra-group: EmbeddingVis compares the embeddings for the same group of instances from different models. Image courtesy of Li et al. [90] © 2018 IEEE. (d) Inter-group: FairVis compares model performance across instance groups. Image courtesy of Cabrera et al. [48] © 2019 IEEE.
ConceptExplorer [20] uses a line chart (with glyphs) to monitor the drift level of time series data. Specifically, the sequential data are first fed into a predictive model and concept drifts are detected based on the model's error rate in a sliding time window. The error rate remains stable when there is no drift, but increases abnormally when drift happens. Based on this, the line chart uses strip glyphs to highlight suspicious regions. As shown in Fig. 8(a\({}_{1}\)), \(p_{i}\) denotes the prediction error at step \(i\) and \(1-p_{i}\) is the accuracy. \(p_{min}\) denotes the minimum error rate in the time window ending at step \(i\). The strip glyphs present the magnitude of accuracy drops in the suspicious drift regions. Based on the drop level, different glyphs (e.g., empty circles, filled circles/triangles with a cross) are used to mark important steps in Fig. 8(a\({}_{2}\)). Similarly, DriftVis [96] also monitors the drift level of time-series data with a line chart, in which the drift level is measured through the energy distance between the newly arriving and existing data. OoDAnalyzer [7] detects out-of-distribution (OoD) samples in test data, whose features are not well-covered by the training data. Superior to conventional methods that only offer an OoD score for a sample, OoDAnalyzer visualizes the sample together with its similar neighbors as a context for investigation. An efficient grid layout algorithm has also been introduced to hierarchically explore enormous data samples and detect the OoD ones.
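A bare-bones version of the sliding-window error-rate monitoring used for drift detection can be sketched as follows; the errors are simulated, and the cited systems use richer statistics (e.g., the energy distance) and visual encodings on top of this idea:

```python
import numpy as np

# Simulated stream of per-step prediction errors (1 = wrong, 0 = correct):
# the error rate jumps after step 300, mimicking a concept drift.
rng = np.random.default_rng(0)
errors = np.concatenate([rng.random(300) < 0.10,
                         rng.random(200) < 0.35]).astype(float)

window = 50
drift_threshold = 0.15          # flag a window whose error rate exceeds this value

for i in range(window, len(errors) + 1):
    p_i = errors[i - window:i].mean()           # error rate in the sliding window
    if p_i > drift_threshold:
        print(f"possible drift around step {i} (window error rate {p_i:.2f})")
        break
```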
#### 5.4.2 Assess Output Data - Performance Analysis
Evaluating ML models' performance is a fundamental ML task and multiple numerical metrics have been proposed. However, these metrics are often overly aggregated, preventing ML practitioners from gaining performance insights in a finer data granularity. Many novel visualizations have been proposed to address this issue, which visualize models' performance either _after_ or _over_ their training.
For evaluations _after_ model training, Squares [83] is a typical example that improves the confusion matrix visualization for multi-class classifiers. As shown in Fig. 8(b), each square represents a data instance and its vertical position reflects the probability for the corresponding class (i.e., \(C3\) here). The squares on the left of the axis (outlined boxes) are \(C3\) instances but mis-predicted as other classes (i.e., false negatives). Their color reflects the predicted class. The squares on the right are instances being predicted as \(C3\), the solid ones are true positives and the striped ones are false positives (with their color reflecting the true class label). For scalability concerns, the squares can be aggregated into strips/stacks and multiple such visualizations can be presented in parallel for multiple classes (Fig. 4(c)). The design presents not only the confusion matrix but also the prediction confidence, and enables users to interact with individual instances for diagnosis. Similar examples in this group include Confusion Wheel [97] and ModelTracker [98].
The second group of evaluations tracks ML models' performance _over_ training to monitor their evolution. For example, Wang et al. [86] propose a Sankey-diagram based temporal confusion matrix, as we have explained in Fig. 5(c). The visualization not only reflects the model's quality, but also tracks the improved and degenerated data instances (through the green and red bands between neighboring Sankey nodes) for model diagnosis. There are multiple other visualizations revealing the temporal performance evolution for different ML models, e.g., [99, 100, 101].
#### 5.4.3 Assess Fairness - Bias Analysis
With the rising concerns about fairness in ML, bias analysis becomes increasingly important. Biases can stem from the input data, undesirable training outcomes (e.g., feature intersections), or the way that data are presented (e.g., content biases).
To study _input data biases_, CoFact [46] divides input tabular data into three groups based on a feature condition: (1) instances satisfying the condition; (2) instances that do not satisfy the condition but are similar to those in (1) in other features; (3) instances that do not satisfy the condition and are not similar to (1). By comparing the three groups and their feature value distributions, the authors successfully expose the confounding factors in the tabular data. For image data, DendroMap [69] uses treemaps to hierarchically explore a large number of input images. From the exploration, the authors notice that _sunscreen_ images often come with lighter skin colors. This feature co-occurrence misleads ML models from learning the right features of _sunscreen_, and should be exposed before model training.
To expose _intersectional biases_ hidden in well-trained predictive models, FairVis [48] compares models' performance across feature combinations. It has been noticed that an ML model with fair performance on individual features may yield unfair performance on feature combinations. For example, a loan eligibility model can generate similar approval rates for _Male_ and _Female_ applicants, and similar approval rates for _White_ and _Black or African American_ applicants. However, its approval rates for _Male + White_ applicants may be much higher than those of the _Female + Black or African American_ applicants. To disclose this, FairVis uses multiple strip plots (Fig. 7(d)) to compare ML models' performance in subgroups defined by different feature combinations.
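The underlying computation can be sketched with a simple group-by over a tiny synthetic prediction table; the column names and values below are illustrative only and are not taken from any cited benchmark:

```python
import pandas as pd

# Predictions of a hypothetical model; crafted so that per-feature accuracy
# looks fair while feature combinations expose an intersectional gap.
df = pd.DataFrame({
    "gender":  ["Male", "Male", "Male", "Male", "Female", "Female", "Female", "Female"],
    "race":    ["White", "White", "Black", "Black", "White", "White", "Black", "Black"],
    "label":   [1, 0, 1, 0, 1, 0, 1, 0],
    "predict": [1, 0, 0, 1, 0, 1, 1, 0],
})
df["correct"] = (df["label"] == df["predict"]).astype(int)

# Per-feature accuracy is identical (0.50 for every gender and race group) ...
print(df.groupby("gender")["correct"].mean())
print(df.groupby("race")["correct"].mean())

# ... while accuracy on feature *combinations* reveals gaps (1.00 vs. 0.00).
print(df.groupby(["gender", "race"])["correct"].mean())
```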
_Content biases_, where similar contents were not treated equivalently, have also been examined in VIS4ML. For example, graph nodes with similar ranking scores may not be given similar exposures due to their ranking positions. FairRankVis [94] (Fig. 8(c)) addresses this problem by clustering nodes (squares in blue or orange) based on their ranking scores and organizing nodes of the same cluster into a horizontal rectangle (with black strokes) for equal exposure. In Fig. 8(c), the bottom cluster from the left side has 10 nodes with very similar scores (0.123\(\sim\)0.124) and they are organized into the same rectangle to reduce the content bias that may position them far apart. The system can also compare the rankings from two models (i.e., the "Base Model" and "Target Model" in the figure).
Fig. 8: (a) Glybs designed to identify concept drift. Image courtesy of Wang et al. [20] © 2020 IEEE. (b) Each square represents one instance and its vertical position shows the class probability. The square glybps and their position also encode the prediction correctness. Image courtesy of Ren et al. [83] © 2016 IEEE. (c) Graph nodes (in orange and blue) are clustered by their ranking score and nodes of the same cluster are presented in a rectangle for similar exposure. Rankings from two models can also be compared. Image courtesy of Xie et al. [94] © 2021 IEEE.
### _Generate Data_
Data generation extends the dataset \(X\) in Eq. 1 by introducing new instances with desired features. These features can be used to probe ML models' behaviors for better understanding/diagnosis (e.g., "what-if" analyses) or refine ML models to better cover some corner cases (e.g., adversarial training). This task is very specific to VIS4ML and it is not well-covered in earlier VIS surveys. The essence of data generation is feature augmentation, which can be conducted (1) _directly_ in the data space or (2) _indirectly_ in a latent space.
#### 5.5.1 Augment Data Directly in the Data Space
The features of individual instances are often interpretable, e.g., the _age_ and _capital-gain_ fields of a tabular census dataset. Their semantics enable users to directly perturb their values and probe ML models' behaviors. For example, the What-If Tool [45] provides a Datapoint Editor View to allow users to directly modify instances' feature values (e.g., increasing the _capital-gain_). By feeding the new instances back to the ML models and checking their performance discrepancy, the users can verify different hypotheses on the models. VIS4ML works based on counterfactual examples, e.g., [10, 103], are along the same line and they may rely on automatic algorithms to generate new instances.
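A minimal sketch of such a what-if probe is shown below, using a toy model trained on synthetic tabular data; the feature names are illustrative and not the actual census features used by the cited tools:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Train a toy approval model on synthetic tabular data.
rng = np.random.default_rng(1)
train = pd.DataFrame({
    "age":          rng.integers(20, 65, size=200),
    "capital_gain": rng.integers(0, 10_000, size=200),
})
train["approved"] = (train["capital_gain"] + 50 * train["age"] > 6_000).astype(int)
model = RandomForestClassifier(random_state=0).fit(
    train[["age", "capital_gain"]], train["approved"])

# What-if analysis: take one instance, perturb a single feature, and compare
# the model's predictions before and after the edit.
instance = pd.DataFrame({"age": [30], "capital_gain": [1_000]})
edited = instance.assign(capital_gain=8_000)

print("original :", model.predict_proba(instance)[0])
print("edited   :", model.predict_proba(edited)[0])
```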
Besides tabular data, MD-array data (e.g., images) are also frequently perturbed to probe ML models' behaviors. For example, Bilal et al. [104] generate new image instances by decoloring (i.e., from RGB to gray-scale) or rotating existing ones. By feeding those new images into CNNs, they identify color-invariant and rotation-invariant classes where the CNNs perform well regardless of the corresponding images' color/rotation. Wang et al. [105] synthesize two controlled datasets from the original dataset by adding: (1) extra information about a studied concept; (2) random noises that are not related to the concept. The two datasets are then used to train two ML models with the same architecture and configurations, separately. Based on the models' performance discrepancy under the controlled settings, different hypotheses can be verified through statistical significance.
Apart from model understanding and diagnosis, the generated data can also be used to refine ML models. For example, ConceptExtract [102] trains a lightweight ML model to extract image concepts (e.g., stripe and shadow) learned by a large CNN. Using the system, the users identified a weakness of the CNN in detecting objects with shadows, and overcame it by reinforcing the model to learn more from images with shadows. As shown in Fig. 9(a), more training images are generated by directly adding artificial shadows to the original ones. The CNN fine-tuned on them demonstrated considerable performance improvement.
#### 5.5.2 Augment Data Indirectly in a Latent Space
New data instances can also be generated by encoding the existing instances into a latent space, modifying their latent representations, and decoding them back to the data space. Instances generated in this way often present smooth features with fewer artifacts, as the modifications on their latent representations will impact the reconstructions globally.
For example, DeepVID [9] interprets how a CNN differentiates digit '4' and digit '9' images by generating new images smoothly transferring from '4' to '9' to probe the CNN's decision boundary. A VAE encoder is used to transform the two images into a 10D latent space, presented by the PCP in Fig. 9(b\({}_{1}\)). The orange and blue polylines denote the 10D latent representations of the two images. Then, the two polylines are linearly interpolated inside individual latent dimensions (i.e., within the cyan band). Lastly, by sampling polylines from the interpolated regions and feeding them into the corresponding VAE decoder, semantically meaningful images are generated. As shown in Fig. 9(b\({}_{2}\)), the generated images present features smoothly transferring from the digit '4' (the top-left one) to '9' (the bottom-right one). Using them, a binary surrogate model can be trained to mimic the original CNN and delineate the decision boundary between the two classes.
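The interpolation mechanics can be sketched with a PCA encoder/decoder standing in for the trained VAE; this is a deliberate simplification that keeps the example self-contained:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Encode two instances into a low-dimensional space, walk along the segment
# between their latent vectors, and decode each point back to pixel space.
digits = load_digits()
pca = PCA(n_components=10).fit(digits.data)

four = digits.data[digits.target == 4][0]
nine = digits.data[digits.target == 9][0]
z_four, z_nine = pca.transform([four])[0], pca.transform([nine])[0]

generated = []
for alpha in np.linspace(0.0, 1.0, num=8):
    z = (1 - alpha) * z_four + alpha * z_nine          # point on the latent segment
    generated.append(pca.inverse_transform([z])[0])    # decode back to 8x8 pixels

print(np.array(generated).shape)                       # (8, 64)
```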
VATLD [6] and VASS [7] are similar works along this line, which use \(\beta\)VAE and CVAE to extract visual concepts from images and encode them into orthogonal latent dimensions. VIS4ML helps to interpret those dimensions and facilitates adversarial training algorithms in manipulating the latent representations. By decoding the new latent representations back to the image space, the authors obtain images with augmented features that can be used to further fine-tune and improve the corresponding ML models.
### _Improve Data_
Refining ML models can be accomplished by optimizing the architectures/hyper-parameters of the models or improving the quality of their input data. As techniques for the former continue to mature, model developers are increasingly recognizing that achieving greater performance gains from the latter is comparatively easier. This has led to the recent rise in popularity of data-centric AI [15, 95]. As the data contain two parts, i.e., \(X\) and \(Y\) in Eq. 1, their improvements also come from two aspects: the _features_ and the _supervision_.
Fig. 9: (a) Images are augmented by adding artificially generated shadows. Image courtesy of Zhao et al. [102] © 2021 IEEE. (b) DeepVID generates images between the to-be-interpreted digits ‘4’ and ‘9’ (b\({}_{2}\)) by interpolating their latent vectors (b\({}_{1}\)). Image courtesy of Wang et al. [9] © 2019 IEEE.
#### 5.6.1 Improve Features
The features encoded in individual data instances are what the ML models learn from. Improving them can thus be conducted by curating instances with desired features or selecting/synthesizing better features.
_Instance curation_ has been conducted by (1) selecting instances with more desired features, (2) excluding instances with undesired features, and (3) matching the feature coverage in training and test data. For case (1), Ye et al. [17] introduced an interactive data curation system to guide the training of GANs in generating intended features (e.g., _happy faces_). The system progressively trains multiple binary classifiers to predict if an image includes the contents to be generated or not. These classifiers form a committee to vote out the most disagreed instances, which are then presented to users for manual labeling. For case (2), DGMTracker [29] diagnoses deep generative models by disclosing the training details of individual instances, from which, the authors identified training failures caused by outlier instances, e.g., a _plane_ image with a large portion of blue sky. They tried to exclude those outliers from training for a quick fix and also proposed theoretical solutions to fix the issue. For case (3), as ML models are trained and tested on separate datasets (to avoid over-fitting), ensuring the features of test instances are well covered by the training instances is crucial. For example, a _cat-dog_ classifier trained on black-_cat_ and white-_dog_ images will perform badly on a white-_cat_ image, which is an OoD sample to the classifier. OoDAnalyzer [19] visually identifies such samples from test data through an ensemble OoD detection method and an efficient \(k\)NN-based grid layout of images. After identifying them, model developers can add the images with the missing features into the training data to fine-tune the ML models. Assessing and improving data happened sequentially in this work.
_Feature selection/synthesis_ improves data by adding/excluding/transforming features. The operation differs from instance curation, as it affects all instances rather than some of them. For example, FeatureEnVi [49] helps users generate, transform, and select features to train XGBoost models. The system first ranks the features of tabular data using multiple automatic feature-importance metrics. The rankings then guide users to exclude less important ones. A radial hierarchical graph is introduced to convey the importance of features in different data slices. With this graph, the users can decide if a moderately important feature should be excluded or not. The hierarchical graph and embedded glyph visualizations also present statistics (e.g., correlation, mutual information) between features, assisting users in transforming and combining existing features to generate new ones. Similar feature selection and composition works have also been proposed for logistic regressions [106], deep sequence models [107], and ensemble models [18, 99].
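As a generic sketch of metric-guided feature selection (not FeatureEnVi itself), one automatic importance score such as mutual information can be computed per feature and used to keep only the top-ranked columns; the dataset and cutoff below are arbitrary:

```python
# Hedged sketch: rank tabular features by mutual information and keep the top ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=0)
scores = mutual_info_classif(X, y, random_state=0)   # one importance score per feature
ranking = np.argsort(scores)[::-1]                   # most informative features first
keep = ranking[:4]                                   # arbitrary cutoff: retain top-4 features
X_selected = X[:, keep]                              # reduced feature matrix for training
print("feature ranking:", ranking, "| kept columns:", keep)
```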
#### 5.6.2 Improve Supervision
Supervision is the annotation information associated with the data that guides the training toward the learning goal. Therefore, clearer and more explicit supervision often leads to easier model training and better model performance.
Training data from various sources often suffer from noisy/missing/incorrect label information. Interactive VIS tools are very effective to incorporate human input and improve the label quality in these cases. For example, to correct the mis-labeled training instances, DataDebugger [16] proposes a hierarchical layout, enabling users to explore a large number of training samples in a top-down manner. The higher hierarchy levels present fewer samples for an overview, and users can drill down to lower levels for more samples' details. This scalable layout provides users an interface to select data instances of interest and they can interactively correct their labels and convert them into trusted instances. An automatic label-error detection algorithm is then applied on them to propagate their labels and further identify other mis-labeled instances for iterative correction.
Besides labels, the supervision can also take the form of other types of annotation. For example, Bilal et al. [104] explore the hierarchy of classes in the ILSVRC 2012 dataset (e.g., both _cat_ and _dog_ are _mammal_, which is a subclass of _animal_). Integrating the class hierarchy into the training of a CNN, the authors successfully accelerate the training and improve the model's accuracy. GenNI [108] introduces an interactively defined constraint graph to guide the text-generation process. Following the constraint graph, the ML model first generates/forecasts several output sentences, based on which the users examine individual outputs and refine the constraint graph. This _Refine-Forecast_ paradigm, combining the efforts from both humans and AI, iteratively supervises the model's generative behavior and improves the outputs' quality. There are also works that leverage the information from different modalities of multi-modality data to mutually reinforce the supervision in respective modalities. For example, MutualDetector [81] integrates caption supervision with object detection to improve both the noisy captions and imprecise bounding box information. The work extracts labels from image captions, which are then used to supervise the training of the object detector. The objects extracted from the detector, in return, further improve the captions' quality.
## 6 Research Opportunities
This section examines the distributions of the 143 papers across the 5 data types, 6 VIS4ML tasks, and their intersections. The distributions reveal which parts of the taxonomy that existing works focus on and which parts have not been sufficiently explored, unveiling potential opportunities.
### _Opportunities From Data Types_
From the data type distribution (Fig. 10, left), it is very obvious that existing VIS4ML works focus more on tabular, sequential, and MD-array data, whereas the graph and multi-modality data are less covered.
Fig. 10: Paper distributions across 5 data types and 6 VIS4ML tasks.
_Opportunity 1: interpreting ML models for graph data._ Compared to the first three data types, graph data is more irregular and difficult to handle, especially heterogeneous graphs. However, we envision that more VIS4ML works will emerge for this type of data, for the following reasons. First, graph is a powerful way to structurally organize data and convey their relational information, e.g., a citation graph connects discrete papers and builds relationships among them. Its unique merits keep the amount of graph data consistently growing. Second, advanced graph learning models, e.g., GNNs, are also evolving fast, as is the demand for their understanding, diagnosis, and refinement.
_Opportunity 2: coordinated analysis of multiple data modalities with multiple ML models._ We have observed an increasing number of ML works that integrate the learning outcomes from different modalities of multi-modality data for better performance. For example, the sentiment analysis model in [79] is trained on facial expressions (video), voices of the speakers (audio), and the corresponding textual transcripts (text). Multiple ML models are often involved in these works to take advantage of their respective strengths in handling different data modalities. Given the popularity of these ML works, two VIS directions are very promising for this data type. _First, exploring and relating different modalities of multi-modality data with coordinated multiple views._ Coordinated visual explorations have been repeatedly verified to be effective in handling multi-faceted data [109], and the techniques are readily transferable to the increasingly complex multi-modality data from ML. _Second, mutual enhancement of the information between different data modalities._ The underlying connections between different data modalities can be used to mutually reinforce the information inside each. This is a good way to improve data by leveraging the implicit information inside a modality as explicit supervision for the other. For example, MutualDetector [81] improves the noisy image captions and imprecise bounding boxes of image objects by borrowing information from each other as supervision. VIS plays a critical role here, as it provides the necessary guidance to better bridge different data modalities and facilitates the mutual enhancement between them.
### _Opportunities From Data-Centric Tasks._
From the paper distribution over the six tasks (Fig. 10, right), presenting and exploring data are the most fundamental tasks conducted by most VIS4ML works. Only a few short papers with static visualizations do not involve data exploration. Comparing and assessing data are also commonly performed for model understanding/diagnosis. However, fewer works cover the tasks of generating and improving data. These two tasks contribute substantially to model refinement but have not been sufficiently explored.
_Opportunity 3: model refinement with data generation and improvement._ With the rapid evolution of XAI, researchers are no longer satisfied with works that only help to understand or diagnose ML models, but are eager to see how VIS can help to further refine the corresponding models. In practice, model _understanding and diagnosis_ are often the prerequisites for model _refinement_. In the era of data-centric AI [15], we believe many model refinement opportunities reside in data generation and improvement. These two tasks generate data with desired features or improved supervisions to refine ML models from the data perspective. They help to convert insights obtained from model _understanding_ and/or model _diagnosis_ into direct model _refinement_ actions, demonstrating the very practical role that VIS can play.
_Opportunity 4: involving humans more smartly in the data-centric VIS4ML tasks while minimizing their labor effort._ For all six tasks, especially the last two, the inputs from humans often play important roles, e.g., label corrections with users' prior knowledge on different classes [16]. In fact, human-in-the-loop analyses have been adopted in many VIS4ML works [17, 102]. Nevertheless, some works still require intensive human interventions, making the explorations or analyses not friendly enough to users. Therefore, it is worth further effort to better team up humans and AI, leveraging human intelligence while minimizing their labor effort. Some seminal VIS4ML works, e.g., [6, 108], have started such explorations, e.g., asking humans to provide key controls only and leaving the heavy-lifting part to automatic AI algorithms. This direction also opens up the opportunity to more effectively combine human-computer interaction (HCI) and VIS techniques to better serve ML.
### _Opportunities From Data-Task Intersections_
Fig. 11 shows the cross-distribution between the five data types and six tasks. In it, papers concentrate in the top-left, i.e., the intersections between the first three data types and the first four tasks, echoing the two marginal distributions in Fig. 10. From the lighter-colored cells, we have identified several more research opportunities.
_Opportunity 5: input data assessment and bias analysis for data-centric AI._ The output-performance analysis currently dominates the assess task in Fig. 11. With more efforts on data-centric AI, we look forward to the increase of input-quality assessment works and they will cover more diverse data types (e.g., the graph and multi-modality data). Also, the input-quality assessment is the prerequisite for further data improvement and/or generation, echoing the earlier _Opportunity 3_. Moreover, with the rising concerns on model fairness, we would also expect the number of bias-analysis works from the assess task to increase. Existing works in this category mostly focus on tabular data, as the semantically meaningful tabular features (e.g., _gender_ and _race_) could be naturally considered as protected features. However, biases do exist in other data types, e.g., undesired feature co-occurrences in MD-array data [69] or unfair node exposures in graphs [94], and more works are waiting to be proposed to fill this gap.
Fig. 11: Joint distribution of papers across data and task sub-categories.
_Opportunity 6: more general and scalable visualizations for heterogeneous and large-scale data._ From Fig. 11, we also find that some data-centric tasks have only been performed on one data type due to the limited generalizability of the corresponding tasks (or VIS techniques). For example, indirect data generation has only been explored on MD-array data, mostly images. This is largely due to the success of CNN-based encoder-decoder frameworks. On the other hand, it also indicates a great research opportunity in indirect data generation for other data types. To better handle data heterogeneity, a general solution that can accomplish a data-centric task across all different data types is preferable. Furthermore, as ML models are often trained on a large number of input data instances and generate a massive amount of intermediate (e.g., activations from DNNs) and output data, extending existing visualizations to make them more scalable is also a promising direction. For example, DendroMap [69] explores large-scale image datasets by hierarchically clustering the HD image representations and enabling users to explore images of interest via interactions. A better understanding of the images leads to a better comprehension of the corresponding ML models' behavior.
## 7 Discussion and Limitations
Our survey has several inherent limitations. _First_, our taxonomy is inevitably impacted by our view of the VIS4ML problem. Although the co-authors all have years of experience working on ML and VIS, certain choices of the papers and categorizations have been influenced by our past experience. This limitation stems from the subjective nature of a survey paper. However, as our taxonomy covers the majority of the VIS4ML literature well and our analysis comes with concrete statistics, we are confident that our survey provides valuable insights into this area.
_Second_, there are also subjective decisions over the coding of individual papers. For example, some papers focused on proposing solutions to compare ML models, but also presented brief cases that slightly improve the models. Whether coding the papers with the improve task or not is thus subjective. To mitigate this problem, we provide a spreadsheet in our Supplementary Material, i.e., ReasonCode.xlsx, summarizing all the 143 papers and briefly explaining why we code individual papers into their respective categories. Readers can use the spreadsheet to understand our coding rationales and suggest different codings. We believe the well-documented reasons will help to track and improve our labeling of the papers.
_Lastly_, there are many other venues with VIS4ML works (e.g., CHI, IUI, and ACL) that we could not conduct an exhaustive search on, due to the limited length of this survey. We have considered selectively including some papers from them as they are equivalently important. However, it would involve more subjective decisions and make the survey less self-contained. Furthermore, those venues also have their respective focuses beyond VIS (e.g., ML or HCI). In contrast, the current six venues we have covered all have a dominant focus on VIS. Considering these factors, we made the deliberate choice to confine our survey within the six VIS venues only, rather than inundating readers with VIS4ML contributions from a diverse array of sources. We hope our survey can work as a starting point to pique readers' interest in reexamining VIS4ML papers, even for those outside of the six venues, through a data-centric lens.
## 8 Conclusion
In this paper, we review the latest VIS4ML works (143 papers) from the past seven years and introduce a data-centric taxonomy to organize them. Our taxonomy first identifies the data types that individual works have focused on and categorizes them into five groups. Then, focusing on the VIS operations applied to these data, we elicit six data-centric VIS4ML tasks and explain how individual tasks have been conducted. Lastly, based on our review and the paper distribution, we provide insights into the current VIS4ML endeavors and envision future research directions.
|
2305.09146 | Exotic Hadrons from Scattering in the Diabatic Dynamical Diquark Model | The diabatic framework generalizes the adiabatic approximation built into the
Born-Oppenheimer (BO) formalism, and is devised to rigorously incorporate the
mixing of BO-approximation eigenstates with two-particle thresholds. We
recently applied this framework in a bound-state approximation to the mixing of
hidden-charm dynamical-diquark tetraquark states with open-charm di-meson
thresholds. Since almost all of these states are observed as above-threshold
resonances, we here implement the corresponding scattering formalism to allow
for a study of exotic tetraquark resonances within the diabatic framework. We
calculate elastic open-charm di-meson cross sections (in channels with zero,
open, and hidden strangeness) as functions of center-of-mass energy, and
observe the development of true resonances, near resonances, and various
threshold cusp effects. As an example, $\chi_{c1}(3872)$ can originate in the
$1^{++}$ channel as a diquark-antidiquark state enhanced by the $D^0
\overline{D}^{*0}$ threshold, with or without an additional contribution from
the conventional charmonium $\chi_{c1}(2P)$ state. | Richard F. Lebed, Steven R. Martinez | 2023-05-16T03:52:19Z | http://arxiv.org/abs/2305.09146v2 | # Exotic Hadrons from Scattering in the Diabatic Dynamical Diquark Model
###### Abstract
The diabatic framework generalizes the adiabatic approximation built into the Born-Oppenheimer (BO) formalism, and is devised to rigorously incorporate the mixing of BO-approximation eigenstates with two-particle thresholds. We recently applied this framework in a bound-state approximation to the mixing of hidden-charm dynamical-diquark tetraquark states with open-charm di-meson thresholds. Since almost all of these states are observed as above-threshold resonances, we here implement the corresponding scattering formalism to allow for a study of exotic tetraquark resonances within the diabatic framework. We calculate elastic open-charm di-meson cross sections (in channels with zero, open, and hidden strangeness) as functions of center-of-mass energy, and observe the development of true resonances, near resonances, and various threshold cusp effects. As an example, \(\chi_{c1}(3872)\) can originate in the \(1^{++}\) channel as a diquark-antidiquark state enhanced by the \(D^{0}\overline{D}^{*0}\) threshold, with or without an additional contribution from the conventional charmonium \(\chi_{c1}(2P)\) state.
Exotic hadrons, diquarks, scattering
## I Introduction
Reaching the 20-year anniversary of the first clear experimental evidence for the existence of heavy-quark exotic hadrons--the observation of the charmoniumlike state now called \(\chi_{c1}(3872)\)[1]--the field of hadron spectroscopy now faces the same scientific challenges shared by many other areas of study. Some definitive answers on the nature of these states have been obtained, but many of the original questions remain, and many new questions have arisen. More than 60 heavy-quark exotic candidates have been observed to date, notably some of which were first seen shortly after the 2003 discovery of \(\chi_{c1}(3872)\) by Belle [1] (_e.g._, \(Y(4260)\) in 2005 by BaBar [2], which has subsequently been determined by BESIII to consist of more than one state [3]). Despite a longstanding need for a theoretical paradigm to describe the structure, production, and decays of these states, no universally predictive model has emerged capable of accommodating all of them [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. A number of these exotic candidates (some of which are listed in Table 1) lie remarkably close to some particular di-hadron threshold, the most notable example being \(\chi_{c1}(3872)\):
\[m_{\chi_{c1}(3872)}-m_{D^{0}}-m_{D^{*0}}=-0.04\pm 0.09\ \mathrm{MeV}, \tag{1}\]
using the averaged mass value for each particle provided by the Particle Data Group (PDG) [17]. Clearly, it can be no coincidence that so many of these states appear near a threshold. Some of them, such as \(\chi_{c1}(3872)\), lie close _below_ the corresponding threshold, suggesting a possible description via a di-hadron molecular picture, with the hadron pair (in this case, \(D^{0}\bar{D}^{*0}\) plus its charge-conjugate) being bound in part via \(\pi^{0}\) exchange. In fact, this interpretation has a rich history, in some cases long predating the \(\chi_{c1}(3872)\) discovery [18; 19; 20]. Others, such as the \(Z\) states of Table 1, lie close _above_ a threshold, discouraging the naive meson-exchange molecular description. A complete, self-consistent model must be able to describe the relation between these exotic states and their nearby thresholds, as well as states that lie relatively far from any di-hadron threshold, such as \(Z_{c}(4430)\) or many of the \(J^{PC}=1^{--}\)\(Y\) states.
Adding to the puzzle, \(\chi_{c1}(3872)\) exhibits some behaviors that seem to imply the importance of short-distance components of its wavefunction, such as in its appreciable decays to \(J/\psi\), \(\chi_{c1}\), and \(\gamma\psi(2S)\), with the radiative decays being especially significant in this regard. However, given the tiny binding energy [Eq. (1)] available to a molecular \(\chi_{c1}(3872)\), one would expect its observables to be utterly dominated by long-distance interactions. This contradiction, in part, has led to the long-standing view that \(\chi_{c1}(3872)\) contains at least some component of the fundamental charmonium state \(\chi_{c1}(2P)\)[21]. But an alternate short-range, color-attractive configuration is available to the \(\chi_{c1}(3872)\), in the form of a diquark-antidiquark pair: \((cu)_{\overline{3}}(\bar{c}\bar{u})_{\overline{3}}\).
In fact, one approach using this paradigm, the _dynamical diquark model_[22; 23], has made strides in successfully representing the \(\chi_{c1}(3872)\) as an exotic diquark-antidiquark state, as well as generating the full accompanying spectra of both tetraquark and pentaquark exotic multiplets in multiple flavor sectors [23; 24; 25; 26; 27; 28; 29; 30]. These advances include the incorporation of effects such as spin- and isospin-dependent interactions, SU(3)\({}_{\mathrm{flavor}}\) mixing, and most recently, mixing between diquark-antidiquark states and nearby di-hadron thresholds [31].
While the original dynamical diquark model calculations were performed assuming that di-hadron thresholds close in mass to those of the diquark-antidiquark states can be neglected--which imposes the framework of the Born-Oppenheimer (BO) approximation--the incorporation of di-hadron threshold mixing can be accomplished through its rigorous generalization; this so-called _diabatic formalism_ was originally developed for, and has long been used in, molecular physics [32]. First introduced into hadronic physics by Ref. [33] to analyze exotic states produced by the mixing of heavy quarkonium \(Q\bar{Q}\) with di-hadron thresholds, the diabatic framework also provides a method through which diquark-antidiquark states mixing with di-hadron thresholds can be analyzed [31].
Almost all exotic states lie above the energy threshold of the lowest possible open-heavy-flavor hadron pair with the same \(J^{PC}\) and flavor quantum numbers. While these states may, in some cases, be approximated as bound states (which is the assumption of Refs. [31; 33; 34]), the more accurate treatment is to view these states as resonant poles within scattering processes. The unification of the diabatic formalism with scattering theory, again using \(Q\bar{Q}\)/di-hadron mixing, was pioneered in Ref. [35]. Here, we expand upon the work of Ref. [31] by developing the same techniques for \(\delta\)-\(\bar{\delta}\)/di-hadron mixing.
This paper is organized as follows. In Sec. II we define the features of the dynamical diquark model, which generates the spectrum of heavy-quark exotic hadrons studied here. Section III describes the _diabatic_ formalism that generalizes the adiabatic formalism inherent in the BO approximation used by the original dynamical diquark model. The diabatic formalism is incorporated in Sec. IV into scattering theory, particularly in order to study open-flavor heavy-meson elastic scattering processes, in which exotic resonances (ultimately originating as dynamical-diquark states) may occur. In Sec. V, we first reprise our previous bound-state calculations, and then present numerical results for hidden-charm scattering cross sections and discuss the diverse interesting features that arise. Section VI summarizes our conclusions and indicates the next directions for research.
## II The dynamical diquark model
The dynamical diquark _picture_[22] provides key context for the construction of the full scattering model developed in this paper. In the original picture, quark pairs (\(qQ\)) and (\(\bar{q}\bar{Q}\)) in (attractive) color-triplet configurations (\(Q\) being heavy) are produced within relative proximity of each other, and with a high relative momentum with respect to the opposite pair; such a scenario occurs in an appreciable fraction of \(Q\bar{Q}\) production processes. Thus, the diquarks \(\delta\equiv(qQ)_{\bar{\bf 3}}\) and \(\bar{\delta}\equiv(\bar{q}\bar{Q})_{\bf 3}\) can naturally form as compact objects, especially since heavy \(Q\) have less Fermi motion. Due to confinement, \(\delta\) and \(\bar{\delta}\) remain bound to each other via a color flux tube. The kinetic energy associated with the high relative momentum is then converted into the potential energy of the flux tube as the distance between the diquarks increases, the \(\delta\)-\(\bar{\delta}\) separation eventually reaching a maximum as the relative momentum between the compact diquarks drops toward zero. With an appreciable distance now separating the quark-antiquark pairs that can form color singlets, this configuration has difficulty hadronizing, allowing it to persist long enough to be observed as an exotic tetraquark resonance. The analogous process for the pentaquark case [36] can also be described using this mechanism by substituting \(\bar{\delta}\rightarrow\bar{\theta}\), where the color-triplet _triquark_ is defined by \(\bar{\theta}\equiv[\bar{Q}(q_{1}q_{2})_{\bar{\bf 3}}]_{\bf 3}\).
The dynamical diquark _model_ is then constructed from this picture by implementing the BO approximation for QCD, as described in detail in Sec. III. This approximation, which has been extensively used to study heavy hybrid mesons, provides the most natural formalism for describing such a quasi-static system. The end result of applying the BO approximation is the generation of a set of effective static potentials, which in turn are used to produce a full spectrum of state multiplets. These _BO potentials_ may be explicitly calculated on the lattice (see, _e.g._, Refs. [37; 38; 39]). The multiplets of states within these potentials are denoted by a set of five quantum numbers: \(\Lambda_{\eta}^{\epsilon}(nL)\), where \(\Lambda_{\eta}^{\epsilon}\) define the BO potentials through the symmetries of the light degrees of freedom (d.o.f.), and \(n,L\) indicate the familiar radial and angular momentum quantum numbers defining the orbitals of each BO potential. Explicitly, the labels \(\Lambda_{\eta}^{\epsilon}\) designate irreducible representations of the group \(D_{\infty h}\), which describes the symmetries inherent to a cylinder whose axis coincides with the characteristic radial separation vector \(\bf\hat{r}\) of the heavy quasiparticle pair.
A more detailed discussion of these potentials, as well as their application to \(\delta\bar{\delta}\) and \(\delta\bar{\theta}\) systems, may be found in Refs. [23; 24]. For the purpose of this analysis, Refs. [24; 26; 27] are especially important in providing clear numerical indications that these potentials correctly describe multiplet mass averages for heavy-quark exotic states in each light-flavor sector (_e.g._, \(c\bar{c}q\bar{q}^{\prime}\) in \(1S\) and \(1P\) states, \(c\bar{c}qqq\), \(c\bar{c}s\bar{s}\), \(b\bar{b}q\bar{q}^{\prime}\)), and these multiplets are shown to accommodate the \(J^{PC}\) quantum numbers of all known exotics. The multiplet mass averages may then be resolved into a fine-structure spectrum by introducing Hamiltonian spin- and isospin-dependent operators that are expected to be the ones most relevant for describing the fine-structure effects. In general, the number of free parameters in the model is then \((n+1)\), the coefficients of the \(n\) fine-structure operators included in the analysis, plus the diquark (triquark) mass \(m_{\delta}\) (\(m_{\theta}\)). A phenomenological fixing of these parameters, where one fits to the numerical value of each so that the best-understood exotic states emerge naturally, is the approach of Refs. [24; 25; 26; 27; 28; 29; 30]; a mass prediction for every member of the complete spectrum of states then immediately follows.

| Exotic candidate | Di-hadron threshold |
| --- | --- |
| \(\chi_{c1}(3872)\) | \(D^{0}\bar{D}^{*0}\) |
| \(Z_{c}(3900)\) | \(D\bar{D}^{*}\) |
| \(Z_{c}(4020)\) | \(D^{*}\bar{D}^{*}\) |
| \(P_{c}(4312)\) | \(\Sigma_{c}\bar{D}\) |
| \(P_{c}(4450)/P_{c}(4457)\) | \(\Sigma_{c}\bar{D}^{*}\) |
| \(Z_{b}(10610)\) | \(B\bar{B}^{*}\) |
| \(Z_{b}(10650)\) | \(B^{*}\bar{B}^{*}\) |

Table 1: Examples of heavy-quark exotic candidates lying particularly close in mass (\(<15\) MeV) to a di-hadron threshold.
## III The diabatic approach
The incorporation of the diabatic approach into the dynamical diquark model [31] signifies a departure from the strict framework of the BO approximation to its rigorous generalization [32], and we reprise its development for hadronic systems here. To describe a (nonrelativistic) system consisting of two heavy color sources interacting through light (quark and gluon) fields, one begins with the Hamiltonian
\[H=K_{\rm heavy}+H_{\rm light}=\frac{{\bf p}^{2}}{2\mu_{\rm heavy}}+H_{\rm light}, \tag{2}\]
where \(H_{\rm light}\) contains the light-field static energy, as well as the heavy-light interaction. Under the BO framework, one writes the solutions to the corresponding Schrodinger equation as
\[\left|\psi\right\rangle=\sum_{i}\int d{\bf r}\,\tilde{\psi}_{i}({\bf r})\left| {\bf r}\right\rangle\left|\xi_{i}({\bf r})\right\rangle, \tag{3}\]
where \(\left|{\bf r}\right\rangle\) are defined as states of heavy source pairs with separation vector \({\bf r}\), and \(\left|\xi_{i}({\bf r})\right\rangle\) is the \(i^{\rm th}\) eigenstate of \(H_{\rm light}\). Note that the heavy and light states here reference the same value of \({\bf r}\); Eq. (3) is called the _adiabatic expansion_, although the expression at this point remains general. The set \(\{\left|\xi_{i}({\bf r})\right\rangle\}\) forms a complete, orthonormal basis for the light d.o.f. at any given \({\bf r}\), but in general, configuration mixing occurs at different values of \({\bf r}\): \(\langle\xi_{j}({\bf r}^{\prime})|\xi_{i}({\bf r})\rangle\neq 0\) even for \(j\neq i\). Inserting Eq. (3) into the Schrodinger equation and taking inner products with \(\langle\xi_{j}({\bf r})|\), after some manipulations one arrives at
\[\sum_{i}\left(-\frac{\hbar^{2}}{2\mu_{QQ}}[\nabla+\tau({\bf r})]_{ji}^{2}+[V_ {j}({\bf r})-E]\,\delta_{ji}\right)\!\tilde{\psi}_{i}({\bf r})=0, \tag{4}\]
where the functions \(\tau({\bf r})_{ji}\), known as _Non-Adiabatic Coupling Terms_ (NACTs), are defined as
\[\tau_{ji}({\bf r})\equiv\langle\xi_{j}({\bf r})|\nabla\xi_{i}({\bf r})\rangle. \tag{5}\]
If, in addition, the heavy d.o.f.'s are sufficiently heavy compared to the light d.o.f.'s, then one may approximate the light d.o.f.'s as instantaneously (_adiabatically_) adapting to changes in the heavy-source separation, which in this notation reads \(\langle\xi_{i}({\bf r}^{\prime})|\xi_{i}({\bf r})\rangle\approx 1\) for small changes \({\bf r}^{\prime}\neq{\bf r}\), the _adiabatic approximation_. Additionally, at values of \({\bf r}^{\prime},{\bf r}\) where the light-field eigenstates do not appreciably mix, one has \(\langle\xi_{j}({\bf r}^{\prime})|\xi_{i}({\bf r})\rangle\approx 0\) for \(j\neq i\), which is called the _single-channel approximation_. These two approximations define the full _BO approximation_, and are conveniently summarized by the single condition on the NACTs:
\[\tau_{ji}({\bf r})=\langle\xi_{j}({\bf r})|\nabla\xi_{i}({\bf r})\rangle \approx 0. \tag{6}\]
For systems containing a heavy (hence static) \(Q\bar{Q}\) pair, unquenched lattice-QCD calculations have long found that this approximation works well in regions far from energy thresholds for on-shell di-meson production. Close to these thresholds, the static light-field energies experience an avoided level-crossing, thus demonstrating the explicit breaking of the single-channel approximation [40; 41]. In order to discuss more general mixed states that may have such energies, one may adopt the rigorous generalization of the BO approximation known as the _diabatic formalism_[32]. This method rewrites the expansion of the solution Eq. (3) as
\[\left|\psi\right\rangle=\sum_{i}\int d{\bf r}^{\prime}\tilde{\psi}_{i}({\bf r }^{\prime},{\bf r}_{0})\left|{\bf r}^{\prime}\right\rangle\left|\xi_{i}({\bf r }_{0})\right\rangle, \tag{7}\]
where \({\bf r}_{0}\) is a free parameter. Here again, the completeness of the basis \(\{\left|\xi_{i}({\bf r})\right\rangle\}\), regardless of the choice of \({\bf r}\), is crucial. In analogy to the previous procedure, one inserts the expansion Eq. (7) into the Schrodinger equation and takes inner products with \(\langle\xi_{j}({\bf r}_{0})|\), thus producing
\[\sum_{i}\left[-\frac{\hbar^{2}}{2\mu_{i}}\delta_{ij}\nabla^{2}+V_{ji}({\bf r},{\bf r}_{0})-E\delta_{ji}\right]\!\tilde{\psi}_{i}({\bf r},{\bf r}_{0})=0. \tag{8}\]
Now the object of interest is \(V_{ji}\), which is known as the _diabatic potential matrix_; it is defined as
\[V_{ji}({\bf r},{\bf r}_{0})\equiv\langle\xi_{j}({\bf r}_{0})|H_{\rm light}|\xi_ {i}({\bf r}_{0})\rangle. \tag{9}\]
The NACT method and the diabatic-potential method are rigorously equivalent, as shown in Refs. [32; 33], but the latter is more convenient for our numerical simulations. As discussed in Ref. [33], one may choose \({\bf r}_{0}\) far from potential-energy level crossings, such that the states \(\left|\xi_{i}({\bf r}_{0})\right\rangle\) may be unambiguously identified with pure, unmixed configurations. For the specific application to dynamical-diquark states with a fixed value of \({\bf r}_{0}\), we identify the diagonal elements of this matrix as the static light-field energies \(V_{\delta\bar{\delta}}\) associated with a pure \(\delta\bar{\delta}\) state and its corresponding di-meson thresholds \(V_{M_{1}\overline{M}_{2}}^{(i)}\), \(i=1,2,\ldots,N\). Explicitly, \(V_{ji}\) may then be written as
\[{\rm V}=\left(\begin{array}{cccc}V_{\delta\bar{\delta}}({\bf r})&V_{\rm mix}^{(1)}({\bf r})&\cdots&V_{\rm mix}^{(N)}({\bf r})\\ V_{\rm mix}^{(1)}({\bf r})&V_{M_{1}\overline{M}_{2}}^{(1)}({\bf r})&&\\ \vdots&&\ddots&\\ V_{\rm mix}^{(N)}({\bf r})&&&V_{M_{1}\overline{M}_{2}}^{(N)}({\bf r})\end{array}\right), \tag{10}\]
where we ignore direct mixing terms between any two di-meson configurations (_i.e._, the suppressed elements are zero). For the purposes of this work, we set each pure di-meson energy to be the free energy of the state, _i.e._,
\[V^{(i)}_{M_{1}\overline{M}_{2}}({\bf r})\to T_{M_{1}\overline{M}_{2}}=M_{1}+M_{2 }\,. \tag{11}\]
One could of course instead replace \(V^{(i)}_{M_{1}\overline{M}_{2}}({\bf r})\) with a mildly attractive potential (_e.g._, pion-exchange interactions or the effects of triangle singularities), as suggested in Ref. [31].
## IV Scattering theory
As noted in the Introduction, the diabatic formalism provides a method to study mixed but still formally bound states. In contrast, nearly all of the exotic candidates have been observed solely through their strong-interaction decays, and therefore should properly be treated as resonances in scattering theory, _i.e._, as poles in a scattering \(S\)-matrix.
Here we review the construction of the \(K\)-matrix formalism as a method of retrieving the \(S\)-matrix for coupled-channel eigenstates of the Schrodinger equation, specifically using the method of Ref. [42]. The \(K\)-matrix has several advantages over the \(S\)-matrix, in particular that it can be chosen to be real and symmetric (assuming time-reversal symmetry), and that pole terms induced by distinct resonances, even heavily overlapping ones, may be simply added together in the \(K\)-matrix (unlike for the \(S\)-matrix). In this work, we consider only elastic scattering of asymptotically pure di-meson configurations. As discussed in Ref. [34], this type of scattering, mediated by the short-range mixing of di-meson and \(\delta\bar{\delta}\) states, is the natural physical process in which to study the asymptotic behavior of solutions to Eq. (8). Collecting the set of linearly independent solutions to the Schrodinger equation into a matrix \(\Psi\), one may write the asymptotic behavior as
\[\Psi({\bf r})={\bf J}({\bf r})-{\bf N}({\bf r}){\bf K}, \tag{12}\]
where \({\bf K}\) denotes the \(K\)- (or _reaction_) matrix, and \({\bf J}\) and \({\bf N}\) are the (diagonal) solutions to the Schrodinger equation in the \(r\to\infty\) limit, at which only the centrifugal part of the potential remains significant. Following Ref. [42], we choose the closed-channel elements (channels with thresholds above the total energy \(E\)) of both matrices to be proportional to their corresponding modified spherical Bessel functions \(i_{\ell_{j}},k_{\ell_{j}}\) (\(x_{i}\equiv rk_{i}\), where \(k_{i}\) is the wave number for the \(i^{\rm th}\) channel):
\[{\rm J_{ij}} = x_{i}\!\cdot\!i_{\ell_{j}}(x_{i})\,\delta_{ij},\] \[{\rm N_{ij}} = x_{i}\!\cdot\!k_{\ell_{j}}(x_{i})\,\delta_{ij}, \tag{13}\]
while the open-channel elements (channels with thresholds below the total energy \(E\)) are set to be the Riccati-Bessel functions,
\[{\rm J_{ij}} = x_{i}\!\cdot\!j_{\ell_{j}}(x_{i})\,\delta_{ij},\] \[{\rm N_{ij}} = x_{i}\!\cdot\!n_{\ell_{j}}(x_{i})\,\delta_{ij}. \tag{14}\]
Formally, one may then write \({\bf K}\) as a function of \({\bf J}\), \({\bf N}\), and the log-derivative \({\bf y}\) of the matrix solution \(\Psi\), \({\bf y}\equiv\Psi^{\prime}\Psi^{-1}\):
\[{\bf K}=\left({\bf y}{\bf N}-{\bf N}^{\prime}\right)^{-1}\left({\bf y}{\bf J} -{\bf J}^{\prime}\right). \tag{15}\]
In the sign convention for \({\bf K}\) imposed by Eq. (12) (see Ref. [43] for alternate sign conventions for all of these quantities), the \({\bf S}\)-matrix is obtained as:
\[{\bf S}=({\bf I}-i{\bf K}_{\rm oo})^{-1}({\bf I}+i{\bf K}_{\rm oo}), \tag{16}\]
where \({\bf K}_{\rm oo}\) denotes the sub-matrix of \({\bf K}\) containing _only_ elements that connect open channels to other open channels. That Eq. (16) can be expressed solely in terms of \({\bf K}_{\rm oo}\) relies directly upon the specific forms of Eqs. (13)-(14), as is thoroughly explained in Ref. [44]. Reference [42] also provides a method for numerically calculating Eq. (15) using the reduced Numerov method, which has already been employed extensively for solving dynamical-diquark Schrodinger equations (starting with Ref. [24]).
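As a schematic illustration of Eqs. (13)-(16) (not the implementation of Ref. [42]), the sketch below assembles the diagonal matrices \({\bf J}\), \({\bf N}\) and their radial derivatives at a matching radius and solves for \({\bf K}\) and \({\bf S}\); the log-derivative matrix \({\bf y}\), the channel wave numbers, and the orbital angular momenta are assumed to be supplied by the radial integration:

```python
# Hedged sketch of Eqs. (13)-(16): K- and S-matrices from a log-derivative matrix y.
import numpy as np
from scipy.special import spherical_jn, spherical_yn, spherical_in, spherical_kn

def K_and_S(y, k, ell, open_ch, r):
    """y: log-derivative matrix Psi' Psi^{-1} at radius r; k: magnitudes of the channel
    wave numbers; ell: channel orbital angular momenta; open_ch: boolean mask of open channels."""
    open_ch = np.asarray(open_ch, dtype=bool)
    n = len(k)
    J, Jp, N, Np = (np.zeros((n, n)) for _ in range(4))
    for i in range(n):
        x = k[i] * r
        fj, fn = (spherical_jn, spherical_yn) if open_ch[i] else (spherical_in, spherical_kn)
        J[i, i] = x * fj(ell[i], x)                                   # Eqs. (13)-(14)
        N[i, i] = x * fn(ell[i], x)
        Jp[i, i] = k[i] * (fj(ell[i], x) + x * fj(ell[i], x, derivative=True))
        Np[i, i] = k[i] * (fn(ell[i], x) + x * fn(ell[i], x, derivative=True))
    K = np.linalg.solve(y @ N - Np, y @ J - Jp)                       # Eq. (15)
    Koo = K[np.ix_(open_ch, open_ch)]                                 # open-open sub-block
    I = np.eye(Koo.shape[0])
    S = np.linalg.solve(I - 1j * Koo, I + 1j * Koo)                   # Eq. (16)
    return K, S
```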
We now briefly comment on the form of the solutions contained in \(\Psi\). Since this analysis is concerned only with the elastic scattering of asymptotically pure di-meson states, we restrict this discussion to the elements of \(\Psi\) associated with those states. With \(V_{M_{1}\overline{M}_{2}}({\bf r})=T_{M_{1}\overline{M}_{2}}\), the well-known _unmixed_ solutions are:
\[\psi^{(i)}_{J^{PC},m_{J}}({\bf r})=\sqrt{\frac{2}{\pi}\mu^{(i)}p^{(i)}}\,i^{\ell_{k}^{(i)}}j_{\ell_{k}^{(i)}}(p^{(i)}r)\,{\rm Y}^{J,m_{J}}_{\ell_{k}^{(i)}s_{k}^{(i)}}({\bf\hat{r}}), \tag{17}\]
where \({\rm Y}^{J,m_{J}}_{\ell_{k}^{(i)}s_{k}^{(i)}}\) are irreducible tensors of rank \(J\),
\[{\rm Y}^{J,m_{J}}_{\ell_{k}^{(i)}s_{k}^{(i)}}\equiv\langle{\bf\hat{r}}|\ell,s,J,m_{J}\rangle=\sum_{m_{\ell},m_{s}}C^{m_{\ell},m_{s},m_{J}}_{\ell,s,J}Y^{m_{ \ell}}_{\ell}({\bf\hat{r}})\,\xi^{m_{s}}_{s}, \tag{18}\]
built with the conventional Clebsch-Gordan coefficients \(C^{m_{\ell},m_{s},m_{J}}_{\ell,s,J}\), spherical harmonics \(Y^{m_{\ell}}_{\ell}({\bf\hat{r}})\), and spinors \(\xi^{m_{s}}_{s}\). In addition, \(k\) and \((i)\) denote the \(k^{\rm th}\) partial wave of the \(i^{\rm th}\) di-meson threshold with quantum numbers \(J^{PC}\), while \(j_{\ell}\) is the \(\ell^{\rm th}\) spherical Bessel function of the first kind, \(p^{(i)}=\sqrt{2\mu^{(i)}(E-T^{(i)})}\) is the relative momentum (or wave number) of the di-meson pair, and \(\sqrt{\frac{2}{\pi}\mu^{(i)}p^{(i)}}\) is a factor introduced in Ref. [35] to normalize the full solution in terms of energy \(E\):
\[\langle\Psi_{E^{\prime}}|\Psi_{E}\rangle=\delta(E^{\prime}-E). \tag{19}\]
One may go further by using the large-argument asymptotic expression for spherical Bessel functions,
\[j_{\ell}(pr)\to\frac{1}{pr}{\rm sin}\left(pr-\ell\frac{\pi}{2}\right). \tag{20}\]
This form allows for _mixed_ solutions to be clearly expressed using well-known elastic scattering theory (_e.g._, Eq. (11.17) in Ref. [45]): The effect of mixing with a short-range attractive state, in this case \(\delta\bar{\delta}\), enters as a channel- and momentum-dependent phase shift \(\delta\) in the unmixed asymptotic wavefunctions of the di-meson configurations. Explicitly,
\[\frac{1}{p^{(i)}r}\mathrm{sin}\left(p^{(i)}r-\ell\frac{\pi}{2} \right)\longrightarrow\\ e^{i\delta_{\ell}^{(i)}}\frac{1}{p^{(i)}r}\mathrm{sin}\left(p^{( i)}r-\ell\frac{\pi}{2}+\delta_{\ell}^{(i)}\right). \tag{21}\]
Summing over all partial waves \(k\) (and adopting the notation of Ref. [35] as closely as possible), we have
\[\psi^{(i)}_{J^{PC},m_{J}}(\mathbf{r})=\frac{1}{r}\sqrt{\frac{2\mu^{(i)}}{\pi p^{(i)}}}\sum_{k}i^{\ell_{k}^{(i)}}a^{(i)}_{J^{PC};k}\,\frac{1}{p^{(i)}r}\mathrm{sin}\left(p^{(i)}r-\ell_{k}^{(i)}\frac{\pi}{2}+\delta_{J^{PC};k}^{(i)}\right)\mathrm{Y}^{J,m_{J}}_{\ell_{k}^{(i)}s_{k}^{(i)}}(\mathbf{\hat{r}}), \tag{22}\]
where the usual scattering coefficients \(a^{(i)}_{J^{PC};k}\) keep track of the weighted amplitude that each partial wave contributes to the overall \(J^{PC}\) state. Finally, one may now write the asymptotic wavefunction of a di-meson to di-meson scattering state (\(i\gets i^{\prime}\)), in specific partial waves \(k^{\prime}\equiv(\ell^{(i^{\prime})},s^{(i)})\to k\equiv(\ell^{(i)},s^{(i)})\), as
\[\psi^{i\gets i^{\prime}}_{J^{PC};m_{J};k^{\prime}}(\mathbf{r})=\frac{1}{r}\sqrt{\frac{2\mu^{(i)}}{\pi p^{(i)}}}\sum_{k}i^{\ell_{k}^{(i)}}\left[\delta_{ii^{\prime}}\delta_{kk^{\prime}}\mathrm{sin}\left(p^{(i)}r-\ell_{k}^{(i)}\frac{\pi}{2}\right)+p^{(i)}f^{i\gets i^{\prime}}_{J^{PC};k,k^{\prime}}e^{i(p^{(i)}r-\ell_{k}^{(i)}\frac{\pi}{2})}\right]\mathrm{Y}^{J,m_{J}}_{k}(\mathbf{\hat{r}}), \tag{23}\]
with \(f^{i\gets i^{\prime}}_{J^{PC};k,k^{\prime}}\) being the partial-wave scattering amplitude. In the present analysis, \(f^{i\gets i^{\prime}}_{J^{PC};k,k^{\prime}}\) are the objects of interest, since one may extract the elastic-scattering cross sections directly from these scattering amplitudes.
We do so, again following the work of Ref. [35], and thus provide a proof-of-concept calculation of elastic-scattering cross sections for the di-meson configurations (mediated by coupling to \(\delta\bar{\delta}\) states) as discussed in Sec. III. This may be done using the \(S\)-matrix by calculating the scattering amplitude
\[f^{i\gets i^{\prime}}_{J^{PC}}=\frac{(S-\mathbb{I})_{ii^{\prime}}}{2ip^{( i)}}, \tag{24}\]
with which one may calculate the \(J^{PC}\)-specific partial cross section
\[\sigma^{i\gets i^{\prime}}_{J^{PC}}=\frac{4\pi(2J+1)}{(2s_{M_{1}^{(i^{ \prime})}}+1)(2s_{\overline{M_{2}^{(i^{\prime})}}}+1)}\sum_{k,k^{\prime}}|f^{ i\gets i^{\prime}}_{J^{PC};k,k^{\prime}}|^{2}. \tag{25}\]
For the purposes of this calculation, we instead calculate a normalized cross section \(\bar{\sigma}\)[35],
\[\bar{\sigma}^{i\gets i^{\prime}}_{J^{PC}}=\sum_{k,k^{\prime}}|p^{(i)}f^{ i\gets i^{\prime}}_{J^{PC};k,k^{\prime}}|^{2}, \tag{26}\]
which allows for a clearer investigation of the behavior near threshold (where phase space, and hence \(\sigma\), vanishes), as well as providing more unequivocal indications of resonant behavior, in which fully saturated resonances are expected to reach the maximum allowed value of unity for \(\bar{\sigma}\).
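As a follow-up sketch of Eqs. (24)-(26), the normalized cross section can be read off directly from the open-channel \(S\)-matrix; the \(2\times 2\) unitary matrix below is a toy example built from an arbitrary real symmetric \(K\)-matrix, not a model prediction:

```python
# Hedged sketch of Eqs. (24)-(26): normalized elastic cross section from an S-matrix.
import numpy as np

def sigma_bar(S, p, i, i_prime):
    """Normalized cross section for channel i <- i_prime (one partial wave per channel).
    S: open-channel S-matrix; p: wave numbers of the open channels."""
    f = (S - np.eye(S.shape[0])) / (2j * p[:, None])   # Eq. (24), row index = outgoing channel
    return np.abs(p[i] * f[i, i_prime]) ** 2           # Eq. (26)

K = np.array([[0.3, 0.1], [0.1, -0.2]])                       # arbitrary real symmetric K-matrix
S = np.linalg.solve(np.eye(2) - 1j * K, np.eye(2) + 1j * K)   # unitary S via Eq. (16)
p = np.array([0.5, 0.8])                                      # placeholder wave numbers
print(sigma_bar(S, p, 0, 0))                                  # elastic sigma_bar in channel 0
```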
## V Results
In this analysis, we assume the mixing elements of the diabatic-potential matrix in Eq. (10) to have the simple Gaussian form [33]:
\[|V^{(i)}_{\mathrm{mix}}(r)|=\frac{\Delta}{2}\exp\!\left\{-\frac{1}{2}\frac{ \left[V_{\delta\bar{\delta}}(r)-T^{(i)}_{M_{1}\overline{M}_{2}}\right]^{2}}{ \Lambda^{2}}\right\}, \tag{27}\]
where \(\Delta\) is the strength of the mixing and \(\Lambda\) is a width parameter, both with units of energy. To produce meaningful results, \(\Delta\) must be large enough to induce sufficient mixing with \(\delta\bar{\delta}\) states to clearly indicate the importance of nearby di-meson thresholds, while \(\Lambda\) must be small enough not to induce excess mixing with thresholds far from the original \(\delta\bar{\delta}\) state; until lattice-QCD simulations are able to provide specific values for these parameters, their magnitudes remain restricted only by these qualitative considerations.
\[\Lambda=\rho\sigma, \tag{28}\]
where \(\rho\) may be identified as the radial scale of the mixing, while \(\sigma\) is the _string tension_ of the \(\delta\bar{\delta}\) configuration. As discussed in Ref. [33], this particular form of the mixing potential, which is motivated by results of lattice QCD [41], acts as a phenomenological placeholder, in anticipation of future precision lattice simulations. This mixing potential is also a different creature than the one used in the original diabatic works such as Ref. [33]; in the original calculations it refers to \(Q\bar{Q}\to(Q\bar{q})(\bar{Q}q)\) string breaking, while in the present calculations it refers to the rearrangement interaction \((Qq)(\bar{Q}\bar{q})\to(Q\bar{q})(\bar{Q}q)\).
In fact, if one truly wishes to rigorously perform the calculations of this work with the intention of accurate comparison to experiment, then the form of the mixing potential will likely be more complicated. A complete treatment should include every fundamental channel (both open _and_ closed flavor) in the diabatic potential matrix of Eq. (10); transitions between these configurations must be considered, and in order to couple to these channels to allow realistic decays, the mixing potential must allow for more complicated forms than a simple universal Gaussian function. Additionally, in contrast to the work of Ref. [33], the mixing potential connecting \(\delta\bar{\delta}\) states to meson-meson thresholds may have strong correlations with the particular spin state of the diquarks, and such a dependence should be included in some form as well.
With these caveats in mind, we start by reproducing the results of the bound-state formalism in Ref. [31], with a slight variation of model parameters:
\[\rho = 0.165\ {\rm fm}, \tag{29}\] \[\Delta = 0.295\ {\rm GeV}, \tag{30}\] \[m_{c\bar{q}}=m_{q\bar{c}} = 1927.1\ {\rm MeV}, \tag{31}\] \[m_{c\bar{s}}=m_{s\bar{c}} = 1944.6\ {\rm MeV}, \tag{32}\]
and, for the ground-state BO potential \(\Sigma_{g}^{+}\),
\[V_{\delta\bar{\delta}}(r)=-\frac{\alpha}{r}+\sigma r+V_{0}+m_{\delta}+m_{\bar{ \delta}}\,, \tag{33}\]
where \(\alpha,\sigma,\) and \(V_{0}\) are \(0.053\ {\rm GeV}\cdot{\rm fm},1.097\ {\rm GeV}/{\rm fm}\), and \(-0.380\ {\rm GeV}\), respectively [46]. Hence, \(\Lambda\) in Eq. (27) is \(0.181\ {\rm GeV}\). We note that applying the hybrid \(Q\bar{Q}\) potential inputs obtained from lattice simulations for the \(\delta\bar{\delta}\) case is reasonable, since both are color \({\bf 3}\)-\(\bar{\bf 3}\) potentials between two heavy sources. For BO potentials above \(\Sigma_{g}^{+}\), which generally tend to mix with each other, the extension of this formalism is straightforward. However, in this work we focus solely on the \(\Sigma_{g}^{+}\) potential, since all exotics found to date appear to be accommodated within its orbitals [24]. These results are presented in Tables 2, 3, and 4. As in Ref. [31], the mixing parameters are retrieved by fitting to the \(\chi_{c1}(3872)\) mass central value \(3871.65\ {\rm MeV}\) reported by the PDG [17], while keeping the same diquark mass \(m_{c\bar{q}}\) [Eq. (31)] found in Ref. [30]. Additionally, the mixing parameters are moderately constrained to reproduce certain behaviors of the mixing angle between the \(\delta\bar{\delta}\) and \(D\bar{D}^{*}\) components: _i.e._, the mixing angle \(\theta(r)\) must smoothly and quickly vary between 0 and \(\pi/2\) as \(r\) decreases/increases away from the _critical radius_\(r_{c}\), which is defined as the separation for which \(V_{\delta\bar{\delta}}(r_{c})\) equals the \(D\bar{D}^{*}\) threshold mass (see Refs. [31; 33]). Again, we note that these mixing parameters \(\rho,\Delta\) are not uniquely defined by this fit, and thus only serve as working values for the present analysis. With these inputs, the diquark mass \(m_{c\bar{s}}\) is then fixed [Eq. (32)] by requiring the \(0^{++}\)\(c\bar{c}s\bar{s}\) state to have mass equal to the central value \(3921.7\ {\rm MeV}\) for \(X(3915)\) given by the PDG [17].
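For concreteness, the diabatic potential matrix of Eqs. (10), (27), and (33) can be assembled numerically as in the sketch below; restricting to a single \(D\bar{D}^{*}\) threshold, using the PDG \(D^{0}\) and \(D^{*0}\) masses, and the particular function names are illustrative choices on our part, not part of the model definition:

```python
# Hedged sketch: 2x2 diabatic potential matrix (delta-deltabar plus one di-meson threshold).
import numpy as np

alpha, sigma, V0 = 0.053, 1.097, -0.380   # GeV*fm, GeV/fm, GeV  [Eq. (33)]
m_delta = 1.9271                          # GeV, diquark mass of Eq. (31)
rho, Delta = 0.165, 0.295                 # fm, GeV  [Eqs. (29)-(30)]
Lam = rho * sigma                         # = 0.181 GeV, Eq. (28)
T_DDstar = 1.86484 + 2.00685              # GeV, D0 + D*0 threshold (PDG masses)

def V_dd(r):                              # Sigma_g^+ static potential, Eq. (33); r in fm
    return -alpha / r + sigma * r + V0 + 2.0 * m_delta

def V_mix(r, T):                          # Gaussian mixing form, Eq. (27)
    return 0.5 * Delta * np.exp(-0.5 * ((V_dd(r) - T) / Lam) ** 2)

def V_matrix(r):                          # diabatic matrix of Eq. (10), one threshold
    return np.array([[V_dd(r),            V_mix(r, T_DDstar)],
                     [V_mix(r, T_DDstar), T_DDstar          ]])

print(V_matrix(1.0))                      # potential matrix (GeV) at r = 1 fm
```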
Once these parameters are fixed, the diabatic dynamical-diquark model Hamiltonian (not yet including fine structure) for each tetraquark flavor sector, \(c\bar{c}q\bar{q}^{\prime}\), \(c\bar{c}q\bar{s}\), and \(c\bar{c}s\bar{s}\), is completely specified. This assertion, of course, assumes that the mixing parameters are universal, and not unique to each threshold or flavor sector. Some work towards this end, specifically to include heavy-quark spin-symmetry breaking effects, has been carried out in Ref. [47], where the author calculates transition rates between the elementary state (in that case, \(Q\bar{Q}\)) and its corresponding thresholds. The primary result of this work is a demonstration of how to handle the threshold nonuniversality that occurs between different di-meson thresholds (_e.g._, \(D\bar{D}^{*}\)_vs._\(D^{*}\bar{D}^{*}\)), which constitutes one direction in which one may move past the universality assumption of our mixing potential. Using the formalism described in Sec. IV, we may then directly produce flavor- and \(J^{PC}\)-specific cross sections as functions of center-of-mass energy. In aggregate, these results are presented in Figs. 1-11. Some universal characteristics include the stability of all major functional features in \(\bar{\sigma}\) [Eq. (26)] upon minor variations of the phenomenologically determined parameters \(\rho\), \(\Delta\), and \(m_{\delta}\). Additionally, we find resonant behavior to occur in all but one of the cross sections, which is consistent with the calculations performed under the bound-state framework of Refs. [31; 33]. That is, we find resonances in the near proximity of all predicted bound states.
also be used to extract the corresponding decay width for the \(D\bar{D}^{*}\) channel if one converts the data back to the physical cross section of Eq. (25). We find 0.4 MeV for the width of the peak, which may be compared to the PDG value \(0.44\pm 0.13\) MeV for \(\chi_{c1}(3872)\) decaying to \(D^{0}\bar{D}^{*0}\)[17]. We note that our result is found through extrapolation, since the full peak structure is cut off by threshold itself. Although this assumption must be taken with some caution, the closeness of these two values implies that it is straightforward to find values of \(\rho\) and \(\Delta\) that exactly accommodate both the correct mass and width of \(\chi_{c1}(3872)\).
In fact, the case of \(\chi_{c1}(3872)\) is particularly interesting, because it indicates limitations on the freedom to choose diabatic couplings. While we have noted the stability of basic morphological features of \(\bar{\sigma}\) under variations of the parameters \(\rho\), \(\Delta\), and \(m_{\delta}\), the fact that the precise mass and width of \(\chi_{c1}(3872)\) are now highly constrained means that the values of the diabatic parameters, given a particular functional form such as in Eq. (27), must be carefully chosen in order to maintain agreement with experiment. These adjustments must be revisited as additional channels [_e.g._, \(J/\psi\,\pi^{+}\pi^{-}\) for \(\chi_{c1}(3872)\)] and spin- and isospin-dependent couplings in the Hamiltonian are incorporated into future iterations of these calculations. Even so, certain features such as the large \(D\bar{D}^{*}\) content and small but significant \(\delta\bar{\delta}\) content of \(\chi_{c1}(3872)\) should remain robust.
Our \(0^{++}\) results in Fig. 3 show a wide, fully saturated peak at 3900 MeV in the process \(D\bar{D}\to D\bar{D}\), with nontrivial modifications from both \(D_{s}\bar{D}_{s}\) and \(D^{*}\bar{D}^{*}\) thresholds as well. This result is consistent with expectations inferred from Table 2, where the bound-state approximation produces significant contributions from the corresponding thresholds to a state with matching energy, 3903.83 MeV. In Fig. 3, the impacts of threshold effects in the lineshapes are clearly visible.
Conversely, for \(2^{++}\) scattering (Fig. 4), we observe a sharp peak in \(D\bar{D}\) near 3910 MeV, which can be unambiguously assigned to the corresponding state (3917.44 MeV) of Table 2. Outside of this peak in the \(2^{++}\) cross section, there are relatively small contributions in all but the \(D^{*}\bar{D}^{*}\) channel. We also observe the same preferential \(2^{++}\) coupling to \(D^{*}\bar{D}^{*}\) in Table 2, despite its threshold (\(\sim\)4014 MeV) being significantly higher in mass than the \(D_{s}\bar{D}_{s}\) threshold (\(\sim\)3937 MeV). In Ref. [31], this enhancement is attributed to the fact that the \(D^{*}\bar{D}^{*}\) threshold coupling to \(2^{++}\) allows an S-wave coupling, which is naturally expected to dominate over \(\ell>0\) configurations (D-wave for \(D_{(s)}\bar{D}_{(s)}\) in \(2^{++}\)) in scattering processes.
While the sharpness of the peak in Fig. 4 suggests the existence of a clear \(\delta\bar{\delta}\) resonance with \(J^{PC}=2^{++}\) that should be immediately detectable by experiment, it is important to remind the reader that these widths arise from calculations using incomplete physical information. A more detailed treatment of the threshold couplings and mixing potential, as discussed at the beginning of Sec. V, is essential before the widths may be compared with experiment.
For example, in the present case, the isoscalar \(2^{++}\) channel is already known to feature the \(c\bar{c}\) candidate \(\chi_{c2}(2P)\) at \(3922.5\pm 1.0\) MeV [17] (which could certainly have been included in this analysis, in the same manner as done in Fig. 1), and this state has a substantial width of about 35 MeV, likely largely due to its observed (D-wave) \(D\bar{D}\) decay mode. A comparison of widths calculated through conventional methods (_i.e._, as performed in Ref. [34] for \(c\bar{c}\) states in the diabatic formalism) with those obtained from the scattering formalism will appear in future work.
In this work we now include bound-state results (Table 2) for the \(1^{--}\)\(c\bar{c}q\bar{q}^{\prime}\) channel (which did not appear in the results of Ref. [31]), and also present the corresponding cross section (Fig. 5). The energy interval (4.15-4.50 GeV) chosen for the analysis in this channel is restricted so as to impose stringent requirements upon which thresholds to include, admitting only those expected to generate the most physically significant effects. Thus, we include only thresholds for meson pairs with relatively small individual widths (\(<50\) MeV) that (with the exception of \(D^{*}_{s}\bar{D}^{*}_{s}\)) couple to \(1^{--}\) in an S-wave. This calculation produces a resonant peak with an extraordinarily small width (only 4.2 MeV), but again we caution the reader that the widths of states appearing in these plots are based upon incomplete physical input. At a mass of about 4240 MeV, this peak is clearly sensitive to the \(D^{*}_{s}\bar{D}^{*}_{s}\) threshold, which again requires an OZI-suppressed amplitude to couple to \(c\bar{c}q\bar{q}\). It is natural to identify this peak with \(\psi(4230)\), even though this state's open-charm decay modes are poorly known (only \(\pi^{+}D^{0}D^{*-}\) has thus far been seen [17]).

| \(J^{PC}\) | \(E\) (MeV) | \(\delta\bar{\delta}\) | \(D\bar{D}^{*}\) | \(D_{s}\bar{D}_{s}\) | \(D^{*}\bar{D}^{*}\) | \(D^{*}_{s}\bar{D}^{*}_{s}\) |
| --- | --- | --- | --- | --- | --- | --- |
| \(0^{++}\) | 3903.83 | 69.8% | | 22.7% | 6.9% | |
| \(1^{++}\) | 3871.65 | 9.1% | 90.9% | | | |
| \(2^{++}\) | 3917.44 | 86.0% | | 1.5% | 10.4% | 1.5% |

| \(J^{PC}\) | \(E\) (MeV) | \(\delta\bar{\delta}\) | \(D\bar{D}_{1}\) | \(D\bar{D}^{*}_{2}\) | \(D^{*}\bar{D}_{1}\) |
| --- | --- | --- | --- | --- | --- |
| \(1^{--}\) | 4269.58 | 44.0% | 51.2% | 2.4% | 1.5% |

Table 2: Calculated eigenvalues and component-state admixtures for the \(c\bar{c}q\bar{q}^{\prime}\) sector obtained from solving Eq. (8) for specific \(J^{PC}\) numbers. Suppressed entries indicate contributions that are individually finite but \(<\)1%, or that give no contribution.
We also note a nearly 30-MeV shift of the resonant peak from the bound-state energy predicted by the corresponding state in Table 2. While we have yet to explicitly calculate the expected bound-state mass shifts that arise from the perturbative introduction of couplings to open thresholds, Ref. [34] provides a rough estimate of what might be expected through their analogous calculation in \(c\bar{c}\)-\(D_{(s)}^{(*)}\bar{D}_{(s)}^{(*)}\) mixing. A comparison to the largest shift noted in that work, roughly 28 MeV, allows for the reasonable identification of the peak in Fig. 5 with the \(1^{--}\) bound state of Table 2. Beyond this peak, the \(1^{--}\) channel as displayed in Fig. 5 exhibits an abundance of threshold behaviors in all presented cross sections.
### \(c\bar{c}s\bar{s}\) and \(c\bar{c}q\bar{s}\)
The full suite of \(c\bar{c}s\bar{s}\) results is presented in Figs. 6-8, while the \(c\bar{c}q\bar{s}\) (or \(c\bar{c}s\bar{q}\)) results appear in Figs. 9-11. Beginning with our \(0^{++}\) findings for the \(c\bar{c}s\bar{s}\) sector (Fig. 6), we observe further agreement with our bound-state predictions (Table 3) in the appearance of a fully saturated peak at 3920 MeV in \(D\bar{D}\to D\bar{D}\). One may note the similarity of this lineshape with the analogous one in the \(c\bar{c}q\bar{q}^{\prime}\) sector (Fig. 3). This similarity is a direct consequence of the fact that this formalism is currently "blind" to any effects due to strangeness, other than through explicit differences in diquark and meson masses. We expect this effect to diminish as additional SU(3)\({}_{\rm flavor}\) symmetry breaking is incorporated.
In the \(1^{++}\) results for this sector (Fig. 7), we find a relatively wide peak centered at 3925 MeV in the \(D\bar{D}^{*}\to D\bar{D}^{*}\) cross section. While this result may appear to discourage assignment to the \(1^{++}\) state in Table 3 (3968.47 MeV), we note the relatively long tail present in this peak structure, and also recall the up-to-30 MeV downwards shift that may be caused by the introduction of open thresholds. These two facts argue that an assignment of the peak in Fig. 7 to the \(1^{++}\) bound state in Table 3 is not unreasonable, and indeed, show how strong threshold effects can be in certain channels. As threshold structures are abundant throughout the full results of this analysis, we draw attention to their absence in both hidden-flavor \(1^{++}\) resonances [Figs. 2 and 7] at the \(D^{*}\bar{D}^{*}\) threshold. As symmetry forbids an S-wave \(1^{-}1^{-}\to 1^{++}\) coupling, this threshold has only a D-wave coupling to \(1^{++}\). Thus, these results provide further evidence for the dominance of S-wave couplings in scattering processes.
Lastly, we find the \(2^{++}\)-channel scattering (Fig. 8) to yield a sharp (but not fully saturated) peak around 3925 MeV, which falls within the aforementioned 30-MeV interval for reasonable identification with the corresponding bound state of Table 3. In Fig. 8, we observe a uniquely interesting case, in which the state appears to be dragged below the previously open threshold of \(D_{s}\bar{D}_{s}\) [although Table 3 disallows admixture to this state because the bound state (3949.33 MeV) was found to lie above the \(D_{s}\bar{D}_{s}\) threshold (\(\sim\)3937 MeV)]. In the scattering context, \(D_{s}\bar{D}_{s}\) only couples to \(2^{++}\) through a D-wave, and therefore is still expected to be suppressed compared to S-waves.
The \(c\bar{c}q\bar{s}\) sector provides another opportunity to examine the nearly unbroken SU(3)\({}_{\rm flavor}\) symmetry present in this calculation. A near-perfect overlap is observed for \(1^{+}\) elastic \(D^{*}\bar{D}_{s}\) and \(D\bar{D}_{s}^{*}\) scattering processes (Fig. 9). We find no resonant behavior in these results, consistent with the \(1^{+}\) prediction of Table 4, which indicates an eigenstate (3912.73 MeV) below the lowest available di-meson threshold (\(\sim\)3975 MeV). Additionally, we find fully saturated peaks in both the \(0^{+}\) (Fig. 10) and \(2^{+}\) (Fig. 11) results, centered just above and just below 3950 MeV, respectively. Of the two, the peak found in \(2^{+}\)\(D\bar{D}_{s}\to D\bar{D}_{s}\) notably has the smallest apparent width of any appearing in this analysis (but with the same caveats discussed above). In both cases, the location of the peak differs only slightly from the predictions of Table 4, which, interestingly, our calculations show can be attributed to the introduction of the \(D\bar{D}_{s}\) threshold (\(\sim\)3833 MeV), which lies well below the predicted eigenvalues. One may also contrast the contributions of the \(D^{*}\bar{D}_{s}\) and \(D\bar{D}_{s}^{*}\) processes in Figs. 9 and 11. We see that a bound-state calculation in which over 90% of the content is \(\delta\bar{\delta}\) (_i.e._, Fig. 11 but not Fig. 9) produces no obvious structure in \(\bar{\sigma}\) for scattering processes with thresholds far above the resonance. This conclusion is corroborated by the results of Figs. 7 and 10.
In addition, the inputs in this sector are completely fixed by the phenomenological fits to the other flavor sectors, and thus provide useful benchmarks for comparison against experiment. The \(1^{+}\) state of Table 4 (3912.73 MeV) in particular, which is generally unaffected by the changes introduced in the present calculation, may ultimately be associated with the observed \(Z_{cs}(3985)\)[17], once multiplet fine-structure effects are included, especially the mixing of strange states in distinct \(1^{+}\) SU(3)\({}_{\rm flavor}\) multiplets [30]. This assignment works especially well when one compares the admixtures of the Table 4 state with the fact that \(Z_{cs}(3985)\) has been observed as a \(D_{s}\bar{D}^{*}+D_{s}^{*}\bar{D}\) resonance [48]. The difference between these two masses (\(\sim\)70 MeV) is well within the largest fine-structure mass-splitting effect predicted for diquark-antidiquark states in this sector [30].
An additional comparison is available from the \(1^{++}\) state of Table 3: Although the mass difference is much larger [\(\sim\)170 MeV, corresponding to the \(c\bar{c}s\bar{s}\) candidate \(\chi_{c1}(4140)\)], it is not yet known how the fine structure of _diabatic_ dynamical diquark states differs from that of states that are blind to threshold effects, particularly once effects sensitive to the larger strange-quark mass are properly included.
## VI Conclusions
We have reviewed the incorporation of the diabatic formalism, a rigorous extension of the well-known Born-Oppenheimer approximation that is designed to include effects due to the presence of two-particle thresholds, into the dynamical diquark model. While our previous work addresses states formed in the immediate vicinity of these thresholds (the bound-state approximation), this paper develops a scattering framework capable of describing not only exotic states lying close to such thresholds, but also those that lie quite far from them (and thus have no obvious interpretation as a di-hadron molecular state).
Using the bound-state approximation, we first reproduce our previous flavor- and \(J^{PC}\)-specific calculations of energy eigenvalues and fractions of both diquark-antidiquark and di-meson components within the corresponding eigenstates. We then summarize the construction of the K-matrix formalism as a method to retrieve the S-matrix, in order to calculate asymptotic scattering amplitudes of coupled-channel, elastic meson-meson collision processes (the most natural ones in which to study resonance and threshold behaviors). We validate the physical expectation that asymptotically free meson-meson pairs develop resonance structures through their short-range interaction with diquark-antidiquark channels. These scattering amplitudes are calculated numerically for the hidden-charm system (with zero, hidden, and open strangeness), and then are directly used to produce all corresponding CM energy-dependent cross sections, which comprise the main results of this work.
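
The K-matrix step summarized here can be illustrated with a minimal numerical sketch. It uses one common unitarization convention, \(S=(1+i\mathcal{K})(1-i\mathcal{K})^{-1}=1+2i\mathcal{T}\), applied to a toy two-channel \(\mathcal{K}\) matrix; the actual channel content, energy dependence, and phase-space factors of this work's calculation are not reproduced here.

```python
import numpy as np

def s_matrix_from_k(K):
    """Unitarize a real, symmetric K matrix via the Cayley transform:
    T = K (1 - iK)^(-1),  S = 1 + 2iT = (1 + iK)(1 - iK)^(-1).
    Phase-space and normalization factors are omitted in this sketch."""
    K = np.asarray(K, dtype=complex)
    identity = np.eye(K.shape[0])
    T = K @ np.linalg.inv(identity - 1j * K)
    S = identity + 2j * T
    return S, T

# Toy 2-channel K matrix (illustrative numbers only).
K = np.array([[0.3, 0.8],
              [0.8, -0.2]])
S, T = s_matrix_from_k(K)
print(np.allclose(S @ S.conj().T, np.eye(2)))  # unitarity check: True
print(abs(T[0, 1])**2)                         # a cross section is ~ |T_ij|^2
```
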
We confirm the expected resonant behavior in all flavor- and \(J^{PC}\)-specific cross sections, and also observe several instances of threshold-induced structures such as cusp effects. In addition, the peak of every resonance is calculated to occur not far from the energy of its corresponding bound-state eigenvalue. We observe shifts of these resonances down from the bound-state energies once the couplings to open thresholds are included, in agreement with expectations that thresholds are generally "attractive." While nearly all of these resonant behaviors reach the maximum value allowed by unitarity, some prominent examples reach as low as \(\sim 75\%\) of this value.
Although this analysis is mostly limited to meson-meson scattering coupled to diquark-antidiquark channels described by the dynamical diquark model, we find evidence that the conventional \(c\bar{c}\) state \(\chi_{c1}(2P)\) may be incorporated separately into the \(c\bar{c}q\bar{q}\)\(1^{++}\) channel, producing two resonant components that may overlap to form \(\chi_{c1}(3872)\). In general, a complete calculation would include all diquark-antidiquark and \(c\bar{c}\) states in every allowed \(J^{PC}\) channel.
While these results are quite promising, they do not yet distinguish explicit spin- and isospin-multiplet members. The incorporation of such fine-structure analysis has been accomplished for multiple flavor sectors in the original (adiabatic) dynamical diquark model, and thus will be straightforward to include in its diabatic form; this extension will be one major thrust of future work. In addition, this analysis does not incorporate SU(3)\({}_{\rm flavor}\) symmetry-breaking effects beyond explicit differences in the diquark masses \(m_{cq}\) and \(m_{cs}\), and in meson masses \(m_{D^{(*)}},m_{D^{(*)}_{s}}\), _etc_. Such additional effects, not to mention OZI suppression, are expected to have substantial impact on the scattering processes discussed here. Lastly, the widths of the resonances implied by these cross-section plots are not always suitable for direct comparison with experiment, as they are calculated using a universal, and hence, incomplete set of couplings to meson-meson thresholds, as well as (aside from the one example in Fig. 1) lacking couplings to closed-flavor channels. Thus, future work will also use well-known techniques to calculate physical strong-decay widths and shifts of energy eigenvalues due to open-threshold di-meson pairs that lie well below the diabatically mixed eigenstates studied here--_i.e._, the pairs that represent their physical decay channels.
###### Acknowledgements.
This work was supported by the National Science Foundation (NSF) under Grants No. PHY-1803912 and PHY-2110278.

| \(J^{PC}\) | \(E\) (MeV) | \(\delta\bar{\delta}\) | \(D^{*}\bar{D}_{s}\) | \(D\bar{D}^{*}_{s}\) | \(D^{*}\bar{D}^{*}_{s}\) |
| --- | --- | --- | --- | --- | --- |
| \(0^{++}\) | 3969.04 | 95.2% | | | 4.5% |
| \(1^{++}\) | 3912.73 | 71.9% | 13.7% | 13.4% | |
| \(2^{++}\) | 3951.42 | 92.5% | 1.5% | 1.5% | 4.5% |

Table 4: The same as in Tables 2 & 3, for the \(c\bar{c}q\bar{s}\) sector.

| \(J^{PC}\) | \(E\) (MeV) | \(\delta\bar{\delta}\) | \(D_{s}\bar{D}_{s}\) | \(D^{*}\bar{D}^{*}\) | \(D_{s}\bar{D}^{*}_{s}\) | \(D_{s}^{*}\bar{D}^{*}_{s}\) |
| --- | --- | --- | --- | --- | --- | --- |
| \(0^{++}\) | 3921.69 | 55.7% | 35.4% | 7.1% | | 1.2% |
| \(1^{++}\) | 3968.47 | 90.4% | | 1.2% | 7.8% | |
| \(2^{++}\) | 3949.33 | 82.1% | | 15.5% | | 2.1% |

Table 3: The same as in Table 2, for the \(c\bar{c}s\bar{s}\) sector.
2303.03083 | Cost Sharing under Private Valuation and Connection Control | We consider a cost sharing problem on a weighted undirected graph, where all
the nodes want to connect to a special node called source, and they need to
share the total cost (weights) of the used edges. Each node except for the
source has a private valuation of the connection, and it may block others'
connections by strategically cutting its adjacent edges to reduce its cost
share, which may increase the total cost. We aim to design mechanisms to
prevent the nodes from misreporting their valuations and cutting their adjacent
edges. We first show that it is impossible for such a mechanism to further
satisfy budget balance (cover the total cost) and efficiency (maximize social
welfare). Then, we design two feasible cost sharing mechanisms that incentivize
each node to offer all its adjacent edges and truthfully report its valuation,
and also satisfy either budget balance or efficiency. | Tianyi Zhang, Junyu Zhang, Sizhe Gu, Dengji Zhao | 2023-03-06T12:45:51Z | http://arxiv.org/abs/2303.03083v1 | # Cost Sharing under Private Valuation and Connection Control
###### Abstract.
We consider a cost sharing problem on a weighted undirected graph, where all the nodes want to connect to a special node called source, and they need to share the total cost (weights) of the used edges. Each node except for the source has a private valuation of the connection, and it may block others' connections by strategically cutting its adjacent edges to reduce its cost share, which may increase the total cost. We aim to design mechanisms to prevent the nodes from misreporting their valuations and cutting their adjacent edges. We first show that it is impossible for such a mechanism to further satisfy budget balance (cover the total cost) and efficiency (maximize social welfare). Then, we design two feasible cost sharing mechanisms that incentivize each node to offer all its adjacent edges and truthfully report its valuation, and also satisfy either budget balance or efficiency.
Cost sharing; Mechanism design; Truthfulness

† The authors have equal contributions.
## 1. Introduction
In the classic cost sharing problem, there are a group of agents at different locations and a source. All the agents want to connect to the source via the connections (edges) between the locations, but each connection has a cost (Gardard, 1998; Gardard, 1998; Gardard, 1998). The goal is to allocate the total connection cost among the agents. This problem exists in many real-world applications such as cable TV, electricity, and water supply networks (Gard, 1998; Gard, 1998; Gard, 1998). It has been well-studied and many solutions have been proposed to achieve different properties (Gard, 1998; Gard, 1998; Gard, 1998) (we survey them in Section 2).
However, these solutions do not consider two natural strategic behaviors of the agents. First, to connect to the source, an agent may need to go through some intermediate agents. These agents may block the connection by strategically cutting their adjacent edges if their cost share is reduced by doing so (Gard, 1998) (see an example in Section 2), which will potentially increase the total cost of connecting the agents. Second, each agent has a private valuation for connecting to the source (i.e., the maximum cost that it is willing to share). To maximize social welfare (i.e., the difference between the agents' valuations and the total connection cost), the agents need to report their valuations, but they may misreport for their own interest.
To minimize the total connection cost and maximize social welfare, we design cost sharing mechanisms on general networks that can prevent the two strategic behaviors. One difficulty lies in the conflict that the mechanism designer wants to use all the edges to minimize the total connection cost, but agents have the motivation to cut their adjacent edges to reduce their cost share. This essentially reflects the conflict between the system's optimality and the agents' self-interests. Another difficulty lies in the conflict that the mechanism designer wants to use truthful valuations to select agents with maximum social welfare, while agents have the motivation to misreport their valuations to reduce their cost share.
To combat the challenges, we first show that if we further require efficiency (the set of selected agents has the maximal social welfare) and budget balance (the sum of all agents' cost share equals the total cost), then it is impossible to prevent the above manipulations. However, we could achieve efficiency and budget balance separately.
Therefore, we propose two mechanisms to prevent new manipulations and to achieve either efficiency or budget balance. The first mechanism selects the agents based on their social welfare inspired by the Vickrey-Clarke-Groves (VCG) mechanism (Friedman, 1977; Goyal and Goyal, 1977; Goyal, 1978) and each agent pays the minimum reported valuation that enables it to be selected. The second selects the agents iteratively and the total connection cost in each iteration is shared equally among the agents selected in this iteration. We also show that these mechanisms satisfy other desirable properties studied in the literature (Bergantinos and Vidal-Puga, 2000; Vidal-Puga, 2001; Goyal, 2002; Goyal, 2003).
## 2. Related Work
There is rich literature on the classic cost sharing problem, which did not consider private valuation and connection control. Some studies treated the problem from the perspective of a non-cooperative game. Bergantinos and Lorenzo (Bergantinos and Lorenzo, 2000) studied the Nash equilibrium of the problem and further they (Bergantinos and Lorenzo, 2000) studied the Nash equilibrium with budget restriction. Tijs and Driessen (Tijs and Driessen, 2000) proposed the cost gap allocation (CGA) method, but it only applies to complete graphs. Bird (Bergantinos and Vidal-Puga, 2000), Dutta and Kar (Dutta and Kar, 2000), Norde _et al._(Norde et al., 2000), Tijs _et al._(Tijs and Driessen, 2000) and Hougaard _et al._(Hougaard et al., 2000) provided cost sharing mechanisms based on the minimum spanning tree of a graph.
However, they do not satisfy truthfulness, since the agents can change the minimum spanning tree by cutting their adjacent edges to reduce their cost share. We take the Bird rule (Bergantinos and Vidal-Puga, 2000) as an example to illustrate the problem. Under the Bird rule, the cost share of an agent is the cost of the edge that connects it to the (growing) spanning tree constructed by Prim's algorithm (Prim, 1994) starting from the source. Consider Figure 1: when agent \(b\) does not cut the edge \((a,b)\), its cost share is \(3\); when it cuts the edge, its cost share is \(2<3\).
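
To make the manipulation concrete, the sketch below implements the Bird rule exactly as described here (Prim's algorithm grown from the source, each newly attached agent paying its connecting edge). Since Figure 1's exact weights are not reproduced in the text, the weights below are hypothetical stand-ins chosen to reproduce the quoted shares of \(3\) and \(2\).

```python
import heapq

def bird_rule(edges, source):
    """Bird rule: grow a spanning tree from the source with Prim's algorithm;
    each node's cost share is the weight of the edge that first attaches it."""
    graph = {}
    for (u, v), c in edges.items():
        graph.setdefault(u, []).append((c, v))
        graph.setdefault(v, []).append((c, u))
    share, visited = {}, {source}
    frontier = list(graph[source])
    heapq.heapify(frontier)
    while frontier:
        c, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        share[node] = c                     # pays the edge that attached it
        for c2, nbr in graph[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (c2, nbr))
    return share

# Stand-in weights: b is attached via (a, b) when that edge is available.
edges = {('s', 'a'): 1, ('a', 'b'): 3, ('s', 'c'): 4, ('b', 'c'): 2}
print(bird_rule(edges, 's'))
# {'a': 1, 'b': 3, 'c': 2}  -- total cost 6
print(bird_rule({e: c for e, c in edges.items() if e != ('a', 'b')}, 's'))
# {'a': 1, 'c': 4, 'b': 2}  -- b now pays 2, but the total cost rises to 7
```

Cutting \((a,b)\) lowers \(b\)'s share from 3 to 2 even though the total connection cost rises from 6 to 7, which is exactly the incentive problem discussed above.
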
Other solutions treated the problem from the cooperative game perspective. They are all based on the Shapley value (Sar
In summary, the existing solutions for the classic cost sharing problem on complete graphs do not consider the situation where agents need to report their valuations and adjacent edges; as shown above, they are not guaranteed to satisfy feasibility and truthfulness.
## 3. The model
We consider a cost sharing problem to connect the nodes in a weighted undirected graph \(G=\langle V\cup\{s\},E\rangle\). The weight of the edge \((i,j)\in E\) denoted by \(c_{(i,j)}\geq 0\) represents the cost to use the edge to connect \(i\) and \(j\). All the nodes in set \(V\) want to connect to the source node \(s\). The total cost of the connectivity has to be shared among all connected nodes except for \(s\). Each node \(i\in V\) has a private valuation \(v_{i}\geq 0\), which is the maximum cost that it is willing to share.
Given the graph, the minimum cost of connecting the nodes is the weight of the minimum Steiner tree (Kendal, 1990) (we assume that the graph is connected). The minimum Steiner tree of a set of nodes is a tree with the minimum weight that contains these nodes (it may include the nodes outside the set). The question here is how the nodes share this cost. We also consider two natural strategic behaviors of each node except for the source, i.e., cutting its adjacent edges and misreporting its valuation. An edge \((i,j)\) cannot be used for connectivity if \(i\) or \(j\) cuts it. Our goal is to design cost sharing mechanisms to incentivize nodes to report their valuations truthfully and also offer all their adjacent edges so that we can use all the edges to minimize the total cost of the connectivity.
Formally, let \(e_{i}(i\in V\cup\{s\})\) be the set of \(i\)'s adjacent edges and \(\theta_{i}=(e_{i},v_{i})\) be the _type_ of \(i\). Let \(\theta=(\theta_{1},\cdots,\theta_{|V|+1})\) be the type profile of all nodes including the source \(s\) (the valuation of \(s\) is _null_). We also write \(\theta=(\theta_{i},\theta_{-i})\), where \(\theta_{-i}=(\theta_{1},\cdots,\theta_{i-1},\theta_{i+1},\cdots,\theta_{|V|+1})\) is the type profile of all nodes except for \(i\). Let \(\Theta_{i}\) be the type space of \(i\) and \(\Theta\) be the type profile space of all nodes (which generates all possible graphs containing \(V\cup\{s\}\)).
We design a cost sharing mechanism that asks each node to report its valuation and the set of its adjacent edges that can be used for the connectivity. Let \(\theta^{\prime}_{i}=(e^{\prime}_{i},v^{\prime}_{i})\) be the report of \(i\), where \(e^{\prime}_{i}\subseteq e_{i}\) and \(v^{\prime}_{i}\geq 0\), and let \(\theta^{\prime}=(\theta^{\prime}_{1},\cdots,\theta^{\prime}_{|V|+1})\) be the report profile of all nodes. Given a report profile \(\theta^{\prime}\in\Theta\), the graph induced by \(\theta^{\prime}\) is denoted by \(G(\theta^{\prime})=\langle V\cup\{s\},E(\theta^{\prime})\rangle\subseteq\langle V\cup\{s\},E\rangle\), where \(E(\theta^{\prime})=\{(i,j)\,|\,(i,j)\in e^{\prime}_{i}\cap e^{\prime}_{j}\}\). Finally, let \(r_{i}(\theta^{\prime})\subseteq V\) be the set of \(i\)'s neighbour nodes in \(G(\theta^{\prime})\).
**Definition 3.1**.: A cost sharing mechanism consists of a node selection policy \(g:\Theta\to 2^{V}\), an edge selection policy \(f:\Theta\to 2^{E}\), and a cost sharing policy \(x:\Theta\to\mathbb{R}^{|V|}\). Given a report profile \(\theta^{\prime}\in\Theta\), \(g(\theta^{\prime})\subseteq V\) selects the nodes to be connected, \(f(\theta^{\prime})\subseteq E(\theta^{\prime})\) selects the edges to connect the selected nodes \(g(\theta^{\prime})\), and \(x(\theta^{\prime})=(x_{i}(\theta^{\prime}))_{i\in V}\), where \(x_{i}(\theta^{\prime})\) is the cost share of \(i\), which is zero if \(i\notin g(\theta^{\prime})\).
For simplicity, we use \((g,f,x)\) to denote a cost sharing mechanism. Given a report profile \(\theta^{\prime}\in\Theta\), the utility of a node \(i\in V\) under \((g,f,x)\) is defined as
\[u_{i}(\theta^{\prime})=\begin{cases}v_{i}-x_{i}(\theta^{\prime})&\text{if }i\in g(\theta^{\prime}),\\ 0&\text{otherwise.}\end{cases}\]
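
As a minimal illustration of these definitions, the sketch below keeps reports as plain dictionaries and computes the induced edge set \(E(\theta^{\prime})\) and a node's utility; the identifiers and toy reports are ours, not part of the formal model.

```python
def induced_edges(reported_edges):
    """E(theta'): an edge (i, j) is available only if both i and j report it."""
    available = set()
    for i, edges_i in reported_edges.items():
        for (u, v) in edges_i:
            other = v if u == i else u
            if (u, v) in reported_edges.get(other, set()):
                available.add((u, v))
    return available

def utility(i, valuation, selected, cost_share):
    """u_i = v_i - x_i if i is selected, and 0 otherwise."""
    return valuation[i] - cost_share.get(i, 0.0) if i in selected else 0.0

# Toy reports: node b withholds the edge (a, b), so it drops out of E(theta').
reports = {
    's': {('s', 'a'), ('s', 'b')},
    'a': {('s', 'a'), ('a', 'b')},
    'b': {('s', 'b')},                       # (a, b) has been cut by b
}
print(induced_edges(reports))                # only ('s', 'a') and ('s', 'b') survive
print(utility('a', {'a': 3.0}, {'a'}, {'a': 2.0}))   # 1.0
```
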
In the following, we introduce the desirable properties of a cost sharing mechanism.
Feasibility requires that the cost share of each node does not exceed its reported valuation.
**Definition 3.2**.: A cost sharing mechanism \((g,f,x)\) satisfies **feasibility** if \(x_{i}(\theta^{\prime})\leq v^{\prime}_{i}\) for all \(i\in V\), for all \(\theta^{\prime}\in\Theta\).
Truthfulness states that each node cannot increase its utility by cutting its adjacent edges and misreporting its valuation. Note that the source does not behave strategically in this setting.
**Definition 3.3**.: A cost sharing mechanism \((g,f,x)\) satisfies **truthfulness** if \(u_{i}((\theta_{i},\theta^{\prime}_{-i}))\geq u_{i}((\theta^{\prime}_{i},\theta^{ \prime}_{-i}))\), for all \(i\in V\), for all \(\theta_{i}\), \(\theta^{\prime}_{i}\in\Theta_{i}\), and for all \(\theta^{\prime}_{-i}\in\Theta_{-i}=\Theta\setminus\Theta_{i}\).
Individual rationality requires that each node's utility is non-negative when it reports its type truthfully no matter what the others do.
**Definition 3.4**.: A cost sharing mechanism \((g,f,x)\) satisfies **individual rationality (IR)** if \(u_{i}(\theta_{i},\theta^{\prime}_{-i})\geq 0\) for all \(i\in V\), for all \(\theta_{i}\in\Theta_{i}\), and for all \(\theta^{\prime}_{-i}\in\Theta_{-i}=\Theta\setminus\Theta_{i}\).
Utility monotonicity states that for each selected node, its utility will weakly decrease if the cost of one of its adjacent edges increases under the same report profile.
**Definition 3.5**.: A cost sharing mechanism \((g,f,x)\) satisfies **utility monotonicity (UM)** if \(u_{i}(\theta^{\prime})\geq u_{i}^{+}(\theta^{\prime})\) for all \(\theta^{\prime}\in\Theta\) and for all \(i\in V\), where \(u_{i}^{+}(\theta^{\prime})\) is \(i\)'s utility when the cost of some edge \((i,j)\in e^{\prime}_{i}\) increases.
We also require that the sum of all nodes' cost share equals the total cost of the selected edges for any report profile. That is, the mechanism has no profit or loss.
**Definition 3.6**.: A cost sharing mechanism \((g,f,x)\) satisfies **budget balance (BB)** if \(\sum_{i\in V}x_{i}(\theta^{\prime})=\sum_{(i,j)\in f(\theta^{\prime})}c_{(i,j)}\) for all \(\theta^{\prime}\in\Theta\).
The ranking property requires that for any nodes \(i\) and \(j\) that have the same reported valuations and the same neighbour nodes except for \(i\) and \(j\), if the cost of the edge \((i,k)\) is less expensive than the edge \((j,k)\) for any neighbour node \(k\), then the utility of \(i\) should be larger than \(j\).
**Definition 3.7**.: A cost sharing mechanism \((g,f,x)\) satisfies **ranking** if for all \(\theta^{\prime}\in\Theta\), for all \(i,j\in V\) with \(r_{i}(\theta^{\prime})\setminus\{j\}=r_{j}(\theta^{\prime})\setminus\{i\}\) and \(v^{\prime}_{i}=v^{\prime}_{j}\) (assume \(v^{\prime}_{i}=v_{i}\) and \(v^{\prime}_{j}=v_{j}\)), we have \(c_{(i,k)}\leq c_{(j,k)}\) for all \(k\in r_{i}(\theta^{\prime})\setminus\{j\}\) implies \(u_{i}(\theta^{\prime})\geq u_{j}(\theta^{\prime})\).
Symmetry says nodes that play the same role obtain the same utility.
**Definition 3.8**.: A cost sharing mechanism \((g,f,x)\) satisfies **symmetry** if for all \(\theta^{\prime}\in\Theta\), for all \(i,j\in V\) with \(r_{i}(\theta^{\prime})\setminus\{j\}=r_{j}(\theta^{\prime})\setminus\{i\}\) and \(v^{\prime}_{i}=v^{\prime}_{j}\) (assume \(v^{\prime}_{i}=v_{i}\) and \(v^{\prime}_{j}=v_{j}\)), we have \(c_{(i,k)}=c_{(j,k)}\) for all \(k\in r_{i}(\theta^{\prime})\setminus\{j\}\) implies \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})\).
Finally, each node's cost share should be non-negative.
**Definition 3.9**.: A cost sharing mechanism \((g,f,x)\) satisfies **positiveness** if \(x_{i}(\theta^{\prime})\geq 0\) for all \(i\in V\) and for all \(\theta^{\prime}\in\Theta\).
In the rest of the paper, we design cost sharing mechanisms to satisfy the above properties.
## 4. Impossibility results
In this section, we establish some impossibility results. We first introduce some extra notions.
**Definition 4.1**.: For a given subset \(S\subseteq V\), the social welfare (SW) of \(S\) is
\[SW(S)=\sum_{i\in S}v^{\prime}_{i}-C(S),\]
where \(v^{\prime}_{i}\) is the reported valuation of node \(i\) and \(C(S)\) is the minimum cost of connecting all the nodes in \(S\) (i.e., the weight of the minimum Steiner tree of \(S\cup\{s\}\)).
Intuitively, social welfare represents the profit of the selected nodes.
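
Since \(C(S)\) and \(SW(S)\) are used throughout the rest of the paper, the following brute-force sketch (our helper names, exponential and intended only for small illustrative instances) computes \(C(S)\) by enumerating candidate Steiner-node sets and taking a minimum spanning tree of each induced subgraph.

```python
from itertools import combinations

def mst_cost(nodes, edges):
    """Kruskal MST on the subgraph induced by `nodes`; None if disconnected."""
    parent = {u: u for u in nodes}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    total, used = 0.0, 0
    for (u, v), c in sorted(edges.items(), key=lambda kv: kv[1]):
        if u in parent and v in parent:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                total += c
                used += 1
    return total if used == len(nodes) - 1 else None

def steiner_cost(S, V, edges, source='s'):
    """C(S): minimum weight of a tree containing S and the source, possibly
    passing through extra (Steiner) nodes of V; brute force over those nodes."""
    if not S:
        return 0.0
    terminals = set(S) | {source}
    optional = [u for u in V if u not in terminals]
    best = None
    for k in range(len(optional) + 1):
        for extra in combinations(optional, k):
            c = mst_cost(terminals | set(extra), edges)
            if c is not None and (best is None or c < best):
                best = c
    return best

def social_welfare(S, V, edges, valuations, source='s'):
    """SW(S): reported valuations of S minus the connection cost C(S)."""
    return sum(valuations[i] for i in S) - steiner_cost(S, V, edges, source)

# The Figure 2 instance: SW({a, b}) = (3 + 3) - 5 = 1.
V, edges, vals = {'a', 'b'}, {('s', 'a'): 2, ('s', 'b'): 4, ('a', 'b'): 3}, {'a': 3, 'b': 3}
print(social_welfare({'a', 'b'}, V, edges, vals))   # 1.0
```
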
**Definition 4.2**.: Given \(\theta^{\prime}\in\Theta\), a mechanism \((g,f,x)\) satisfies **efficiency** if it selects \(g(\theta^{\prime})\subseteq V\) such that its social welfare is maximized, i.e.,
\[SW(g(\theta^{\prime}))=\max_{\forall S\subseteq V}SW(S).\]
For simplicity, we use \(\delta(V)\) to denote a subset of \(V\) that has the maximal social welfare, i.e.,
\[\delta(V)=\arg\max_{\forall S\subseteq V}SW(S).\]
The computation of \(\delta(V)\) is described as follows.
**Algorithm 1**

```
1: Set \(\delta(\emptyset)=\emptyset\), \(SW(\emptyset)=0\), and let \(P(V)\) be the power set of \(V\).
2: Sort all the elements of \(P(V)\) in ascending order of cardinality.
3: For \(S\in P(V)\setminus\{\emptyset\}\):
   * Get \(Q(S)=\{S^{\prime}\,|\,S^{\prime}\subset S,|S^{\prime}|=|S|-1\}\).
   * Let \(\delta(S)=\delta(S^{*})\), where \(S^{*}=\operatorname*{arg\,max}_{S^{\prime}\in Q(S)}SW(\delta(S^{\prime}))\).
   * If \(\sum_{i\in S}v^{\prime}_{i}-C(S)\geq SW(\delta(S))\), set \(\delta(S)=S\).
   * Get \(SW(\delta(S))=\sum_{i\in\delta(S)}v^{\prime}_{i}-C(\delta(S))\).
```

**Output**: The subset \(\delta(V)\) and the maximum social welfare \(SW(\delta(V))\)
A running example of Algorithm 1 is given in Figure 2. Assume \(c_{(s,a)}=2,c_{(s,b)}=4,c_{(a,b)}=3,v^{\prime}_{a}=3,v^{\prime}_{b}=3\). By Algorithm 1, we have \(\delta(\{a\})=\{a\},\delta(\{b\})=\emptyset\) and \(\delta(\{a,b\})=\{a,b\}\).
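
A direct transcription of Algorithm 1 is sketched below, assuming some callable `cost(S)` that returns \(C(S)\) (for example the `steiner_cost` sketch given after Definition 4.1); the usage at the end reproduces the Figure 2 example with the values of \(C(S)\) filled in by hand.

```python
from itertools import combinations

def best_subset(V, valuations, cost):
    """Algorithm 1 as a subset dynamic program.  Returns the tables delta[S]
    and swd[S] = SW(delta(S)) for every S subset of V, so delta[frozenset(V)]
    is the selected set.  `cost(S)` must return C(S)."""
    V = list(V)
    delta = {frozenset(): frozenset()}     # delta(S)
    swd = {frozenset(): 0.0}               # SW(delta(S))
    for size in range(1, len(V) + 1):
        for S in map(frozenset, combinations(V, size)):
            # Best result obtainable from the immediate subsets of S.
            best_sub = max((frozenset(Q) for Q in combinations(S, size - 1)),
                           key=lambda Q: swd[Q])
            sw_S = sum(valuations[i] for i in S) - cost(S)
            if sw_S >= swd[best_sub]:      # taking all of S does at least as well
                delta[S], swd[S] = S, sw_S
            else:
                delta[S], swd[S] = delta[best_sub], swd[best_sub]
    return delta, swd

# Figure 2 instance, with C(S) filled in by hand: C({a})=2, C({b})=4, C({a,b})=5.
C = {frozenset(): 0, frozenset('a'): 2, frozenset('b'): 4, frozenset('ab'): 5}
delta, swd = best_subset('ab', {'a': 3, 'b': 3}, lambda S: C[frozenset(S)])
print(delta[frozenset('a')], delta[frozenset('b')], delta[frozenset('ab')])
# delta({a}) = {a}, delta({b}) = empty set, delta({a,b}) = {a,b}, matching the example
```
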
**Proposition 4.3**.: _There exists no cost sharing mechanism which satisfies truthfulness, feasibility, efficiency, and budget balance simultaneously._
Figure 2. The \(s\) represents the source, \(a\) and \(b\) represent the nodes, the numbers in the circles represent the reported valuations of nodes, and the numbers on the edges represent the cost for the connectivity.
Proof.: We only need to consider a simple line graph in Figure 3. We show that when feasibility, efficiency, and budget balance are satisfied, truthfulness will be violated. Assume that \(m,n>0\), \(v_{a}>m\) and \(v_{b}>m+n\). By efficiency, when \(a\) and \(b\) truthfully report \(v_{a}^{\prime}=v_{a},v_{b}^{\prime}=v_{b}\), \(a\) and \(b\) are both selected by the mechanism since \((v_{a}+v_{b})-(m+n)>v_{a}-m\).
* When \(x_{a}(\theta^{\prime})>0\), if node \(a\) reports some \(v_{a}^{\prime\prime}\) with \(0<v_{a}^{\prime\prime}<x_{a}(\theta^{\prime})\), then by efficiency \(a\) is still selected (its participation still strictly increases the social welfare), and by feasibility \(x_{a}(\theta^{\prime\prime})\leq v_{a}^{\prime\prime}<x_{a}(\theta^{\prime})\), so the utility of \(a\) increases. Hence, node \(a\) has the motivation to misreport.
* When \(x_{a}(\theta^{\prime})=0\), by budget balance, we have \(x_{b}(\theta^{\prime})=m+n\). If node \(b\) reports some \(v_{b}^{\prime\prime}\) with \(n<v_{b}^{\prime\prime}<m+n\), then by efficiency both nodes are still selected, and by feasibility \(x_{b}(\theta^{\prime\prime})\leq v_{b}^{\prime\prime}<m+n\), so the utility of \(b\) increases. Hence, node \(b\) has the motivation to misreport.
Therefore, node \(a\) or node \(b\) has the motivation to misreport its valuation, i.e., truthfulness is violated.
We further show that when truthfulness, feasibility, and budget balance are satisfied, the maximal social welfare cannot be approximated.
**Definition 4.4**.: A mechanism is \(\alpha^{lb}\)-approximate (\(\alpha^{lb}\in(0,1)\)) to the social welfare if \(SW(g(\theta^{\prime}))\geq\alpha^{lb}\cdot SW^{*}(\theta^{\prime})\) for all \(\theta^{\prime}\in\Theta\), where \(SW^{*}(\theta^{\prime})\) is the maximal social welfare under \(\theta^{\prime}\); the superscript \(lb\) indicates that \(\alpha^{lb}\) is a lower bound of the ratio \(\frac{SW(g(\theta^{\prime}))}{SW^{*}(\theta^{\prime})}\).

**Definition 4.5**.: A mechanism is \(\beta^{ub}\)-approximate (\(\beta^{ub}\in(0,1)\)) to the social welfare if \(SW(g(\theta^{\prime}))\leq\beta^{ub}\cdot SW^{*}(\theta^{\prime})\) for all \(\theta^{\prime}\in\Theta\), where \(SW^{*}(\theta^{\prime})\) is the maximal social welfare under \(\theta^{\prime}\); the superscript \(ub\) indicates that \(\beta^{ub}\) is an upper bound of the ratio \(\frac{SW(g(\theta^{\prime}))}{SW^{*}(\theta^{\prime})}\).
**Proposition 4.6**.: _There exists no cost sharing mechanism that satisfies truthfulness, budget balance, feasibility, and \(\alpha^{lb}\)-approximation (\(\beta^{ub}\)-approximation) simultaneously._
Proof.: It suffices to consider a simple line graph in Figure 3. Without loss of generality, assume that \(v_{a}>m\), \(v_{b}=n+p\) (\(p>0\)), \(c_{(s,a)}=m\) and \(c_{(a,b)}=n\). When \(a\) and \(b\) are both selected by the mechanism, the maximum social welfare is \((v_{a}-m+p)\).
Next, we show that when truthfulness, feasibility, and budget balance are satisfied, \(\alpha^{lb}\)-approximation and \(\beta^{ub}\)-approximation will be violated. By the argument in the proof of Proposition 4.3, such a mechanism cannot select both \(a\) and \(b\); it can only select \(a\). Then the social welfare is \((v_{a}-m)\). Hence, the ratio equals \(\frac{v_{a}-m}{v_{a}-m+p}\). Letting \(p\rightarrow\infty\), the ratio approaches \(0\), so the required \(\alpha^{lb}\) does not exist. Letting \(p\to 0\) instead, the ratio approaches \(1\), which means that the required \(\beta^{ub}\) does not exist.
Therefore, no cost sharing mechanism can satisfy truthfulness, budget balance, feasibility, and \(\alpha^{lb}\)-approximation (\(\beta^{ub}\)-approximation) simultaneously.
We further consider the deficit of any mechanism that satisfies truthfulness, feasibility, and efficiency. We introduce a concept called budget balance ratio to evaluate it.
**Definition 4.7**.: A mechanism has a budget balance ratio (BBR) called \(\gamma\in(0,1]\) if \(\sum_{i\in g(\theta^{\prime})}x_{i}(\theta^{\prime})\geq\gamma\cdot C(g(\theta ^{\prime}))\), \(\forall\theta^{\prime}\in\Theta\).
Figure 3. The \(s\) represents the source and \(a,b\) are the nodes with the valuations \(v_{a},v_{b}\) respectively. The cost of the edges \((s,a)\) and \((a,b)\) are \(m\) and \(n\) respectively.
**Proposition 4.8**.: _There exists no cost sharing mechanism that satisfies truthfulness, feasibility, and efficiency and has a budget balance ratio \(\gamma\in(0,1]\)._

Proof.: According to Definition 4.7, a cost sharing mechanism having a BBR \(\gamma\in(0,1]\) needs to satisfy the following: \(\forall\theta^{\prime}\in\Theta\), \(\frac{\sum_{i\in g(\theta^{\prime})}x_{i}(\theta^{\prime})}{C(g(\theta^{\prime}))}\geq\gamma>0\). So, to prove the proposition, it suffices to find a \(\theta^{\prime}\) such that \(\frac{\sum_{i\in g(\theta^{\prime})}x_{i}(\theta^{\prime})}{C(g(\theta^{\prime}))}\leq 0\).
Without loss of generality, as shown in Figure 4, we assume \(V=\{a,b\}\), \(v_{a}=v_{b}=m\), \(c_{(s,a)}=c_{(s,b)}=m\), \(c_{(a,b)}=0\). By efficiency, the mechanism should select both \(a\) and \(b\). By truthfulness, each node offers all its adjacent edges and reports its valuation truthfully. By feasibility and truthfulness, we have \(x_{a}(\theta^{\prime})\leq 0\) and \(x_{b}(\theta^{\prime})\leq 0\): if, say, \(x_{a}(\theta^{\prime})>0\), then node \(a\) could report some valuation \(v^{\prime\prime}_{a}\) with \(0<v^{\prime\prime}_{a}<x_{a}(\theta^{\prime})\); selecting both nodes would remain efficient, so \(a\) would still be selected, and feasibility would force a strictly smaller cost share, contradicting truthfulness. Thus, we have \(\frac{\sum_{i\in g(\theta^{\prime})}x_{i}(\theta^{\prime})}{C(g(\theta^{\prime}))}\leq\frac{0}{m}=0\).
Note that there exists a trivial cost sharing mechanism where each node pays 0 and all the nodes in \(V\) are selected by the mechanism. This mechanism satisfies truthfulness and feasibility but does not satisfy efficiency and budget balance.
By Proposition 4.3, a cost sharing mechanism cannot simultaneously satisfy truthfulness, feasibility, efficiency, and budget balance. Therefore, we propose two feasible mechanisms satisfying truthfulness, respectively together with efficiency and budget balance in the following sections.
We summarize the impossibility results and our mechanisms in Table 1.
## 5. Critical value based mechanism
In this section, we propose a cost sharing mechanism that satisfies truthfulness, feasibility, and efficiency but does not satisfy budget balance. In addition, we show that it also satisfies other desirable properties.
The key ideas of the mechanism are as follows. First, find out the node set which has maximal social welfare. Second, for each node in the set, compute its critical value (CV), i.e., the minimal reported valuation that keeps it in the set.
Finally, set the cost share of each node in the set to its critical value; every other node's cost share is 0.

| Truthfulness | Feasibility | Budget Balance | Efficiency | Mechanism |
| --- | --- | --- | --- | --- |
| ✓ | ✓ | ✓ | ✓ | NULL |
| ✓ | ✓ | ✓ | \(\alpha^{lb}(\beta^{ub})\) | NULL |
| ✓ | ✓ | \(\gamma\) | ✓ | NULL |
| ✓ | ✓ | | ✓ | CVM |
| ✓ | ✓ | ✓ | | RSM |

Table 1. "NULL" means that there exists no mechanism satisfying all the marked properties in that row; \(\alpha^{lb}(\beta^{ub})\) stands for \(\alpha^{lb}\)- (\(\beta^{ub}\)-) approximate efficiency, and \(\gamma\) for a positive budget balance ratio. Our mechanisms CVM and RSM satisfy the marked properties in their rows.
Figure 4. The \(s\) represents the source and \(a,b\) are the nodes with the valuation \(m\). The cost of the edges \((s,a)\), \((a,b)\) and \((s,b)\) are \(m,0,m\) respectively.
The computation of the minimum reported valuation of each node \(i\in g(\theta^{\prime})\) is as follows. We first compute \(\delta(g(\theta^{\prime})\setminus\{i\})\), the set of nodes that maximizes the social welfare when node \(i\) is not considered. Then we compute the social welfare of \(g(\theta^{\prime})\) and \(\delta(g(\theta^{\prime})\setminus\{i\})\). Next, we find out the minimum reported valuation of \(i\) that keeps it in \(g(\theta^{\prime})\) and guarantees \(SW(g(\theta^{\prime}))=SW(\delta(g(\theta^{\prime})\setminus\{i\}))\).
The mechanism is formally described as follows. A running example is given after the algorithm.
**Critical Value Based Mechanism (CVM)**
**Input**: A report profile \(\theta^{\prime}\) and a graph \(G(\theta^{\prime})\)
1. Run Algorithm 1 and get \(g(\theta^{\prime})=\delta(V)\).
2. Compute the minimum Steiner tree of \(g(\theta^{\prime})\cup\{s\}\) and set \(f(\theta^{\prime})\) to be the set of edges in the tree.
3. For \(i\in g(\theta^{\prime})\): * Compute node \(i\)'s critical value \[CV_{i}(\theta^{\prime})=\Big(\sum_{j\in\delta(g(\theta^{\prime})\setminus\{i\})}v_{j}^{\prime}-C(\delta(g(\theta^{\prime})\setminus\{i\}))\Big)-\Big(\sum_{k\in g(\theta^{\prime})\setminus\{i\}}v_{k}^{\prime}-C(g(\theta^{\prime}))\Big),\tag{1}\] where \(\delta(\cdot)\) is defined in Algorithm 1.
4. Set \(x_{i}(\theta^{\prime})=CV_{i}(\theta^{\prime})\).
**Example 5.1**.: The graph \(G(\theta^{\prime})\) generated by a report profile \(\theta^{\prime}\in\Theta\) is shown in Figure 5(1). First, run Algorithm 1 and obtain \(\delta(S)\) for all \(S\subseteq V\). In particular, we have \(g(\theta^{\prime})=\delta(V)=\{a,b,c,d\}\) and \(f(\theta^{\prime})=\{(s,b),(a,b),(a,c),(a,d)\}\). Then we compute each node's cost share. Taking node \(a\) as an example, we have \(\delta(g(\theta^{\prime})\setminus\{a\})=\{b\}\) and by Equation (1), \(x_{a}(\theta^{\prime})=v_{b}^{\prime}-c_{(s,b)}-(v_{b}^{\prime}+v_{c}^{\prime}+v_{d}^{\prime}-c_{(s,b)}-c_{(a,b)}-c_{(a,c)}-c_{(a,d)})=9-7-(9+6+7-7-8-6-5)=6\). Similarly, we have \(x_{b}(\theta^{\prime})=5,x_{c}(\theta^{\prime})=6\) and \(x_{d}(\theta^{\prime})=5\). Thus we have \(x(\theta^{\prime})=(6,5,6,5)\).
Figure 5. The left figure is \(G(\theta^{\prime})\) and the right figure is the minimum Steiner tree of \(g(\theta^{\prime})\cup\{s\}\).
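
The selection and critical-value steps of CVM can be sketched as below, reusing the `best_subset` table from the Algorithm 1 sketch and a `cost(S)` callable (both are our hypothetical helpers); the usage applies it to the small Figure 2 instance rather than Figure 5, whose full data are not reproduced in the text.

```python
def cvm(V, valuations, cost):
    """Critical Value Based Mechanism (sketch): select g = delta(V), then
    charge each selected node its critical value from Equation (1).
    Assumes best_subset() from the Algorithm 1 sketch and a cost(S) callable."""
    delta, swd = best_subset(V, valuations, cost)
    g = delta[frozenset(V)]
    shares = {}
    for i in g:
        rest = g - {i}
        sw_without_i = swd[rest]                          # SW(delta(g \ {i}))
        sw_others = sum(valuations[k] for k in rest) - cost(g)
        shares[i] = sw_without_i - sw_others              # Equation (1)
    return g, shares

# Applied to the Figure 2 instance (C(S) filled in by hand as before):
C = {frozenset(): 0, frozenset('a'): 2, frozenset('b'): 4, frozenset('ab'): 5}
g, x = cvm('ab', {'a': 3, 'b': 3}, lambda S: C[frozenset(S)])
print(sorted(g), x)   # ['a', 'b'] with x_a = 2 and x_b = 3
```

For this instance both critical values stay below the reported valuations, as feasibility requires.
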
### Properties of CVM
Now we show some nice properties of the critical value based mechanism.
**Theorem 5.2**: _The critical value based mechanism satisfies truthfulness._
First, we prove that each node \(i\in V\) will report its valuation truthfully (i.e., \(v_{i}^{\prime}=v_{i}\)). When \(i\) truthfully reports its valuation, there are two cases.
* \(i\notin g(\theta^{\prime})\). We have \(u_{i}(\theta^{\prime})=0\). If node \(i\) reports \(v_{i}^{\prime}<v_{i}\), it is still not selected and the utility does not change. Otherwise (\(v_{i}^{\prime}>v_{i}\)), there are two possibilities.
* It is still not selected and the utility does not change.
* It is selected. By the proposed mechanism, since \(i\)'s critical value is larger than \(v_{i}\), its utility is negative. Thus the utility decreases.
* \(i\in g(\theta^{\prime})\). We have \(u_{i}(\theta^{\prime})\geq 0\). If node \(i\) reports \(v_{i}^{\prime}>v_{i}\), it is still selected and the utility does not change. Otherwise (\(v_{i}^{\prime}\leq v_{i}\)), there are two possibilities.
* It is still selected. The utility does not change.
* It is not selected and its utility is 0. So the utility decreases.
Second, we prove that each node \(i\in V\) will report its adjacent edges truthfully (i.e., \(e_{i}^{\prime}=e_{i}\)). When \(i\) truthfully reports its adjacent edges, there are two cases.
* \(i\notin g(\theta^{\prime})\). We have \(u_{i}(\theta^{\prime})=0\). If node \(i\) reports \(e_{i}^{\prime}\neq e_{i}\), its cost share will weakly increase and thus it is still not selected. Hence, the utility does not change.
* \(i\in g(\theta^{\prime})\). We have \(u_{i}(\theta^{\prime})\geq 0\). If node \(i\) reports \(e_{i}^{\prime}\neq e_{i}\), there are two possibilities.
* It is not selected. Obviously, its utility decreases.
* It is still selected. For simplicity, let \(S_{1}=g(\theta^{\prime}),S_{2}=g(\theta^{\prime\prime}),S_{3}=S_{1}\setminus \delta(S_{1}\setminus\{i\}),S_{4}=S_{2}\setminus\delta(S_{2}\setminus\{i\})\) where \(\theta^{\prime\prime}=((e_{i}^{\prime},v_{i}^{\prime}),\theta_{-i}^{\prime})\) and \(\theta^{\prime}=((e_{i},v_{i}^{\prime}),\theta_{-i}^{\prime})\). Then by Equation (1) we have \[CV_{i}(\theta^{\prime}) =C(S_{1})-C(\delta(S_{1}\setminus\{i\}))-\sum_{j\in S_{3}\setminus \{i\}}v_{j}^{\prime},\] \[CV_{i}(\theta^{\prime\prime}) =C^{\prime}(S_{2})-C^{\prime}(\delta(S_{2}\setminus\{i\}))-\sum_ {j\in S_{4}\setminus\{i\}}v_{j}^{\prime},\] where \(C^{\prime}(\cdot)\) denotes the value function when \(i\) misreports \(e_{i}^{\prime}\). Since \(S_{1}\) maximizes the social welfare under \(\theta^{\prime}\), we have \[\sum_{j\in S_{4}}v_{j}^{\prime}-(C(S_{1})-C(\delta(S_{1}\setminus \{i\})))\] \[\geq\sum_{j\in S_{4}}v_{j}^{\prime}-(C(S_{2})-C(\delta(S_{2} \setminus\{i\}))).\] For the set \(S_{4}\), the increment of SW will decrease since the set of available edges of \(S_{4}\) is reduced. Then we have \[\sum_{j\in S_{4}}v_{j}^{\prime}-(C(S_{2})-C(\delta(S_{2}\setminus \{i\})))\] \[\geq\sum_{j\in S_{4}}v_{j}^{\prime}-(C^{\prime}(S_{2})-C^{\prime}( \delta(S_{2}\setminus\{i\}))).\]
Therefore,
\[\sum_{j\in\mathcal{S}_{3}}v^{\prime}_{j}-(C(S_{1})-C(\delta(S_{1} \setminus\{i\})))\] \[\geq\sum_{j\in\mathcal{S}_{4}}v^{\prime}_{j}-(C^{\prime}(S_{2})-C^ {\prime}(\delta(S_{2}\setminus\{i\}))).\]
Eliminating \(v^{\prime}_{i}\), we have \(CV_{i}(\theta^{\prime})\leq CV_{i}(\theta^{\prime\prime})\).
So the cost share of \(i\) weakly increases when misreporting its adjacent edges. Therefore, \(i\)'s utility weakly decreases when misreporting its adjacent edges.
**Theorem 5.3**: _The critical value based mechanism satisfies feasibility._
According to the mechanism and Algorithm 1, the participation of each selected node can increase social welfare. Because the critical value is the minimum reported valuation that keeps the node being selected, its cost share is less than or equal to its reported valuation. For the other nodes, their cost share is \(0\), which is less than their reported valuation.
**Theorem 5.4**: _The critical value based mechanism satisfies individual rationality._
Given a report profile \(\theta^{\prime}\in\Theta\), for each node \(i\in V\setminus g(\theta^{\prime})\), we have \(u_{i}(\theta^{\prime})=0\). For each node \(i\in g(\theta^{\prime})\), by Theorem 5.3, we have \(x_{i}(\theta^{\prime})\leq v^{\prime}_{i}\). According to Theorem 5.2, \(v^{\prime}_{i}=v_{i}\). Hence, \(u_{i}(\theta^{\prime})=v_{i}-x_{i}(\theta^{\prime})\geq 0\).
**Theorem 5.5**: _The critical value based mechanism satisfies efficiency._
According to the mechanism, it is obvious that the set of selected nodes can maximize the social welfare.
**Theorem 5.6**: _The critical value based mechanism satisfies positiveness._
We prove the statement by contradiction. According to the mechanism and Equation (1), we have
\[x_{i}(\theta^{\prime})=CV_{i}(\theta^{\prime})=(C(g(\theta^{\prime}))-C( \delta(g(\theta^{\prime})\setminus\{i\})))-\Delta.\]
where
\[\Delta=\sum_{k\in g(\theta^{\prime})\setminus\{i\}}v^{\prime}_{k}-\sum_{j\in \delta(g(\theta^{\prime})\setminus\{i\})}v^{\prime}_{j}\]
If \(x_{i}(\theta^{\prime})\leq 0\), then we have
\[C(g(\theta^{\prime}))-C(\delta(g(\theta^{\prime})\setminus\{i\}))\leq\Delta\]
Since
\[C(g(\theta^{\prime})\setminus\{i\})\leq C(g(\theta^{\prime})),\]
we have
\[C(g(\theta^{\prime})\setminus\{i\})-C(\delta(g(\theta^{\prime})\setminus\{i \}))\leq\Delta\]
Therefore, the nodes in \((g(\theta^{\prime})\setminus\{i\}-\delta(g(\theta^{\prime})\setminus\{i\}))\) can be selected by the mechanism. By the definition of \(\delta(\cdot)\), they cannot be selected by the mechanism. This leads to a contradiction.
**Theorem 5.7**: _The critical value based mechanism satisfies symmetry._
Proof.: We need to show that, given \(\theta^{\prime}\in\Theta\) and \(i,j\in V\) with \(r_{i}(\theta^{\prime})\setminus\{j\}=r_{j}(\theta^{\prime})\setminus\{i\}\) and \(v_{i}=v_{j}\), \(c_{(i,k)}=c_{(j,k)}\) (\(\forall k\in r_{i}(\theta^{\prime})\setminus\{j\}\)) implies \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})\). If \(i,j\notin g(\theta^{\prime})\), we have \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})=0\). If \(i,j\in g(\theta^{\prime})\), by the condition of symmetry, we have \(x_{i}(\theta^{\prime})=x_{j}(\theta^{\prime})\). Since \(v_{i}=v_{j}\), we have \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})\).
Theorem 5.8 ().: _The critical value based mechanism satisfies utility monotonicity._
Proof.: For any node \(i\in V\), given \(\theta^{\prime}\in\Theta\), \(j\in V\) such that \((i,j)\in E\), we use \(g^{+}(\theta^{\prime})\) to denote the set of selected nodes when \(c_{(i,j)}\) increases.
If \(i\notin g(\theta^{\prime})\), then \(u_{i}(\theta^{\prime})=0\), and \(i\notin g^{+}(\theta^{\prime})\) according to CVM. So its utility does not change.
If \(i\in g(\theta^{\prime})\), then \(u_{i}(\theta^{\prime})\geq 0\). There are two cases.
* \(i\notin g^{+}(\theta^{\prime})\). Then its utility weakly decreases.
* \(i\in g^{+}(\theta^{\prime})\). Its cost share becomes \[(\sum_{j\in\delta(g^{+}(\theta^{\prime})\setminus\{i\})}v_{j}-C(\delta(g^{+}( \theta^{\prime})\setminus\{i\})))-(\sum_{k\in g^{+}(\theta^{\prime})\setminus \{i\}}v_{k}-C(g^{+}(\theta^{\prime}))).\] By the similar analysis to the second part in the proof of Theorem 5.2, we know the utility of \(i\) weakly decreases.
Theorem 5.9 ().: _The critical value based mechanism satisfies ranking._
Proof.: We need to show that, given \(\theta^{\prime}\in\Theta\) and \(i,j\in V\) with \(r_{i}(\theta^{\prime})\setminus\{j\}=r_{j}(\theta^{\prime})\setminus\{i\}\) and \(v_{i}=v_{j}\), \(c_{(i,k)}\leq c_{(j,k)}\) (\(\forall k\in r_{i}(\theta^{\prime})\setminus\{j\}\)) implies \(u_{i}(\theta^{\prime})\geq u_{j}(\theta^{\prime})\). For nodes \(i\) and \(j\), there are three cases.
* \(i,j\notin g(\theta^{\prime})\). We have \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})=0\).
* \(i\in g(\theta^{\prime})\) but \(j\notin g(\theta^{\prime})\). By individual rationality, we have \(u_{i}(\theta^{\prime})\geq 0=u_{j}(\theta^{\prime})\).
* \(i,j\in g(\theta^{\prime})\). Since \(v_{i}=v_{j}\), it suffices to prove \(x_{i}(\theta^{\prime})\leq x_{j}(\theta^{\prime})\). From Equation (1), we know the last two terms of the expressions of \(x_{i}(\theta^{\prime})\) and \(x_{j}(\theta^{\prime})\) are equal. Next, we compare \(\sum_{j\in\delta(g(\theta^{\prime})\setminus\{i\})}v_{j}-C(\delta(g(\theta^{ \prime})\setminus\{i\}))\) with \(\sum_{i\in\delta(g(\theta^{\prime})\setminus\{j\})}v_{i}-C(\delta(g(\theta^{ \prime})\setminus\{j\}))\). The former represents the maximum social welfare of \(g(\theta^{\prime})\setminus\{i\}\) and the latter represents the maximum social welfare of \(g(\theta^{\prime})\setminus\{j\}\). By the symmetry of \(i\) and \(j\) in the graph, the condition of ranking, and the proof of Theorem 5.8, the latter is larger than or equal to the former. Therefore, we have \(x_{i}(\theta^{\prime})\leq x_{j}(\theta^{\prime})\).
## 6. Repeated Selection Mechanism
The CVM defined in Section 5 satisfies truthfulness, feasibility, and efficiency but does not satisfy budget balance. In this section, we propose another cost sharing mechanism that satisfies truthfulness, feasibility, and budget balance but does not satisfy efficiency. Moreover, we show that it also satisfies other desirable properties.
We use the method of iterative optimization. In the first round (stage) of optimization, we find a subset of nodes together with the minimum equal cost share that satisfies the constraints of feasibility and budget balance for these nodes. In the following rounds of optimization, we consider the remaining nodes and add an extra constraint that the optimization variable is larger than or equal to the cost share of the previous round, which guarantees the truthfulness of the mechanism. The iterative process is repeated until all the nodes have been considered.
The proposed mechanism is called Repeated Selection Mechanism (RSM) formally described in the following.
**Repeated Selection Mechanism (RSM)**
**Input**: A report profile \(\theta^{\prime}\) and a graph \(G(\theta^{\prime})\)
1. For stage \(t\) (\(t=0,1,2,\dots\)), we introduce: * \(S^{t}:\) the set of nodes selected in stage \(t\), * \(M^{t}:\) the union of \(S^{0},S^{1},\cdots,S^{t}\), * \(X^{t}:\) the cost share of every node in \(S^{t}\), * \(W^{t}:\) the set of nodes which will not be considered after stage \(t\), * \(N^{t}:\) the set of remaining nodes after stage \(t\), and * \(\mathcal{E}^{t}\): the set of edges of the minimum Steiner tree of \(S^{t}\). \(\mathcal{E}\) is the set of selected edges during the process, i.e., the union of \(\mathcal{E}^{0},\mathcal{E}^{1},\cdots,\mathcal{E}^{t}\).
2. Stage 0: Set \(X^{0}=0\), \(N^{0}=V\), \(M^{0}=\emptyset\), \(W^{0}=\emptyset\), and \(\mathcal{E}=\emptyset\).
3. For \(t=1,2,\cdots\): Stage \(t\): solve \[\min_{S^{t}\subseteq N^{t-1},\;X^{t}}\;X^{t}\qquad\text{s.t.}\qquad X^{t}\geq X^{t-1},\quad X^{t}\cdot|S^{t}|=C(S^{t}),\quad v_{i}^{\prime}\geq X^{t},\ \forall i\in S^{t}.\] * If there is a solution, set \(x_{i}(\theta^{\prime})=X^{t}\) (\(\forall i\in S^{t}\)). Then, update: \[W^{t}=\{i\,|\,v_{i}^{\prime}<X^{t}\},\qquad N^{t}=N^{t-1}\setminus(S^{t}\cup W^{t}),\qquad M^{t}=M^{t-1}\cup S^{t},\] and set \(c_{(i,j)}=0\) for all \(i,j\in M^{t}\cup\{s\}\) with \(i\neq j\), and \(\mathcal{E}=\mathcal{E}\cup\mathcal{E}^{t}\). * If there is no solution, the algorithm terminates with \(g(\theta^{\prime})=M^{t-1}\), the cost shares \(x(\theta^{\prime})\) assigned above, and \(f(\theta^{\prime})=\mathcal{E}\).
\(\mathcal{E}=\mathcal{E}^{1}\cup\mathcal{E}^{2}=\{(s,b),(a,b)\}\). For stage 3, we have \(X^{3}=5\) and \(S^{3}=\{c,d,e\}\). Hence, \(\mathcal{E}^{3}=\{(a,d),(a,c),(b,e)\}\) and \(\mathcal{E}=\mathcal{E}^{1}\cup\mathcal{E}^{2}\cup\mathcal{E}^{3}=\{(s,b),(a,b),(a,d),(c,d),(b,e)\}\). Since there does not exist \(X^{4}\) satisfying the constraints, the proposed algorithm ends and we have \(g(\theta^{\prime})=\{a,b,c,d,e\},x(\theta^{\prime})=(3,4,5,5,5)\) and \(f(\theta^{\prime})=\{(s,b),(a,b),(a,d),(c,d),(b,e)\}\).
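
A brute-force sketch of the stage loop described above is given below: at each stage it searches over subsets of the remaining nodes for the smallest feasible equal share, contracts the newly connected nodes onto the source, and repeats until no feasible stage remains. It reuses the `steiner_cost` helper sketched in Section 4, assumes the reported graph is connected, and the small instance at the end is hypothetical (Figure 6's data are not reproduced in the text).

```python
from itertools import combinations

def rsm(V, valuations, edges, source='s'):
    """Repeated Selection Mechanism (sketch of the stage loop).
    Uses the steiner_cost() helper from the Section 4 sketch for C(S)."""
    V = set(V)
    edges = dict(edges)                 # modified by the contraction step
    remaining, connected = set(V), set()
    shares, x_prev = {}, 0.0
    while True:
        best = None                     # (X^t, S^t) with the smallest feasible X^t
        for size in range(1, len(remaining) + 1):
            for S in combinations(remaining, size):
                X = steiner_cost(set(S), V, edges, source) / size
                if X >= x_prev and all(valuations[i] >= X for i in S):
                    if best is None or X < best[0]:
                        best = (X, set(S))
        if best is None:                # no feasible stage: stop
            return connected, shares
        X, S = best
        for i in S:
            shares[i] = X               # equal share within the stage
        connected |= S
        remaining -= S | {i for i in remaining if valuations[i] < X}
        x_prev = X
        # Contraction: already-connected nodes (and the source) become free
        # to connect among themselves from now on.
        for i in connected | {source}:
            for j in connected | {source}:
                if i != j:
                    edges[(i, j)] = 0.0

# Hypothetical instance: a cheap chain s - a - b plus an expensive direct edge.
edges = {('s', 'a'): 2, ('a', 'b'): 2, ('s', 'b'): 10}
print(rsm({'a', 'b'}, {'a': 3, 'b': 3}, edges))
# selects {a, b} with shares x_a = x_b = 2, summing to the total cost 4
```
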
### Properties of RSM
We show the properties of RSM in this section.
**Theorem 6.2**.: _The repeated selection mechanism satisfies truthfulness._
Proof.: Firstly, we prove that each node \(i\in V\) will report its adjacent edges truthfully. Denote two report profiles by \(\theta^{\prime}=((e^{\prime}_{i},v^{\prime}_{i}),\theta^{\prime}_{-i})\), where \(e^{\prime}_{i}=e_{i}\), and \(\theta^{\prime\prime}=((e^{\prime\prime}_{i},v^{\prime}_{i}),\theta^{\prime}_{-i})\). When \(i\) truthfully reports its adjacent edges, there are two cases.
1. \(i\notin g(\theta^{\prime})\). Then we have \(u_{i}(\theta^{\prime})=0\). If \(i\) reports \(e^{\prime\prime}_{i}\subset e_{i}\), for any \(S(i\in S)\), \(C(S)\) will increase since the set of available edges shrinks. Hence, \(i\notin g(\theta^{\prime\prime})\) and the utility does not change.
2. \(i\in g(\theta^{\prime})\). Then \(u_{i}(\theta^{\prime})=v_{i}-x_{i}(\theta^{\prime})\geq 0\). Assume that \(i\in S^{t}\). If \(i\) reports \(e^{\prime\prime}_{i}\subset e_{i}\), we first prove that the set of selected nodes before stage \(t\) does not change due to \(i\)'s misreporting, i.e., \(M^{t-1}=\hat{M}^{t-1}\), where \(\hat{M}^{t-1}\) is the set of selected nodes before stage \(t\) given \(\theta^{\prime\prime}\). If node \(i\) belonged to the minimum Steiner tree of \(S^{r}\) in some stage \(r\) (\(r<t\)), then it would have been selected in stage \(r\), which contradicts \(i\in S^{t}\). Therefore, we know node \(i\) does not belong to the minimum Steiner tree of \(S^{r}\) for any stage \(r\) (\(r<t\)). Then the selected edges of \(M^{t-1}\) and \(\hat{M}^{t-1}\) are the same, i.e., \(M^{t-1}=\hat{M}^{t-1}\). Then we prove that the utility of node \(i\) weakly decreases due to \(i\)'s misreporting. Based on the above analysis, there are two cases for node \(i\).
* Node \(i\in g(\theta^{\prime\prime})\). Let \(C^{\prime}(S)\) denote the minimum cost of any set \(S\) under \(\theta^{\prime\prime}\). Since \(C^{\prime}(S)\geq C(S)\), we have \(\frac{C^{\prime}(S)}{|S|}\geq\frac{C(S)}{|S|}\). Hence, \(x_{i}(\theta^{\prime\prime})\geq x_{i}(\theta^{\prime})\) and \(u_{i}(\theta^{\prime\prime})\leq u_{i}(\theta^{\prime})\).
* Node \(i\notin g(\theta^{\prime\prime})\). Then we have \(u_{i}(\theta^{\prime\prime})=0\leq u_{i}(\theta^{\prime})\).
Secondly, we prove that each node \(i\in V\) will report its valuation truthfully. Denote two report profiles by \(\theta^{\prime}=((e^{\prime}_{i},v^{\prime}_{i}),\theta^{\prime}_{-i})\), where \(v^{\prime}_{i}=v_{i}\), and \(\theta^{\prime\prime}=((e^{\prime}_{i},v^{\prime\prime}_{i}),\theta^{\prime}_{-i})\). When \(i\) truthfully reports its valuation, there are two cases.
1. \(i\in S^{t}\subseteq g(\theta^{\prime})\). If \(i\) reports \(v^{\prime\prime}_{i}>v_{i}\), it is still selected in stage \(t\) and \(X^{t}=\hat{X}^{t}\), where \(\hat{X}^{t}\) denotes the cost share in stage \(t\) given \(\theta^{\prime\prime}\). Hence, \(u_{i}(\theta^{\prime\prime})=v_{i}-\hat{X}^{t}=v_{i}-X^{t}=u_{i}(\theta^{\prime})\). If \(i\) reports \(v^{\prime\prime}_{i}<v_{i}\), there are two possibilities.
* \(X^{t}\leq v^{\prime\prime}_{i}<v_{i}\). Then it is still selected and \(u_{i}(\theta^{\prime\prime})=u_{i}(\theta^{\prime})\).
Fig. 6: The left figure is \(G(\theta^{\prime})\). The red line in the right figure denotes the selected edge in the first stage, the green line denotes the selected edge in the second stage and the blue lines denote the selected edges in the third stage.
* \(v_{i}^{\prime\prime}<X^{t}\). Then it is not selected and \(u_{i}(\theta^{\prime\prime})=0\leq u_{i}(\theta^{\prime})\).
2. \(i\notin g(\theta^{\prime})\). If \(i\) reports \(v_{i}^{\prime\prime}<v_{i}\), then it is still not selected and the utility does not change. If \(i\) reports \(v_{i}^{\prime\prime}>v_{i}\), there are two possibilities.
* \(i\notin g(\theta^{\prime\prime})\). Then the utility does not change.
* \(i\in S^{t}\subseteq g(\theta^{\prime\prime})\). Then \(x_{i}(\theta^{\prime\prime})=\frac{C(S^{t})}{|S^{t}|}\). Since node \(i\) cannot be selected given \(\theta^{\prime}\), we have \(v_{i}<\frac{C(S^{t})}{|S^{t}|}\). Hence, \(u_{i}(\theta^{\prime\prime})=v_{i}-\frac{C(S^{t})}{|S^{t}|}<0=u_{i}(\theta^{\prime})\).
Therefore, \(u_{i}(\theta^{\prime})\geq u_{i}(\theta^{\prime\prime})\), i.e., \(i\)'s utility is maximized when \(i\) reports its valuation truthfully.
**Theorem 6.3**: _The repeated selection mechanism satisfies budget balance._
Given \(\theta^{\prime}\in\Theta\), in stage \(t\), the sum of all nodes' cost share in \(S^{t}\) equals the total cost of connecting all the nodes in \(S^{t}\), i.e., \(X^{t}\cdot|S^{t}|=C(S^{t})\). Then for all the stages, the sum of all selected nodes' cost share in \(g(\theta^{\prime})\) equals the total cost of connecting all nodes in \(g(\theta^{\prime})\), i.e., \(\sum_{t}X^{t}\cdot|S^{t}|=\sum_{t}C(S^{t})=\sum_{(i,j)\in f(\theta^{\prime})}c _{(i,j)}\). Hence, the mechanism satisfies budget balance.
**Theorem 6.4**: _The repeated selection mechanism satisfies feasibility._
Given a report profile \(\theta^{\prime}\in\Theta\), for each node \(i\in V\setminus g(\theta^{\prime})\), we have \(x_{i}(\theta^{\prime})=0\leq v_{i}^{\prime}\). For each node \(i\in g(\theta^{\prime})\), by the proposed mechanism, \(x_{i}(\theta^{\prime})=X^{t}\leq v_{i}^{\prime}\) for the stage \(t\). So we have \(x_{i}(\theta^{\prime})\leq v_{i}^{\prime}\). Therefore, the mechanism satisfies feasibility.
**Theorem 6.5**: _The repeated selection mechanism satisfies individual rationality._
Given a report profile \(\theta^{\prime}\in\Theta\), for each node \(i\in V\setminus g(\theta^{\prime})\), we have \(u_{i}(\theta^{\prime})=0\). For each node \(i\in g(\theta^{\prime})\), by the proposed mechanism, we have \(x_{i}(\theta^{\prime})\leq v_{i}^{\prime}\). By Theorem 6.2, we have \(v_{i}^{\prime}=v_{i}\). Hence, we have \(u_{i}(\theta^{\prime})=v_{i}-x_{i}(\theta^{\prime})\geq 0\). So the mechanism satisfies individual rationality.
**Theorem 6.6**: _The repeated selection mechanism satisfies positiveness._
Given a report profile \(\theta^{\prime}\in\Theta\), for each node \(i\in V\setminus g(\theta^{\prime})\), we have \(x_{i}(\theta^{\prime})=0\). For each node \(i\in g(\theta^{\prime})\), without loss of generality, we assume that it is selected in stage \(t\). Obviously, according to the proposed mechanism, its cost share \(X^{t}(\theta^{\prime})\) is non-negative. Therefore, the mechanism satisfies positiveness.
**Theorem 6.7**: _The repeated selection mechanism satisfies symmetry._
We need to show that, given \(\theta^{\prime}\in\Theta\) and \(i,j\in V\) with \(r_{i}(\theta^{\prime})\setminus\{j\}=r_{j}(\theta^{\prime})\setminus\{i\}\) and \(v_{i}=v_{j}\), \(c_{(i,k)}=c_{(j,k)}\) (\(\forall k\in r_{i}(\theta^{\prime})\setminus\{j\}\)) implies \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})\).
By the proposed mechanism, nodes \(i\) and \(j\) are either both selected in the same stage or they are not selected. If they are not selected, we have \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})=0\). Without loss of generality, if they are both selected in stage \(t\), by the proposed mechanism, we have \(x_{i}(\theta^{\prime})=x_{j}(\theta^{\prime})=X^{t}\). Since \(v_{i}=v_{j}\), we have \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})\). So the mechanism satisfies symmetry.
**Theorem 6.8**: _The repeated selection mechanism satisfies ranking._
We need to show that, given \(\theta^{\prime}\in\Theta\) and \(i,j\in V\) with \(r_{i}(\theta^{\prime})\setminus\{j\}=r_{j}(\theta^{\prime})\setminus\{i\}\) and \(v_{i}=v_{j}\), \(c_{(i,k)}\leq c_{(j,k)}\) (\(\forall k\in r_{i}(\theta^{\prime})\setminus\{j\}\)) implies \(u_{i}(\theta^{\prime})\geq u_{j}(\theta^{\prime})\). For nodes \(i\) and \(j\), there are three cases.
* \(i,j\notin g(\theta^{\prime})\). Obviously, we have \(u_{i}(\theta^{\prime})=u_{j}(\theta^{\prime})=0\).
* \(i\in g(\theta^{\prime}),j\notin g(\theta^{\prime})\). By individual rationality, we have \(u_{i}(\theta^{\prime})\geq 0=u_{j}(\theta^{\prime})\).
* \(i,j\in g(\theta^{\prime})\). Let \(t_{i}\) and \(t_{j}\) denote the stages where nodes \(i\) and \(j\) are selected respectively. For any set \(S\) with \(i,j\notin S\), we have \(C(S\cup\{i\})\leq C(S\cup\{j\})\). So we have \(t_{i}\leq t_{j}\). Since the later the selected stage is, the higher the cost share will be, we have \(X^{t_{i}}\leq X^{t_{j}}\). Since \(v_{i}=v_{j}\), we have \(u_{i}(\theta^{\prime})-u_{j}(\theta^{\prime})=x_{j}(\theta^{\prime})-x_{i}( \theta^{\prime})=X^{t_{j}}-X^{t_{i}}\geq 0\).
Therefore, the mechanism satisfies ranking.
**Theorem 6.9**: _The repeated selection mechanism satisfies utility monotonicity._
Proof.: Given \(\theta^{\prime}\in\Theta\) and nodes \(i,j\in V\) such that the edge \((i,j)\in E\), there are two cases for node \(i\).
* \(i\notin g(\theta^{\prime})\). Then \(u_{i}(\theta^{\prime})=0\). When \(c_{(i,j)}\) increases, \(i\) cannot be selected and the utility remains unchanged.
* \(i\in g(\theta^{\prime})\). Then \(u_{i}(\theta^{\prime})=v_{i}-x_{i}(\theta^{\prime})\). When \(c_{(i,j)}\) increases, let \(g^{+}(\theta^{\prime})\) denote the set of selected nodes.
* \(i\notin g^{+}(\theta^{\prime})\). Then its utility is 0. So the utility weakly decreases.
* \(i\in g^{+}(\theta^{\prime})\). According to Theorem 6.8, it is easy to show the utility of \(i\) weakly decreases.
Hence, when \(c_{(i,j)}\) increases, the utility of \(i\) weakly decreases, i.e., the mechanism satisfies utility monotonicity.
## 7. Conclusions
In this paper, we study the cost sharing problem under private valuation and connection control on general graphs. We consider two important strategic behaviors of a node, namely cutting its adjacent edges and misreporting its valuation. We show that it is impossible for a mechanism to satisfy truthfulness, feasibility, efficiency, and budget balance simultaneously. We also prove that no bounded approximation of efficiency and budget balance can be guaranteed together. We then propose two truthful and feasible cost sharing mechanisms that satisfy efficiency or budget balance, respectively.
In future work, we aim to characterize all cost sharing mechanisms that incentivize nodes to share their connections and reveal their valuations.
## Acknowledgments
This work is supported by Science and Technology Commission of Shanghai Municipality (No. 23010503000 and No. 22ZR1442200), and Shanghai Frontiers Science Center of Human-centered Artificial Intelligence (ShangHAI).
|
2307.11170 | UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for
Biomedical Entity Recognition | Pre-trained transformer language models (LMs) have in recent years become the
dominant paradigm in applied NLP. These models have achieved state-of-the-art
performance on tasks such as information extraction, question answering,
sentiment analysis, document classification and many others. In the biomedical
domain, significant progress has been made in adapting this paradigm to NLP
tasks that require the integration of domain-specific knowledge as well as
statistical modelling of language. In particular, research in this area has
focused on the question of how best to construct LMs that take into account not
only the patterns of token distribution in medical text, but also the wealth of
structured information contained in terminology resources such as the UMLS.
This work contributes a data-centric paradigm for enriching the language
representations of biomedical transformer-encoder LMs by extracting text
sequences from the UMLS. This allows for graph-based learning objectives to be
combined with masked-language pre-training. Preliminary results from
experiments in the extension of pre-trained LMs as well as training from
scratch show that this framework improves downstream performance on multiple
biomedical and clinical Named Entity Recognition (NER) tasks. | Aidan Mannion, Thierry Chevalier, Didier Schwab, Lorraine Geouriot | 2023-07-20T18:08:34Z | http://arxiv.org/abs/2307.11170v1 | # UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition
###### Abstract
Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS. This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.
## 1 Introduction
In recent times, transformer language models Vaswani et al. (2017) have become the most popular and effective sequence modelling framework in almost all areas of applied Natural Language Processing. Unsupervised pre-training on large quantities of text allows transformers to capture rich semantic and syntactic patterns that can be transferred to many specialised language processing objectives. As such, transformer models that use the transfer learning paradigm whereby the model is trained in an unsupervised manner on a large text corpus and then fine-tuned on a downstream supervised-learning task have achieved state-of-the-art results across a wide range of general and domain-specific applications.
The proliferation of textual data in the biomedical domain (Electronic Health Records (EHRs), clinical documents, pharmaceutical specifications, etc) has precipitated the broad adoption of deep learning & NLP techniques for information extraction and processing Li et al. (2021); Tiwari et al. (2020); Dubois et al. (2017). Moreover, it has been shown that language models are capable of encoding clinical knowledge to a certain extent Singhal et al. (2022). Biomedical and clinical NLP, however, is widely recognised to present particular challenges that do not apply to the same extent in other domains, in particular the need to incorporate structured domain knowledge into text encodings Chang et al. (2020). In order for neural language modelling to be reliable in a discipline as highly specialised as medicine, there is a more acute need for models to learn directly from domain-specific terminologies, as opposed to relying solely on corpus-based learning. Thus, a significant amount of research effort in the medical NLP community has been directed towards the question of how best to inject information from knowledge graphs (KGs) into LMs He et al. (2022); Naseem et al. (2022); Li et al. (2020). However, a generalisable, widely-accepted approach to this technique that can be easily transferred across different problem settings, models and training corpora has yet to emerge. In addition, research into knowledge graph integration in NLP in the biomedical domain has tended to focus on English-language corpora; the utility and transferability of these techniques for other languages, for which less textual resources are available, as well as for multilingual models, remains therefore an under-explored area.
This paper aims to contribute to the resolution of
these issues by proposing a general framework for training BERT encoders (Devlin et al., 2019) using the UMLS (Unified Medical Language System, Bodenreider (2004)) alongside free-text corpora.
The main contributions of this work are as follows:
* We propose a data-centric method for formulating the KG-based learning objectives of triple classification and entity/link prediction in the language modelling paradigm, and implement a framework for training transformers using the UMLS knowledge base in parallel with masked-language pre-training.
* Pre-training on the UMLS alongside the European Clinical Case Corpus (Minard et al., 2021; Magnini et al., 2020), we show that this method brings improvements to pre-trained models across a range of biomedical entity recognition tasks in three different languages, as well as functioning as a competitive pre-training strategy that requires much less training data in comparison to state-of-the-art transformer models. We release the monolingual and multilingual model weights trained in this way, UMLS-KGI-BERT, as open-source resources for the clinical NLP research community.
* Based on this work, we release the Python library bertify_umls, built mainly on the transformers and pandas libraries, which allows researchers to create custom text datasets and effectively use the UMLS knowledge base as a training corpus for BERT-style LMs.
## 2 Related Work
### Pre-trained LMs for Medical Applications
In general, the standard methodology for adapting neural text encoders to the biomedical domain has been to take a model that has been pre-trained on general-domain text corpora and continue this unsupervised pre-training on a medical corpus (Alrowili and Shanker, 2021; Lee et al., 2020; Alsentzer et al., 2019). However, recent work has suggested that, given enough training data, it is preferable to pre-train these models on large domain-specific corpora only, without starting from a general-domain checkpoint (Gu et al., 2021; Rasmy et al., 2021). In this work we explore both approaches, extending existing biomedical and general-domain models as well as training BERT models from scratch on our own generated datasets.
### Knowledge-enhanced LMs
Techniques for the incorporation of knowledge graph structure into BERT models can, broadly speaking, be divided into three categories, each focusing on one of the three fundamental components of a machine learning system, i.e. 1) the training data, 2) the model architecture and 3) the objective function to be optimised. The first type of approach prioritises the augmentation of BERT's input data with information extracted from a knowledge graph. This extra information can be numerical, e.g. pre-computed graph embeddings (Jeong et al., 2019) or textual, e.g. KG triples linked to input sentences (Liu et al., 2019).
The second type of approach focuses on adapting the architecture of BERT so that its language representations become fused with knowledge graph embeddings (KGEs) (Wang et al., 2021; Peters et al., 2019; Zhang et al., 2019). Knowledge graph fusion techniques such as these have been shown to be beneficial on certain English-language medical NLP tasks (Meng et al., 2021; Roy and Pan, 2021).
Thirdly, the self-supervised pre-training objective of BERT models can be augmented using the kind of knowledge graph reasoning tasks used to build KGE models. This approach is more commonly used for knowledge graph completion (Kim et al., 2020; Yao et al., 2019) but has also been shown to be an effective strategy in the biomedical NLP domain (Hao et al., 2020).
As previously mentioned, given that the medical domain is particularly exacting in terms of requirements for the use of structured facts, the exploration of ways in which ontological knowledge can be integrated into automated text processing is a very active area of research (Khosla et al., 2020; Mondal et al., 2019). In particular, there have been multiple successful efforts to integrate the UMLS knowledge graph into BERT models, notably UmlsBERT (Michalopoulos et al., 2021), which proposes a data-augmentation technique allowing for concept and semantic type information to be linked to input text, and SapBERT (Liu et al., 2021, 2021), which introduced a self-alignment strategy for learning from UMLS synonym pairs via a multi-similarity (MS) loss function to force related concepts closer to one another in BERT's representation space. Yuan et al. (2022) build on this strategy by applying MS loss to relation triples. In contrast, in this work we show that information from the UMLS can be incorporated into BERT models in a simpler way, using only cross-entropy classification loss, while also balancing this training process with standard masked-language BERT pre-training.
Recent general overviews of the landscape of AI research have highlighted the importance of data-centric approaches to building models (Zha et al., 2023; Hamid, 2022; Jakubik et al., 2022) and in light of these trends this work focuses on types 1) and 3) of knowledge base integration described above, i.e. on improving the performance of standard model architectures by constructing high-quality datasets that can be integrated into the self-supervised language modelling paradigm by modifying the BERT objective function. The motivation for this kind of approach is also to provide a pre-training framework that is more widely transferable and does not rely on any particular transformer-encoder architecture.
## 3 Methodology
In this work, we experiment with training BERT language models with three knowledge graph reasoning tasks derived from the UMLS, in addition to the standard masked-language modelling objective: entity prediction, link prediction and triple classification.
### Dataset Construction
Formally, we consider the UMLS KG in the standard fashion, as a directed graph \(G=(C,E,R)\) where \(C\) is the set of all medical concepts in the KG, \(E\) the set of all edges or relations that link these concepts to one another, and \(R\) the set of possible relation types, i.e. the labels \(r\) for each \(e\in E\). The training sequences are thus generated from the KG dataset of ordered triples \((h,r,t)\) where \((h,r)\in C\times C\) and \(r\in R\). As a compendium of multiple different sources of taxonomic biomedical information, the UMLS metathesaurus contains multiple levels of granularity at which meaning representation can be analysed. We consider three such levels of granularity in our work:
* _Terms_ - string descriptors for conceptual entities;
* _Concepts (CUIs)_ - the basic unit of meaning representation for the nodes in the knowledge graph, i.e. the elements of the set \(C\);
* _Semantic groups_ - groupings of concepts that can be considered to define the type of entity a concept represents; e.g. anatomical structure, chemical, disorder etc.

Figure 1: Overview of the UMLS-KGI pre-training process.
Each concept (CUI) can be associated with multiple terms and multiple semantic groups. Thus, given that the entities \(h\) and \(t\) that make up the knowledge graph triples are represented as CUIs, in order to represent them as input text sequences for BERT models, we use the "preferred term" strings associated with the concepts \(h\) and \(t\), except in the case of synonym relations where we randomly select another of the terms associated with the concept in question to associate with \(t\). We also introduce a set of special tokens to represent the relation types \(R\), of which there are seven (parent, child, synonymy, allowed qualifier, qualified by, broader, narrower). Concretely, the tokenization function for BERT models forms text classification sequences from triples in the following way;
\[\text{Tokenize}(h,r,t)=\texttt{[CLS]}\;w_{1}^{h}\cdots w_{m}^{h}\;\texttt{[REL]}\;w_{1}^{t}\cdots w_{n}^{t}\;\texttt{[SEP]} \tag{1}\]
where the \(w_{i}\) represent the token sequences corresponding to the strings \(h\) and \(t\), [CLS] and [SEP] are BERT's standard classification and sequence-separation tokens as defined by Devlin et al. (2019), and [REL] is one of the relation tokens. For link prediction, we construct a dataset of variable-length paths through the KG by iteratively selecting a list of triples \((h_{1},r_{1},t_{1}),\ldots,(h_{n},r_{n},t_{n})\) where \(h_{i+1}=t_{i}\) to form a path \(p=(h_{1},r_{1},h_{2},\ldots,r_{n},t_{n})\).
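As an illustration of Equation (1) and of the path construction just described, the following minimal Python sketch builds such sequences; the special-token spellings and the toy graph structure are our own assumptions, not necessarily those of the released bertify_umls code.

```python
import random

# Illustrative sketch of Eq. (1) and of the KG-path construction: the special
# relation-token spellings below are placeholders.
REL_TOKENS = {"parent": "[REL_PAR]", "child": "[REL_CHD]", "synonymy": "[REL_SYN]",
              "allowed_qualifier": "[REL_AQ]", "qualified_by": "[REL_QB]",
              "broader": "[REL_RB]", "narrower": "[REL_RN]"}

def triple_to_sequence(head_term, relation, tail_term):
    # [CLS]/[SEP] written explicitly for clarity; a HF tokenizer would add them.
    return f"[CLS] {head_term} {REL_TOKENS[relation]} {tail_term} [SEP]"

def sample_path(edges, start, length, rng=random):
    """edges: dict mapping a concept to a list of (relation, neighbour) pairs;
    returns the path (h_1, r_1, h_2, ..., r_n, t_n) as a flat token list."""
    path, node = [start], start
    for _ in range(length):
        if not edges.get(node):
            break
        rel, nxt = rng.choice(edges[node])
        path.extend([REL_TOKENS[rel], nxt])
        node = nxt
    return path

print(triple_to_sequence("myocardial infarction", "parent", "heart disease"))
```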
**Entity Prediction.** The entity classification task can be trivially integrated into the masked-language objective of BERT, by masking the tokens associated with the concept \(t\).
**Link Prediction.** We formulate link prediction as a narrow masked-language task by masking the relation tokens in the path dataset with another _hidden relation_ token, for which the model is trained to fill in one of six relation types; as the triple classification and entity prediction tasks already have the partial goal of improving the model's capability to associate synonymous terms with each other, we exclude synonym relations from the path dataset.
**Triple Classification.** Following the work of Hao et al. (2020), the triple classification objective is formulated as a binary classification problem where the model is tasked with classifying triples as true or false. In order to generate training examples of false triples, we use two different negative sampling strategies. Firstly, to provide directly contrastive examples for existing relations, we sample triples \((h,r,t)\) where \(h\) and \(t\) belong to different semantic groups and construct corresponding false triples with the same relation type and semantic group categories, i.e. \((\hat{h},r,\hat{t})\notin G\) where \(\hat{h}\) and \(\hat{t}\) are of the same semantic group as \(h\) and \(t\) respectively. Secondly, to provide contrastive examples for relation types, we sample triples for which \(h\) and \(t\) are of the same semantic group, and form the negative training example by changing the relation type \(r\). To ensure balance, the triple classification datasets used in this work are made up of 50% positive examples (real triples from the KG), 25% examples generated by the first negative sampling method and the rest by the second.
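A compact sketch of the two negative-sampling strategies is given below (our illustration; the actual pipeline operates on the full UMLS graph and its semantic-group annotations).

```python
import random

# Sketch of the two negative-sampling strategies (illustrative only).
def corrupt_entities(triple, concepts_by_group, group_of, kg, rng=random):
    """Strategy 1: keep r, replace h and t by concepts of the same semantic
    groups such that the corrupted triple is not in the KG (assumed to exist)."""
    h, r, t = triple
    while True:
        h2 = rng.choice(concepts_by_group[group_of[h]])
        t2 = rng.choice(concepts_by_group[group_of[t]])
        if (h2, r, t2) not in kg:
            return (h2, r, t2)

def corrupt_relation(triple, relation_types, rng=random):
    """Strategy 2: for h, t of the same semantic group, swap the relation type."""
    h, r, t = triple
    return (h, rng.choice([x for x in relation_types if x != r]), t)
```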
We perform stratified sampling on the base knowledge graph according to semantic groups, i.e. we ensure that the proportional representation of each semantic group in the knowledge-base triples for each language is maintained in the training datasets.
**Mixed Objective Function.**
\begin{table}
\begin{tabular}{l c c c c c c} & \begin{tabular}{c} **Triple** \\ **Classification** \\ \end{tabular} & \begin{tabular}{c} **Entity** \\ **Prediction** \\ \end{tabular} & \begin{tabular}{c} **Paths** \\ **(num. documents)** \\ \end{tabular} & \begin{tabular}{c} **E3C corpus** \\ **(num. documents)** \\ \end{tabular} & \begin{tabular}{c} **Total Training** \\ **Examples** \\ \end{tabular} &
\begin{tabular}{c} **Memory** \\ **Footprint** \\ \end{tabular} \\ \hline French & 200K & 100K & 64,208 & 25,740 & 389,948 & 604MB \\ Spanish & 200K & 100K & 100K & 1,876 & 401,876 & 162MB \\ English & 200K & 100K & 100K & 9,779 & 409,779 & 174MB \\ \hline Total & 600K & 300K & 264,208 & 37,395 & 1,201,603 & 940MB \\ \hline \end{tabular}
\end{table}
Table 1: Pre-training corpora sizes used in the experiments.
In order to train BERT models using the UMLS-based reasoning tasks described above alongside the masked-language objective, each training example is augmented with an indicator label that tells the model which loss function to apply to the sequence in question. The overall loss function is then calculated as
\[\mathcal{L}=\mathcal{L}_{\text{MLM}}+\alpha_{1}\mathcal{L}_{\text{EP}}+\alpha_{2 }\mathcal{L}_{\text{LP}}+\alpha_{3}\mathcal{L}_{\text{TC}} \tag{2}\]
where the \(\alpha_{i}\) are scalar task-weighting coefficients and \(\mathcal{L}_{\text{MLM}}\), \(\mathcal{L}_{\text{EP}}\), \(\mathcal{L}_{\text{LP}}\), and \(\mathcal{L}_{\text{TC}}\) correspond to the loss values for masked language modelling, entity prediction, link prediction and triple classification respectively. We use the standard cross-entropy classification loss for all tasks.
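A minimal PyTorch sketch of this indicator-based loss dispatch is given below; the task-id encoding, tensor shapes and dictionary layout are our assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of Eq. (2): every example carries an indicator telling the
# model which loss to apply; all tasks use cross-entropy.
TASK_MLM, TASK_EP, TASK_LP, TASK_TC = 0, 1, 2, 3

def mixed_loss(logits_by_task, labels_by_task, alphas):
    """logits_by_task / labels_by_task: dicts keyed by task id; alphas: per-task
    weights, with the MLM weight implicitly equal to 1."""
    total = torch.tensor(0.0)
    for task, logits in logits_by_task.items():
        if logits.numel() == 0:
            continue
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                               labels_by_task[task].view(-1))
        total = total + alphas.get(task, 1.0) * loss
    return total

# e.g. alphas = {TASK_EP: 0.4, TASK_LP: 0.35, TASK_TC: 0.25}
```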
## 4 Experiments
For the evaluation of the approach described in the previous section, we restrict our attention in this paper to NER tasks. Where possible, we use the datasets and training-evaluation-test splits that are publicly available via the Huggingface datasets library1.
Footnote 1: [https://huggingface.co/datasets](https://huggingface.co/datasets)
### KG-integrated pre-training
**Pre-training corpora.** As a resource for masked-language pre-training, we utilise the European Clinical Case Corpus (E3C) version 2.0.0, a freely-available multilingual corpus of clinical narratives. We evaluate our method in three different languages: English, French and Spanish. These languages were chosen as they are the three most well-represented languages in the metathesaurus for which we have access to pre-trained clinical BERT models for comparison. The sizes of the combined UMLS-E3C datasets used are shown in Table 1.
Footnote 2: [https://live.european-language-grid.eu/catalogue/corpus/7618](https://live.european-language-grid.eu/catalogue/corpus/7618)
For each language, we compare the performance of 1) a transformer model trained from scratch on each monolingual dataset (KGI-BERT\({}_{EN,FR,ES}\)) against 2) a multilingual version of the same model trained on all three datasets (KGI-BERT\({}_{m}\)), 3) a pre-trained monolingual biomedical model and 4) the same pre-trained model with supplementary training on the corresponding monolingual UMLS-E3C dataset.
The UMLS-KGI models were trained for 64 epochs on each dataset, using the PyTorch implementation of the weighted ADAM optimizer (Loshchilov and Hutter, 2019) with default parameters. We use a maximal sequence length of 256 for the masked-language modelling sequences, an effective batch size of 1500 and a triangular learning rate schedule peaking at \(7.5\times 10^{-4}\). To take into account the varying sizes of the components of the pre-training dataset we set the values of the coefficients of the loss function such that they are inversely proportional to the number of documents available:
\[\alpha_{i}=\frac{\sum_{j=0,j\neq i}^{3}n_{j}}{2\sum_{k=0}^{3}n_{k}}\]
where the \(n_{k}\) correspond to the number of documents in the training set for each UMLS-based task. In this way, the E3C masked-language loss has the same weighting as the UMLS-based task losses.
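Concretely, the weighting can be computed as below (a sketch; which document counts enter as the \(n_{k}\), and in what order, is our assumption made purely for illustration).

```python
# Sketch of the weighting rule alpha_i = (sum_{j != i} n_j) / (2 * sum_k n_k).
def task_weights(doc_counts):
    total = sum(doc_counts)
    return [(total - n_i) / (2.0 * total) for n_i in doc_counts]

# e.g. document counts loosely following the French row of Table 1
print(task_weights([25_740, 200_000, 100_000, 64_208]))
```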
**Pre-trained models.** For supplementary training, we make use of what are, to the best of our knowledge, the overall best-performing biomedical BERT models of their size (pre-trained using masked-language tasks only) for each language, according to baseline experiments on the NER tasks.
For French, we use DrBERT (Labrak et al., 2023), for Spanish the RoBERTa-based biomedical model released by Carrino et al. (2021), which we refer to as BioRoBERTa-ES, and for English PubMedBERT (Gu et al., 2021). For training from scratch, we use the DistilBERT model configuration (Sanh et al., 2019) with 12 encoder layers and 12 attention heads.
### Evaluation corpora
We evaluate these models on nine different clinical entity recognition tasks: four in French, two in Spanish and three in English. In order to ensure a fair comparison between models and to evaluate more directly the knowledge transfer capabilities of the pre-trained models, we restrict ourselves to a _one-shot_ setting for all tasks, i.e. the model is given a single pass over the training data before being evaluated on the test set. For all fine-tuning runs, we use an effective batch size of 4 (we found that very frequent optimizer updates give better results for few-shot learning), a learning rate of \(2\times 10^{-5}\) and weight decay of 0.01.
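With the Huggingface transformers library, such a one-shot fine-tuning configuration might be set up as follows (a sketch with placeholder names; the output path is an assumption, not the exact evaluation script).

```python
from transformers import TrainingArguments

# Hedged sketch of the one-shot NER fine-tuning configuration described above:
# a single pass over the training data, batch size 4, lr 2e-5, weight decay 0.01.
args = TrainingArguments(
    output_dir="ner-one-shot",          # placeholder path
    num_train_epochs=1,                 # "one-shot": a single pass over the data
    per_device_train_batch_size=4,
    learning_rate=2e-5,
    weight_decay=0.01,
)
# A Trainer would then be built with an AutoModelForTokenClassification checkpoint
# and the pre-tokenised task-specific train/test splits, and trainer.train() run once.
print(args.num_train_epochs, args.learning_rate)
```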
**CAS/ESSAI.** CAS (Grabar et al., 2018) and ESSAI (Dalloux et al., 2021) are corpora of clinical cases in French for which a subset is annotated with part-of-speech tags as well as semantic biomedical annotations (UMLS concepts, negation, and
uncertainty). We evaluate our models on the two corresponding medical POS-tagging tasks, CAS-POS and ESSAI-POS, as well as formulating a semantic-group token classification task using the CAS corpus annotations (CAS-SG).
**Quaero.** The QUAERO French Medical Corpus [21] is a corpus of biomedical documents from EMEA and Medline annotated with UMLS concepts to facilitate entity recognition and document classification tasks. The NER evaluation task we make use of here, QUAERO-MEDLINE, involves semantic group identification in the Medline documents.
**PharmaCoNER** [19]. Designed for the automated recognition of pharmacological substances, compounds and proteins in Spanish-language clinical documents, this is a manually annotated subset of the Spanish Clinical Case Corpus (SPACCC [17]).
**MedDocan.** Similarly to PharmaCoNER, the MEDDOCAN corpus [14] is an annotated subset of SPACCC, in this case with semantic entity types relevant to clinical document anonymisation, i.e. words and expressions constituting Personal Health Information (PHI).
**NCBI-Disease** [15]. The NCBI disease corpus is made up of PubMed abstracts with annotated disease mentions. In this work, we restrict our attention to token classification at the mention level.
**BioRED** [18]. This corpus is designed for biomedical relation extraction and entity recognition; we focus on the latter in this work. This task can be considered a more semantically general version of the NCBI disease recognition task, in that the BioRED corpus consists of PubMed abstracts annotated with a diverse range of entity types including genes, proteins and chemicals.
**JNLPBA04 NER Dataset** [10]. Developed in the context of a biomedical entity recognition shared task, this corpus consists of Medline documents annotated with mentions of DNA, RNA, proteins, cell types and cell lines.
We report the macro-averaged precision, recall and F1-score for each task. Results for the French, English and Spanish tasks can be seen in Tables 2, 3, and 4 respectively. We find that the best-performing models are in general the pre-trained checkpoints for which training has been extended via knowledge graph integration. This is unsurprising given that these are the models that have undergone the most domain-specific pre-training among all variants. It is important to highlight, moreover, the fact that the KGI-BERT variants are competitive with the pre-trained baselines for many tasks, despite being trained on less data. The largest improvements brought about by the UMLS-KGI training strategy can be seen in the French and Spanish tasks, suggesting that this technique will be more beneficial for lower-resource languages for which there is more room for improvement with respect to existing models.
The number of documents and target label classes for each evaluation task are shown in Table 5.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} & \multicolumn{3}{c}{**CAS-POS**} & \multicolumn{3}{c}{**CAS-SG**} & \multicolumn{3}{c}{**QUaERO-MEDLINE**} & \multicolumn{3}{c}{**ESSAI-POS**} \\ \cline{2-11}
**Model** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline DrBERT-4GB & 90.94 & 91.59 & 90.84 & 65.86 & 64.89 & 62.20 & 68.65 & 69.38 & 66.66 & 94.83 & 95.08 & 94.69 \\ + UMLS-KGI & **93.15** & **93.22** & **92.84** & 70.82 & **69.98** & 67.14 & 71.59 & 72.37 & 69.90 & 94.92 & 94.76 & 94.59 \\ \hline KGI-BERT\({}_{FR}\) & 88.55 & 88.40 & 87.82 & **71.57** & 66.90 & 65.79 & 71.78 & **72.93** & 70.75 & **95.46** & **95.40** & **95.18** \\ KGI-BERT\({}_{m}\) & 90.87 & 90.58 & 90.16 & 71.14 & 69.81 & **67.28** & **72.04** & 72.89 & **70.96** & 94.88 & 94.84 & 94.55 \\ \hline \end{tabular}
\end{table}
Table 2: Results on the French-language NER tasks. **Bold:** best result, underlined: next best.
\begin{table}
\begin{tabular}{l c c c c c c c c c} & \multicolumn{3}{c}{**NCBI-Disease**} & \multicolumn{3}{c}{**BioRED-NER**} & \multicolumn{3}{c}{**JNLPBA04**} \\ \cline{2-11}
**Model** & **P** & **R** & **F1** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline PubMedBERT & 93.81 & 94.26 & 93.53 & **84.76** & 85.33 & 83.35 & 81.57 & 82.59 & 81.13 \\ + UMLS-KGI & **94.65** & **95.11** & **94.46** & 84.28 & **85.92** & **83.64** & **85.75** & **86.04** & **85.15** \\ \hline KGI-BERT\({}_{EN}\) & 89.33 & 89.43 & 88.99 & 82.98 & 85.89 & 82.99 & 81.82 & 82.90 & 82.02 \\ KGI-BERT\({}_{m}\) & 89.40 & 90.04 & 89.16 & 82.67 & 84.63 & 81.97 & 81.24 & 82.47 & 81.47 \\ \hline \end{tabular}
\end{table}
Table 3: Results on the English-language NER tasks.
### Ablation Experiments
In order to measure the relative effect of the three KG-derived pre-training tasks on downstream performance, we perform ablation experiments with the continually pre-trained models. This involved comparing the downstream performance on the NER tasks of different versions of the UMLS-extended models, each with one of the three KG-based pre-training tasks excluded from the pre-training process. For ablation, we use identical experimental settings to those described previously, except with 32 pre-training epochs rather than 64.
In general, the ablation results, for which the macro F1 scores are shown in Table 6, suggest that the majority of the benefits in terms of NER performance are brought about by the link prediction task, although there are not enough statistically significant differences among the results to fully justify this conclusion.
It is clear also that certain tasks tend to add unhelpful noise to the model with respect to some tasks, in particular the ESSAI-POS task in French and the MEDDOCAN task in Spanish. This may be due to the nature of these entity recognition tasks being more linked to general semantic patterns (i.e. parts-of-speech and identifying information) such that the addition of biomedical knowledge to the models does not improve their representation of the relevant concepts.
## 5 Conclusions and Future Work
This paper introduces UMLS-KGI, a framework for training BERT models using knowledge graphs that requires only minimal adjustments to the standard language modelling paradigm. We show the potential of this method to increase the performance of BERT models on various NER tasks. The results presented in this paper suggest that for clinical NER tasks, high-quality small-scale datasets derived from structured information, alongside relatively small clinical text corpora, can be as effective as large-scale corpora for pre-training BERT models. We make our models and data-processing pipelines freely available online.
Future work in this direction will involve the incorporation of more diverse graph-based reasoning tasks in the pre-training strategy with more fine-grained representation of relation types, as well as intrinsic evaluation of the UMLS-KGI-BERT language representations via embedding visualisation and interpretability studies.
## Limitations
The work presented in this paper is subject to a number of limitations which will be addressed in future work. Firstly, we evaluate UMLS-KGI-BERT on a very narrow range of tasks limited to token classification - a broader range of information extraction and reasoning tasks would be necessary for a more complete picture of the utility of our pre-training methods. In addition, we only train models for mid-to-high-resource languages; to properly validate the applicability of this approach, in particular the lessening of the need to rely on large training corpora, it will be necessary to train and evaluate such models in more low-resource settings.
\begin{table}
\begin{tabular}{l c c c c c} & \multicolumn{3}{c}{**PharmaCoNER**} & \multicolumn{3}{c}{**MEDDOCAN**} \\ \cline{2-6}
**Model** & **P** & **R** & **F1** & **P** & **R** & **F1** \\ \hline BioRoberta-ES & 81.11 & 81.99 & 80.41 & 91.41 & 93.15 & 91.84 \\ + UMLS-KGI & 83.52 & 84.30 & 83.90 & **93.65** & **95.32** & **91.99** \\ \hline KGI-BERT\({}_{ES}\) & 79.95 & 80.14 & 78.11 & 92.28 & 92.93 & 92.17 \\ KGI-BERT\({}_{m}\) & **85.05** & **85.95** & **85.49** & 92.32 & 92.65 & 91.98 \\ \hline \end{tabular}
\end{table}
Table 4: Results on the Spanish-language NER tasks. **bold**: best result, underlined: next best.
\begin{table}
\begin{tabular}{l c c c c}
**Dataset** & **Train** & **Dev** & **Test** & **N. Classes** \\ \hline CAS-POS & 2,652 & 569 & 569 & 31 \\ CAS-SG & 167 & 54 & 54 & 15 \\ QUAERO-MEDLINE & 788 & 790 & 787 & 11 \\ ESSAI-POS & 5,072 & 1,088 & 1,087 & 34 \\ NCBI-Disease & 5,433 & 924 & 941 & 3 \\ BioRED-NER & 387 & 98 & 97 & 7 \\ JNLPBA04 & 16,619 & 1,927 & 3,856 & 11 \\ PharmaCoNER & 500 & 250 & 250 & 5 \\ MEDDOCAN & 500 & 250 & 250 & 22 \\ \hline \end{tabular}
\end{table}
Table 5: Number of documents and target classes in the NER evaluation datasets |
2306.11259 | CoNi-MPC: Cooperative Non-inertial Frame Based Model Predictive Control | This paper presents a novel solution for UAV control in cooperative
multi-robot systems, which can be used in various scenarios such as
leader-following, landing on a moving base, or specific relative motion with a
target. Unlike classical methods that tackle UAV control in the world frame, we
directly control the UAV in the target coordinate frame, without making motion
assumptions about the target. In detail, we formulate a non-linear model
predictive controller of a UAV, referred to as the agent, within a non-inertial
frame (i.e., the target frame). The system requires the relative states (pose
and velocity), the angular velocity and the accelerations of the target, which
can be obtained by relative localization methods and ubiquitous MEMS IMU
sensors, respectively. This framework eliminates dependencies that are vital in
classical solutions, such as accurate state estimation for both the agent and
target, prior knowledge of the target motion model, and continuous trajectory
re-planning for some complex tasks. We have performed extensive simulations to
investigate the control performance with varying motion characteristics of the
target. Furthermore, we conducted real robot experiments, employing either
simulated relative pose estimation from motion capture systems indoors or
directly from our previous relative pose estimation devices outdoors, to
validate the applicability and feasibility of the proposed approach. | Baozhe Zhang, Xinwei Chen, Zhehan Li, Giovanni Beltrame, Chao Xu, Fei Gao, Yanjun Cao | 2023-06-20T03:25:35Z | http://arxiv.org/abs/2306.11259v2 | # CoNi-MPC: Cooperative Non-inertial Frame Based Model Predictive Control
###### Abstract
This paper presents a novel solution for UAV control in cooperative multi-robot systems, which can be used in various scenarios such as leader-following, landing on a moving base, or specific relative motion with a target. Unlike classical methods that tackle UAV control in the world frame, we directly control the UAV in the target coordinate frame, without making motion assumptions about the target. In detail, we formulate a non-linear model predictive controller of a UAV, referred to as the agent, within a non-inertial frame (i.e., the target frame). The system requires the relative states (pose and velocity), the angular velocity and the accelerations of the target, which can be obtained by relative localization methods and ubiquitous MEMS IMU sensors, respectively. This framework eliminates dependencies that are vital in classical solutions, such as accurate state estimation for both the agent and target, prior knowledge of the target motion model, and continuous trajectory re-planning for some complex tasks. We have performed extensive simulations to investigate the control performance with varying motion characteristics of the target. Furthermore, we conducted real robot experiments, employing either simulated relative pose estimation from motion capture systems indoors or directly from our previous relative pose estimation devices outdoors, to validate the applicability and feasibility of the proposed approach.
Motion Control, Non-Inertial Model, Non-Linear MPC, Leader-Follower, Autonomous Landing
## I Introduction
Recently, quadrotors or drones, due to their agility and lightweight nature, have been widely used in surveillance, search-and-rescue, and cinematography. The rapid development has led to a growing demand for multi-robot systems such as UAV-UGV pairs [1], leader-follower systems [2], multi-agent formation [3], autonomous landing [4, 5], etc. This paper focuses on an air-ground robot system in which the UAV (referred to as the agent/quadrotor/drone/follower) is actively controlled to fulfill a task along with an independently controlled UGV (referred to as the target/base/leader). State estimation, planning, and controllers are crucial components in developing versatile systems for interactive or cooperative tasks. In classical pipelines, relative state, typically obtained through direct mutual measurements or subtraction from global state estimations, is used as a feedback to control the motion of UAVs in the world frame [6]. In these pipelines, controllers require a complete system model of the quadrotor. Furthermore, in complex tasks such as in [7, 8], appropriate trajectories and continuous re-planning are needed to achieve good performance.
To achieve good performance for the air-ground system, current state-of-the-art air-ground (agent-target) collaborative planning-control systems such as [7, 8] have to face the following challenges:
* Accurate **absolute state estimations** for both the agent and the target to achieve demanding high-precision relative motion planning and control, which is hard to be guaranteed in challenging environments (GPS denied, feature-less) or for long-term tasks (accumulated drifts);
* A **prior kinematic model** of the target must be known by the agent to predict the target's movements, which may fail if the given model is not accurate or if the assumptions of the target model do not hold;
* **Continuous trajectory re-planning** of the agent is needed to stay responsive and adaptive to the target's motions, which can lead to heavy computation loads.
Fig. 1: A quadrotor orbits a UGV by applying the CoNi-MPC controller with a pre-computed circular trajectory in the UGV non-inertial frame. (a) shows accumulated shots of the quadrotor from the view of a camera on the UGV, which reveal the relative circular trajectory of the quadrotor. (b) shows the experiment from a third-person view in the world frame, in which the flight trajectory appears chaotic along with the UGV S-shape trajectory.
Therefore, we propose CoNi-MPC, which directly controls the agent in the target's body frame using relative estimations and the target's IMU data, eliminating all dependencies on the absolute world frame.
Typically, a full SLAM stack that fuses multiple sensors (vision, lidar, GPS, IMU, etc.) is used to acquire accurate state estimation. However, SLAM algorithms generally demand high computational cost and rely on good environment features to achieve robust estimation. Maintaining long-term SLAM for a system that only requires interactive actions may also be considered redundant. At the same time, prior knowledge of the kinematic model of the moving target is necessary for the agent to predict the target's future trajectory accurately, which is difficult to guarantee given the target's individual tasks and motion. Even with accurate state estimation and a precise kinematic model, the dynamic evolution of both the target's state and the agent's state demands continuous trajectory re-planning.
To overcome these challenges and directly control the agent in the target's body frame, we design CoNi-MPC, a novel systematic solution that formulates a non-linear model predictive controller of a UAV within a non-inertial frame, specifically the target frame. The system only requires the relative states (pose and velocity) and the angular velocity and accelerations of the target, which can be obtained by relative localization methods and ubiquitous MEMS IMU sensors, respectively. This solution eliminates the dependency on state estimation in the world frame and only requires relative estimation. We directly control the UAV in the target coordinate frame without making any motion assumptions about the target. Additionally, the system does not require trajectory re-planning, even for some complex tasks.
This CoNi-MPC framework can be directly applied to various application tasks, such as leader-following, directional landing, and complex relative motion control. All these tasks can be implemented by changing the reference within the model. In the leader-follower control, a single fixed relative point within the leader's frame serves as an input to guide the follower. For landing or more complex inter-robot interactive tasks, the agent control only requires one pre-computed trajectory relative to the target, without any re-planning requirement. Fig. 1 shows snapshots of a drone circling over a ground vehicle (orbit flight), where the drone is controlled in the ground vehicle's frame (non-inertial frame) and the vehicle follows an S-shape trajectory (unknown to the drone). The drone's trajectory traces a circle, as viewed from the vehicle's perspective, while the trajectory of the drone in the world frame is rather complex.
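For instance, the orbit flight of Fig. 1 only needs a circular reference expressed in the target frame, pre-computed once; a minimal sketch is given below (radius, height, angular rate and sampling step are illustrative values, not those used in the experiments).

```python
import numpy as np

# Sketch of a pre-computed reference in the target (non-inertial) frame for an
# orbit flight: relative position ^N p_B and velocity ^N v_B on a circle.
def circular_reference(radius=1.5, height=1.0, omega=0.5, horizon=20, dt=0.1):
    refs = []
    for k in range(horizon + 1):
        th = omega * k * dt
        p = np.array([radius * np.cos(th), radius * np.sin(th), height])
        v = np.array([-radius * omega * np.sin(th),
                      radius * omega * np.cos(th), 0.0])
        refs.append((p, v))
    return refs

for p, v in circular_reference()[:3]:
    print(p.round(3), v.round(3))
```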
To the best of our knowledge, this is the first work realizing complex interactions between an agent and a target that only requires relative position estimation and the target's IMU data. The contributions of our work are:
* We propose a systematic framework for drone-target relative motion control using MPC by fully modeling the drone in the target's non-inertial body frame. The system does not need to know the absolute pose and motion of target in the world frame.
* With the relative motion model, we group the target-dependent elements together and substitute them with IMU information from the target's body frame. This operation eliminates the dependency on data in the world frame and makes the method feasible in a real-world setup.
* The proposed MPC controller works as a unified framework supporting various UAV-target interaction tasks (eg. leader-following, aggressive directional landing, dynamic rings crossing, and orbit flight) with high tracking accuracy while eliminating continuous trajectory re-planning.
## II Related Work
Autonomous landing [6, 9], leader-following systems [10, 11], and tracking [12] have been extensively investigated individually, considering their unique task characteristics and challenges. Niu et al. [6] introduce a vision-based autonomous landing method for UAV-UGV cooperative systems. They employ multiple QR codes on the landing pad of the UGV to obtain estimations of relative distance, velocity and direction between the two vehicles, as well as UAV state from Visual Inertial Odometry (VIO) or GPS. Based on these estimations, a velocity controller utilizing a control barrier function (CBF) and a control Lyapunov function (CLF) are designed for the quadrotor landing on the moving UGV. Wang et al. [9] propose a systematic approach for vision-based autonomous landing that utilizes EKF and VIO for pose estimation and a similar marker detection method obtaining the relative information between the UAV and the UGV. Han et al. [10] utilize a complex Laplacian based similar formation control algorithm over leader-follower networks with the actively estimated relative position. Giribet et al. [11] propose a tracking controller based on dual quaternion pose representations and cluster-space in a leader-follower task, with the objective of minimizing steady-state error. The UAVs in these works are all controlled in the world frame and therefore the global state estimation is essential for the system. In our system, we formulate the model in a pure relative motion control in the target frame. The system avoids involving the state estimation in the world frame, which can be difficult or fragile under specific conditions.
While relative motion based control is more widely used in the field of space technology, such as target tracking and docking for approaching operations [13], a limited number of works in robotics explore modeling and controlling relative motion for UAV-UGV cooperation, particularly in the context of quadrotor control. Marani et al. [14] investigate the dynamics of a quadrotor in a non-inertial frame without rotation, assuming the referencing non-inertial frame only performs translational movement, and they use a sliding mode controller for trajectory tracking. Jin et al. [15] investigate the relative motion model of a quadrotor in a non-inertial frame and propose two controllers (relative position and attitude controllers) for a quadrotor landing on a moving vessel. Although they
directly address relative motion constraints, the control input remains reliant on attitude in a world frame, necessitating global state estimation. Li et al. [16] propose a robocentric model-based visual servoing method for hovering and obstacle avoidance for a single drone, employing model predictive control. Their method constructs the "relative" states in the drone's body frame by using an RGB-D camera to detect targets, which eliminates the state dependency on the world frame. DeVries et al. [17] proposed a distributed formation controller in non-inertial reference frames but still map the agent's states and control inputs to an inertial frame.
The model predictive control framework serves as the base of many works related to direct control and trajectory planning in the literature. Falanga et al. [12] propose a non-linear MPC (PAMPC) method for quadrotors, combining perception and action terms into the optimization. A VIO estimator and the PAMPC method are applied to allow the quadrotor to follow a trajectory while maintaining a point of interest in its field of view. Ji et al. [18] propose a disturbance-adaptive receding horizon low-level replanner for autonomous drones, which can generate collision-free and temporally optimized local reference trajectories. Similarly, Romero et al. [19] handle the problem of generating temporal optimized trajectories for quadrotors. The proposed MPCC also integrates temporal optimization into the standard MPC formulation to solve the time allocation problem online. In [20], a stochastic and predictive MPC (SNMPC) is proposed to minimize the total amount of uncertainty in the target observation and the robot state estimation, to effectively maintain the desired pose of the robot relative to the moving target.
The aforementioned works or their applications in multi-robot cooperation still require global state estimation and frequent global path re-planning. Our work, based on relative estimation, is inherently suitable for cooperation tasks without requiring global state estimation. Moreover, costly global path re-planning can be avoided thanks to the system model in the non-inertial frame.
## III Problem Formulation and CoNi-MPC
We consider a cooperative system consisting of a UGV as the target and a UAV as the agent. The objective is to regulate the motion of the UAV in conjunction with the UGV for multi-robot cooperation tasks, such as leader-follower, landing, orbit flight, etc.
### _Notations_
As shown in Fig. 2, we define the agent frame as \(B\), attached to the body of the quadrotor, the target frame as \(N\), attached to the body of the UGV, and the inertial world frame as \(W\). We denote scalar numbers with lower-case letters, vectors with bold lowercase letters, and matrices with bold uppercase letters. The left superscript indicates the coordinate system in which the variable is expressed; unless otherwise specified, values without a left superscript are expressed in the world frame \(W\). For example, we denote the relative position of frame \(B\) w.r.t. frame \(N\) (non-inertial) by \({}^{N}\mathbf{p}_{B}\), the relative velocity by \({}^{N}\mathbf{v}_{B}\), and the relative orientation by \({}^{N}\mathbf{q}_{B}\). The right superscript \(x\), \(y\), or \(z\) on a vector denotes the corresponding component, e.g., \(\mathbf{t}^{x}\) is the \(x\) component of \(\mathbf{t}\). \(\odot\) denotes the quaternion Hamilton product. The skew-symmetric matrix of a vector \(\mathbf{t}\) is denoted as \([\mathbf{t}]_{\times}\). A measured vector \(\mathbf{v}\) from sensors is denoted as \(\widehat{\mathbf{v}}\). Table I lists the main notations used in this paper.
### _Quadrotor System Model in Non-Inertial Frame_
In this section, we derive the quadrotor system model in the non-inertial frame \(N\) by introducing an intermediate inertial world frame \(W\). Our derivation shows that all the dependencies on this world frame are eliminated at the end. Fig. 2 shows the relationship among the three frames. The relative position is:
\[{}^{N}\mathbf{p}_{B}={}^{N}\mathbf{R}_{W}\mathbf{t}_{\overline{N}\overline{B}} \tag{1}\]
where \(\mathbf{t}_{\overline{N}\overline{B}}=\mathbf{t}_{B}-\mathbf{t}_{N}\) is the translation vector pointing from the origin of frame \(N\) to that of frame \(B\). Then we get the relative
\begin{table}
\begin{tabular}{r c l} \hline \hline \({}^{N}\mathbf{p}_{B}\) & \(\triangleq\) & Relative position of the agent in the target’s frame \\ \({}^{N}\mathbf{v}_{B}\) & \(\triangleq\) & Relative velocity of the agent in the target’s frame \\ \({}^{*}\mathbf{q}_{\#}\) & \(\triangleq\) & Unit quaternion from \(\#\) to \(*\) \\ \({}^{*}\mathbf{R}_{\#}\) & \(\triangleq\) & Rotation matrix from \(\#\) to \(*\) \\ \({}^{*}\mathbf{t}_{\overline{\times}\overline{B}}\) & \(\triangleq\) & Translation vector from \(N\) to \(B\) expressed in \(*\) \\ \(\mathbf{t}_{\#}\) & \(\triangleq\) & Translation vector from \(W\) to \(\#\) expressed in \(W\) \\ \(\mathbf{g}\) & \(\triangleq\) & Gravitational acceleration \\ \({}^{B}\mathbf{T}_{B}\) & \(\triangleq\) & Normalized collective thrust of \(B\), system input \\ \({}^{B}\mathbf{\Omega}_{B}\) & \(\triangleq\) & Body rate of \(B\), system input \\ \({}^{N}\mathbf{a}_{N}\) & \(\triangleq\) & Linear acceleration of \(N\) expressed in \(N\) \\ \({}^{N}\mathbf{\Omega}_{N}\) & \(\triangleq\) & Body rate (angular velocity) of \(N\) \\ \({}^{N}\mathbf{\beta}_{N}\) & \(\triangleq\) & Angular acceleration of \(N\) \\ \((r,v,\omega)\) & \(\triangleq\) & Experiment parameter configuration \\ \hline \hline \end{tabular}
\end{table} TABLE I: Table of Notations
Fig. 2: The transformation relationship among the quadrotor (the agent) body frame, the non-inertial (the target) frame, and the world frame. The target is controlled externally, and the agent is controlled by feeding reference (e.g. trajectory) defined in the target’s frame.
velocity by applying a time derivative as following
\[\begin{split}{}^{N}\dot{\mathbf{p}}_{B}={}^{N}\mathbf{v}_{B}&= \frac{d}{dt}({}^{N}\mathbf{R}_{W})\mathbf{t}_{\overline{N}\overline{B}}+{}^{N}\mathbf{R}_{W} \dot{\mathbf{t}}_{\overline{N}\overline{B}}\\ &=-[{}^{N}\mathbf{\Omega}_{N}]_{\times}{}^{N}\mathbf{R}_{W}\mathbf{t}_{ \overline{N}\overline{B}}+{}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{\overline{N}\overline{B }}\\ &=-[{}^{N}\mathbf{\Omega}_{N}]_{\times}{}^{N}\mathbf{p}_{B}+{}^{N}\mathbf{R}_ {W}\dot{\mathbf{t}}_{\overline{N}\overline{B}}\end{split} \tag{2}\]
The relative acceleration is the time derivative of the relative velocity
\[\begin{split}{}^{N}\dot{\mathbf{v}}_{B}&=-\frac{d}{dt}([{} ^{N}\mathbf{\Omega}_{N}]_{\times})^{N}\mathbf{p}_{B}-[{}^{N}\mathbf{\Omega}_{N}]_{\times}{ }^{N}\dot{\mathbf{p}}_{B}\\ &-[{}^{N}\mathbf{\Omega}_{N}]_{\times}{}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{ \overline{N}\overline{B}}+{}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{\overline{N}\overline{B }}\\ &=-\frac{d}{dt}([{}^{N}\mathbf{\Omega}_{N}]_{\times})^{N}\mathbf{p}_{B}-[{ }^{N}\mathbf{\Omega}_{N}]_{\times}{}^{N}\mathbf{v}_{B}\\ &-[{}^{N}\mathbf{\Omega}_{N}]_{\times}{}^{N}\mathbf{v}_{B}+[{}^{N}\mathbf{ \Omega}_{N}]_{\times}{}^{N}\mathbf{p}_{B})+{}^{N}\mathbf{R}_{W}(\tilde{\mathbf{t}}_{B}- \tilde{\mathbf{t}}_{N})\\ &=-[{}^{N}\mathbf{\beta}_{N}]_{\times}{}^{N}\mathbf{p}_{B}-2[{}^{N}\mathbf{ \Omega}_{N}]_{\times}{}^{N}\mathbf{v}_{B}-[{}^{N}\mathbf{\Omega}_{N}]_{\times}^{2}{}^{ N}\mathbf{p}_{B}\\ &\hskip-14.226378pt+{}^{N}\mathbf{R}_{B}{}^{B}\mathbf{T}_{B}+\underbrace{ {}^{N}\mathbf{R}_{W}\mathbf{g}-{}^{N}\mathbf{R}_{W}\mathbf{a}_{N}}_{\text{values relying on estimations in $W$}}\end{split} \tag{3}\]
where \(\tilde{\mathbf{t}}_{B}\) in Equ. 3 is the acceleration of a quadrotor modeled in the world frame as a rigid body as in [21]
\[\tilde{\mathbf{t}}_{B}={}^{W}\mathbf{R}_{B}{}^{B}\mathbf{T}_{B}+\mathbf{g} \tag{4}\]
\({}^{B}\mathbf{T}_{B}=[0,0,T]^{\top}\) is the normalized collective thrust of the quadrotor and \(T=\sum_{i}T_{i},i\in\{1,2,3,4\}\) is the normalized thrust force from four motors, \(\mathbf{g}=[0,0,-g]^{\top}\) is the gravity, \({}^{N}\mathbf{\Omega}_{N}\) is the body rate of the non-inertial frame, \({}^{N}\mathbf{\beta}_{N}\) is the angular acceleration of the non-inertial frame, \({}^{N}\mathbf{R}_{W}\mathbf{a}_{N}\) is the linear acceleration of the non-inertial frame expressed in the non-inertial frame.
The rotation matrix from \(B\) to \(N\) is
\[\begin{split}{}^{N}\mathbf{R}_{B}&={}^{N}\mathbf{R}_{W}{}^ {W}\mathbf{R}_{B}\end{split} \tag{5}\]
The time derivative of the above rotation matrix is
\[\begin{split}{}^{N}\dot{\mathbf{R}}_{B}&=\frac{d}{dt}({} ^{N}\mathbf{R}_{W})^{W}\mathbf{R}_{B}+{}^{N}\mathbf{R}_{W}\frac{d}{dt}({}^{W}\mathbf{R}_{B})\\ &=-[{}^{N}\mathbf{\Omega}_{N}]_{\times}{}^{N}\mathbf{R}_{B}+{}^{N}\mathbf{R} _{B}[{}^{B}\mathbf{\Omega}_{B}]_{\times}\end{split} \tag{6}\]
At the same time, we show the quaternion here for model implementation in the next section
\[\begin{split}{}^{N}\dot{\mathbf{q}}_{B}&={}^{N}\dot{ \mathbf{q}}_{W}\odot{}^{W}\mathbf{q}_{B}+{}^{N}\mathbf{q}_{W}\odot{}^{W}\dot{\mathbf{q}}_{B}\\ &=-\frac{1}{2}{}^{N}\mathbf{\Omega}_{N}\odot{}^{N}\mathbf{q}_{B}+\frac{1} {2}{}^{N}\mathbf{q}_{B}\odot{}^{B}\mathbf{\Omega}_{B}\end{split} \tag{7}\]
In the system model of Equ. 3 and Equ. 7, the dependency on values expressed in the world frame \(W\) is almost eliminated, except for \({}^{N}\mathbf{R}_{W}\) in Equ. 3. We notice that the last two terms in Equ. 3, \(({}^{N}\mathbf{R}_{W}\mathbf{g}-{}^{N}\mathbf{R}_{W}\mathbf{a}_{N})\), are exactly the total measured acceleration of the target expressed in the target's frame, so they can be supplied to the relative system model directly from an IMU attached to the non-inertial frame. In detail, Equ. 3 contains the projected gravitational acceleration \({}^{N}\mathbf{R}_{W}\mathbf{g}\) and the acceleration of the non-inertial frame \({}^{N}\mathbf{a}_{N}(={}^{N}\mathbf{R}_{W}\mathbf{a}_{N})\). The true acceleration of the target can be recovered from the MEMS IMU measurement by adding the gravity vector projected into the target's body frame as
\[\begin{split}{}^{N}\mathbf{a}_{N}={}^{N}\mathbf{R}_{W}\begin{bmatrix}0\\ 0\\ -g\end{bmatrix}+\begin{bmatrix}\hat{a}^{x}\\ \hat{a}^{y}\\ \hat{a}^{z}\end{bmatrix}\end{split} \tag{8}\]
where \(\hat{a}^{x}\), \(\hat{a}^{y}\), and \(\hat{a}^{z}\) are the measured acceleration data from the IMU. With Equ. 8, Equ. 3 can be reformulated to
\[\begin{split}{}^{N}\dot{\mathbf{v}}_{B}&=-[{}^{N}\mathbf{\beta}_{N}]_{ \times}{}^{N}\mathbf{p}_{B}-2[{}^{N}\widehat{\mathbf{\Omega}}_{N}]_{\times}{}^{N}\mathbf{v} _{B}-[{}^{N}\widehat{\mathbf{\Omega}}_{N}]_{\times}^{2}{}^{N}\mathbf{p}_{B}\\ &+{}^{N}\mathbf{R}_{B}{}^{B}\mathbf{T}_{B}-{}^{N}\widehat{\mathbf{a}}_{N} \end{split} \tag{9}\]
where \({}^{N}\widehat{\mathbf{a}}_{N}\) and \({}^{N}\widehat{\mathbf{\Omega}}_{N}\) are the measured linear acceleration and angular velocity from the IMU, respectively.
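To make the model concrete, the following is a minimal numpy sketch of the relative translational dynamics of Equ. 9 and the relative quaternion kinematics of Equ. 7. The function and variable names, and the helper routines, are our own; this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x such that [w]_x v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def quat_mult(q, p):
    """Hamilton product q ⊙ p of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def relative_dynamics(p_B, v_B, q_NB, a_N, omega_N, beta_N, thrust, omega_B):
    """Continuous-time relative model: returns (p_dot, v_dot, q_dot).

    p_B, v_B : relative position / velocity of the agent in the target frame N
    q_NB     : relative orientation N->B as a unit quaternion [w, x, y, z]
    a_N, omega_N, beta_N : measured linear acceleration, body rate and angular
                           acceleration of the non-inertial frame N
    thrust   : normalized collective thrust T (scalar)
    omega_B  : body rate of the quadrotor in its own frame B
    """
    R_NB = quat_to_rot(q_NB)
    # Relative position kinematics: by definition of the relative velocity (Equ. 2).
    p_dot = v_B
    # Relative acceleration (Equ. 9).
    v_dot = (-skew(beta_N) @ p_B
             - 2.0 * skew(omega_N) @ v_B
             - skew(omega_N) @ skew(omega_N) @ p_B
             + R_NB @ np.array([0.0, 0.0, thrust])
             - a_N)
    # Relative quaternion kinematics (Equ. 7).
    q_dot = 0.5 * (quat_mult(q_NB, np.r_[0.0, omega_B])
                   - quat_mult(np.r_[0.0, omega_N], q_NB))
    return p_dot, v_dot, q_dot
```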
### _CoNi-MPC_
We propose a **Co**operative **N**on-**i**nertial frame based **M**odel **P**redictive **C**ontrol (CoNi-MPC) scheme built on the above system model and targeting relative motion. We define the cooperative system state \(\mathbf{x}=[{}^{N}\mathbf{p}_{B};{}^{N}\mathbf{v}_{B};{}^{N}\mathbf{q}_{B};{}^{N}\widehat{\mathbf{a}}_{N};{}^{N}\widehat{\mathbf{\Omega}}_{N};{}^{N}\mathbf{\beta}_{N}]\in\mathbb{R}^{19}\). The first three vectors \({}^{N}\mathbf{p}_{B}\), \({}^{N}\mathbf{v}_{B}\), and \({}^{N}\mathbf{q}_{B}\), as defined in the above section, are the relative quantities of the system. The last three vectors \({}^{N}\widehat{\mathbf{a}}_{N}\), \({}^{N}\widehat{\mathbf{\Omega}}_{N}\), and \({}^{N}\mathbf{\beta}_{N}\) contain the dynamic information of the non-inertial frame. The time derivative of the angular velocity is \(\frac{d}{dt}({}^{N}\widehat{\mathbf{\Omega}}_{N})={}^{N}\mathbf{\beta}_{N}\). For the linear and angular accelerations, we assume their change rates are 0 in each control window, i.e., \({}^{N}\dot{\widehat{\mathbf{a}}}_{N}=0\) and \(\dot{\mathbf{\beta}}_{N}=0\). As these are relatively high-order quantities, this assumption has little effect on the performance of our system. We put the dynamic information of the non-inertial frame in the state vector for convenience in the implementation and for future extension to an actively collaborative system (by expanding the control vector and adding the dynamic evolution of frame \(N\)). The control input is \(\mathbf{u}=[T;{}^{B}\Omega_{B}^{x};{}^{B}\Omega_{B}^{y};{}^{B}\Omega_{B}^{z}]\in\mathbb{R}^{4}\), where \(T=\sum_{i=1}^{4}T_{i}\).
We define the quadratic cost
\[\mathbf{C}(\mathbf{x},\mathbf{u})=\|\mathbf{x}(t)-\mathbf{x}(t)_{ref}\|_{\mathbf{Q}}+\|\mathbf{u}(t)-\mathbf{u}_{h}\|_{\bm {R}}\]
where \(\|\mathbf{x}\|_{\mathbf{M}}=\mathbf{x}^{\top}\mathbf{M}\mathbf{x}\) and \(\mathbf{u}_{h}=[g;0;0;0]\) is the hover input. The discretized optimization problem is
\[\begin{split}\min_{\mathbf{u}_{0},\dots,\mathbf{u}_{M-1}}&\sum_{k=0}^{M-1}\mathbf{C}(\mathbf{x}_{k},\mathbf{u}_{k})\\ \text{s.t.}\quad&\mathbf{x}_{k+1}=f(\mathbf{x}_{k},\mathbf{u}_{k}),\quad\mathbf{x}_{0}=\mathbf{x}_{\text{est}},\\ &\mathbf{u}_{\min}\leq\mathbf{u}_{k}\leq\mathbf{u}_{\max},\end{split} \tag{10}\]

where \(f(\cdot,\cdot)\) denotes the discretized relative system model of Equ. 7 and Equ. 9, \(M\) is the number of discretization steps in the prediction window, \(\mathbf{x}_{\text{est}}\) is the current estimated relative state, and \(\mathbf{u}_{\min},\mathbf{u}_{\max}\) are the input bounds.
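To illustrate how such a discretized objective can be evaluated, the sketch below rolls out a generic dynamics function with an RK4 step and accumulates the quadratic stage costs over the horizon. The toy double-integrator dynamics, horizon length, and weights are placeholders of ours rather than the actual CoNi-MPC configuration; a real MPC solver would optimize over the input sequence instead of merely evaluating the cost.

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One Runge-Kutta 4 integration step of x_dot = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def horizon_cost(f, x0, u_seq, x_ref, u_hover, Q, R, Q_final, dt):
    """Sum of quadratic stage costs along an RK4 rollout (single-shooting view)."""
    cost, x = 0.0, x0
    for k, u in enumerate(u_seq):
        dx, du = x - x_ref[k], u - u_hover
        cost += dx @ Q @ dx + du @ R @ du
        x = rk4_step(f, x, u, dt)
    dxM = x - x_ref[len(u_seq)]
    return cost + dxM @ Q_final @ dxM

# Toy example: a 1D double integrator standing in for the relative model.
f = lambda x, u: np.array([x[1], u[0]])
x_ref = np.zeros((21, 2))                       # track the origin
u_seq = [np.array([0.1])] * 20                  # some candidate input sequence
print(horizon_cost(f, np.array([1.0, 0.0]), u_seq, x_ref,
                   np.array([0.0]), np.eye(2), np.eye(1), 10 * np.eye(2), 0.1))
```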
As our system does not rely on any information in the global world frame and only relates to the agent and target, the system can handle the cooperation between robots elegantly. Tasks like leader-follower, landing, orbit flight, rings crossing, etc. can be solved by simply defining the reference in the CoNi-MPC. The advantage of the system is that the reference is fixed: it is just a pre-computed expression of the relative motion between robots and does not need any online replanning. We classify the reference into two categories, the fixed-point scheme and the fixed-plan scheme, corresponding to leader-follower and complex motions, respectively.
#### III-C1 Fixed point scheme (leader and follower)
The proposed method can be easily used for leader-follower control. CoNi-MPC only needs a fixed point (containing the full state) to let the agent track that point while the target moves. For example, an array containing the same point reference, \(\mathbf{x}(k)=[(0,0,z),\mathbf{0}^{\top},(1,0,0,0),\mathbf{0}^{\top},\mathbf{0}^{\top},\mathbf{0}^{\top}]^{\top}\), fed to the controller lets the agent hover at the point \({}^{N}\mathbf{p}_{B}=(0,0,z)^{\top}\) in the non-inertial target frame with the same orientation as the target frame, even if the target frame moves arbitrarily in the world frame.
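A minimal sketch of how such a constant reference array could be assembled is given below; the state layout follows the 19-dimensional state defined above, while the hover height, horizon length, and function name are arbitrary choices of ours.

```python
import numpy as np

def fixed_point_reference(z=1.0, horizon=21):
    """Constant reference: hover at (0, 0, z) in the target frame N, aligned with N."""
    x_ref = np.zeros(19)
    x_ref[0:3] = [0.0, 0.0, z]            # relative position ^N p_B
    # relative velocity ^N v_B stays zero
    x_ref[6:10] = [1.0, 0.0, 0.0, 0.0]    # identity relative quaternion ^N q_B (w, x, y, z)
    # target acceleration, body rate and angular acceleration references stay zero
    return np.tile(x_ref, (horizon, 1))   # one row per step of the prediction window

print(fixed_point_reference().shape)      # (21, 19)
```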
#### III-C2 Fixed plan scheme (complex trajectories)
For complex tasks such as landing, orbit flight, and rings crossing, we simply need to define the trajectory of the agent in the target frame. The landing task can use a trajectory approaching the origin of the \(N\) frame, and the orbit flight directly uses a circular trajectory. We adopt our previous work on a minimum control effort polynomial trajectory class named MINCO [22] to define the relative trajectory between robots. MINCO trajectories can achieve smooth motions by decoupling the space and time parameters of the trajectory for users, which greatly improves the quality and efficiency of trajectory generation. We only need to take the initial and terminal relative states of the UAV as boundary conditions, and specify the positions of intermediate waypoints and the time duration of each piece, to obtain a polynomial trajectory \(\mathbf{p}(t)\) with minimum jerk. Furthermore, we limit the maximum relative velocity and acceleration to guarantee dynamic feasibility. After discretizing \(\mathbf{p}(t)\) and calculating the orientation based on the differential flatness of multicopters [23], we can get a series of reference states \(\{[{}^{N}\mathbf{p}_{B}(k)\,;{}^{N}\mathbf{v}_{B}(k)\,;{}^{N}\mathbf{q}_{B}(k)]\}_{k=0}^{N}\). Thus, the proposed controller can be fed with only one fixed global trajectory to achieve autonomous landing and tracking while the UGV moves.
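As a much simpler stand-in for the MINCO-based generator described above, the sketch below samples a rest-to-rest quintic (minimum-jerk-style) descent toward the origin of the target frame into position and velocity references; the time scaling, duration, and start point are example assumptions only.

```python
import numpy as np

def quintic_landing_reference(p_start, duration=5.0, dt=0.1):
    """Sample a rest-to-rest quintic descent from p_start (in frame N) to the origin of N.

    Uses the standard minimum-jerk time scaling s(tau) = 10 tau^3 - 15 tau^4 + 6 tau^5,
    which gives zero velocity and acceleration at both ends.
    """
    p_start = np.asarray(p_start, dtype=float)
    t = np.linspace(0.0, duration, int(duration / dt) + 1)
    tau = t / duration
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5                     # position scaling
    s_dot = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / duration   # velocity scaling
    positions = (1.0 - s)[:, None] * p_start                       # ^N p_B(k): p_start -> 0
    velocities = -s_dot[:, None] * p_start                         # ^N v_B(k)
    return positions, velocities

pos, vel = quintic_landing_reference([-1.0, 0.0, 2.0])
print(pos[0], pos[-1])   # starts at (-1, 0, 2), ends at the origin of N
```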
### _Implementation_
Fig. 3 shows the system overview of the UAV-UGV cooperative motion control system. A CoNi-MPC controller, implemented using the ACADO toolkit [24], is employed by the UAV and works as a high-level controller to produce the thrust and body-rate control command \(\mathbf{u}_{0}\). A low-level multi-stage PID controller is used to track the control command. CoNi-MPC needs a pre-defined desired relative motion trajectory, expressed with respect to the UGV, as the motion control target. With the UGV's IMU measurements and the relative state estimations as inputs, CoNi-MPC solves an optimization problem via the multiple shooting technique and a Runge-Kutta integration scheme. An average signal filter is applied to the IMU data from the non-inertial frame, which is transmitted through ROS's multi-machine communication mechanism in the current setup. The relative estimation can be generated either from a motion capture system or directly from our previous work CREPES [25], a relative estimation device. For each control iteration of the MPC optimization problem, we set the time window to \(T=2\) seconds and the discretization time step to \(dt=0.1\) second. The real control loop time of the MPC is around 10 ms, which is smaller than \(dt=100\) ms. This implementation differs from the standard MPC formulation: CoNi-MPC uses the latest relative state to produce control commands much more frequently, but with a relatively small-scale optimization problem to save computation cost. In each iteration, the initial state is set to the current estimated relative state \(\mathbf{x}_{\text{est}}\). For \({}^{N}\widehat{\mathbf{a}}_{N}\), \({}^{N}\widehat{\mathbf{\Omega}}_{N}\), and \({}^{N}\mathbf{\beta}_{N}\), the corresponding penalty terms are set to \(\mathbf{Q}(i,i)=0,\;\forall i=10\dots 19\). Note that in the simulation and experiments, since the angular acceleration is hard to retrieve, we set this term to \(\mathbf{0}\) both in the estimations and the references, which assumes that the non-inertial frame rotates with a constant angular velocity in each prediction horizon.
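A minimal sketch of the kind of sliding-window average filter applied to the incoming IMU samples of the non-inertial frame is shown below; the window length and the data layout are arbitrary illustrative choices of ours.

```python
import numpy as np
from collections import deque

class AverageFilter:
    """Sliding-window mean over the last `window` IMU samples (accel + gyro)."""
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)

    def update(self, sample):
        self.buf.append(np.asarray(sample, dtype=float))
        return np.mean(self.buf, axis=0)

filt = AverageFilter(window=5)
for _ in range(10):
    raw = np.random.normal(0.0, 0.05, size=6)  # noisy [ax, ay, az, wx, wy, wz]
    smoothed = filt.update(raw)
print(smoothed.shape)  # (6,)
```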
## IV Experiments
In a UAV-UGV cooperative system, the uncontrolled motions of the UGV play a crucial role in the system performance. Most results of UAV-UGV cooperative tasks, such as the autonomous landing in [6] and [9], only show the UGV with constant linear velocities and without any aggressive rotational movements. However, whether the UAV can track a UGV undergoing both aggressive linear and angular motions is the key point to consider when applying this system to the corresponding tasks. From Equ. 9 it can be inferred that \({}^{N}\mathbf{p}_{B}\), \({}^{N}\mathbf{\Omega}_{N}\), \({}^{N}\mathbf{\beta}_{N}\), and \({}^{N}\widehat{\mathbf{a}}_{N}\) affect the relative acceleration \({}^{N}\dot{\mathbf{v}}_{B}\). Meanwhile, \({}^{N}\widehat{\mathbf{\Omega}}_{N}\), \({}^{N}\mathbf{p}_{B}\), and the relative linear velocity in \(W\), \(\dot{\mathbf{t}}_{\overline{N}\overline{B}}\), are coupled in the relative velocity (Equ. 2). In order to test the performance of the proposed controller, we decouple these variables and pick the parameters \((r,v,\omega)\) for a parameter study. These parameters stand for the range (in the x-y plane) between the robots and the linear and angular velocities of the target, which also fits the cooperative task intuitively. The parameter definitions and theoretical analysis are as follows.
* \(r\triangleq-{}^{N}\mathbf{p}_{B}^{x}\), s.t. \({}^{N}\mathbf{p}_{B}^{x}<0\), \({}^{N}\mathbf{p}_{B}^{y}=0\), \({}^{N}\mathbf{p}_{B}^{z}=z(t)\geq 0\), where \(z(t)\) can be fixed or time-varying
* \(v\triangleq({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{N})^{x}\), s.t. \(({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{N})^{x}>0\), \(({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{N})^{y}=0\), \(({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{N})^{z}=0\)
Fig. 3: System overview and implementation of UAV-UGV cooperative motion control using CoNi-MPC.
* \(\omega\triangleq{}^{N}\mathbf{\Omega}_{N}^{z}\), s.t. \({}^{N}\mathbf{\Omega}_{N}^{x}=0\), \({}^{N}\mathbf{\Omega}_{N}^{y}=0\), \({}^{N}\mathbf{\Omega}_{N}^{z}>0\)
For simplicity, unless otherwise specified, the units of the configuration \((r,v,\omega)\) are m, m/s, and rad/s, respectively. We expand Equ. 2 and Equ. 9 in three dimensions to show how \((r,v,\omega)\) are involved in the system model:
\[\begin{split}{}^{N}\mathbf{v}_{B}^{x}&=({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{B})^{x}-v\\ {}^{N}\mathbf{v}_{B}^{y}&=({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{B})^{y}-\omega r\\ {}^{N}\mathbf{v}_{B}^{z}&=({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{B})^{z}\\ {}^{N}\dot{\mathbf{v}}_{B}^{x}&=2\omega({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{B})^{y}+({}^{N}\mathbf{R}_{B}{}^{B}\mathbf{T}_{B})^{x}-({}^{N}\widehat{\mathbf{a}}_{N})^{x}\\ {}^{N}\dot{\mathbf{v}}_{B}^{y}&=-2\omega({}^{N}\mathbf{R}_{W}\dot{\mathbf{t}}_{B})^{x}+2\omega v+({}^{N}\mathbf{R}_{B}{}^{B}\mathbf{T}_{B})^{y}-({}^{N}\widehat{\mathbf{a}}_{N})^{y}\\ {}^{N}\dot{\mathbf{v}}_{B}^{z}&=({}^{N}\mathbf{R}_{B}{}^{B}\mathbf{T}_{B})^{z}-9.8\end{split} \tag{11}\]
In the experiment, apart from the range parameter, the target UGV is programmed with a circular motion in the world frame with different \(v,\omega\) combinations (the radius of the circle is \(v/\omega\)). We classify the experiments into two schemes, the fixed-point and the fixed-plan scheme. The first is to control the quadrotor to follow a fixed point (e.g., \({}^{N}\mathbf{p}_{B}=(-r,0,z)^{\top}\)) in the non-inertial frame. We set \(z\) to 2.0 m in simulation and 1.0 m in the real experiment. For example, the configuration \((r,v,\omega)=(1.0\text{ m},1.0\text{ m/s},0.5\text{ rad/s})\) means the quadrotor will follow the fixed point \((-1.0\text{ m},0.0\text{ m},2.0\text{ m})\) in the non-inertial frame, which moves with a forward speed of 1.0 m/s and a counter-clockwise rotation rate of 0.5 rad/s. The other test scheme is the fixed-plan experiment. The configuration has the same meaning for \((v,\omega)\), but \(r\) stands for the initial range of the fixed landing trajectory. The quadrotor follows the pre-computed trajectory to land at the origin of the non-inertial frame. For example, \(r=1.0\) means that the landing trajectory starts at \({}^{N}\mathbf{p}_{B}=(-1.0,0.0,2.0)^{\top}\) and ends at \({}^{N}\mathbf{p}_{B}=(0,0,0)^{\top}\). For each scheme, we define the tracking error \(e_{i}\) at each control iteration \(i\) as \(e_{i}=\|({}^{N}\mathbf{p}_{B})_{\text{est}}-({}^{N}\mathbf{p}_{B})_{\text{ref},i}(0)\|\), i.e., the distance by which the current estimated position deviates from the first point of the reference window. The mean tracking error for each configuration is defined as \(\bar{e}=\sum_{i}e_{i}/M\), where \(M\) is the total number of iterations.
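A small sketch of this error metric is given below, assuming the estimated positions and the first reference point of each control window have been collected as arrays; the example values are made up for illustration.

```python
import numpy as np

def tracking_errors(p_est, p_ref_first):
    """Per-iteration error e_i = ||p_est_i - p_ref_i(0)|| and its mean over M iterations."""
    e = np.linalg.norm(np.asarray(p_est) - np.asarray(p_ref_first), axis=1)
    return e, e.mean()

# Example: estimates hovering near the reference point (-1, 0, 2) in frame N.
p_est = np.array([[-1.02, 0.01, 1.98], [-0.97, -0.03, 2.02]])
p_ref = np.tile([-1.0, 0.0, 2.0], (2, 1))
e, e_bar = tracking_errors(p_est, p_ref)
print(e, e_bar)
```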
### _Simulation_
The numerical simulation is performed on a workstation with an AMD Ryzen PRO 5995WX CPU, where we use 64 Docker containers to simultaneously simulate the quadrotor system model with different parameter configurations using
Fig. 4: Mean errors for tracking a point with different \((r,v,\omega)\) settings. (a-b) error distribution of \(r-v\) with \(\omega\) = 0.31 and 0.71. (c-d) error distribution of \(\omega-r\) with \(v\) = 0.3 and 0.7. (e-f) error distribution of \(\omega-v\) with \(r\) = 0.3 and 0.7.
Fig. 5: Mean errors for tracking a landing trajectory with different \((r,v,\omega)\) settings. (a-b) error distribution of \(r-v\) with \(\omega\) = 0.31 and 0.71. (c-d) error distribution of \(\omega-r\) with \(v\) = 0.3 and 0.7. (e-f) error distribution of \(\omega-v\) with \(r\) = 3.3 and 3.7.
Fig. 6: Mean errors for tracking a point with different \((r,v,\omega)\) settings. (a) illustrates the data points with tracking error \(\leq\) 0.30 m. (b) shows the surfaces of data points with tracking errors of around 0.10 m (red), 0.05 m (green), and 0.01 m (blue).
Fig. 7: Mean errors for tracking a landing trajectory with different \((r,v,\omega)\) settings. (a) illustrates the data points with tracking error \(\leq\) 0.30 m. (b) shows the surfaces of data points with tracking errors of around 0.25 m (red) and 0.20 m (green).
the proposed controller. The MPC in the simulation runs at over 100 Hz. For all simulations, the range of the control inputs is set to
* \(T\in[2.0,20.0]\) m/s\({}^{2}\)
* \({}^{B}\boldsymbol{\Omega}_{B}^{i}\in[-3.14,3.14]\) rad/s \(\forall i\in\{x,y,z\}\)
The penalty matrices \(\boldsymbol{Q}\) and \(\boldsymbol{R}\) and the constraints are kept the same across all simulations. Gaussian noise is added to the simulated relative estimation to represent real sensors, where \(\boldsymbol{\sigma}(^{N}\boldsymbol{p}_{B})^{i}=0.025\) m, \(\boldsymbol{\sigma}(^{N}\boldsymbol{v}_{B})^{i}=0.025\) m/s, \(\boldsymbol{\sigma}(\theta(^{N}\boldsymbol{q}_{B}))^{i}=0.044\) rad (\(5^{\circ}\); \(\theta(\cdot)\) is the rotation angle of the quaternion about the rotation axis), \(\boldsymbol{\sigma}(^{N}\boldsymbol{\tilde{a}}_{N})^{i}=0.025\) m/s\({}^{2}\), and \(\boldsymbol{\sigma}(^{N}\boldsymbol{\Omega}_{N})^{i}=0.044\) rad/s \((5^{\circ}/s)\), where \(i\in\{x,y,z\}\).
The parameter configurations of the fixed-point scheme are set with \(r\in[0.0,2.0]\), \(v\in[0.0,2.0]\), and \(\omega\in[0.01,2.01]\), all with step sizes of \(0.1\). Fig. 4 illustrates the tracking errors for different \(r,v,\omega\) combinations by selecting two example values for each parameter. The parameters for the fixed-plan scheme are: \(r\in[3.0,5.0]\), \(v\in[0.0,2.0]\), and \(\omega\in[0.01,2.01]\), all with step sizes of \(0.1\). Fig. 5 illustrates the mean tracking errors with the same logic. We consider tracking errors of more than 0.30 m as tracking failures; they are shown as the dark red regions in Fig. 4 and 5. In Fig. 6 and 7, we show all the errors in 3D on the left and selected error surfaces (the relation between the resulting error and \(r,v,\omega\)) on the right. From these figures, we can conclude that the angular velocity \(\omega\) of the non-inertial frame affects the tracking error the most. For a fixed \(\omega\), the tracking error increases as \(v\) grows. For a landing task, the simulations can help find the safe \((r,v,\omega)\) parameters that guarantee a successful landing.
Compared with traditional methods, this method bypasses the dependency on global state estimation of the agent and/or the target in the world frame. The system also avoids relying on prior knowledge of the target and does not need complex trajectory re-planning. CoNi-MPC only requires the relative states (pose and velocity) and the angular velocity and accelerations of the target, which can be obtained by relative localization methods and ubiquitous MEMS IMU sensors, respectively. We have performed extensive fixed-point and fixed-plan simulations and considerable real-world experiments to test the proposed system. The experimental results show that the controller has promising robustness and tracking performance. For future work, this method can be extended to achieve multi-agent formation control and more demanding cooperative tasks.
|
2303.13290 | Unsupervised Deep Probabilistic Approach for Partial Point Cloud
Registration | Deep point cloud registration methods face challenges to partial overlaps and
rely on labeled data. To address these issues, we propose UDPReg, an
unsupervised deep probabilistic registration framework for point clouds with
partial overlaps. Specifically, we first adopt a network to learn posterior
probability distributions of Gaussian mixture models (GMMs) from point clouds.
To handle partial point cloud registration, we apply the Sinkhorn algorithm to
predict the distribution-level correspondences under the constraint of the
mixing weights of GMMs. To enable unsupervised learning, we design three
distribution consistency-based losses: self-consistency, cross-consistency, and
local contrastive. The self-consistency loss is formulated by encouraging GMMs
in Euclidean and feature spaces to share identical posterior distributions. The
cross-consistency loss derives from the fact that the points of two partially
overlapping point clouds belonging to the same clusters share the cluster
centroids. The cross-consistency loss allows the network to flexibly learn a
transformation-invariant posterior distribution of two aligned point clouds.
The local contrastive loss facilitates the network to extract discriminative
local features. Our UDPReg achieves competitive performance on the
3DMatch/3DLoMatch and ModelNet/ModelLoNet benchmarks. | Guofeng Mei, Hao Tang, Xiaoshui Huang, Weijie Wang, Juan Liu, Jian Zhang, Luc Van Gool, Qiang Wu | 2023-03-23T14:18:06Z | http://arxiv.org/abs/2303.13290v1 | # Unsupervised Deep Probabilistic Approach for Partial Point Cloud Registration
###### Abstract
Deep point cloud registration methods face challenges to partial overlaps and rely on labeled data. To address these issues, we propose UDPReg, an unsupervised deep probabilistic registration framework for point clouds with partial overlaps. Specifically, we first adopt a network to learn posterior probability distributions of Gaussian mixture models (GMMs) from point clouds. To handle partial point cloud registration, we apply the Sinkhorn algorithm to predict the distribution-level correspondences under the constraint of the mixing weights of GMMs. To enable unsupervised learning, we design three distribution consistency-based losses: self-consistency, cross-consistency, and local contrastive. The self-consistency loss is formulated by encouraging GMMs in Euclidean and feature spaces to share identical posterior distributions. The cross-consistency loss derives from the fact that the points of two partially overlapping point clouds belonging to the same clusters share the cluster centroids. The cross-consistency loss allows the network to flexibly learn a transformation-invariant posterior distribution of two aligned point clouds. The local contrastive loss facilitates the network to extract discriminative local features. Our UDPReg achieves competitive performance on the 3DMatch/3DLoMatch and ModelNet/ModelLoNet benchmarks.
## 1 Introduction
Rigid point cloud registration aims at determining the optimal transformation to align two partially overlapping point clouds into one coherent coordinate system [30, 31, 21, 32]. This task dominates the performance of systems in many areas, such as robotics [58], augmented reality [6], autonomous driving [35, 43], radiotherapy [27], etc. Recent advances have been monopolized by learning-based approaches due to the development of 3D point cloud representation learning and differentiable optimization [38].
Existing deep learning-based point cloud registration methods can be broadly categorized as _correspondence-free_[21, 30, 32, 48, 2] and _correspondence-based_[4, 9, 19, 51]. The former minimizes the difference between global features extracted from two input point clouds. These global features are typically computed based on all the points of a point cloud, making correspondence-free approaches inadequate to handle real scenes with partial overlap [9, 56]. Correspondence-based methods first extract local features used for the establishment of point-level [9, 17, 19, 21] or distribution-level [53, 29, 40, 15] correspondences, and finally, estimate the pose from those correspondences. However, point-level registration does not work well under conditions involving varying point densities or repetitive patterns [31]. This issue is especially prominent in indoor environments, where low-texture regions or repetitive patterns sometimes dominate the field of view. Distribution-level registration, which compensates for the shortcomings of point-level methods, aligns two point clouds without establishing explicit point correspondences. Unfortunately, to the best of our knowledge, the existing methods are inflexible and cannot handle point clouds with partial overlaps in real scenes [31, 28]. Moreover, the success of learning-based methods mainly depends on large amounts of ground truth transformations or correspondences as the supervision signal for model training. Needless to say, the required ground truth is typically difficult or costly to acquire, thus hampering their application in the real world [39].
We thus propose an unsupervised deep probabilistic registration framework to alleviate these limitations. Specifically, we extend the distribution-to-distribution (D2D) method to solve partial point cloud registration by adopting the Sinkhorn algorithm [11] to predict correspondences of distribution. In order to make the network learn geometrically and semantically consistent features, we design distribution-consistency losses, i.e., self-consistency and cross-consistency losses, to train the networks without using any ground-truth pose or correspondences. Besides, we also introduce a local contrastive loss to learn more discriminative features by pushing features of points belonging to the same clusters together while pulling dissimilar features of points coming from different clusters apart.
Our UDPReg is motivated by OGMM [33] and UGMM [20] but differs from them in several ways. Firstly, unlike OGMM, which is a supervised method, our approach is unsupervised. Secondly, while UGMM [20] treats all clusters equally in the matching process, our method aligns different clusters with varying levels of importance. This enables our approach to handle partial point cloud registration successfully. To enable unsupervised learning, the designed self-consistency loss encourages the extracted features to be geometrically consistent by compelling the features and coordinates to share the posterior probability. The cross-consistency loss prompts the extracted features to be transformation-invariant by forcing the partially overlapping point clouds to share the same clusters. We evaluate our UDPReg on 3DMatch [54], 3DLoMatch [19], ModelNet [46] and ModelLoNet [19], comparing our approach against traditional and deep learning-based point cloud registration approaches. UDPReg achieves state-of-the-art results and significantly outperforms unsupervised methods on all the benchmarks.
In summary, the main contributions of this work are:
* We propose an unsupervised learning-based probabilistic framework to register point clouds with partial overlaps.
* We provide a deep probabilistic framework to solve partial point cloud registration by adopting the Sinkhorn algorithm to predict distribution-level correspondences.
* We formulate self-consistency, cross-consistency, and local-contrastive losses, to make the posterior probability in coordinate and feature spaces consistent so that the feature extractor can be trained in an unsupervised way.
* We achieve state-of-the-art performance on a comprehensive set of experiments, including synthetic and real-world datasets1. Footnote 1: [https://github.com/gfmei/UDPReg](https://github.com/gfmei/UDPReg)
## 2 Related Work
**Point-Level Methods.** Point-level registration approaches first extract point-wise features, then establish point-to-point correspondences through feature matching, followed by outlier rejection and robust estimation of the rigid transformation. Numerous works, such as FCGF [10] and RGM [17], focus on extracting discriminative features for geometric correspondences. For the correspondence prediction, DCP [44], RPMNet [50], and REGTR [51] perform feature matching by integrating the Sinkhorn algorithm or Transformer [42] into a network to generate soft correspondences from local features. IADAM [25] incorporates both geometric and distance features into the iterative matching process. To reject outliers, DGR [9] and 3DRegNet [36] use networks to estimate the inliers. Predator [19] and PRNet [45] focus on detecting points in the overlap region and utilizing their features to generate matches. Keypoint-free methods [52, 55, 30] first downsample the point clouds into super-points and then match them by examining whether their neighborhoods (patch) overlap. Though achieving remarkable performance, most of these methods rely on large amounts of ground-truth transformations, as inaccessible or expensive as such annotation may get. This said, the ground-truth geometric labels could potentially be obtained from full 3D reconstruction pipelines [8], but these require delicate parameter tuning, partial human supervision, and extra sensory information such as GPS. As a result, the success of learning-based techniques has been limited to a handful of datasets with ground-truth annotations.
**Distribution-Level Methods.** Distribution-level methods model the point clouds as probability distributions, often via the use of GMMs, and perform alignment either by employing a correlation-based or an EM-based optimization framework. The correlation-based methods [53, 22] first build GMM probability distributions for both the source and target point clouds. Then, the transformation is estimated by minimizing a metric or divergence between the distributions. However, these methods lead to nonlinear optimization problems with nonconvex constraints [24]. Unlike correlation-based methods, the EM-based approaches, such as JRMPC [15], CPD [34], and FilterReg [18], represent the geometry of one point cloud using a GMM distribution over 3D Euclidean space. The transformation is then calculated by fitting another point cloud to the GMM distribution under the maximum likelihood estimation (MLE) framework. These methods are robust to noise and density variation [53]. Most of them utilize robust discrepancies to reduce the influence of outliers by greedily aligning the largest possible fraction of points while being tolerant to a small number of outliers. However, if outliers dominate, the greedy behavior of these methods easily emphasizes outliers, leading to degraded registration results [15]. Considering these factors, we formulate registration in a novel partial distribution matching framework, where we only seek to partially match the distributions.
**Unsupervised Point Cloud Registration.** To handle ground-truth labeling issues, great efforts [49, 21, 23, 39, 45, 12] have been devoted to unsupervised deep point cloud registration. The existing methods mainly lie in auto-encoders [39, 21, 12] with a reconstruction loss or contrastive learning [10, 14, 47] with data augmentation. Although encouraging results have been achieved, some limitations remain to be addressed. Firstly, they depend on the point-level loss, such as Chamfer distance in auto-encoder [12], finding it difficult to handle large-scale scenarios due to computational complexity. Secondly, many pipelines [45] apply fixed/handcrafted data augmentation to generate transformations or correspondences, leading to sub-optimal learning. This is because they cannot fully use the cross information of partially overlapping point
clouds without geometric labels, and the shape complexity of the samples is ignored in the fixed augmentation [26]. To overcome these limitations, we provide a distribution consistency-based unsupervised method, which utilizes the distribution-level loss to reduce the computational complexity. Even without using any data augmentation, the proposed method is still suitable and available.
## 3 Method
### Problem Formulation
Point cloud registration aims to seek a transformation \(T{\in}SE(3)\) that optimally aligns the source point cloud \(\mathbf{\mathcal{P}}^{s}{=}\{\mathbf{p}_{i}^{s}{\in}{\mathbb{R}}^{3}|i{=}1,2,...,N_{s}\}\) to the target point cloud \(\mathbf{\mathcal{P}}^{t}{=}\{\mathbf{p}_{j}^{t}{\in}{\mathbb{R}}^{3}|j{=}1,2,...,N_{t}\}\). \(\mathbf{\mathcal{P}}^{s}\) and \(\mathbf{\mathcal{P}}^{t}\) contain \(N_{s}\) and \(N_{t}\) points, respectively. \(T\) consists of rotation \(R{\in}SO(3)\) and translation \(\mathbf{t}{\in}{\mathbb{R}}^{3}\). Instead of directly employing the point-level solution, we apply the distribution-to-distribution (D2D) approach to fit these two point clouds and obtain individual potential GMMs, where each component represents the density of the spatial coordinates and features in a local region. The transformation is then recovered from the learned GMMs. Our goal is to learn GMMs of point clouds for registration without any ground-truth geometric labels. Our UDPReg framework is conceptually simple and is illustrated in Fig. 1. The shared weighted feature extractor consisting of an encoder, Transformer (self- and cross-attention), and decoder first extracts point-wise features \(\mathbf{\mathcal{F}}^{s}\) and \(\mathbf{\mathcal{F}}^{t}\), overlap scores \(\mathbf{O}^{s}\) and \(\mathbf{O}^{t}\) from point clouds \(\mathbf{\mathcal{P}}^{s}\) and \(\mathbf{\mathcal{P}}^{t}\), respectively. \(\mathbf{\mathcal{F}}^{s}\) and \(\mathbf{\mathcal{F}}^{t}\) are then fed to cluster head to estimate the distributions (GMMs) of \(\mathbf{\mathcal{P}}^{s}\) and \(\mathbf{\mathcal{P}}^{t}\) in both coordinate and feature spaces. After that, the correspondences \(\mathcal{M}\) are estimated by performing cluster-level and point-level matching based on the Sinkhorn algorithm [11]. Finally, a variant of RANSAC [16] specialized to 3D registration is adopted to calculate \(T\) based on the estimated correspondences. The network is trained using the proposed self-consistency, cross-consistency, and local contrastive losses in an unsupervised manner.
### The Proposed GMM-Based Registration
**Feature Extraction.** Following [19, 31, 38], a shared encoder KPConv-FPN [41], which is composed of a series of ResNet-like blocks and strided convolutions, simultaneously downsamples the raw point clouds \(\mathbf{\mathcal{P}}^{s}\) and \(\mathbf{\mathcal{P}}^{t}\) into superpoints \(\mathbf{\tilde{\mathcal{P}}}^{s}\) and \(\mathbf{\tilde{\mathcal{P}}}^{t}\) and extracts associated features \(\mathbf{\tilde{\mathcal{F}}}^{s}{=}\{\mathbf{\tilde{f}}_{i}^{s}{\in}{\mathbb{R}}^{b}|i{=}1,2,...,\bar{N}_{s}\}\) and \(\mathbf{\tilde{\mathcal{F}}}^{t}{=}\{\mathbf{\tilde{f}}_{j}^{t}{\in}{\mathbb{R}}^{b}|j{=}1,2,...,\bar{N}_{t}\}\), respectively. \(b\) is the feature dimension. Then, self- and cross-attention are applied to encode the contextual information of the two point clouds with partial overlaps, which outputs conditioned features \(\mathbf{\tilde{\mathcal{F}}}^{s}\) and \(\mathbf{\tilde{\mathcal{F}}}^{t}\). Finally, the shared decoder starts with the conditioned features \(\mathbf{\tilde{\mathcal{F}}}^{s}\) and \(\mathbf{\tilde{\mathcal{F}}}^{t}\), and outputs the point-wise feature descriptors \(\mathbf{\mathcal{F}}^{s}{\in}{\mathbb{R}}^{N_{s}\times d}\) and \(\mathbf{\mathcal{F}}^{t}{\in}{\mathbb{R}}^{N_{t}\times d}\) and overlap scores \(\mathbf{O}^{s}{=}\{o_{i}^{s}\}{\in}{\mathbb{R}}^{N_{s}}_{+}\) and \(\mathbf{O}^{t}{=}\{o_{j}^{t}\}{\in}{\mathbb{R}}^{N_{t}}_{+}\). \(d\) is the dimension of the features. The decoder combines NN-upsampling with linear layers and includes skip connections from the corresponding encoder layers. For more details on feature extraction, please refer to the supplementary material.
**Learning Posterior.** Different from previous works [13, 53] that only consider the spatial coordinates of the points in the probabilistic registration model, we propose a method to learn the joint distribution over the spatial coordinate and feature spaces. Specifically, we apply a multi-layer perceptron (MLP), i.e., a cluster head \(\psi\), that takes \(\mathbf{\mathcal{F}}^{s}\) and \(\mathbf{\mathcal{F}}^{t}\) as input and outputs joint log probabilities, followed by a Softmax operator that acts on the log probabilities to generate probability matrices \(\mathbf{S}^{s}{=}\{s^{s}_{ij}\}_{i,j=1}^{N_{s},L-1}\) and \(\mathbf{S}^{t}{=}\{s^{t}_{ij}\}_{i,j=1}^{N_{t},L-1}\), respectively. To deal with outliers, it is straightforward to add a Gaussian kernel density. We define \(\mathbf{\hat{S}}^{x}{=}\{\hat{s}^{x}_{ij}\}_{i,j=1}^{N_{x},L}\) (\(x{\in}\{s,t\}\)) with elements satisfying \(\hat{s}^{x}_{iL}{=}1.0{-}o^{x}_{i}\) and \(\hat{s}^{x}_{ij}{=}o^{x}_{i}s^{x}_{ij}\) for \(1\leq j<L\), where \(o^{x}_{i}\) is the overlap score of point \(i\). UDPReg assumes that coordinate and feature spaces share the same probability matrix (posterior distribution). The GMM parameters \(\mathbf{\Theta}^{x}\) for point cloud \(\mathbf{\mathcal{P}}^{x}\), in 3D coordinate space, consist of \(L\) triples \((\pi^{x}_{j},\mathbf{\mu}^{x}_{j},\mathbf{\Sigma}^{x}_{j})\), where \(\pi^{x}_{j}\) is the mixing weight of component \(j\) satisfying \(\sum_{j=1}^{L}\pi^{x}_{j}=1\), \(\mathbf{\mu}^{x}_{j}\) is a \(3\times 1\) mean vector, and \(\mathbf{\Sigma}^{x}_{j}\) is a \(3\times 3\) covariance matrix of the \(j\)-th component. Given the outputs \(\mathbf{S}^{x}\) of \(\psi\) together with the point coordinates \(\mathbf{\mathcal{P}}^{x}\), the GMMs are calculated as:
\[\begin{split}&\pi^{x}_{j}\!=\!\frac{1}{N_{x}}\sum_{i=1}^{N_{x}}\hat{s}^{x}_{ij},\quad\mathbf{\mu}^{x}_{j}\!=\!\frac{1}{N_{x}\pi^{x}_{j}}\sum_{i=1}^{N_{x}}\hat{s}^{x}_{ij}\mathbf{p}^{x}_{i},\\ &\mathbf{\Sigma}^{x}_{j}\!=\!\frac{1}{N_{x}\pi^{x}_{j}}\sum_{i=1}^{N_{x}}\hat{s}^{x}_{ij}\left(\mathbf{p}^{x}_{i}\!-\!\mathbf{\mu}^{x}_{j}\right)\left(\mathbf{p}^{x}_{i}\!-\!\mathbf{\mu}^{x}_{j}\right)^{\top},\\ & G^{x}\left(\mathbf{x}\right)\!=\!\sum_{j=1}^{L}\pi^{x}_{j}\mathcal{N}\left(\mathbf{x}|\mathbf{\mu}^{x}_{j},\mathbf{\Sigma}^{x}_{j}\right),\quad x\in\{s,t\}.\end{split} \tag{1}\]
Similarly, based on the probability matrices \(\mathbf{S}^{s}\) and \(\mathbf{S}^{t}\), the GMM parameters of point clouds \(\mathbf{\mathcal{P}}^{s}\) and \(\mathbf{\mathcal{P}}^{t}\) in the feature space are computed as:
\[\mathbf{\mu}^{f_{x}}_{j}\!=\!\sum_{i=1}^{N_{x}}\frac{\hat{s}^{x}_{ij}\mathbf{f}^{x}_{i}}{N_{x}\pi^{x}_{j}},\quad\mathbf{\Sigma}^{f_{x}}_{j}\!=\!\sum_{i=1}^{N_{x}}\frac{\hat{s}^{x}_{ij}}{N_{x}\pi^{x}_{j}}\left(\mathbf{f}^{x}_{i}\!-\!\mathbf{\mu}^{f_{x}}_{j}\right)\left(\mathbf{f}^{x}_{i}\!-\!\mathbf{\mu}^{f_{x}}_{j}\right)^{\top},\]
where the subscript \(x\in\{s,t\}\). Note that the GMMs in the coordinate and feature spaces share the same mixing coefficients. For simplicity, we denote \(\Phi^{f_{x}}_{k}(\mathbf{x})\)=\(\mathcal{N}\left(\mathbf{x}|\mathbf{\mu}^{f_{x}}_{k},\mathbf{\Sigma}^{f_{x}}_{k}\right)\) with \(k\)\(\in\)\(\{1,\cdots,L\}\). The GMMs of point clouds \(\mathbf{\mathcal{P}}^{s}\) and \(\mathbf{\mathcal{P}}^{t}\) in the feature space are then given as:
\[G^{f_{s}}\left(\mathbf{x}\right)\!=\!\sum_{j=1}^{L}\pi^{s}_{j}\Phi^{f_{s}}_{j}(\mathbf{x}),\quad G^{f_{t}}\left(\mathbf{x}\right)\!=\!\sum_{j=1}^{L}\pi^{t}_{j}\Phi^{f_{t}}_{j}(\mathbf{x}). \tag{2}\]
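To make Eq. (1) concrete, the numpy sketch below computes mixing weights, means, and covariances from a soft assignment matrix using the standard weighted maximum-likelihood form assumed here; the random inputs are placeholders, and the feature-space parameters would be obtained in the same way by replacing the coordinates with the features.

```python
import numpy as np

def gmm_from_soft_assignments(P, S, eps=1e-8):
    """Fit GMM parameters (pi, mu, Sigma) from points P (N x 3) and soft assignments S (N x L).

    pi_j    = (1/N) sum_i s_ij
    mu_j    = sum_i s_ij p_i / (N pi_j)
    Sigma_j = sum_i s_ij (p_i - mu_j)(p_i - mu_j)^T / (N pi_j)
    """
    N, L = S.shape
    Nk = S.sum(axis=0) + eps                  # effective number of points per component
    pi = Nk / N
    mu = (S.T @ P) / Nk[:, None]
    diff = P[None, :, :] - mu[:, None, :]     # (L, N, 3)
    Sigma = np.einsum('ln,lnd,lne->lde', S.T, diff, diff) / Nk[:, None, None]
    return pi, mu, Sigma

# Example with random points and a softmax-normalized random assignment matrix.
rng = np.random.default_rng(0)
P = rng.normal(size=(200, 3))
logits = rng.normal(size=(200, 8))
S = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
pi, mu, Sigma = gmm_from_soft_assignments(P, S)
print(pi.sum(), mu.shape, Sigma.shape)  # ~1.0 (8, 3) (8, 3, 3)
```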
**Cluster-Level Matching.** Instead of indirectly performing maximum likelihood estimation between \(G^{s}\) and \(G^{t}\), weighted distribution-level correspondences are represented as soft assignments to the components, based on the mixing weights of the GMMs and the \(L_{2}\) distance [22] between the component distributions in the feature space. This is because \((\pi^{s}_{j},\mathbf{\mu}^{s}_{j},\mathbf{\Sigma}^{s}_{j})\) and \((\pi^{t}_{j},\mathbf{\mu}^{t}_{j},\mathbf{\Sigma}^{t}_{j})\) do not match completely when the two point clouds only partially overlap. Moreover, aligned components should have similar mixing weights and small distances. To estimate the correspondences, we first calculate the distance between the components of the two GMMs as follows:
\[\mathcal{D}(\Phi^{f_{s}}_{i},\Phi^{f_{t}}_{j})=\int_{\mathbb{R}^{d}}\left(\Phi^{f_{s}}_{i}(\mathbf{x})-\Phi^{f_{t}}_{j}(\mathbf{x})\right)^{2}d\mathbf{x}. \tag{3}\]
We denote \(r^{x}{=}\sum_{i=1}^{L-1}\max\left(\frac{\pi^{x}_{i}}{1-\pi^{x}_{L}}-\frac{\pi^{y}_{i}}{1-\pi^{y}_{L}},0\right)\), with \(x{=}s,y{=}t\) or \(x{=}t,y{=}s\), and the cost matrix \(\mathbf{D}\) with elements \(\mathbf{D}_{ij}{=}\mathcal{D}(\Phi^{f_{s}}_{i},\Phi^{f_{t}}_{j})\). In partially overlapping registration, some components are occluded in the other frame. Similar to [52], we handle this directly by extending the cost matrix to \(\hat{\mathbf{D}}\) with elements \(\hat{\mathbf{D}}_{ij}{=}\mathbf{D}_{ij}\) if \(i,j{<}L\) and \(\hat{\mathbf{D}}_{ij}{=}z\) otherwise, where \(z\) is a learnable parameter. The extended assignment matrix \(\Gamma{\in}\mathbb{R}^{L\times L}\) can be estimated by solving the following optimization problem:
\[\begin{split}\min_{\Gamma}\sum_{ij}\Gamma_{ij}\hat{\mathbf{D}}_{ij},\\ \text{s.t.,}&\Gamma\mathbf{1}_{L}\!\!=\!\hat{\mathbf{\pi}}^ {s},\Gamma^{\top}\mathbf{1}_{L}\!\!=\!\hat{\mathbf{\pi}}^{t},\Gamma_{ij}\in[0,1], \end{split} \tag{4}\]
where \(\hat{\mathbf{\pi}}^{x}\)\(=\)\(\frac{1}{1+r^{x}-\pi^{x}_{L}}(\pi^{x}_{1},\pi^{x}_{2},\cdots,\pi^{x}_{L-1},r^{x}),x\)\(\in\)\(\{s,t\}\). We run the Sinkhorn Algorithm [11] to seek an optimal solution. After that, each entry \((i,j)\) of \(\Gamma\) implies the matching confidence between components. Following [52], we pick correspondences whose confidence scores are above a threshold \(\tau=0.1\). We define the picked distribution-level correspondence set as \(\bar{C}\)\(=\)\(\{(\hat{\mathbf{\mu}}^{s}_{i},\hat{\mathbf{\mu}}^{t}_{i})\}\).
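A minimal entropy-regularized Sinkhorn sketch for the transport problem in Eq. (4) is given below, with the augmented marginals assembled as described above. The regularization strength, iteration count, fixed slack cost, and random inputs are our own illustrative choices rather than the paper's settings.

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.05, n_iters=100):
    """Entropy-regularized OT: returns Gamma >= 0 with Gamma 1 = a and Gamma^T 1 = b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def cluster_correspondences(D, pi_s, pi_t, slack_cost=1.0, tau=0.1):
    """Match GMM components of two partially overlapping point clouds.

    D            : (L-1, L-1) pairwise L2 distances between feature-space components
    pi_s, pi_t   : mixing weights of the L-1 non-outlier components, with the
                   outlier weight appended as the last entry
    slack_cost   : fixed stand-in for the learnable slack cost z
    """
    L = D.shape[0] + 1
    D_hat = np.full((L, L), slack_cost)
    D_hat[:L - 1, :L - 1] = D
    # Augmented marginals: renormalized inlier weights plus a slack mass r^x.
    r_s = np.maximum(pi_s[:-1] / (1 - pi_s[-1]) - pi_t[:-1] / (1 - pi_t[-1]), 0).sum()
    r_t = np.maximum(pi_t[:-1] / (1 - pi_t[-1]) - pi_s[:-1] / (1 - pi_s[-1]), 0).sum()
    a = np.append(pi_s[:-1], r_s) / (1 + r_s - pi_s[-1])
    b = np.append(pi_t[:-1], r_t) / (1 + r_t - pi_t[-1])
    Gamma = sinkhorn(D_hat, a, b)
    i, j = np.where(Gamma[:L - 1, :L - 1] > tau)  # keep confident, non-slack matches
    return list(zip(i, j)), Gamma

# Example with random distances and near-uniform mixing weights.
rng = np.random.default_rng(1)
D = rng.uniform(0.0, 2.0, size=(7, 7))
pi_s = np.full(8, 1 / 8)
pi_t = np.full(8, 1 / 8)
matches, Gamma = cluster_correspondences(D, pi_s, pi_t)
print(len(matches), Gamma.shape)
```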
**Point-Level Registration.** We first partition the points into clusters by assigning each point to its closest centroid in the geometric space. Once grouped, we obtain 3D patches comprised of points along with their corresponding clustering scores and descriptors. These patches enable us to extract point correspondences. For a centroid \(\mathbf{\mu}^{s}_{i}\), its associated point set \(\mathrm{C}^{s}_{i}\) and feature set \(\mathrm{F}^{s}_{i}\) are denoted as:
\[\begin{cases}\mathrm{C}^{s}_{i}=\{\mathbf{p}^{s}\in\mathbf{\mathcal{P}}^{s}\mid\|\mathbf{p}^{s}-\mathbf{\mu}^{s}_{i}\|_{2}\leq\|\mathbf{p}^{s}-\mathbf{\mu}^{s}_{j}\|_{2},\forall j\neq i\},\\ \mathrm{F}^{s}_{i}=\{\mathbf{f}^{s}_{j}\in\mathbf{\mathcal{F}}^{s}\mid\mathbf{p}^{s}_{j}\in\mathrm{C}^{s}_{i}\},\\ \mathrm{S}^{s}_{i}=\{\mathbf{s}^{s}_{j}\in\mathbf{S}^{s}\mid\mathbf{p}^{s}_{j}\in\mathrm{C}^{s}_{i}\}.\end{cases}\]
The same operation is also performed for \(\mathbf{\mu}^{t}_{j}\), yielding \(\mathrm{C}^{t}_{i}\), \(\mathrm{F}^{t}_{i}\), and \(\mathrm{S}^{t}_{i}\). The cluster-level correspondence set \(\mathcal{M}^{\prime}\) is expanded to its corresponding patch pairs \(\{(\mathrm{C}_{i}^{s},\mathrm{C}_{i}^{t})\}\), and the features within each pair of patches are matched to obtain a point-level assignment matrix \(\mathbf{\Gamma}^{i}\). After reaching \(\mathbf{\Gamma}^{i}\), we select correspondences from \((\mathrm{C}_{i}^{s},\mathrm{C}_{i}^{t})\) with the maximum confidence score for each row of \(\mathbf{\Gamma}^{i}\). We denote the correspondence set extracted from a pair of patches as \(\mathcal{M}^{i}{=}\{(\mathbf{p}_{\hat{i}}^{s}\in\mathrm{C}_{i}^{s},\mathbf{p}_{\hat{j}}^{t}\in\mathrm{C}_{i}^{t})\mid\hat{i}{=}1,2,\cdots,K,\ \hat{j}{=}\arg\max_{k}\mathbf{\Gamma}_{\hat{i},k}^{i}\}\). The final point correspondence set \(\mathcal{M}\) consists of the union of all the obtained patch-level correspondence sets \(\mathcal{M}^{i}\). Following [4, 52], a variant of RANSAC [16] that is specialized to 3D registration takes \(\mathcal{M}\) as an input to estimate the transformation.
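The sketch below illustrates the point-level stage: points are grouped to their nearest centroid, and inside each matched patch pair a row-wise maximum over a feature-similarity matrix selects candidate correspondences. The plain dot-product similarity, the truncation size, and the assumption that the \(i\)-th centroids of the two clouds are matched are simplifying choices of ours.

```python
import numpy as np

def group_by_centroid(P, F, centroids, K=32):
    """Assign each point to its nearest centroid and keep at most K points per patch."""
    d = np.linalg.norm(P[:, None, :] - centroids[None, :, :], axis=-1)
    labels = d.argmin(axis=1)
    patches = []
    for c in range(len(centroids)):
        idx = np.where(labels == c)[0][:K]
        patches.append((P[idx], F[idx], idx))
    return patches

def patch_correspondences(patch_s, patch_t):
    """Row-wise argmax over a feature-similarity matrix inside one matched patch pair."""
    (_, Fs, idx_s), (_, Ft, idx_t) = patch_s, patch_t
    if len(idx_s) == 0 or len(idx_t) == 0:
        return []
    sim = Fs @ Ft.T                       # dot-product similarity as a stand-in
    j = sim.argmax(axis=1)
    return [(idx_s[i], idx_t[j[i]]) for i in range(len(idx_s))]

# Example: two random clouds with random features and two matched centroids each.
rng = np.random.default_rng(2)
Ps, Pt = rng.normal(size=(100, 3)), rng.normal(size=(100, 3))
Fs, Ft = rng.normal(size=(100, 16)), rng.normal(size=(100, 16))
mus, mut = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
patches_s, patches_t = group_by_centroid(Ps, Fs, mus), group_by_centroid(Pt, Ft, mut)
M = [m for a, b in zip(patches_s, patches_t) for m in patch_correspondences(a, b)]
print(len(M))
```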
### Consistency-Based Unsupervised Learning
**Self-Consistency Loss.** Our self-consistency loss encourages point clouds to share an identical posterior distribution in the coordinate and feature spaces. It can be used directly without any data augmentation. Because training the network parameters is equivalent to optimizing \(\mathbf{\Theta}^{s}\) and \(\mathbf{\Theta}^{t}\), the GMM parameters can be fitted to the observed data points by maximizing the log-likelihood of the samples with respect to \(\mathbf{\Theta}^{s}\) and \(\mathbf{\Theta}^{t}\). However, the log-likelihood function is unstable during training since its value goes to infinity for specific combinations of means and degenerating covariance matrices. To avoid covariance degeneration, we approximate the probabilities of points belonging to each cluster based on their distances to the centroids estimated by Eq. (1), under the constraints of the mixture weights. We denote the empirical distribution matrices of \(\mathbf{\mathcal{P}}^{s}\) and \(\mathbf{\mathcal{P}}^{t}\) as \(\mathbf{\gamma}^{s}=\{\mathbf{\gamma}_{ij}^{s}\}\) and \(\mathbf{\gamma}^{t}=\{\mathbf{\gamma}_{ij}^{t}\}\). This results in the following optimization objective:
\[\min_{\mathbf{\gamma}^{x}}\sum_{i,j}\mathbf{\gamma}_{ij}^{x}\|\mathbf{p}_{i}^ {x}-\mathbf{\mu}_{j}^{x}\|_{2}^{2}, \tag{6}\] \[\text{s.t., }\sum_{i}\mathbf{\gamma}_{ij}^{x}\!\!=\!\!N_{x}\mathbf{\pi}_{j }^{x},\sum_{j}\mathbf{\gamma}_{ij}^{x}\!\!=\!\!1,\mathbf{\gamma}_{ij}\in[0,1],\]
where \(x\in\{s,t\}\). \(\sum_{j}\mathbf{\gamma}_{ij}^{x}\!\!\!=\!\!1\) is based on the property of the probability that the sum of all the probabilities for all possible events is equal to one. \(\sum_{i}\mathbf{\gamma}_{ij}^{x}\!\!\!=\!\!N_{x}\mathbf{\pi}_{j}^{x}\) is the constraints of the mixture weights. We address the minimization of Eq. (6) by adopting an efficient version of the Sinkhorn algorithm [11]. Coordinate and feature spaces share an identical posterior distribution means that \(\mathbf{S}^{x}\) and \(\mathbf{\gamma}^{x}\) should be equal, which leads to a cross-entropy loss. Our self-consistency loss is thus formulated as follows:
\[\mathcal{L}_{sc}=-\sum_{ij}\mathbf{\gamma}_{ij}^{s}\log s_{ij}^{t}-\sum_{ij}\mathbf{ \gamma}_{ij}^{t}\log s_{ij}^{t}. \tag{7}\]
**Cross-Consistency Loss.** The described self-consistency loss only encourages the learned representation to be spatially sensitive, but it cannot ensure that the learned features be transformation invariant. Therefore, we introduce a cross-consistency loss to encourage the network to learn transformation-invariant feature representations. Our cross-consistency loss is based on the fact that the cluster labeling should not change if the points are rigidly transformed. This fact means that if points \(\mathbf{p}^{s}\)\(\in\)\(\mathbf{\mathcal{P}}^{s}\) and \(\mathbf{p}^{t}\)\(\in\)\(\mathbf{\mathcal{P}}^{t}\) belong to the same cluster, they should share the same cluster centroid. Therefore, the cross-consistency loss can make full use of the information from both aligned point clouds. Concretely, for two input features sets \(\left(\mathbf{\mathcal{F}}^{s},\mathbf{\mathcal{F}}^{t}\right)\), and two probability matrices \(\left(\mathbf{S}^{s},\mathbf{\mathcal{S}}^{t}\right)\), we obtain a new feature set \(\mathbf{\mathcal{F}}\)\(=\)\(cat\left(\mathbf{\mathcal{F}}^{s},\mathbf{\mathcal{F}}^{t}\right)\) and a probability matrix \(\mathbf{\mathcal{S}}\)\(=\)\(cat\left(\mathbf{S}^{s},\mathbf{\mathcal{S}}^{t}\right)\). \(cat(\cdot,\cdot)\) means concatenation. We assume the current estimated rotation and translation are \(\mathbf{R}\) and \(\mathbf{t}\). We define \(\mathbf{\bar{\mathcal{P}}}^{s}\)\(=\)\(\mathbf{R}\mathbf{\mathcal{P}}^{s}\)\(+\)\(\mathbf{t}\) and \(\mathbf{\mathcal{P}}\)\(=\)\(cat(\mathbf{\bar{\mathcal{P}}}^{s},\mathbf{\mathcal{P}}^{t})\). Then, we calculate the parameters of global GMMs in both feature and euclidean spaces as:
\[\pi_{j}=\frac{\sum_{i}\mathbf{s}_{ij}}{N},\ \mathbf{\mu}_{j}^{f}=\frac{\sum_{i}\mathbf{s}_{ij}\mathbf{f} _{i}}{\pi_{j}N},\ \mathbf{\mu}_{j}^{e}=\frac{\sum_{i}\mathbf{s}_{ij}\mathbf{p}_{i}}{\pi_{j}N},\]
where \(N\)\(=\)\(N_{s}\)\(+\)\(N_{t}\). To avoid two aligned point clouds being grouped into separate clusters, we assume that clustering satisfies two constraints:
* GMMs are coupled with approximate uniform mixing weights in coordinate and feature spaces.
* If a point \(\mathbf{p}_{i}\) belongs to partition \(j\), point \(\mathbf{p}_{i}\) and its coupled centroid should have the shortest distance.
Let \(\mathbf{\gamma}\)\(=\)\(\{\gamma_{ij}\}\) to be the empirical probability matrix. The two constraints can then be ensured by minimizing the following objective:
\[\min_{\mathbf{\gamma}}\sum_{ij}\left(\lambda_{1}\|\mathbf{p}_{i}-\mathbf{\mu} _{j}^{e}\|_{2}^{2}+\lambda_{2}\|\mathbf{f}_{i}-\mathbf{\mu}_{j}^{f}\|_{2}^{2}\right) \gamma_{ij}, \tag{8}\] \[\text{s.t.}\sum_{i}\gamma_{ij}=1,\sum_{j}\gamma_{ij}=\frac{N}{L},\gamma_{ij}\in[0,1],\]
where \(\lambda_{i}\)\(\in\)\([0,1]\) are learned parameters. After solving Eq. (8), we then infer our cross-consistency loss as:
\[\mathcal{L}_{\mathrm{cc}}(\mathbf{\gamma},\mathbf{S})=-\sum_{ij}\mathbf{\gamma}_{ij}\log s_{ ij}, \tag{9}\]
which corresponds to the minimization of the standard cross-entropy loss between \(\mathbf{\gamma}\) and predictions \(\mathbf{S}\).
**Local Contrastive Loss.** The local neighbors provide essential information for feature learning on the objects of the point clouds [26]. For instance, occlusions and holes always occur on objects in indoor and outdoor scenes [26]. If the network captures the local structure information from other complete objects, it can boost the model robustness on incomplete objects during training. The local descriptors of point clouds mainly derive from the points and their neighbors [26], which motivates us to model the local information of the point cloud by introducing a local contrastive loss. Specifically, given a centroid \(\mathbf{\mu}_{i}^{x}\) of point cloud \(\mathbf{\mathcal{P}}^{x}\) with \(x\in\{s,t\}\), we search its nearest point \(\mathbf{p}_{i}^{x}\) and associated feature vector \(\mathbf{f}_{i}^{x}\) by the point-wise Euclidean distance. Based on this, we construct the local contrastive loss \(\mathcal{L}_{lc}\) following InfoNCE [47] by pulling \(\mathbf{f}_{i}^{x}\) close to the feature centroid \(\mathbf{\mu}_{i}^{f_{x}}\), while pushing it away from the feature vectors of other points. We also encourage \(\mathbf{\mu}_{i}^{f_{s}}\) and \(\mathbf{\mu}_{i}^{f_{t}}\) to be similar:
\[\begin{split}\mathcal{L}_{lc}=&-\frac{1}{L}\sum_{i=1}^{L}\log\frac{\exp\left(\mathbf{\mu}_{i}^{f_{s}}\mathbf{\mu}_{i}^{f_{t}\top}\right)}{\sum_{j=1}^{L}\exp\left(\mathbf{\mu}_{i}^{f_{s}}\mathbf{\mu}_{j}^{f_{t}\top}\right)}\\ &-\frac{1}{L}\sum_{i=1}^{L}\log\frac{\exp\left(\mathbf{\mu}_{i}^{f_{s}}\mathbf{f}_{i}^{s\top}\right)\exp\left(\mathbf{\mu}_{i}^{f_{t}}\mathbf{f}_{i}^{t\top}\right)}{\sum_{j=1}^{L}\exp\left(\mathbf{\mu}_{i}^{f_{s}}\mathbf{f}_{j}^{s\top}\right)\sum_{j=1}^{L}\exp\left(\mathbf{\mu}_{i}^{f_{t}}\mathbf{f}_{j}^{t\top}\right)}.\end{split}\]
Thus, the final loss is the combination of self-consistency loss, cross-consistency loss, and local contrastive loss as:
\[\mathcal{L}=\mathcal{L}_{sc}+\mathcal{L}_{cc}+\mathcal{L}_{lc}. \tag{10}\]
In particular, different from most existing methods, the correspondences and the pose between the two partially overlapping point clouds are unknown during our training process.
## 4 Experiments
We conduct extensive experiments to evaluate the performance of our method on the real datasets 3DMatch [54] and 3DLoMatch [19], as well as on the synthetic datasets ModelNet [46] and ModelLoNet [19].
### Implementation Details
Our method is implemented in PyTorch and was trained on one Quadro GV100 GPU (32G) and two Intel(R) Xeon(R) Gold 6226 CPUs. We used the AdamW optimizer with an initial learning rate of \(1e{-4}\) and a weight decay of \(1e{-6}\). We adopted encoder and decoder architectures similar to those used in [38]. For the 3DMatch dataset, we trained for 200 epochs with a batch size of 1, halving the learning rate every 70 epochs. We trained on ModelNet for 400 epochs with a batch size of 1, halving the learning rate every 100 epochs. On 3DMatch and 3DLoMatch, we set \(L{=}128\) with truncated patch size \(K{=}64\). On ModelNet and ModelLoNet, we set \(L{=}64\) with truncated patch size \(K{=}32\). The cluster head MLP consists of 3 fully connected layers. Each layer is composed of a linear layer followed by batch normalization. The output dimensions of the hidden layers and the final linear layer are 512 and the number of clusters \(L\), respectively. Except for the final layer, each layer has a LeakyReLU activation.
### Evaluation on 3DMatch and 3DLoMatch
**Datasets and Metrics.** 3DMatch [54] and 3DLoMatch [19] are two widely used indoor datasets with more than \(30\%\) and \(10\%\)\(\sim\)\(30\%\) partially overlapping scene pairs, respectively. 3DMatch contains 62 scenes, from which we use 46 for training, 8 for validation, and 8 for testing. The test set contains 1,623 partially overlapping point cloud fragments and their corresponding transformation matrices. We used training data preprocessed by [19] and evaluated with both the 3DMatch and 3DLoMatch protocols. Each input point cloud contains an average of about 20,000 points. We performed training data augmentation by applying small rigid perturbations, jittering the point locations, and shuffling points. Following REGTR [51] and SGP [49], we evaluated the Relative Rotation Errors (RRE) and Relative Translation Errors (RTE) that measure the accuracy of successful registrations. We also assessed Registration Recall (RR), the fraction of point cloud pairs whose transformation error is smaller than a threshold (i.e., 0.2m).
**Baselines.** We chose supervised state-of-the-art (SOTA) methods: OMNet [48], FCGF [10], D3Feat [3], SpinNet [1], Predator [19], REGTR [51], CoFiNet [52], and Geo-Transformer [38], as well as unsupervised PPFFoldNet [12] and SGP [49] as our baselines.
**Registration Results.** The results of various methods are shown in Table 1, where the best performance is highlighted in bold while the best-unsupervised results are marked with an underline. For both 3DMatch and 3DLoMatch, our method outperforms all unsupervised methods and achieves the lowest average rotation (RRE) and translation (RTE) errors across scenes. Our method also achieves the highest average registration recall, which reflects the final performance on point cloud registration (91.4% on 3DMatch and 64.3% on 3DLoMatch). Specifically, UDPReg largely exceeds the previous winner and our closest competitor, SGP,
\begin{table}
\begin{tabular}{r|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{3DMatch} & \multicolumn{3}{c}{3DLoMatch} \\ Method & RR \(\uparrow\) & RRE \(\downarrow\) & RTE \(\downarrow\) & RR \(\uparrow\) & RRE \(\downarrow\) & RTE \(\downarrow\) \\ \hline \multicolumn{7}{c}{Supervised Methods} \\ \hline FCGF [10] & 85.1\% & 1.949 & 0.066 & 40.1\% & 3.147 & 0.100 \\ D3Feat [3] & 81.6\% & 2.161 & 0.067 & 37.2\% & 3.361 & 0.103 \\ OMNet [48] & 35.9\% & 4.166 & 0.105 & 8.4\% & 7.299 & 0.151 \\ DGR [9] & 85.3\% & 2.103 & 0.067 & 48.7\% & 3.954 & 0.113 \\ Predator1K [19] & 90.5\% & 2.062 & 0.068 & 62.5\% & 3.159 & 0.096 \\ CoFiNet [52] & 89.7\% & 2.147 & 0.067 & 67.2\% & 3.271 & 0.090 \\ GeoTrans [38] & **92.0\%** & 1.808 & 0.063 & **74.0\%** & 2.934 & 0.089 \\ REGTR [51] & **92.0\%** & **1.567** & **0.049** & 64.8\% & **2.827** & **0.077** \\ \hline \multicolumn{7}{c}{Unsupervised Methods} \\ \hline PPFFoldNet [12] & 69.3\% & 3.021 & 0.089 & 24.8\% & 7.527 & 1.884 \\ SGP + R10K [49] & 85.5\% & 1.986 & 0.079 & 39.4\% & 3.529 & 0.099 \\ UDPReg (Ours) & 91.4\% & 1.642 & 0.064 & 64.3\% & 2.951 & 0.086 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on both 3DMatch and 3DLoMatch datasets. The best results for each criterion are labeled in bold, and the best results of unsupervised methods are underlined.
Figure 2: Example qualitative registration results for 3DMatch. The unsuccessful cases are enclosed in red boxes.
(85.5% RR on 3DMatch) by about 5.9% and (39.4% RR on 3DLoMatch) by 24.9%. Interestingly, our method also exceeds some supervised methods, _e.g._ OGMM, FCGF, D3Feat, DGR, and Predator1K, showing its efficacy in both high- and low-overlap scenarios. Even compared with recent supervised SOTA methods, our method achieves competitive results. Figs. 2 and 3 show examples of qualitative results on both 3DMatch and 3DLoMatch. GT indicates ground truth. SGP failed in one case of Fig. 2 on 3DMatch and failed in two cases of Fig. 3 on 3DLoMatch, but our method succeeded in all cases. This is because our unsupervised method can learn more discriminative features and our matching strategy can deal with partial overlap registration, which further shows the effectiveness of UDPReg.
### Evaluation on ModelNet40
**Datasets and Metrics.** ModelNet40 [46] contains 12,311 meshed CAD models from 40 categories. Following the data setup in [19, 51], each point cloud is sampled from ModelNet40 with 1,024 points followed by cropping and sub-sampling into two partial overlap settings: ModelNet has 73.5% pairwise overlap on average, and ModelLoNet contains a lower 53.6% average overlap. We train only on ModelNet and generalize to ModelLoNet. We follow [51] and measure the performance using Relative Rotation Error (RRE) and Relative Translation Error (RTE) on all point clouds and as Chamfer distance (CD) between scans.
**Baselines.** We chose recent supervised SOTA methods: DCP-v2 [44], OMNet [48], RPM-Net [50], Predator [19], REGTR [51], CoFiNet [52], and GeoTransformer [38], as well as unsupervised method RIENet [39] and UGMM [20] as our baselines. For traditional methods, we choose point-level methods ICP [5] and FGR [57], as well as probabilistic methods CPD [34], GMMReg [22], SVR [7], and FilterReg [18] as baselines. For Predator, RPM-Net, OMNet, and REGTR, we use the results provided in REGTR. In REGTR, Predator samples 450 points in the experiment, and OMNet obtained a slightly improved result in all categories. We utilize the codes provided by the authors for probabilistic methods. To improve the results for partial registration, we replace PointNet with DGCNN in DeepGMR. Additionally, we use Open3D for ICP and FGR.
**Registration Results.** Table 2 reports registration results on ModelNet40, in which the best results for each criterion are labeled in bold, and the best results by unsupervised methods are underlined. We compare against the recent unsupervised [39] and supervised [19, 38, 44, 51, 52] methods. When compared with unsupervised methods, our UDPReg outperforms the correspondence-based CEMNet, RIENet and GMM-based UGMM [20] in all metrics under both normal overlap (ModelNet) and low overlap (ModelLoNet) regimes. Compared with supervised methods, our approach also achieves competitive results. Specifically, our UDPReg outperforms all previous methods regarding rotation and translation criteria. It is worth noting that RPM-Net [50] additionally uses surface normals and is trained with transformation information. Despite this, the UDPReg still performs better. In addition to the quantitative results, Fig. 4 shows results on ModelNet with more than 70.0% partial
| Method | ModelNet RRE ↓ | ModelNet RTE ↓ | ModelNet CD ↓ | ModelLoNet RRE ↓ | ModelLoNet RTE ↓ | ModelLoNet CD ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| _Traditional Methods_ | | | | | | |
| ICP [5] | 13.74 | 0.132 | 0.1225 | 24.13 | 0.224 | 0.1289 |
| FGR [57] | 28.68 | 0.160 | 0.1290 | 34.39 | 0.244 | 0.1339 |
| CPD [34] | 14.17 | 0.139 | 0.1277 | 28.78 | 0.253 | 0.1320 |
| GMMReg [22] | 16.41 | 0.163 | 0.1304 | 24.03 | 0.243 | 0.1298 |
| SVR [7] | 14.40 | 0.140 | 0.1279 | 23.45 | 0.222 | 0.1322 |
| FilterReg [18] | 24.07 | 0.193 | 0.1336 | 37.28 | 0.298 | 0.1367 |
| _Supervised Methods_ | | | | | | |
| DCP-v2 [44] | 11.98 | 0.171 | 0.0117 | 16.50 | 0.300 | 0.0268 |
| DeepGMR [53] | 7.871 | 0.108 | 0.0056 | 9.867 | 0.117 | 0.0064 |
| OMNet [48] | 2.947 | 0.032 | 0.0015 | 6.517 | 0.129 | 0.0074 |
| RPM-Net [50] | 1.712 | 0.018 | 0.0009 | 7.342 | 0.124 | 0.0050 |
| Predator [19] | 1.739 | 0.019 | 0.0009 | 5.235 | 0.132 | 0.0083 |
| GeoTrans [38] | 2.145 | 0.020 | **0.0003** | 4.741 | 0.103 | 0.0143 |
| REGTR [51] | 1.473 | 0.014 | 0.0008 | 3.930 | 0.087 | **0.037** |
| _Unsupervised Methods_ | | | | | | |
| CEMNet [23] | 2.575 | 0.019 | 0.0368 | 9.417 | 0.151 | 0.0861 |
| RIENet [39] | 2.447 | 0.018 | 0.0365 | 14.49 | 0.105 | 0.0828 |
| UGMM [20] | 13.65 | 0.124 | 0.0753 | 17.39 | 0.161 | 0.0745 |
| UDPReg (Ours) | **1.331** | **0.011** | 0.0306 | **3.578** | **0.069** | 0.0416 |

Table 2: Results on both ModelNet and ModelLoNet datasets. The best results for each criterion are labeled in bold, and the best results of unsupervised methods are underlined.
| Loss | ModelNet RRE ↓ | ModelNet RTE ↓ | ModelNet CD ↓ | ModelLoNet RRE ↓ | ModelLoNet RTE ↓ | ModelLoNet CD ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| CC | 6.985 | 0.087 | 0.0357 | 8.176 | 0.084 | 0.0483 |
| SC | 5.898 | 0.045 | 0.0314 | 8.104 | 0.081 | 0.0470 |
| LC | 7.871 | 0.046 | 0.0393 | 8.790 | 0.091 | 0.0482 |
| SC + LC | 3.742 | 0.062 | 0.0324 | 5.835 | 0.084 | 0.0334 |
| CC + LC | 3.867 | 0.059 | 0.0314 | 5.256 | **0.061** | 0.0422 |
| CC + SC | 3.421 | 0.048 | 0.0360 | 5.229 | 0.064 | 0.0423 |
| CC + SC + LC | **1.331** | **0.011** | **0.0306** | **3.578** | 0.069 | **0.0416** |

Table 3: The results of different combinations of loss functions in both ModelNet and ModelLoNet datasets. The best results for each criterion are labeled in bold.
Figure 3: Example qualitative registration results for 3DLoMatch. The unsuccessful cases are enclosed in red boxes.
We also offer registration results for ModelLoNet with more than 50.0% partial overlap in Fig. 5. Compared with the recent SOTA unsupervised method RIENet, our UDPReg recovers the transformation more accurately on the challenging ModelLoNet dataset.
**Loss Functions.** We trained our model with different combinations of the local contrastive loss (LC), cross-consistency loss (CC), and self-consistency loss (SC), with experiments conducted on both ModelNet and ModelLoNet. Table 3 shows that the cross-consistency, self-consistency, and local contrastive losses all boost registration precision. When a single loss is used, the self-consistency loss achieves the best results, while the local contrastive loss performs the worst on all metrics on both datasets.
**Influence of the Number of Clusters.** We assess the effect of the number of clusters \(L\) for ModelNet and ModelLoNet. We trained UDPReg with different values of \(L\), from 4 to 160, and report the results in Table 4. UDPReg achieves the best results with \(L{=}64\) on both benchmarks. The results are stable for \(16{\leq}L{\leq}96\). This suggests that the number of clusters has little influence as long as there are "enough".
**Importance of Individual Modules.** In the registration process, UDPReg extracts hierarchical correspondences from clusters to points. Therefore, we further explore the efficiency of the hierarchical registration strategy. Table 5 reports the results on ModelNet and ModelLoNet, where _Cluster_, _Point_, and _Cluster-point_ indicate distribution-level, point-level, and distribution-based point-level correspondences, respectively. In the first experiment, we only used distribution-level correspondences for point cloud registration. Unsurprisingly, it performs worse on all metrics, indicating UDPReg benefits from point-level matching. In the second experiment, we directly predict the point-level correspondences to estimate transformation by performing feature matching. Its performance is still worse than that of the hierarchical registration strategy, further showing the effectiveness of our correspondence prediction strategy.
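As noted in the conclusion below, the distribution-level correspondences are predicted with the Sinkhorn algorithm. The following log-domain Sinkhorn sketch is only illustrative: the cost definition, uniform marginals, and hyperparameters are our own simplifications, not the paper's exact formulation.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_assignment(cost, n_iters=50, eps=0.05):
    """Entropy-regularized soft assignment between two cluster sets.

    cost: (L1, L2) cost matrix, e.g. distances between cluster descriptors.
    Returns a soft correspondence matrix whose rows/columns approximately
    sum to uniform marginals 1/L1 and 1/L2.
    """
    log_K = -cost / eps                      # log-domain Gibbs kernel
    u = np.zeros(cost.shape[0])
    v = np.zeros(cost.shape[1])
    for _ in range(n_iters):
        # alternate row/column scalings in log space
        u = np.log(1.0 / cost.shape[0]) - logsumexp(log_K + v[None, :], axis=1)
        v = np.log(1.0 / cost.shape[1]) - logsumexp(log_K + u[:, None], axis=0)
    return np.exp(log_K + u[:, None] + v[None, :])

# Example with random descriptors for two sets of 64 clusters.
feat_x, feat_y = np.random.rand(64, 32), np.random.rand(64, 32)
cost = np.linalg.norm(feat_x[:, None, :] - feat_y[None, :, :], axis=-1)
P = sinkhorn_assignment(cost)  # (64, 64) soft cluster correspondences
```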
## 5 Conclusion
This paper presents a distribution consistency-based unsupervised deep probabilistic registration framework. One of the advantages of this method is that it extends the probabilistic registration to handle point cloud registration with partial overlaps by adopting the Sinkhorn algorithm to predict distribution-level correspondences. Moreover, we propose self-consistent, cross-consistent, and local-contrastive losses to train feature extractors in an unsupervised manner. Experiments demonstrate that the proposed algorithm achieves the best performance.
| Clusters | ModelNet RRE ↓ | ModelNet RTE ↓ | ModelNet CD ↓ | ModelLoNet RRE ↓ | ModelLoNet RTE ↓ | ModelLoNet CD ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| 4 | 1.504 | 0.009 | 0.0366 | 4.348 | 0.068 | 0.0452 |
| 16 | 1.439 | 0.007 | 0.0334 | 3.713 | 0.057 | 0.0424 |
| 32 | **1.305** | 0.014 | 0.0339 | 3.659 | 0.058 | 0.0419 |
| 64 | 1.331 | **0.011** | **0.0306** | **3.578** | **0.069** | **0.0416** |
| 96 | 1.454 | 0.021 | 0.0367 | 3.598 | 0.070 | 0.0428 |
| 128 | 1.468 | 0.009 | 0.0310 | 4.440 | 0.057 | 0.0399 |
| 160 | 1.530 | 0.009 | 0.0338 | 4.564 | 0.059 | 0.0422 |

Table 4: Ablation study results of UDPReg on ModelNet40 with different number of clusters \(L\). The best results for each criterion are labeled in bold.
Figure 4: Registration results of different methods on ModelNet with more than 70% partial overlaps.
Figure 5: Registration results of different methods on ModelLoNet with more than 50% partial overlaps.
| Method | ModelNet RRE ↓ | ModelNet RTE ↓ | ModelNet CD ↓ | ModelLoNet RRE ↓ | ModelLoNet RTE ↓ | ModelLoNet CD ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| Cluster | 3.932 | 0.033 | 0.0330 | 6.018 | 0.182 | 0.0463 |
| Point | 2.505 | 0.014 | 0.0311 | 4.264 | 0.096 | 0.0431 |
| Cluster-Point | **1.331** | **0.011** | **0.0306** | **3.578** | **0.069** | **0.0416** |

Table 5: Ablation study of individual modules on ModelNet and ModelLoNet. The best performance is highlighted in bold.
2305.17378 | Improving Generalization in Language Model-Based Text-to-SQL Semantic
Parsing: Two Simple Semantic Boundary-Based Techniques | Compositional and domain generalization present significant challenges in
semantic parsing, even for state-of-the-art semantic parsers based on
pre-trained language models (LMs). In this study, we empirically investigate
improving an LM's generalization in semantic parsing with two simple
techniques: at the token level, we introduce a token preprocessing method to
preserve the semantic boundaries of tokens produced by LM tokenizers; at the
sequence level, we propose to use special tokens to mark the boundaries of
components aligned between input and output. Our experimental results on two
text-to-SQL semantic parsing datasets show that our token preprocessing,
although simple, can substantially improve the LM performance on both types of
generalization, and our component boundary marking method is particularly
helpful for compositional generalization. | Daking Rai, Bailin Wang, Yilun Zhou, Ziyu Yao | 2023-05-27T06:09:03Z | http://arxiv.org/abs/2305.17378v1 | # Improving Generalization in Language Model-Based Text-to-SQL
###### Abstract
Compositional and domain generalization present significant challenges in semantic parsing, even for state-of-the-art semantic parsers based on pre-trained language models (LMs). In this study, we empirically investigate improving an LM's generalization in semantic parsing with two simple techniques: at the _token_ level, we introduce a token preprocessing method to preserve the semantic boundaries of tokens produced by LM tokenizers; at the _sequence_ level, we propose to use special tokens to mark the boundaries of components aligned between input and output. Our experimental results on two text-to-SQL semantic parsing datasets show that our token preprocessing, although simple, can substantially improve the LM performance on both types of generalization, and our component boundary marking method is particularly helpful for compositional generalization.1
Footnote 1: The source code for our implementation is available at [https://github.com/Dakingrai/ood-generalizatio-semantic-boundary-techniques](https://github.com/Dakingrai/ood-generalizatio-semantic-boundary-techniques).
## 1 Introduction
Pre-trained language models (LMs)2 such as T5 Raffel et al. (2020) have now been more and more widely adopted for semantic parsing due to their promising performance and straightforward architectures Shaw et al. (2021); Scholak et al. (2021); Yin et al. (2021); Qi et al. (2022); Xie et al. (2022); Qiu et al. (2021). However, recent work revealed that these LMs still struggle to generalize on out-of-distribution (OOD) samples Lake and Baroni (2018); Keysers et al. (2019); Shaw et al. (2021); Qiu et al. (2022). For example, if a parser has learned "how many heads are in the department" and "how many people are older than 56", it is expected to generalize to "how many heads of the departments are older than 56".
Footnote 2: We use “LMs” to refer to a broad set of models that are pre-trained in (masked/autoregressive) language modeling objectives, with encoder-decoder or decoder-only architecture.
Generalizing to such novel component compositions is known as _compositional generalization_. Additionally, generalizing to new domains (e.g., from "entertainment" to "flight") is referred to as _domain generalization_.
In this paper, we investigate these two types of generalization of LMs in text-to-SQL semantic parsing, i.e., given a natural language (NL) input and the database schema, producing a SQL query that can be executed against the database for desired output. We conduct experiments using the cross-database Spider benchmark Yu et al. (2018) and its derivation Spider-CG Gan et al. (2022). Compared with existing benchmarks Keysers et al. (2019); Lake and Baroni (2018), this task setting is both more realistic (e.g., containing larger language variations) and more challenging (e.g., requiring grounding to the database context).
Although previous work tackling the two types of generalization all requires non-trivial engineering effort (see Section 2), in this work, we present two simple yet effective techniques, which are extremely easy to implement with LMs (Table 1). Our techniques improve the generalization of LMs by preserving the _semantic boundaries_ at the token and the sequence levels. At the token level, our first technique rewrites the inputs to handle naming conventions in database schemas and SQL queries such that a pre-trained LM tokenizer can split them into semantically meaningful tokens.
At the sequence level, our second technique introduces special tokens to mark the semantic boundaries (e.g., phrases) aligned between the source NL and the target SQL. These special tokens implicitly help the LM-based parser build more precise input-output correspondences that are crucial for compositional generalization.
On five evaluation sets, the experimental results based on T5-base show that, albeit simple, our token-level technique dramatically improves both types of LM generalization, and our sequence-level technique is particularly helpful for compositional generalization. Combining them together leads to further improvements. Our additional experiments further demonstrate the generalizability of our approaches (e.g., to text-to-LISP expression parsing (Semantic Machines et al., 2020)).
## 2 Related Work
Text-to-SQL Semantic Parsing.This task has received consider attention since the creation of the WikiSQL Zhong et al. (2017) and Spider Yu et al. (2018) datasets. While a large amount of existing work designed specialized architectures for this task Yu et al. (2018); Zhang et al. (2019); Wang et al. (2020); Lin et al. (2020), there has been a trend of directly fine-tuning pre-trained sequence-to-sequence models as semantic parsers Shaw et al. (2021); Scholak et al. (2021); Xie et al. (2022); Qi et al. (2022). Our work follows the same line and proposed approaches to further improve the LM performance. On the other hand, Guo et al. (2019); Gan et al. (2021); Herzig et al. (2021) showed that simplifying the SQL representation in a way that the new representation can semantically better align with the NL can dramatically improve the parsing performance. In our work, we follow the NatSQL representation Gan et al. (2021) as it has better alignments with the NL.
Injecting Priors into Semantic Parsers.Our two techniques can be viewed as injecting human prior knowledge into neural models for better generalization, which has been one of the major research efforts on improving domain and compositional generalization. The key consideration to be taken when injecting priors is the trade-off between the form and the generalizability. Strong priors in the form of specialized model architectures Shaw et al. (2021); Herzig and Berant (2021); Wang et al. (2021) are either too expensive or not applicable across domains. Weaker priors in terms of specialized training algorithms Yin et al. (2021); Conklin et al. (2021) are more general, but often weaker in performance compared to other lines of methods. Our work is in the spirit of the third line on the use of data augmentation Andreas (2020); Akyurek et al. (2020); Qiu et al. (2022). However, instead of synthesizing new data from scratch, we "annotate" the data with semantic boundary markers, which is not only much simpler but also brings better performance. The final line of work Qiu et al. (2022); Levy et al. (2022) is based on the learning capacities in the context of large LMs, which is out of the scope of this work.
## 3 Methods
### Token Preprocessing
We present our two techniques for improving the generalization of LM-based semantic parsers. LM pre-training learns high-quality contextualized word representations Devlin et al. (2019), but to use them effectively on a downstream task, the tokenization needs to "make sense." For example, if the text "pet_age" is tokenized as "pet", "_" and "age", then the semantics of "pet" and "age" acquired during pre-training can be directly used. However, if it is tokenized as "pe", "t_a" and "ge", then pre-training is hardly useful because the model does not even recognize the two semantic words.

Unfortunately, this latter case is very common when tokenizing non-natural language texts, such as database schemas and SQL queries. Thus, we propose a token preprocessing method to induce more natural tokenization by, at a high level, adding white spaces and handling the naming conventions in database schemas and SQL queries. We show examples in Table 2 and details in Appendix A.

| Before preprocessing | After preprocessing |
| --- | --- |
| _Snake case in schema items (add space)_ | |
| booking_status_code | booking _ status _ code |
| document_type | document _ type |
| _Dot notation in column references (add space)_ | |
| farm.cows | farm . cows |
| origin.flight | origin . flight |
| _SQL keyword (expand spelling)_ | |
| avg | average |
| desc | descending |

Table 2: Three token preprocessing types. Coloring indicates tokenization, same as Table 1.
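A simplified sketch of this kind of preprocessing is given below; the exact rules are specified in Appendix A of the paper, so the regular expressions and the keyword list here are our own illustrative choices.

```python
import re

# Illustrative keyword expansions; the paper's full list may differ.
KEYWORD_EXPANSIONS = {"avg": "average", "desc": "descending", "asc": "ascending"}

def preprocess_tokens(text: str) -> str:
    """Insert spaces around snake_case and dot-notation separators so that a
    subword tokenizer splits schema items into semantically meaningful pieces."""
    text = re.sub(r"_", " _ ", text)                # snake case: pet_age -> pet _ age
    text = re.sub(r"(?<=\w)\.(?=\w)", " . ", text)  # dot notation: farm.cows -> farm . cows
    text = " ".join(KEYWORD_EXPANSIONS.get(tok.lower(), tok) for tok in text.split())
    return re.sub(r"\s+", " ", text).strip()

# Example: preprocess_tokens("booking_status_code") -> "booking _ status _ code"
```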
### Component Boundary Marking
At the sequence level, our second technique further assists LMs in recognizing the semantic boundaries of components aligned between input and output. An example is shown in Table 1. While prior work has attempted the goal via implementing alignment-based attention supervision Yin et al. (2021), we propose to insert _special tokens_ in input and output to inject such bias. Specifically, we use pairs of "[sep/N]" and "[/sep/N]", \(N\in\mathbb{Z}\), to mark the boundaries, so as to hint the LM that components within the paired special tokens should be aligned.
In practice, we also observed cases where an NL component has to be aligned with a SQL component consisting of multiple non-continuous segments. To handle it, we will apply the same pair of special tokens to each segment of the same component. An example is shown in Table 8 in the Appendix.
Finally, we note that our method assumes the availability of component annotations. Such annotations can be obtained via human labeling Gan et al. (2021), heuristic rules Yin et al. (2021), or other advanced machine learning algorithms, but this is beyond the scope of our work.
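A minimal sketch of the marking step is shown below; the input data structure for the component annotations is hypothetical, and the example alignment in the comment is illustrative rather than taken from Spider-SS.

```python
def mark_component_boundaries(nl_components, sql_components):
    """Insert [sep/N] ... [/sep/N] around aligned components.

    nl_components:  list of NL sub-question strings, in order.
    sql_components: list of lists; sql_components[i] holds the (possibly
                    non-contiguous) SQL/NatSQL segments aligned to nl_components[i].
    """
    nl_marked, sql_marked = [], []
    for i, (nl_part, sql_parts) in enumerate(zip(nl_components, sql_components)):
        nl_marked.append(f"[sep/{i}] {nl_part} [/sep/{i}]")
        for seg in sql_parts:  # repeat the same marker pair for every segment
            sql_marked.append(f"[sep/{i}] {seg} [/sep/{i}]")
    return " ".join(nl_marked), " ".join(sql_marked)

# Hypothetical example:
# mark_component_boundaries(
#     ["how many heads of the departments", "are older than 56"],
#     [["select count(*) from head"], ["where head.age > 56"]],
# )
```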
## 4 Experiments
### Setup
Datasets.We use two datasets, Spider Yu et al. (2018) and Spider-CG Gan et al. (2022). Spider consists of a training set (\(\text{Spider}_{T}\)) and a development set (\(\text{Spider}_{D}\)) with non-overlapping domains but otherwise similar data characteristics (e.g., length). Thus, we train the models on \(\text{Spider}_{T}\), and consider \(\text{Spider}_{D}\) as the evaluation for domain generalization. Spider-CG is derived from Spider by first dissecting each Spider instance into different components according to its dependency parse and generates data in two ways: substituting a component in one instance with one from another instance and appending one component from one instance to another instance. Depending on whether the instances come from the Spider training or development set, we get four splits: CG-SUB\({}_{T}\), CG-SUB\({}_{D}\), CG-APP\({}_{T}\) and CG-APP\({}_{D}\), all of which are only used for evaluation. The instances created under substitution share similar data characteristics while those under appending are much longer, so a good model performance on the latter requires compositional generalization. Table 3 summarizes the dataset information. In addition, we use the NatSQL representation Gan et al. (2021) throughout the experiment due to its better alignment with the NL input.
Evaluation Metrics.We follow the standard Spider benchmarking and employ two evaluation metrics. **Exact Match (EM)** compares the generated and the ground-truth query by performing exact set matching at the lexical level Yu et al. (2018). **Execution Match (EX)** measures whether executing the generated query on the given database can yield the same results as using the ground truth. Notably, for a fair comparison with existing semantic parsers on the Spider leader board, we follow Gan et al. (2022), convert each generated NatSQL query into a SQL query, and report the evaluation results based on the converted SQL query.
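For reference, a simplified sketch of the Execution Match check is given below; it compares result multisets with Python's built-in sqlite3, whereas the official Spider evaluation scripts handle value normalization and ordered queries more carefully.

```python
import sqlite3
from collections import Counter

def execution_match(pred_sql: str, gold_sql: str, db_path: str) -> bool:
    """Return True if both queries yield the same result multiset on the database."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # an un-executable prediction counts as a mismatch
    finally:
        conn.close()
    return Counter(pred_rows) == Counter(gold_rows)
```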
Models, Baselines, and Implementation.We evaluate our proposed techniques by applying them to the pre-trained T5 model Raffel et al. (2020). Our experiments are conducted using T5-base, with the use of database contents following Lin et al. (2020). As our second technique leverages component boundary labels to encourage the compositional generalization of LM, we compare it with a baseline Yin et al. (2021) which similarly assumes the labels but utilizes them in a more complicated way, i.e., transforming the component alignments into supervision on the cross attention between input and output of the LM. We denote this baseline
as **Attn. Sup**.3 For both methods, we leverage component annotations from Spider-SS Gan et al. (2022).

Footnote 3: In our implementation, we apply the supervision to cross-attention distribution averaged across all decoder layers and heads. We also tried cross-attention from only the top decoder layer, but the results are similar.

These annotations were generated by applying a syntactic parser to decompose the NL question into sub-questions and then manually annotating their corresponding NatSQL components.

| Dataset | Size | Usage | Generalization Type |
| --- | --- | --- | --- |
| Spider\({}_{T}\) | 7,000 | Train | None (in-distribution) |
| Spider\({}_{D}\) | 1,034 | Eval | Domain |
| CG-SUB\({}_{T}\) | 20,686 | Eval | None (in-distribution) |
| CG-SUB\({}_{D}\) | 2,883 | Eval | Domain |
| CG-APP\({}_{T}\) | 18,793 | Eval | Composition |
| CG-APP\({}_{D}\) | 3,237 | Eval | Domain & Composition |

Table 3: Datasets in our experiments.
We also compare with the state-of-the-art models, RATSQL\({}_{B(S)}\) and RATSQL\({}_{G(S)}\), from Gan et al. (2022), although their models adopt a specialized architecture (i.e., RATSQL Wang et al. (2020)) and RATSQL\({}_{G(S)}\) additionally employed task-specific pre-training Shi et al. (2021). Both models used the same component annotations from Spider-SS.
Finally, for each of our model variants in Table 4, we repeat the experiment three times, using three random seeds consistently across all models, and report the average results. We include more implementation details in Appendix D.
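As a hedged illustration of how the boundary markers from Section 3.2 might be registered in a T5 fine-tuning pipeline with Hugging Face Transformers (the training loop and hyperparameters are omitted, and MAX_COMPONENTS is our assumption, not a value from the paper):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

MAX_COMPONENTS = 16  # assumed upper bound on components per example
special_tokens = [f"[sep/{i}]" for i in range(MAX_COMPONENTS)] + \
                 [f"[/sep/{i}]" for i in range(MAX_COMPONENTS)]

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Register the boundary markers as atomic tokens and extend the embedding matrix.
tokenizer.add_tokens(special_tokens)
model.resize_token_embeddings(len(tokenizer))

# Inputs/targets marked with [sep/N] ... [/sep/N] can now be tokenized as usual.
batch = tokenizer("question: [sep/0] how many heads [/sep/0] ...", return_tensors="pt")
```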
### Results
Main Results.We present our results in Table 4. First, all models obtain the best performance on the in-distribution evaluation set CG-SUB\({}_{T}\) while suffering from more than 10% performance drops on others, confirming the challenges of the domain and compositional generation. As expected, all models have the worst performance on CG-APP\({}_{D}\), which requires both types of generalization. Between the two types, it is also observed that compositional generalization (as measured by CG-APP\({}_{T}\)) is more challenging than domain generalization (as measured by Spider\({}_{D}\) and CG-SUB\({}_{D}\)).
Second, our results show that the token preprocessing method, albeit simple, can improve both domain and compositional generalizations of LMs dramatically. For example, comparing T5-base with T5-base+Tok, the latter is improved by around 5-7% EM and 7% EX for domain generalization (on Spider\({}_{D}\) and CG-SUB\({}_{D}\)), 5% EM and 3.5% EX for compositional generalization (on CG-SUB\({}_{T}\)), and 9% EM and 11% EX for the challenging case when both types occur (on CG-APP\({}_{D}\)). Additionally, we also show the effectiveness of token pre-processing with T5-3B on Spider\({}_{D}\) in App. B.
Moving on to our proposed component boundary marking method, it shows to be particularly helpful for compositional generalization. Specifically, applying it to T5-base leads to a 9% EM and 7% EX increase on CG-APP\({}_{T}\), and an 8% EM and 8% EX increase on CG-APP\({}_{D}\). On the in-distribution evaluation set, this technique also gives slight improvement, whereas, for domain generalization, there is no obvious impact from this technique.
Finally, augmenting T5-base with both techniques (i.e., T5-base+Tok+Comp) leads to better performance than applying each technique individually in most evaluation sets, implying that our two techniques are complementary to each other. Specifically, for in-distribution evaluation, using each technique individually or both of them together yield similar results; for domain generalization, there is no additional gain from applying
component boundary marking on top of the token preprocessing; for compositional generalization, the two techniques together contribute the best EM across all models and baselines. Overall, combining the two techniques shrinks the performance gap between in-distribution and domain OOD by around 2-4% EM, composition OOD by 7%, and joint OOD by 13%.

| Model | Spider\({}_{D}\) EM | Spider\({}_{D}\) EX | CG-SUB\({}_{T}\) EM | CG-SUB\({}_{T}\) EX | CG-SUB\({}_{D}\) EM | CG-SUB\({}_{D}\) EX | CG-APP\({}_{T}\) EM | CG-APP\({}_{T}\) EX | CG-APP\({}_{D}\) EM | CG-APP\({}_{D}\) EX |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| _Semantic Parsers with Specialized Architectures (Gan et al., 2022)_ | | | | | | | | | | |
| RATSQL\({}_{B(S)}\) | 71.9 | - | 91.0 | - | 72.6 | - | 79.8 | - | 61.5 | - |
| RATSQL\({}_{G(S)}\) | **74.5** | - | **91.4** | - | **76.7** | - | **82.5** | - | **68.3** | - |
| _Semantic Parsers based on LMs_ | | | | | | | | | | |
| T5-base | 64.6 | 67.9 | 83.8 | 88.1 | 69.1 | 71.1 | 60.2 | 70.3 | 45.0 | 54.9 |
| T5-base + Tok | 71.8 | 75.6 | 85.9 | 89.5 | 74.1 | 78.6 | 65.2 | 73.8 | 54.2 | 65.9 |
| T5-base + Comp | 64.4 | 68.2 | 86.3 | 90.2 | 69.3 | 73.1 | 69.8 | 77.9 | 53.5 | 63.4 |
| T5-base + Tok + Comp | 69.4 | 73.2 | 86.6 | 90.7 | 76.6 | 79.8 | 71.1 | 77.8 | 61.0 | 69.4 |
| T5-base + Tok + Attn. Sup | 69.4 | 73.7 | 83.6 | 87.7 | 71.7 | 75.6 | 62.3 | 70.8 | 56.3 | 66.2 |

Table 4: Results (%) on different evaluation sets. Top: state-of-the-art model using specialized architecture; numbers are collected from its paper and only EM is reported (code unavailable). Bottom: T5-base models with our proposed or baseline techniques; we report the average performance of each model over three runs. **Tok**: token preprocessing. **Comp**: component boundary marking. **Attn. Sup**: the attention supervision method of Yin et al. (2021).
**Compared with Special Architectures.** Despite its simplicity, our T5-base+Tok+Comp model achieves comparable or better performance than the two RATSQL variants on CG-SUB\({}_{D}\). It also performs comparably to RATSQL\({}_{B(S)}\) on CG-APP\({}_{D}\).
**Compared with Attn. Sup.** Surprisingly, the attention supervision has only led to around 2% EM and 1.5% EX gains on CG-APP\({}_{D}\), while no further advantage is observed on other evaluation sets. In our conjecture, this is due to the misalignment between the objective of Attn. Sup Yin et al. (2021) and the attention mechanism of pre-trained LMs. Specifically, Attn. Sup encourages the attention distribution of different heads to be consistent with the component alignment supervision. However, prior work Voita et al. (2019) suggests that different attention heads of even the same layer may have different functions and roles. Thus, when coarsely defining the objective function, it may not allow for the most effective supervision. Furthermore, similar to our finding, Yin et al. (2021) did not observe performance gain when they applied Attn. Sup to T5-base on CFQ Keysers et al. (2020).
**Qualitative Analysis on Tokenization.** To qualitatively understand how our token preprocessing helps the generalization, we randomly sampled 50 examples from the Spider\({}_{D}\) to analyze how frequently the T5 tokenizer divides tokens into less meaningful subtokens. Consequently, we found 243 tokenization issues in total, and 140 of them can be resolved by our token preprocessing. The remaining cases are like splitting "id" into "i" and "d" as shown in Table 1, which is beyond our scope.
**Error Analysis on Component Boundary Marking.** We manually examined 50 error predictions from T5-base+Tok+Comp and contrasted them with the errors of T5-base+Tok. Intriguingly, we observed much more frequent schema items or value hallucinations from the former. For example, it may generate queries accessing non-existing columns in a table, or misspells the literal values in the queries. We conjecture that this is because our component boundaries are only applied to the NL input, not the database schema (note that literal values are grounded and attached to schema items in their input representations; see Appendix D for details). This reveals a new challenge of LM generalization in text-to-SQL semantic parsing, i.e., how to properly handle the database schema when injecting prior knowledge into LMs for compositional generalization.
**Generalizing to Other Semantic Parsing Tasks.** While our main focus in this work is on text-to-SQL parsing, we also investigate whether our approaches can generalize beyond this specific task. To this end, we implemented both of our techniques to SMCalFlow-CS Yin et al. (2021), a compositional generalization dataset for text-to-LISP expression parsing Semantic Machines et al. (2020). For "+Comp", We utilize the span-level alignments heuristically derived by Yin et al. (2021) as component annotations.4 Our results in Table 5 show that: (1) Our token preprocessing can be universally helpful for LMs to model schema items, predicates, etc., leading to 1.2% performance gain over T5-base; (2) Our component boundary marking method is highly effective for compositional generalization, which offers 2.6% additional gain.
Footnote 4: Yin et al.’s approach requires knowing the ground-truth LISP expression when deriving the component boundaries for the input question. In our experiment, we assume the availability of these question boundaries at test time and focus on showcasing the potential of “Comp”, while automating this question decomposition is left as future work.
## 5 Conclusion
In this paper, we present two simple yet effective techniques to improve the domain and compositional generalization of LMs in text-to-SQL semantic parsing. Our techniques aid LMs in preserving the semantic boundaries of tokens and components in their input and output. We also demonstrate their potential to be generalized to other semantic parsing tasks.
| Model | Exact Match |
| --- | --- |
| COARSE2FINE + SS (Span-level Sup.) | 47.4 |
| T5-base | 63.9 |
| T5-base + Tok | 65.1 |
| T5-base + Tok + Comp | **67.7** |

Table 5: Results (%) on SMCalFlow-Compositional Skills dataset (16-shot setting). Top: Result from Yin et al. (2021). Bottom: T5-base models with our proposed or baseline techniques; we report the average performance of each model over three runs.
## Limitations
Future work can further apply our approaches to other semantic parsing tasks. For example, for parsing texts to lambda-calculus expressions for knowledge base question answering Dong and Lapata (2016), one can similarly preprocess the schema items (e.g., "department_time" into "department_time") and typed values (e.g., "dallas:ci" into "dallas : ci") for more meaningful subword tokenization results. In addition, our experiments are based on T5. To further verify the effectiveness of our techniques, one can apply them to other pre-trained language models such as BART Lewis et al. (2020) and GPT-2 Radford et al. (2019) as well.
## Acknowledgments
We would like to thank all anonymous reviewers for their constructive comments. We also thank Yujian Gan and Xinyun Chen for their help in using the NatSQL and the Spider-SS datasets, as well as Pengcheng Yin for using the code base of Attn. Sup. This project was supported by resources provided by the Office of Research Computing at George Mason University ([https://orc.gmu.edu](https://orc.gmu.edu)) and funded in part by grants from the National Science Foundation (Awards Number 1625039 and 2018631).
|
2306.16536 | New Insight into the FS CMa System MWC 645 from Near-Infrared and
Optical Spectroscopy | The B[e] phenomenon is manifested by a heterogeneous group of stars
surrounded by gaseous and dusty circumstellar envelopes with similar physical
conditions. Among these stars, the FS CMa-type objects are suspected to be
binary systems, which could be experiencing or have undergone a mass-transfer
process that could explain the large amount of material surrounding them. We
aim to contribute to the knowledge of a recently confirmed binary, MWC 645,
which could be undergoing an active mass-transfer process. We present
near-infrared and optical spectra, identify atomic and molecular spectral
features, and derive different quantitative properties of line profiles. Based
on publicly available photometric data, we search for periodicity in the light
curve and model the spectral energy distribution. We have detected molecular
bands of CO in absorption at 1.62 $\mu$m and 2.3 $\mu$m for the first time. We
derive an upper limit for the effective temperature of the cool binary
component. We found a correlation between the enhancement of the H$\alpha$
emission and the decrease in optical brightness that could be associated with
mass-ejection events or an increase in mass loss. We outline the global
properties of the envelope, possibly responsible for brightness variations due
to a variable extinction, and briefly speculate on different possible
scenarios. | Andrea F. Torres, María L. Arias, Michaela Kraus, Lorena V. Mercanti, Tõnis Eenmäe | 2023-06-28T20:20:02Z | http://arxiv.org/abs/2306.16536v1 | # New Insight into the FS CMa System MWC 645 from Near-Infrared and Optical Spectroscopy
###### Abstract
The B[e] phenomenon is manifested by a heterogeneous group of stars surrounded by gaseous and dusty circumstellar envelopes with similar physical conditions. Among these stars, the FS CMa-type objects are suspected to be binary systems, which could be experiencing or have undergone a mass-transfer process that could explain the large amount of material surrounding them. We aim to contribute to the knowledge of a recently confirmed binary, MWC 645, which could be undergoing an active mass-transfer process. We present near-infrared and optical spectra, identify atomic and molecular spectral features, and derive different quantitative properties of line profiles. Based on publicly available photometric data, we search for periodicity in the light curve and model the spectral energy distribution. We have detected molecular bands of CO in absorption at 1.62 \(\upmu\)m and 2.3 \(\upmu\)m for the first time. We derive an upper limit for the effective temperature of the cool binary component. We found a correlation between the enhancement of the H\(\alpha\) emission and the decrease in optical brightness that could be associated with mass-ejection events or an increase in mass loss. We outline the global properties of the envelope, possibly responsible for brightness variations due to a variable extinction, and briefly speculate on different possible scenarios.
stars: emission-line, Be; stars: peculiar; stars: individual: MWC 645; circumstellar matter; binaries: general; techniques: spectroscopic
## 1 Introduction
In their evolution, some B-type stars undergo phases that are still puzzling for astrophysicists even after years of study since they develop certain peculiarities that are not yet well understood. The B[e] phenomenon displayed by several B-type stars in their optical spectra is an example of them. Its manifestation can be seen through the presence of permitted and forbidden low-excitation emission lines of neutral and low ionization metals arising from circumstellar (CS) gas and large infrared excess due to CS dust [1]. The phenomenon is associated with stars with different initial masses, isolated or in binary systems, transiting different evolutionary stages, such as supergiants, compact planetary nebulae, Herbig Ae/Be stars, and symbiotic systems [2]. Despite the cited differences among the stars, the physical conditions of their CS gaseous and dusty envelopes are similar, which is a crucial factor when seeking to understand the development of the phenomenon. Furthermore, since the CS envelopes veil the photospheric features of the central objects, it is difficult to determine their spectral types and evolutionary states. Therefore, stars without a proper classification comprise the category named "Unclassified B[e] stars" (UnclB[e]).
Nearly a decade after Lamers' classification, Miroshnichenko proposed a new group called FS CMa stars [3]. The observational defining criteria are 1) the presence of a hot-star (between O9 and A2 spectral types) continuum with emission lines of H I, Fe II, O I, [Fe II], [O I], and Ca II; 2) an infrared spectral energy distribution (SED) that shows a large excess with a maximum at 10-30 \(\upmu\)m and a strong decrement beyond these wavelengths; and 3) a location outside a region of star formation. According to these properties, almost all the UnclB[e] objects in Lamers et al.'s publication belong to this group [4], which has approximately seventy members, including both confirmed and candidate ones [5]. They are suspected to be binaries at a post-mass-exchange evolutionary phase, with a secondary component that is either fainter and cooler than the primary or degenerate [3; 6; 7]. Since the mass-loss rates predicted by single-star theory [8; 9] cannot explain the existence of such a large amount of CS matter, a mass-transfer process in a binary system is a likely explanation. However, only a few objects of this class have been confirmed as binary systems, probably due to the scarcity of observational data available to discover them and the difficulties in detecting signs of binarity caused by the presence of CS matter, the intrinsic stellar variability, and the low brightness of most members of the FS CMa group [10]. Recently, Miroshnichenko et al. [11] published a review of FS CMa objects, in which they reported fifteen stars as binaries and six as binary-system candidates.
MWC 645 (= V2211 Cyg, \(\alpha\) = 21:53:27.49, \(\delta\) = +52:59:58.01; V = 13.0, H-K = 1.53, J-H = 1.267) was originally included in the supplement of the Catalogue of Mount Wilson about A and B stars with bright H I spectral lines [12]. The presence of strong double emission lines of Fe II and [Fe II] (with a radial velocity difference between the red and blue peaks of 150 km s\({}^{-1}\)) and triple-peaked profiles of the H\(\gamma\) and H\(\delta\) transitions were reported by Swings and Allen [13]. They also remarked striking spectral similarities between MWC 645 and \(\eta\) Car. Also, permitted and forbidden transitions of low excitation and ionization potential belonging to Ti II, Cr II, [O I], and [N II] were observed [1]. Photometric variations were found by Gottlieb and Liller [14] with an amplitude of 0.3 mag and a possible period of 23.6 years. A deep spectroscopic study was done by Jaschek et al. [15] that revealed no stellar absorption features. These authors concluded that possibly MWC 645 is a late B-type object based on the absence of He II lines and the weakness of the He I lines at \(\lambda\) 6678 A and \(\lambda\) 7065 A possibly detected once, each one at different years. They did not find spectral transitions from C, Ne, and Mg atoms or ions, but they found lines of K I and Cu II (typically seen in stellar types later than F) and Zr II (usually seen in stars later than A0-type). They highlighted the extreme spectroscopic variability of MWC 645 over the years. Lamers et al. [2] included it in the UnclB[e] stars group. MWC 645 has IRAS flux ratios that locate it in the region occupied by OH/IR stars [16]. Zickgraf [17] detected a characteristic asymmetric profile for the emission metal lines, with a steep red flank and a blue wing. He reported the splitting in the central emission of [O I] and [Fe II] lines and a peculiar emission profile of the H\(\alpha\) line showing a broad blue and a narrow red component with a full width at half maximum (FWHM) of 5.0 A and 1.3 A, respectively. He proposed a latitude-dependent wind model with a large optical depth dust disk at an intermediate inclination to explain the asymmetric line profiles and their splitting. Marston and McCollum [18] obtained H\(\alpha\) narrow band imaging and found no visible extended emission associated with the star.
Recently, Nodyarov et al. [19; 20] studied high-resolution optical spectra of MWC 645 taken in two different years. He found absorption lines of neutral metals, such as Li I, Ca I, Fe I, Ti I, V I, and Ni I, typically present in cool stellar spectra, with a different average radial velocity in each spectrum, that revealed the binary nature of the object. However, they did not find any absorption line typical of a B-type object in any of their spectra. They disentangled the contribution of each stellar component and estimated their surface temperatures and luminosities (T\({}_{eff}\) = 18 000 \(\pm\) 2000 K and 4250 \(\pm\) 250 K, log (L/L\({}_{\odot}\)) = 4.0 \(\pm\) 0.5 and 3.1 \(\pm\) 0.3 for the hot and cool components, respectively). Low-resolution near-IR spectra displayed emission lines of the H I Paschen and Brackett series, as well as of Fe II, O I, N I, and He I. Photometric monitoring in the optical and near-IR regions
showed quasi-cyclic variations on both short and long timescales (months and \(\sim\) 4 years, respectively). The authors conclude that the star can be classified as an FS CMa-type object, whose intermediate-mass components (7 M\({}_{\odot}\) and 2.8 M\({}_{\odot}\)) undergo an ongoing mass-transfer process. From the shape of the spectral energy distribution, with weak emission peaks at about 10 \(\upmu\)m and 18 \(\upmu\)m, they inferred the presence of silicates in an optically thin dusty shell.
MWC 645 is one of the eight FS CMa objects in which absorption lines of neutral metals typical of late-type secondaries have been detected. To contribute to the study of this intriguing object, we decided to observe it in the near-IR to search for signatures of both stars, mainly of the cool component, that could help to characterize it. In addition, the acquisition of new optical spectra and the public availability of data (spectroscopic and photometric) that could shed some light on this complex system motivated us to analyze them. The paper is organized as follows: We present the infrared and optical observations used in this work in Section 2. In Sections 3 and 4, we analyze the data. In Section 5, we discuss the results. Finally, Section 6 contains the main conclusions.
## 2 Observations
### Near-Infrared Spectra
Near-infrared spectra were taken using the Gemini Near-Infrared Spectrograph (GNIRS, [21]) attached to the 8 m telescope at GEMINI-North (Hawaii) under the programs GN-2017A-Q-62, GN-2018A-Q-406, and GN-2022B-Q-225. On 6 June 2017, we obtained \(K\)-band spectra in long-slit mode centered at 2.35 \(\upmu\)m. The instrumental configuration used was a 110.5 l/mm grating, a 0.3 arcsec slit, and the short camera (0.15 arcsec/pix). We also acquired spectra with the same configuration but in cross-dispersed mode centered at 2.19 \(\upmu\)m and 2.36 \(\upmu\)m on 24 and 30 July 2018, respectively. The effective spectral coverage by these set-ups was 0.90 \(\upmu\)m-2.27\(\upmu\)m and 0.85 \(\upmu\)m-2.45 \(\upmu\)m, respectively, with gaps between the orders (the interval 1.36 \(\upmu\)m to 1.46 \(\upmu\)m is unusable due to saturated telluric lines). The resulting mean spectral resolving power of the spectra was R\(\sim\)5500. On 24 August 2022, \(L\)-band spectra were obtained with a different long-slit configuration: a 31.71/mm grating, a 0.1 arcsec slit, and the long camera (0.05 arcsec/pix), with two different central wavelengths (3.48 and 4.00 \(\upmu\)m). This configuration resulted in R\(\sim\)5100.
The spectra were taken in two ABBA nodding sequences along the slit. To account for telluric absorption, a late-B- or an early-A-type star close to the target in both time and position was observed. Stars of these spectral types are featureless in the observed wavelength range, except for hydrogen absorption lines that can be successfully removed in the reduction process by fitting theoretical line profiles. Flats were also acquired. The data were reduced with the Image Reduction and Analysis Facility (IRAF)/Gemini tasks. The sky contribution was removed by subtracting the AB pairs. The spectra were flat-fielded and telluric corrected. The wavelength calibration was performed using the telluric lines. The data were normalized to unity.
A and B positions were added to increase the signal-to-noise ratio (S/N). The final S/N ratio varies for the different spectral ranges, as it is affected by the quality of the telluric correction. Some regions are very polluted with telluric lines and it was impossible to make a complete cancellation, thus some residuals remain. In addition, for some spectral regions heavily crowded by emission lines, it becomes difficult to make accurate S/N ratio estimates. Table 1 summarizes the mean values of the S/N ratio for all our GNIRS near-IR observations.
### Complementary Data
Optical observations were carried out at Ondrejov Observatory, Czech Republic, using the Coude spectrograph [22] attached to the Perek 2 m telescope. We obtained spectra with a resolving power of R\(\sim\)12 000 covering a spectral range from 6262 A to 6735 A on 12 and 13 September 2018. We also acquired a spectrum centered at 8600 A. We used a grating of 830.77 1/mm with a SITe 2030 \(\times\) 800 CCD and a slit width of 0.7 arcsec. Additional observations were done at Tartu Observatory, Estonia, with the 1.5 m Cassegrain reflector AZT-12 on 1 November 2021, using the long-slit spectrograph ASP-32 with a 600 l/mm. The wavelength coverage extended from 5450 A to 7480 A. Data were processed using standard IRAF tasks. Spectra were bias and flat-field corrected, wavelength calibrated, heliocentric velocity corrected, and flux normalized.
We also searched for available optical spectra in the BeSS database [23]. We downloaded sixteen spectra taken between 2019 and 2022, with a resolving power R \(\sim\) 14,000/16,000 in the spectral range 6500-6600 A. In addition, we collected a lower resolution spectrum (R \(\sim\) 5000) acquired on 30 August 2019 that covers the range 6150-7000 A. The spectra were corrected by heliocentric velocity and normalized to the continuum using the standard IRAF tasks. We chose the same sample of continuum points to normalize all spectra. The telluric correction has not been applied.
In addition, we extracted from the public ASAS-SN database (All-Sky Automated Survey for Supernovae; Shappee et al. [24], Kochanek et al. [25]; [https://www.astronomy.ohio-state.edu/assassn/](https://www.astronomy.ohio-state.edu/assassn/), accessed on 2 February 2023) photometric data of this star obtained over eight years. The collection is composed of \(V\)-band magnitudes from 16 December 2014 to 29 November 2018 and \(g\)-band data from 12 April 2018 up to 17 January 2023. Furthermore, we collected ground- and space-based multicolor photometry from the VizieR service, covering 0.3 \(\upmu\)m to 140 \(\upmu\)m.
## 3 Analysis of the IR Data
Figure 1 shows the near-IR spectrum of MWC 645 from 8400 to 13,600 A. It displays numerous emission lines, particularly of the H I Paschen series. The strongest lines, except those of H I, correspond to O I, Fe II, and the Ca II triplet. Many transitions of N I can be identified in emission along this spectral range. Forbidden lines of [Fe II] and [S II] are also present. The moderate-resolution data reveal several absorption lines of Fe I that could be associated with the cool stellar companion.
We carefully searched for He I lines in our spectra. If the lines are present, they are incipient and hidden in the noise. The most intense transitions in the interval \(\lambda\) 8400-24,500 Å (cited in the NIST database [26]) correspond to \(\lambda\)10,830 Å and \(\lambda\)20,587 Å. Nodyarov et al. [20] reported the presence of the He I \(\lambda\)10,830 Å transition in emission from a low-resolution spectrum (R \(\sim\) 700). We identified a group of Fe II lines with three emission peaks in the interval 10,826-10,862 Å. However, the bluest feature of this group (see Figure 1) is broad, and thus, the He I \(\lambda\)10,830 Å line could be blended with the Fe II lines. Our data have a higher resolution than the spectrum of Nodyarov et al., but it is not sufficient to separate the Fe II lines from the He I line. These authors also identified the He I \(\lambda\)20,587 Å line. Unfortunately, it lies outside our spectral coverage.
| Observations (Program ID) | Spectral Range [Å] | Mean S/N Ratio |
| --- | --- | --- |
| GN-2017A-Q-62 | 22,570–24,400 | 200 |
| GN-2018A-Q-406 | 21,000–24,500 | 200 |
| GN-2018A-Q-406 | 15,800–18,200 | 100 |
| GN-2018A-Q-406 | 11,000–13,600 | 90 |
| GN-2018A-Q-406 | 8400–11,000 | 60 |
| GN-2022B-Q-225 | 33,311–36,446 | 200 |
| GN-2022B-Q-225 | 38,500–41,786 | 100 |

Table 1: Mean values of the S/N ratio for GNIRS near-IR observations in different spectral ranges.
The \(H\)-band spectrum of MWC 645 (upper panel of Figure 2) is dominated by the H I Brackett series and several permitted and forbidden lines of Fe II. In the \(K\)-band, the most intense feature is the Br\(\gamma\) line (see lower panel of Figure 2), which stands out among several emission lines corresponding to Fe II, [Fe II], and presumably [Ni II]. The Mg II doublet at \(\lambda\lambda\) 21,374 Å and 21,437 Å is also in emission. The Pfund series extends from 2.3 microns longward. In addition, absorption features of neutral metals characteristic of late-type stars, such as Ca I, Mg I, and Na I, are present. For the first time, we have detected the presence of CO band heads in absorption around 2.3 \(\upmu\)m and around 1.6 \(\upmu\)m, which are typical photospheric features of late-type luminous stars.

Figure 1: Normalized medium-resolution spectrum of MWC 645 taken with Gemini/GNIRS in July 2018 from 8500 Å to 13,600 Å. The normalized Ondřejov spectrum from 8400 Å to 8870 Å, acquired in September 2018, is shown in cyan. Main spectral lines are identified by colored markings. The spectral features of a given element (either permitted or forbidden and of different ionization states) are joined by a dashed line of the same color: hydrogen is indicated in red, oxygen in gray, iron in blue, calcium in pink, nitrogen in violet, sulfur in green, and helium in cyan. Wavelengths are given in angstroms.
Figure 3 shows the first obtained \(L\)-band spectrum of MWC 645 in two different spectral regions. The first interval between 33,310 Å and 36,410 Å is relatively featureless (see left panel), except for the presence of the permitted emission line of Fe II \(\lambda\) 35,423 Å, which is clearly seen above the continuum level, and some H I lines of the Humphreys series in emission, where the strongest is the one corresponding to the 20-6 transition. Lower-order members of the Humphreys series can be seen in the second spectral interval (right panel) that ranges from 38,520 Å to 41,730 Å, where the strong emission of the Br\(\alpha\) line can also be observed. We searched for absorption bands of the first overtone of silicon monoxide (SiO) around 4 \(\upmu\)m but found none.

Figure 2: Normalized medium-resolution spectrum of MWC 645 taken with Gemini/GNIRS in 2018, covering the \(H\)- (upper panel) and \(K\)-bands (lower panel). Main spectral lines and molecular bands are identified by colored markings. The spectral features of a given element (either permitted or forbidden and of different ionization states) or molecule (of different isotopes) are joined by a dashed line of the same color: hydrogen is indicated in red, magnesium in gray, iron in blue, sodium in violet, nitrogen in green, and carbon monoxide in pink. Wavelengths are given in angstroms.
Figure 4 plots three H I lines: Pa\(\beta\), Br\(\gamma\), and Br\(\alpha\). Their profiles are single-peaked but asymmetric. We measured the total equivalent width (EW) of each line using the 'e' function in the IRAF splot routine. The total measured EWs are 184 A, 15 A, and 48 A, respectively. The percentage uncertainty of the EW measurements is 5%. Unfortunately, the wavelength calibration of our near-IR spectra is not accurate enough to determine reliable radial velocity measurements.
### CO Absorption Bands
Figure 5 (upper panel) displays the second-overtone band heads of \({}^{12}\)CO in absorption from the 2018 \(H\)-band spectrum. The lower panel compares the \(K\)-band spectra taken in 2017 (in red) and 2018 (in black), where the variation of the first-overtone band heads of \({}^{12}\)CO is clearly seen. The positions of the \({}^{13}\)CO band heads are also marked and clearly detected in the spectrum from 2017. The 2018 spectrum is too weak to see these faint features.
Figure 3: Normalized \(L\)-band spectrum of MWC 645 obtained in 2022. The emission lines of H I and Fe II are marked in red and blue, respectively. The "bump" longward of the Br\(\alpha\) line is a remnant from telluric correction. Wavelengths are in angstroms.

The strength of the CO absorption bands in the near-IR spectra of classical late-type stars depends on the stellar effective temperature, T\({}_{eff}\), and surface gravity, log g [27; 28]. The CO absorption becomes deeper when the effective temperature decreases and the luminosity increases. Thus, hot star spectra display no trace of CO features (T\({}_{eff}\geq\) 5800 K-6000 K, Ali et al. [29]), and dwarf stars present weaker CO absorption bands than supergiants. To characterize the cool companion of MWC 645, responsible for the CO absorption features, and estimate its fundamental parameters, we used the IRTF (NASA Infrared Telescope Facility) Spectral Library [30; 31], which collects stellar spectra observed with the spectrograph SpeX at a resolving power of R\(\sim\)2000 and an S/N ratio of about 100 at \(\lambda\) \(<\) 4 \(\upmu\)m. We looked for late-type stars with spectral types between F and M and luminosity classes between I and V to compare their spectra with our spectrum from 2017 in the wavelength range from 2.26 to 2.44 \(\upmu\)m. Figure 6 shows this comparison. The MWC 645 spectrum (solid black line) was degraded to the resolution of the template spectrum (dashed red line), which corresponds to a G0 Ib-II star (HD 185018). The intensity of the first \({}^{12}\)CO band head of MWC 645 coincides reasonably well with that of the early G-type star; however, the rest of the band heads are less intense. The blue edge of the CO(2-0) band head might present an incipient emission. The absorption of the first \({}^{13}\)CO band head is more intense than that displayed by the library star. According to Wallace and Hinkle (2018), the \({}^{13}\)CO isotope is prominent in supergiants and giants but is not apparent in dwarfs, although its strength also depends on the initial rotation velocity of the star and the mixing processes that can cause a surface enrichment in \({}^{13}\)C. Otherwise, the absorption lines of neutral metals are less intense than those in the template spectrum, indicating an earlier spectral type (F8-F9 subtypes). Furthermore, the lack of SiO band heads at 4 \(\upmu\)m, often observed in K0-type stars and later, also points towards an earlier type (Winge et al., 2018).
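The degradation of the GNIRS spectrum to the template resolution amounts to a convolution with a Gaussian kernel; the following is our own minimal sketch, assuming a uniformly sampled wavelength grid, and not necessarily the procedure actually used for Figure 6.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_resolution(wave, flux, r_in, r_out, lam_ref=2.3e4):
    """Smooth a spectrum from resolving power r_in down to r_out.

    wave is assumed to be uniformly sampled (same units as lam_ref, here
    angstroms); the kernel width is evaluated at the reference wavelength.
    """
    # FWHM of the extra broadening needed to go from r_in to r_out.
    fwhm = lam_ref * np.sqrt(1.0 / r_out**2 - 1.0 / r_in**2)
    sigma_pixels = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)) * np.median(np.diff(wave)))
    return gaussian_filter1d(flux, sigma_pixels)

# e.g. degrade_resolution(wave, flux, r_in=5500, r_out=2000) for the GNIRS K-band data
```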
Winge et al. (2018) presented a spectroscopic library of late spectral-type stellar templates in the \(K\)-band at a resolving power of R \(\sim\) 5900. The authors plotted the equivalent width (EW) of the first CO overtone as a function of T\({}_{eff}\) (see their Figure 3) for a stellar sample with T\({}_{eff}\) in the range 3200-5200 K and different luminosity classes. They measured the EW from the blue edge of the (2-0) band head to the blue edge of the (3-1) band head, more precisely in the window 2.293-2.322 \(\upmu\)m. We measured the EW of the CO(2,0) band head from the spectrum of 2017 and obtained 2.55 \(\pm\) 0.5 A. A visual extrapolation of the relation seen in the figure between EW and T\({}_{eff}\) in the hottest edge of the plot gives an estimation of T\({}_{eff}\) around 5200 \(\pm\) 100 K. From the spectrum obtained in 2018, we measured an EW of the CO(2,0) band head equal to 1.22 \(\pm\) 0.1 and estimated a T\({}_{eff}\)\(\sim\) 5300 \(\pm\) 100 K.
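The quoted EW values can, in principle, be reproduced by integrating the continuum-normalized spectrum over the 2.293-2.322 \(\upmu\)m window; the sketch below is a generic illustration with our own variable names, not the IRAF routine used elsewhere in this work.

```python
import numpy as np

def equivalent_width(wave, norm_flux, w_min, w_max):
    """Equivalent width (in the units of wave) of a feature in a
    continuum-normalized spectrum; positive values mean absorption."""
    mask = (wave >= w_min) & (wave <= w_max)
    return np.trapz(1.0 - norm_flux[mask], wave[mask])

# e.g. equivalent_width(wave_angstrom, flux_norm, 22930.0, 23220.0)
# for the 2.293-2.322 micron CO(2-0) window quoted above
```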
Figure 4: Strongest H I lines detected in our IR spectra of MWC 645. They display an asymmetric profile, where the red flank is steeper than the blue one. Wavelengths are in angstroms.
Figure 5: CO molecular bands of MWC 645. Some emission and absorption lines are also identified. The spectral features of a given element or molecule (of different isotopes) are indicated by colored markings and joined by a dashed line of the same color: hydrogen is indicated in red, magnesium in gray, iron in blue, sodium in violet, nitrogen in green, calcium in cyan, and carbon monoxide in pink. Upper panel: \({}^{12}\)CO second-overtone band heads seen in the \(H\)-band spectrum taken in 2018. Lower panel: \({}^{12}\)CO and \({}^{13}\)CO band heads in absorption detected in the \(K\)-band. The spectra obtained in 2017 (in red) and 2018 (in black) revealed the variability in the strength of the observed bands. Wavelengths are given in angstroms.
Figure 6: Comparison between the degraded spectrum of MWC 645 to R\(\sim\)2 000 (solid black line) and a G0 Ib-II star, HD 185018 (dashed red line), where the CO(2-0) band head fits well. The other \({}^{12}\)CO absorption bands from MWC 645 are shallower than those of the template; perhaps they are filled by emission. Wavelengths are given in angstroms.
## 4 Analysis of the Optical Data
### Photometric Light Curve
Figure 7 shows the light curve of MWC 645 taken from ASAS-SN. We applied the relationship derived by Nodyarov et al. [20] to convert the \(g\)-band magnitudes into \(V\)-band magnitudes. The optical photometry in the \(V\)-band acquired by the authors mentioned above is also included. The dates of the spectroscopic observations presented in this work are marked in the light curve as vertical lines.
The brightness fluctuations of the star up to July 2022 have been reported by Nodyarov et al. [20], who suggested that a new minimum in the light curve might take place in the second half of 2022. As can be seen in the plot, the star continued fading up to approximately the end of October, reaching a minimum \(\sim\)0.1 mag brighter than the one that occurred in August/September 2018. Then, its brightness began to increase again.
Nodyarov et al. [20] searched for periodicity, excluding visual magnitudes greater than 13.2 mag from their analysis. They derived periods of 69, 145, and 295 days. They attributed the quasi-cyclic photometric variations to variable CS extinction. We applied the Lomb-Scargle method using the IRSA ([https://irsa.ipac.caltech.edu/irsaviewer/timeseries](https://irsa.ipac.caltech.edu/irsaviewer/timeseries) (accessed on 30 November 2022)) time series tool to the \(V\)-band light curve shown in Figure 7. We discarded the magnitudes with errors greater than 0.03 mag. The scan for periodic signals with periods below ten days gave strong peaks at one day and harmonics of the sidereal day due to the observing cycle.
The periodogram for periods greater than one day is shown in Figure 8. The six peaks at or above a confidence level of 20 in the power spectrum correspond to periods of approximately 65, 112, 162, 298, 461, and 709 days. We dismissed the last period since the time coverage of the observations is not enough for its precise determination. The phase diagram for each period shows a large scatter of the magnitude points and a small amplitude in their modulation (\(\sim\)0.2-0.3 mag). We should note that the 65- and 298-day periods were also found by Nodyarov et al. [20]. As our data spread over a more extended baseline than the one used by the authors mentioned above, this might be a possible explanation for the differences in the other identified periods.
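For reproducibility, the following minimal sketch illustrates the kind of Lomb-Scargle period search described above. Since the ASAS-SN photometry is not bundled with this text, a synthetic light curve with a known 298-day modulation stands in for the real data, and the array names are placeholders.

```python
# Minimal sketch of a Lomb-Scargle period search like the one described above.
# Synthetic data with a known 298-day modulation replace the real ASAS-SN
# photometry here; with the actual light curve one would load `t` and `mag`
# (after the 0.03 mag error cut) and keep the rest unchanged.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2900.0, 1500))        # ~8 yr of irregular sampling [d]
mag = 13.0 + 0.15 * np.sin(2.0 * np.pi * t / 298.0) + rng.normal(0.0, 0.03, t.size)

# Restrict the search to periods longer than one day, as in the text, to avoid
# the 1-day alias and its harmonics introduced by the observing cycle.
frequency = np.linspace(1.0 / 1000.0, 1.0, 50000)  # [1/d]
power = LombScargle(t, mag).power(frequency)

best_period = 1.0 / frequency[np.argmax(power)]
print(f"strongest period: {best_period:.1f} d")    # ~298 d for the synthetic signal
```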
Figure 7: Light curve of MWC 645 from 16 December 2014 up to 11 November 2022 taken from ASAS-SN. The purple squares indicate the \(V\)-band magnitudes, and the green circles represent the \(g\)-band measurements converted to the \(V\)-band magnitudes. The conversion has been carried out with the relationship found by Nodyarov et al. [20]. Their optical photometric observations are also included (blue circles). Vertical solid blue lines mark the dates of our IR observations (2017, 2018, and 2022, respectively); dotted blue lines mark the dates of our optical spectra (2018 and 2021, respectively); and those dotted in green, gray, and red correspond to the spectra downloaded from the BeSS database taken in 2019, 2020, and 2022, respectively. Time is given in heliocentric Julian dates (HJD) minus \(2.45\times 10^{6}\) days.
### Spectroscopic Data
The peculiar profile of the H\(\alpha\) line of MWC 645, composed of a broad blue-shifted peak and a narrow red-shifted one, can be seen in Figure 9, where all the spectra are normalized to the continuum level. This plot shows not only the profile changes over different years (spectra from the same year are displayed in the same color) but also daily. We note that as the BeSS spectra are not corrected by telluric lines, some of the profiles present one or two absorption features superimposed on the blue-shifted emission peak corresponding to water vapor lines of the Earth's atmosphere, which affect the shape of the profile. A variation in the emission strength of both peaks is seen. Using the 'e' task in the IRAF splot routine, we measured the intensity of the blue (V) and red (R) emission peaks and the total equivalent width of the H\(\alpha\) profile, except for the lowest resolution spectrum. Table 2 presents these values and the calculated V/R ratios. We can see that the V/R ratio presents changes over four years, even doubling its value. We note that the ratio of V/R\(\sim\)0.3 corresponds to observations close in time (except for one). For this subset, the changes in EW might be mainly due to the continuum-level variations.
The average radial velocities of the blue and red emission components derived from the Ondrejov spectra are \(-\)225 \(\pm\) 5 km s\({}^{-1}\) and \(-\)31 \(\pm\) 3 km s\({}^{-1}\), respectively, which are in agreement with the values reported by Zickgraf (2008) of \(-\)218 km s\({}^{-1}\) and \(-\)30 km s\({}^{-1}\) and Nodyarov et al. (2010) of \(-\)252 \(\pm\) 9 km s\({}^{-1}\) and \(-\)30 \(\pm\) 2 km s\({}^{-1}\). Fitting a Gaussian profile to the narrow red component of the H\(\alpha\) line, we obtained an average FWHM of 90 \(\pm\) 1 km s\({}^{-1}\). To fit the broad emission component, we built a profile with a red wing symmetrical to the observed blue one and obtained an average FWHM of 318 \(\pm\) 4 km s\({}^{-1}\).
Even though the BeSS material is not accurate enough to measure radial velocities, we have estimated them from the different spectra for both emission peaks fitting the components with Gaussian profiles. The average value is \(-\)229 km s\({}^{-1}\) and \(-\)26 km s\({}^{-1}\) for the blue and red emission peaks, respectively. In Figure 9, a variation in the central wavelength of the H\(\alpha\) red emission peak (which is not distorted by telluric lines) can be observed from the different spectra; however, as the wavelength calibration of the BeSS spectra is not well suited for radial velocity determination, we cannot confirm if this change is real. The average FWHMs of the blue and red components are 256 \(\pm\) 4 km s\({}^{-1}\) and 80 \(\pm\) 2 km s\({}^{-1}\), respectively. The H\(\alpha\) broad component line profile from the BeSS spectra has a smaller average FWHM than the Ondrejov spectra. In the latter, the broad component
Figure 8: Fourier power spectrum of the ASAS-SN light curve of MWC 645. The periods in days are on a logarithmic scale. The green line shows the confidence level.
presents a blue wing with a gentler slope that is also outlined for the red wing, giving a greater width.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Obs. Date** & **HJD-2450000** & **Observatory** & **V** & **R** & **V/R** & **EW** \\ **(yyyy-mm-dd)** & & & & & & [Å] \\ \hline
2018-09-11 & 8373.3290 & Ondrejov & 118.4 & 452.0 & 0.26 & \(-\)1980.3 \\
2018-09-12 & 8374.2823 & Ondrejov & 132.8 & 509.0 & 0.26 & \(-\)2122.8 \\
2019-08-28 & 8724.3870 & BeSS 1 & 90.2 & 268.9 & 0.33 & \(-\)1132.2 \\
2019-08-30 & 8726.4279 & BeSS 2 & 75.1 & 391.9 & 0.45 & \(-\)2130.7 \\
2019-09-01 & 8728.4429 & BeSS & 95.7 & 283.9 & 0.34 & \(-\)1237.2 \\
2019-09-02 & 8729.4419 & BeSS & 105.7 & 316.0 & 0.33 & \(-\)1291.7 \\
2019-09-03 & 8730.4365 & BeSS & 101.5 & 306.1 & 0.33 & \(-\)1353.7 \\
2019-09-05 & 8732.4433 & BeSS & 100.1 & 292.3 & 0.34 & \(-\)1147.3 \\
2019-09-07 & 8734.3942 & BeSS & 109.7 & 323.5 & 0.34 & \(-\)1372.1 \\
2019-09-10 & 8737.3883 & BeSS & 99.8 & 296.0 & 0.34 & \(-\)1292.5 \\
2019-09-11 & 8738.3925 & BeSS & 132.3 & 397.1 & 0.33 & \(-\)1642.3 \\
2019-09-14 & 8741.4241 & BeSS & 110.5 & 330.9 & 0.33 & \(-\)1348.1 \\
2019-09-19 & 8746.3989 & BeSS & 100.5 & 297.7 & 0.34 & \(-\)1129.5 \\
2019-09-20 & 8747.3862 & BeSS & 103.1 & 304.3 & 0.34 & \(-\)1264.6 \\
2019-09-24 & 8751.3722 & BeSS & 109.1 & 326.1 & 0.33 & \(-\)1283.5 \\
2020-06-25 & 9025.5535 & BeSS & 94.1 & 223.7 & 0.42 & \(-\)933.8 \\
2020-07-12 & 9043.5126 & BeSS & 127.4 & 264.8 & 0.48 & \(-\)1216.8 \\
2021-11-01 & 9520.3161 & Tartu & — & — & — & — \\
2022-07-26 & 9787.5322 & BeSS & 47.3 & 195.6 & 0.24 & \(-\)718.2 \\
2022-12-09 & 9923.2775 & BeSS & 100.5 & 321.2 & 0.31 & \(-\)1392.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: H\(\alpha\) line parameters of MWC 645. Column 1 indicates the observing date; column 2 the heliocentric Julian date (minus 2.45 \(\times\) 10\({}^{6}\) d); column 3 the observatory/database where the spectrum was obtained; columns 4 and 5 the emission intensities of the blue and red peaks (V and R) in continuum units, respectively; column 6 the emission intensity ratio of both peaks (V/R); and column 7 the total equivalent width (EW) in Å. The measurement errors are on the order of 1% for the intensities and 10% for the EW values.
Figure 9: H\(\alpha\) line variation of MWC 645. Spectra taken in 2018, 2019, 2020, and 2022 are displayed in blue, green, gray, and red, respectively. The spectrum with the lowest resolution is plotted with a dashed-line. The heliocentric radial velocity scale is shown in km s\({}^{-1}\).
Apart from the H\(\alpha\) line, our medium-resolution spectroscopic observations and the 2019 low-resolution BeSS spectrum also show the lines of [O I] \(\lambda\lambda\) 6300, 6364 A. These lines appear single-peaked, although asymmetric, as was previously mentioned by Zickgraf [17], which might suggest that the lines are composed of two blended components (see Figure 10). Several permitted Fe II lines and the forbidden lines of [N II] \(\lambda\) 6583 A and [S II] \(\lambda\lambda\) 6716, 6731 A are apparent. We calculated the average heliocentric radial velocity of the emission lines of the Ondrejov spectra and the standard error of the mean, obtaining \(-\)43 \(\pm\) 2 km s\({}^{-1}\). Jaschek et al. [15] derived \(-\)76 \(\pm\) 5 km s\({}^{-1}\), and Nodyarov et al. [20] found \(-\)61 \(\pm\) 4.3 km s\({}^{-1}\), which indicates a variation in radial velocity. The He I \(\lambda\) 6678 A transition is absent, as in the spectra studied by the last-mentioned authors and Zickgraf [17].
### Global Properties of the Circumstellar Material
Figure 11 shows the spectral energy distribution (SED) of MWC 645, built from the photometry publicly available from the ultraviolet to the far-IR (0.3 um-140 um). The low-resolution spectrum taken in 2021 in the spectral region of the H\(\alpha\) line is also included.
To derive the global physical properties of the CS material, we considered a simple model presented by Marchiano et al. [35] and Arias et al. [36]. The numerical code allows the SED to be assembled from different envelope components. The model assumes the presence of a spherical envelope composed of gas close to the star (\(\leq 5\,R_{*}\)) and/or dust further away from it (\(\geq 100\,R_{*}\)) [37; 38]. The emergent flux is computed from the central star and the envelope (considering that the latter can be reduced to an equivalent shell), applying a plane-parallel solution for the transfer equation. The optical depth \(\tau_{\lambda}^{G}\) and the source function characterize the gaseous shell, which can be described by adding as free parameters the electron temperature T\({}_{G}\) and the effective radius R\({}_{G}\). The dusty region is treated using an analogous scheme with similar parameters describing the shell: an optical depth \(\tau_{\lambda}^{D}\), a temperature T\({}_{D}\), and an effective radius R\({}_{D}\). The model allows several dust shell components to be added. The interstellar extinction is also included through an optical depth \(\tau_{\lambda}^{ISM}\). The absorption A(\(\lambda\)) is related to each optical depth through the expression \(\tau=0.4\,\ln(10)\,\mathrm{A}(\lambda)\). Using the law given by Cardelli et al. [39], it can be written as A(\(\lambda\))= [R\({}_{V}\) a(1/\(\lambda\)) + \(b\)(1/\(\lambda\))] E(B-V), where R\({}_{V}\) is the total-to-selective extinction ratio and E(B-V) is the color excess. We took R\({}_{V}^{ISM}\) = 3.1 for the interstellar dust and tried different values of R\({}_{V}^{D}\) greater than 3.1 for the CS dust shell components. The temperature of the dust
Figure 10: Example of the emission lines in the surroundings of the H\(\alpha\) line of MWC 645. Ondřejov normalized spectra taken in 2018 on 12 September (in red line) and 13 September (in black line) and the 2019 low-resolution spectrum (in green) are shown with the main lines identified by colored markings. The spectral features of a given element (either permitted or forbidden and of different ionization states) are joined by a dashed line of the same color: hydrogen is indicated in red, oxygen in gray, iron in blue, nitrogen in violet, and sulfur in green. Wavelengths are given in angstroms.
grains depends on the stellar radiation and the distance from the star center [40]. That is, \(\rm T_{D}(r)=T_{eff}\,W(r)^{1/(4+p)}\), where W(r) is the geometrical dilution factor. The parameter p depends on the nature of the dust, but it is usually on the order of one. Furthermore, as the equilibrium temperature should be lower than the dust condensation temperature (typically around 1500 K) to allow the formation of grains, it constrains the distance where condensation can occur, e.g., for a T\({}_{eff}\) of 18,000 K, the condensation distance is about 249 R\({}_{*}\).
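As a consistency check, the quoted condensation distance follows from the relation above once a concrete dilution factor and value of p are adopted; the short script below assumes the standard geometrical dilution factor W(r) = 1/2 [1 − (1 − (R\({}_{*}\)/r)\({}^{2}\))\({}^{1/2}\)] and p = 1, neither of which is stated explicitly in the text.

```python
# Back-of-the-envelope check of the dust condensation distance quoted above,
# inverting T_D(r) = T_eff * W(r)**(1/(4+p)) with the standard geometrical
# dilution factor W(r) = 0.5*(1 - sqrt(1 - (R_*/r)**2)); p = 1 is assumed.
from math import sqrt

T_eff = 18000.0    # K, hot component
T_cond = 1500.0    # K, typical dust condensation temperature
p = 1.0

W = (T_cond / T_eff) ** (4.0 + p)        # dilution factor required for T_D = T_cond
x = sqrt(1.0 - (1.0 - 2.0 * W) ** 2)     # x = R_*/r
print(f"condensation distance ~ {1.0 / x:.0f} R_*")   # ~249 R_*
```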
The code gives the observed flux normalized to that at a reference wavelength \(\lambda_{ref}\); we chose \(\lambda_{ref}=0.55\) um. We assumed that the flux of the central object results from the contribution of both stellar fluxes, for which we considered the Kurucz ([41]) atmosphere models. Regarding what we know about the stars, we selected models between 17,000 and 20,000 K for the hot binary component and between 4500 and 6000 K for the cool one and explored the flux contribution of the hot and cool components to the total flux in the range of 70%-90% and 30%-10 %, respectively (similar percentages were suggested by Nodyarov et al. [20]). Figure 11 displays our best fit of the theoretical SED (solid blue line) to the observed data. The best-fitting model was computed by taking the photospheric fluxes from a star with T\({}_{eff}\) = 18,000 K, \(\rm log\,\ g=4.0\), and R\({}_{*}\) = 3.73 R\({}_{\odot}\) and a cool star with T\({}_{eff}\) = 5000 K and \(\rm log\,\ g=1.5\). The contribution of each stellar flux to the total flux is 80% and 20% for the hot and cold components, respectively. The resulting envelope has one gaseous shell at R\({}_{G}\) = 1.15 R\({}_{\odot}\) with T\({}_{G}\) = 16780 K and \(\rm\tau_{V}=0.1\). The dusty region comprises three different shells with the following parameters: R\({}_{D}^{1}\) = 348 R\({}_{*}\), T\({}_{D}^{1}\) = 1310 K, R\({}_{D}^{2}\) = 3750 R\({}_{*}\), T\({}_{D}^{2}\) = 507 K, R\({}_{D}^{3}\) = 0.02 pc, T\({}_{D}^{3}\) = 98 K. The computed total CS visual absorption is A\({}_{V}^{2}\) = 0.097 mag. We obtained a color excess due to the interstellar medium of E(B-V)\({}^{ISM}\) = 0.98 \(\pm\) 0.02 mag, which results in a total visual absorption A\({}_{V}\) = 3.13 \(\pm\) 0.11 mag. This value agrees with the one derived by Nodyarov et al. [20]. A disagreement between the theoretical and observed SEDs in the region of log \(\lambda\) = 1.0-1.5 is observed. We should note that the employed code can model the thermal emission of the dust, but it cannot address the computation of silicate bands. Thus, the presence of silicate particles might be responsible for the observed difference between the SEDs at 10 um and 18 um. In fact, Nodyarov et al. [20] have already reported weak emission bumps at these wavelengths.
Figure 11: Spectral energy distribution of MWC 645. The open triangles represent the observed photometric data: optical bands (yellow), 2MASS (green) [42], WISE (violet) [43], MSX (light blue) [44], IRAS (red) [45], and AKARI (black) [46]. The error bars of the photometric data are included (in most photometric bands, they fall inside the symbols). The low-resolution spectrum acquired in 2021 over the H\(\alpha\) region is also displayed (with a dashed black line). The solid red line shows the SED modeled considering the contribution of the photospheric fluxes from both stars, the thermal emission from a gaseous shell close to the system and the effect of the interstellar medium extinction. The solid blue line shows our best-fitting theoretical SED, obtained by adding to the SED plotted in red the contribution of three dusty shells surrounding the stellar system. The flux is normalized to that at \(\lambda_{ref}\) = 0.55 μm and displayed on a logarithmic scale. The wavelengths are in microns.
## 5 Discussion
The forest of emission features in the IR spectral ranges presented in this work accounts for the presence of large amounts of CS gas embedding and veiling MWC 645, which makes it challenging to characterize the components of the binary system. In addition, the existence of dust revealed by the strong IR excess of radiation above the photospheric fluxes is another element to consider when building a probable scenario.
In previous studies, the hot component of the system was considered to be a B-type star [13; 15], until Nodyarov et al. [20] assigned it as an early-B subtype. This assignment was mainly constrained by the absence of He II lines and the possible presence of He I absorption lines suggested by Jaschek et al. [15] (see Section 1) in the optical spectral range and the identification of the He I lines in emission in the near-IR. We were not able to detect neutral helium lines in either emission or absorption in our data, at least with a signal intensity above the noise level. However, we cannot discard a possible blend of the He I line at 1.083 \(\upmu\)m (the transition of He I with the highest theoretical intensity that falls in our observed spectral ranges) with a group of Fe II emission lines precisely identified in the same spectral region. We detected Mg II lines in emission at 2.138 \(\upmu\)m and 2.144 \(\upmu\)m. According to the works of Clark and Steele [47] and Steele and Clark [48], who studied a representative sample of Be stars in the \(H\)- and \(K\)-bands, no evidence of He I features and the simultaneous presence of Br\(\gamma\) and Mg II lines in emission indicate a spectral type between B2 and B4. If He I lines are present, the spectral type is B3 or earlier. New high-resolution IR spectra could be valuable to clarify the presence of He I lines. Nevertheless, we cannot discard any variability in neutral helium lines.
The \({}^{12}\)CO absorption bands detected for the first time at 1.62 \(\upmu\)m and 2.3 \(\upmu\)m allow us to constrain the spectral type and effective temperature of the cool binary component. The spectral type derived by the best fit of the first \({}^{12}\)CO band head with that of a G0-type star does not agree with the strength of metallic lines, which are seen very weakly in the spectra of MWC 645. The effective temperature associated with this spectral type for supergiant/giant stars is around 5600 K [49; 50]. Using the EW of the CO(2,0) band head, we determined a T\({}_{eff}\) average value of about 5 250 K. However, since the SED of the star in the near-IR has a significant contribution from the hot companion and from the ionized envelope (free-free emission), all spectral features from the cool companion are reduced in their intensity (due to the false continuum). Thus, the temperature determination should be interpreted as an upper limit to the effective temperature.
On the other hand, we might consider that instead of tracing a cool companion, the detected CO absorption might also uncover some mass-ejection episode, such as can be seen in the yellow hypergiant \(\rho\) Cas [51] or in the eruptive variable V838 Mon [52], when molecular absorption bands start to develop in the spectrum while the star's brightness fades. In the case of \(\rho\) Cas, the CO bands turn from absorption into emission when the star reaches the next maximum brightness phase in which it is too hot for molecule condensation in its extended outer layers [53]. This finding agrees with the scenario that mass was ejected into the environment, radiating while it is expanding and cooling. Since we have only two observations, we cannot trace the evolution of the molecular absorption along the light curve. However, it is interesting to mention that the most intense CO absorption (seen in 2017) occurs when the brightness begins to increase after a seeming local minimum. This observation presents an emission peak blueward of the CO(2,0) absorption band head that is blue-shifted at about 308 km s\({}^{-1}\). If it is indeed due to CO, it might suggest a high-velocity molecular outflow. Furthermore, the considerably weaker CO absorption spectrum observed in 2018 might be interpreted as partially filled with circumstellar emission. Moreover, the absorption-line spectrum ascribed to the cool companion might have originated in an optically thick disk. Polster et al. [54] suggested that the absorption features seen in the spectrum of the FSCMa star, MWC 623, are formed in an equatorial disk viewed nearly edge-on, which acts as a pseudo-photosphere.
When we look at the light curve, we see global qualitative similarities between the range mostly recorded in \(V\)-band magnitudes up to the deepest minimum (which occurred
in August/September 2018) and that traced by the \(g\)-band magnitudes from this minimum up to January 2023. In both time lapses of about four years, we can distinguish a well-outlined dip (at HJD-2450000 \(\sim 7500\) d and \(\sim 9000\) d) less intense than the main minimum. Although they present slightly different shapes and depths, and the light curve afterwards reaches a different maximum magnitude level, they are alike. This similar pattern suggests that the dominant source of variability is the same. Variable CS (or circumbinary) extinction along the line of sight due to dust clumps might be responsible for these photometric variations [55; 56; 57]. Our simple model to fit the SED allowed us to derive the global properties of the dusty envelope. Despite its spherical geometry, it traces different components of optically thin dust, the first located at a distance of \(\sim 6\) AU. This distance agrees with the innermost dusty disk radius of \(\sim 5\) AU derived for the star FS CMa through aperture-synthesis imaging in the \(L\) and \(N\) bands [58]. Material orbiting the system at this distance at Keplerian velocity would have a period of \(\sim 5\) years. A warped inner edge of the disk can also produce variable extinction [59].
Previous studies of MWC 645 have not reported variations in the V/R ratio of the H\(\alpha\) line. Observations made over the last 30 years, although discontinuous in time, have not revealed an inversion in the intensity ratio of the peaks, always showing V/R \(<1\). We noticed that the V/R ratio derived from our spectra varies from 0.2 to 0.5 (see Table 2). We also calculated the V/R ratio from the intensities of both H\(\alpha\) peaks included in the work of Nodyarov et al. [20]. Their observations are from 2004 to 2021 and not continuous. We found a variable V/R ratio with values between 0.3 and 0.9. The only two spectra from dates included in the light curve range, October 2016 (HJD-2450000 \(\sim 7680\) d) and November 2021 (HJD-2450000 \(\sim 9\,545\) d), present a V/R ratio of 0.7 and 0.3, respectively. Spectroscopic monitoring of the H\(\alpha\) line could help scan and characterize this variability and search for any periodicity in the changes of the peak intensities that could be related to the rotation of a density perturbation in the disk [60] or an orbital motion [61]. Zickgraf computed line profiles assuming a latitude-dependent wind model with a dust disk and obtained similar line profiles to the observed H\(\alpha\) profile with V/R \(<1\) for an intermediate inclination angle.
The dimming in optical brightness occurs over a long time, and, along this phase, the H\(\alpha\) emission changes. The V/R ratio decreases, and the EW increases (with the smallest value of the V/R relation and the maximum EW at the minimum of the light curve). This fact seems to be associated with a change in the amount of the circumstellar material, and not only due to a natural increase in emission intensity during the light curve minimum. Also, the FWHM of the blue emission component is the largest at this point. This H\(\alpha\) line strengthening might be attributed to enhanced mass loss (or a mass ejection episode). Similar variations in the emission of the spectral lines and brightness have been observed in the FS CMa star, MWC 728. The light curve of MWC 645 suggests a (possible) periodic behavior. If this were true, the H\(\alpha\) enhancement might result from a mass transfer process during periastron passage in an eccentric binary system, with a period of the order of 4 years. A denser observational grid with high-quality spectra is needed to study the link between mass loss and brightness behavior.
A detailed calculation of the H\(\alpha\) line profile based on a physically consistent model is a difficult task. However, it would be valuable to explore models with simplified assumptions to gain insight into the system geometry and the structure of the CS matter [9; 54]. More complex scenarios considering non-conservative mass transfer between the binary components should be considered to draw a picture of the structures involved in the emission processes [62; 63].
## 6 Conclusions
In this paper, we have studied the FS CMa-type object, MWC 645, a recently confirmed binary system. We have presented IR medium-resolution spectra covering the \(J\)-, \(H\)-, \(K\)-, and \(L\)-bands and identified the main spectral features. We have reported the presence of CO bands in absorption for the first time. We have searched for periodicity in the light curve and a possible correlation between its behavior and the spectroscopic optical data. We found that the photometric variations could be explained by variable extinction along the line of sight. In addition, we noted that the stellar brightness fading is accompanied by the enhancement of the H\(\alpha\) line emission, which might be due to mass ejection events. Finally, a proper fitting to the observed SED was found, giving a global picture of the gaseous and dusty structures that could enshroud the binary.
Simultaneous optical and near-IR spectroscopy during the following brightness minimum would be very useful for tracing the onset and progress of the possible mass transfer. Such an understanding is of the utmost importance for deepening our comprehension of binary evolution in general and of the nature of this fascinating object in particular.
Conceptualization, A.F.T., M.L.A., and M.K.; methodology, A.F.T., M.L.A., and M.K.; software, A.F.T., M.L.A., and L.V.M.; formal analysis, A.F.T. and M.L.A.; investigation, A.F.T., M.L.A., M.K., L.V.M., and T.E.; resources, A.F.T., M.L.A., and M.K.; data curation, A.F.T., M.L.A., M.K., L.V.M., and T.E.; writing--original draft preparation, review and editing, A.F.T., M.L.A., M.K., L.V.M., and T.E.; visualization, A.F.T., M.L.A., and M.K.; funding acquisition, A.F.T., M.L.A., M.K., and T.E. All authors have read and agreed to the published version of the manuscript.
A.F.T. and M.L.A. acknowledge financial support from the Universidad Nacional de La Plata (Programa de Incentivos 11/G160) and CONICET (PIP 1337), Argentina. M.K. acknowledges financial support from the Czech Science Foundation (GACR, grant number 20-00150S). The Astronomical Institute of the Czech Academy of Sciences, Ondrejov, is supported by project RVO: 67985815. This project has received funding from the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sklodowska-Curie Grant Agreement No. 823734. T.E. gratefully acknowledges financial support from the Estonian Ministry of Education and Research through the Estonian Research Council institutional research funding IUT40-1, and from the European Union European Regional Development Fund project KOMEET 2014-2020.4.01.6-0029.
The data involved in this research are available on request from the authors.
The authors are grateful to the referees, whose comments and suggestions helped to improve the paper. This work has made use of IRAF, which is distributed by the National Optical Astronomy Observatory, operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation; the BeSS database operated at LESIA, Observatoire de Meudon, France ([http://basebe.obspm.fr](http://basebe.obspm.fr) (accessed on 2 February 2023)); the SIMBAD database and the VizieR catalog access tool, both operated at CDS, Strasbourg, France; the NASA Astrophysics Data System (ADS); the NASA IRTF (Infrared Telescope Facility) Spectral Library; and the NASA IRSA period search tool and the All-Sky Automated Survey for Supernovae (ASAS-SN). This paper is based on observations obtained at (i) Ondrejov Observatory (Czech Republic) with the Perek 2 m telescope; (ii) Tartu Observatory (Estonia); and (iii) the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea) under program IDs GN-2017A-Q-62, GN-2018A-Q-406, and GN-2022B-Q-225. This work has made use of the ground-based research infrastructure of Tartu Observatory, funded
through the projects TT8 (Estonian Research Council) and KosEST (EU Regional Development Fund). The authors thank L.S. Cidale for many fruitful discussions. The authors declare no conflict of interest.
## Abbreviations
The following abbreviations are used in this manuscript:
\begin{tabular}{l l} CS & Circumstellar \\ CCD & Charged-coupled device \\ EW & Equivalent width \\ V & Blue peak intensity \\ R & Red peak intensity \\ V/R & Blue-to-red emission peak ratio \\ FWHM & Full width at half maximum \\ HJD & Heliocentric Julian date \\ IR & Infrared \\ SED & Spectral energy distribution \\ \end{tabular}
|
2308.00820 | Geometry preserving numerical methods for physical systems with
finite-dimensional Lie algebras | We propose a geometric integrator to numerically approximate the flow of Lie
systems. The key is a novel procedure that integrates the Lie system on a Lie
group intrinsically associated with a Lie system on a general manifold via a
Lie group action, and then generates the discrete solution of the Lie system on
the manifold via a solution of the Lie system on the Lie group. One major
result from the integration of a Lie system on a Lie group is that one is able
to solve all associated Lie systems on manifolds at the same time, and that Lie
systems on Lie groups can be described through first-order systems of linear
homogeneous ordinary differential equations (ODEs) in normal form. This brings
a lot of advantages, since solving a linear system of ODEs involves less
numerical cost. Specifically, we use two families of numerical schemes on the
Lie group, which are designed to preserve its geometrical structure: the first
one based on the Magnus expansion, whereas the second is based on
Runge-Kutta-Munthe-Kaas (RKMK) methods. Moreover, since the aforementioned
action relates the Lie group and the manifold where the Lie system evolves, the
resulting integrator preserves any geometric structure of the latter. We
compare both methods for Lie systems with geometric invariants, particularly a
class on Lie systems on curved spaces. We also illustrate the superiority of
our method for describing long-term behavior and for differential equations
admitting solutions whose geometric features depends heavily on initial
conditions. As already mentioned, our milestone is to show that the method we
propose preserves all the geometric invariants very faithfully, in comparison
with nongeometric numerical methods. | L. Blanco, F. Jiménez Alburquerque, J. de Lucas, C. Sardón | 2023-08-01T20:17:48Z | http://arxiv.org/abs/2308.00820v2 | # Geometry preserving numerical methods
###### Abstract
In this paper we propose a geometric integrator to numerically approximate the flow of Lie systems. The highlight of this paper is a novel procedure that integrates the system on a Lie group intrinsically associated to the Lie system and then generates the discrete solution of this Lie system through a given action of the Lie group on the manifold where the system evolves.
One major result from the integration on the Lie group is that one is able to solve all automorphic Lie systems at the same time, and that they can be written as first-order systems of linear homogeneous ODEs in normal form. This brings several advantages, since solving a linear ODE involves a lower numerical cost. Specifically, we use two families of numerical schemes on the Lie group, which are designed to preserve its geometrical structure: the first based on the Magnus expansion and the second on RKMK methods. Moreover, since the aforementioned action relates the Lie group and the manifold where the Lie system evolves, the resulting integrator preserves any geometric structure of the latter. We compare both methods for Lie systems with geometric invariants, particularly a class of Lie systems on curved spaces.
As already mentioned, the main goal of this paper is to show that the method we propose preserves all the geometric invariants very faithfully, in comparison with nongeometric numerical methods.
_MSC 2020 classes: 34A26; 53A70; (primary) 37M15; 49M25 (secondary)_
## 1 Introduction
The history of numerical methods on Lie groups is intertwined with the development of computational mathematics and the study of Lie theory. The foundations of Lie theory were laid by the Norwegian mathematician Sophus Lie in the late 19th century; however, it was not until the 20th century that the application of Lie groups to practical problems and the development of numerical methods gained momentum.
In the 1970s, mathematicians and physicists began to explore numerical integration methods for Lie group equations of motion. Afterwards, pioneering work by Blanes, Casas, Oteo, and Ros provided explicit symplectic integrators for specific Lie groups, such as the rotation group \(SO(3)\) and the special Euclidean group \(SE(3)\). These methods preserved important geometric properties of Lie groups, such as energy conservation and symplecticity [4, 5].
The computation of geodesics on Lie groups became a topic of interest in the 1980s. Researchers like Murray, Arimoto, and Sastry developed numerical methods to compute geodesics on Lie groups such as \(SO(3)\) and \(SE(3)\)[49, 50]. These methods relied on various techniques, including the exponential map, interpolation, and numerical optimization algorithms. The optimization of functions defined on Lie groups gained prominence in the 1990s. Researchers such as Absil, Mahony, and Mallick
developed numerical optimization algorithms specifically tailored to the geometric properties of Lie groups [1, 40, 41]. These methods allowed for efficient optimization of functions over Lie groups, which found applications in robotics, computer vision, and control theory. The interpolation of motions on Lie groups received significant attention in the early 2000s. Researchers like Sola, Kuffner, and Agrawal proposed interpolation algorithms for Lie group elements, enabling smooth and visually appealing motion planning in applications such as robotics and computer graphics [61].
In recent years, there has been continued progress in numerical methods on Lie groups, fueled by advancements in computational power and the increasing demand for efficient algorithms in applications. Research continues to focus on refining existing methods, developing new techniques, and exploring applications in areas like machine learning, motion planning, and optimization.
The Runge-Kutta methods are a family of numerical integration techniques commonly used to solve ordinary differential equations (ODEs). They involve evaluating the derivative of the function at multiple points within a time step and using a weighted sum of these derivatives to update the solution. A comprehensive survey on modern geometric Lie group methods, including new ideas and techniques, can be found in [27].
The Runge-Kutta-Munthe-Kaas (RKMK) method combines these two concepts by using the Munthe-Kaas rule to select the sampling points in the Runge-Kutta integration scheme. RKMK methods is also the term we use to refer to the usual Runge-Kutta methods (RK) applied on Lie groups. By considering the distribution of the highest derivative of the function being integrated, the RKMK method aims to improve the accuracy and efficiency of the integration process [48, 46, 47].
The specific details of the RKMK method, including the choice of sampling points and the weights assigned to the derivatives, can vary depending on the implementation and the problem at hand. Researchers have proposed different variants of the RKMK method with varying degrees of accuracy and computational complexity. Since the properties of RKMK methods are the same as those of classical RK methods, symplecticity is preserved for certain orders: for example, the second-order Stormer-Verlet method, also known as the leapfrog method, is a well-known second-order symplectic integrator [64]. There are several fourth-order symplectic integrators, such as the Forest-Ruth method and the Yoshida method [66]. Higher-order symplectic integrators have also been developed, such as the sixth-order McLachlan integrator [45] and the eighth-order Blanes-Moan integrator [6]. These symplectic Runge-Kutta methods are designed to preserve the symplectic structure of Hamiltonian systems and offer improved accuracy and long-term stability compared to non-symplectic methods.
It's important to note that the choice of a specific symplectic Runge-Kutta method depends on the requirements of the problem at hand, including the desired accuracy, computational efficiency, and preservation of particular properties. In our case, we will work with a fourth-order RKMK.
**The RKMK Methods**
The basic idea behind applying the fourth-order RKMK method is to update the group elements using Lie group operations while approximating the derivatives of the group elements at multiple intermediate points within a time step. The following steps outline a typical approach.
* Initialization: Start with an initial group element.
* Time Step Selection: Choose an appropriate time step size for the integration process.
* Derivative Evaluation: Evaluate the derivative of the group element at the initial time.
* State Update: Use the fourth-order RK method to update the group element by integrating the derivative. This involves evaluating the derivative at multiple intermediate points within the time step and combining them with weighted sums to update the state.
* Group Operation: Apply appropriate Lie group operations (e.g., matrix multiplication, exponentiation) to ensure the updated state remains on the Lie group manifold.
* Repeat: Repeat steps 3-5 until the desired integration time is reached.
By incorporating the Lie group operations in the state update step and properly handling the derivatives, the fourth-order RK method can be applied to approximate solutions on Lie groups.
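To make the above recipe concrete, the sketch below implements one step of a generic fourth-order RKMK scheme for a matrix equation of the form \(Y^{\prime}=A(t)Y\), the case relevant for the automorphic Lie systems studied later. The truncated dexpinv map and the \(\mathfrak{so}(3)\)-valued coefficient curve are illustrative choices and do not necessarily coincide with the schemes employed later in this work.

```python
# Sketch of a fourth-order RKMK step for Y' = A(t) Y on a matrix Lie group.
# Truncating dexpinv after the ad_u^2 term is enough to retain order four.
import numpy as np
from scipy.linalg import expm

def bracket(x, y):
    return x @ y - y @ x

def dexpinv(u, k):
    # dexpinv_u(k) = k - 1/2 [u, k] + 1/12 [u, [u, k]] + O(ad_u^4)
    return k - 0.5 * bracket(u, k) + bracket(u, bracket(u, k)) / 12.0

def rkmk4_step(A, t, Y, h):
    # Classical RK4 applied to the Lie-algebra equation u' = dexpinv_u(A(t));
    # since A depends only on t (automorphic case), the stages do not need Y.
    k1 = A(t)
    k2 = dexpinv(0.5 * h * k1, A(t + 0.5 * h))
    k3 = dexpinv(0.5 * h * k2, A(t + 0.5 * h))
    k4 = dexpinv(h * k3, A(t + h))
    v = h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return expm(v) @ Y

# Illustrative coefficient curve in so(3): the update is then a product of
# rotations, so the numerical solution stays on SO(3) up to round-off.
def A(t):
    w = np.array([np.sin(t), np.cos(2.0 * t), 0.5])
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

Y, t, h = np.eye(3), 0.0, 0.01
for _ in range(1000):
    Y = rkmk4_step(A, t, Y, h)
    t += h
print(np.linalg.norm(Y.T @ Y - np.eye(3)), np.linalg.det(Y))  # orthogonality error ~1e-14, det ~1.0
```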
**Magnus Method and its Interpretation**
To solve the linear ordinary differential equation:
\[Y^{\prime}(t)=A(t)Y(t),\quad Y(t_{0})=Y_{0},\]
where \(Y(t)\) is an unknown \(n\)-dimensional vector function and \(A(t)\) is an \(n\times n\) coefficient matrix, the Magnus approach was introduced. The solution for \(n=1\) is straightforward:
\[Y(t)=\exp\left(\int_{t_{0}}^{t}A(s)\,ds\right)Y_{0}.\]
This solution also holds for \(n>1\) if \(A(t_{1})A(t_{2})=A(t_{2})A(t_{1})\) for any pair of \(t_{1}\) and \(t_{2}\) values, especially when \(A\) is independent of \(t\). However, for the general case, the aforementioned expression is not a valid solution.
Wilhelm Magnus devised a method to solve the matrix initial-value problem by introducing the exponential of a specific \(n\times n\) matrix function \(\Omega(t,t_{0})\):
\[Y(t)=\exp\left(\Omega(t,t_{0})\right)Y_{0}, \tag{1}\]
where \(\Omega(t)\) is constructed as a series expansion:
\[\Omega(t)=\sum_{k=1}^{\infty}\Omega_{k}(t),\]
with \(\Omega(t)\) representing \(\Omega(t,t_{0})\) for simplicity and taking \(t_{0}=0\).
Magnus appreciated that, since \(\frac{d}{dt}(e^{\Omega})e^{-\Omega}=A(t)\), using a Poincare-Hausdorff matrix identity, he could relate the time derivative of \(\Omega\) to the generating function of Bernoulli numbers and the adjoint endomorphism of \(\Omega\):
\[\Omega^{\prime}=\frac{\text{ad}(\Omega)}{\exp(\text{ad}(\Omega))-1}A,\]
to solve for \(\Omega\) recursively in terms of \(A\) in a continuous analog of the CBH expansion [21, 63].
The equation above constitutes the Magnus expansion, or Magnus series, for the solution of the matrix linear initial-value problem. The first four terms of this series read:
\[\Omega_{1}(t) =\int_{0}^{t}A(t_{1})\,dt_{1},\] \[\Omega_{2}(t) =\frac{1}{2}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}[A(t_{1}),A( t_{2})],\] \[\Omega_{3}(t) =\frac{1}{6}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\int_{0}^{t_{ 2}}dt_{3}\left([A(t_{1}),[A(t_{2}),A(t_{3})]]+[A(t_{3}),[A(t_{2}),A(t_{1})]] \right),\] \[\Omega_{4}(t) =\frac{1}{12}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\int_{0}^{t_ {2}}dt_{3}\int_{0}^{t_{3}}dt_{4}\left([[[A_{1},A_{2}],A_{3}],A_{4}]+\ldots \right),\]
By expressing the solution in terms of the exponential of a matrix function (1), the Magnus series offers a systematic way to approximate the solution. The advantage of the Magnus approach lies in its ability to preserve important qualitative properties of the exact solution, such as symplectic or unitary character, even in truncated forms. This method has found applications in various fields, including classical mechanics and quantum mechanics, where it offers an alternative to conventional perturbation theories. The Magnus expansion method stands as a valuable tool for analyzing and approximating solutions to linear differential equations [39, 4], and, naturally, has strong applications when \(Y(t)\) belongs to a matrix Lie group.
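For reference, the following is a minimal sketch of one widely used fourth-order truncation of the Magnus expansion, combined with two-point Gauss-Legendre quadrature; it is given here only as an illustration and need not coincide with the implementation used later in this work. The \(\mathfrak{su}(2)\)-valued test matrix is an arbitrary choice, picked so that the exact flow, and hence the Magnus update, is unitary.

```python
# Sketch of the standard fourth-order Magnus integrator for Y' = A(t) Y:
# Omega is built per step from two Gauss-Legendre samples of A(t) and a single
# commutator, and the state is updated as Y <- expm(Omega) Y.
import numpy as np
from scipy.linalg import expm

def magnus4_step(A, t, Y, h):
    c1 = 0.5 - np.sqrt(3.0) / 6.0
    c2 = 0.5 + np.sqrt(3.0) / 6.0
    A1, A2 = A(t + c1 * h), A(t + c2 * h)
    Omega = 0.5 * h * (A1 + A2) - (np.sqrt(3.0) / 12.0) * h**2 * (A1 @ A2 - A2 @ A1)
    return expm(Omega) @ Y

# Illustrative anti-Hermitian (su(2)) coefficient matrix: the exact solution is
# unitary, and so is the numerical one, since Omega stays anti-Hermitian.
def A(t):
    return 1j * np.array([[0.5 * np.cos(t), np.exp(1j * t)],
                          [np.exp(-1j * t), -0.5 * np.cos(t)]])

Y, t, h = np.eye(2, dtype=complex), 0.0, 0.01
for _ in range(1000):
    Y = magnus4_step(A, t, Y, h)
    t += h
print(np.linalg.norm(Y.conj().T @ Y - np.eye(2)))  # unitarity preserved to round-off
```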
These two numerical integration methods play a crucial role in the study of Lie systems. These methods aim to preserve the geometric structures and qualitative properties of the underlying system, such as symplecticity, conservation laws and, above all, the Lie group structure itself. The Magnus expansion and RKMK methods are particularly useful for preserving the long-term behavior of Hamiltonian systems, among which one finds a special class of Lie systems, the so-called Lie-Hamilton systems.
Lie systems appeared for the first time in the study of Riccati equations [31] as a consequence of the generalisation to a nonlinear realm of the known superposition rules for linear systems of first-order ordinary differential equations. Among other reasons, superposition rules are useful for numerically solving systems of differential equations whose general solutions cannot be exactly found [78]. Although most differential equations cannot be studied via Lie systems, Lie systems have many relevant applications in physics, control theory, and other fields [12, 38]. In particular, Lie-Hamilton systems occur in the study of Smorodinsky-Winternitz oscillators, Milne-Pinney equations, dissipative harmonic oscillators, trigonometric oscillators, and so on (see [38] and references therein). Certain quantum mechanical systems, like quantum mechanical oscillators with time-dependent frequency and other time-dependent parameters, can also be studied via Lie systems on Lie groups [12]. Particular cases of matrix Riccati equations, which are also Lie systems, are associated with Painlevé transcendents, Sawada-Kotera equations, Kaup-Kupershmidt equations, etcetera [37]. For all these reasons, the study of Lie systems is fully motivated from the point of view of applications.
Our approach to numerical integration of Lie systems in this manuscript is the Lie group approach, which is relevant since there exists an action establishing a relationship between a certain Lie group intrinsically associated to the Lie system and the manifold where the Lie system itself evolves. The two Lie group integrators that we have introduced exploit the algebraic structure of the Lie group associated with the Lie system to construct accurate and efficient numerical schemes.
Our aim in this work is to build on this preexisting technology on Lie group integrators and take advantage of it when numerically integrating Lie systems. These can evolve on manifolds with particular geometric structure, such as group structure or curvature, and, therefore, a geometric integrator for them is in order. The action relating the Lie group underlying the Lie system and the manifold where the system evolves represents a perfect tool to achieve this goal, generating a discrete sequence of points on the manifold (which naturally inherit its geometry) from a discrete sequence of points on the Lie group, which can be obtained from the Lie group integrators. This way we establish a novel geometric integrator [18], which we will test on a particular class of Lie systems on curved spaces.
So, the outline of the paper goes as follows:
In §2 we introduce the fundamentals on Lie groups and Lie algebras needed in the further development of the work. Moreover, we describe automorphic Lie systems and how to solve them in their underlying Lie group. The definition of the action of this Lie group on the manifold where the Lie system evolves is also presented. These two elements allow the definition of the 7-step method for the reduction procedure of automorphic Lie systems in Definition 2. §3 presents some basics on numerical schemes and the Lie group methods employed afterwards. In §4 we combine all the previous elements to propose our geometric method to numerically integrate automorphic Lie systems in Definition 3. Finally, in §5 we pick a class of Lie systems on curved spaces, apply the 7-step method in detail to solve them analytically and, afterwards, employ our integrator, showing its geometric properties.
## 2 Geometric fundamentals
### Lie groups and matrix Lie groups
Let \(G\) be a Lie group and let \(e\) be its neutral element. Every \(g\in G\) defines a right-translation \(R_{g}:h\in G\mapsto hg\in G\) and a left-translation \(L_{g}:h\in G\mapsto gh\in G\) on \(G\). A vector field, \(X^{\mathrm{R}}\), on \(G\) is right-invariant if \(X^{\mathrm{R}}(hg)=R_{g*,h}X^{\mathrm{R}}(h)\) for every \(h,g\in G\), where \(R_{g*,h}\) is tangent map to \(R_{g}\) at \(h\in G\). The value of a right-invariant vector field, \(X^{\mathrm{R}}\), at every point of \(G\) is determined by its value at \(e\), since, by definition, \(X^{\mathrm{R}}(g)=R_{g*,e}X^{\mathrm{R}}(e)\) for every \(g\in G\). Hence, each right-invariant vector field \(X^{R}\) on \(G\) gives rise to a unique \(X^{R}(e)\in T_{e}G\) and vice versa. Then, the space of right-invariant vector fields on \(G\) is a finite-dimensional Lie algebra. Similarly, one may define left-invariant vector fields on \(G\), establish a Lie algebra structure on the space of left-invariant vector fields and set an isomorphism between the space \(\mathfrak{g}\) of left-invariant vector fields on \(G\) and \(T_{e}G\). The Lie algebra of left-invariant vector fields on \(G\), with Lie bracket \([\cdot\,,\cdot]:\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}\), induces in \(T_{e}G\) a Lie algebra via the identification of left-invariant vector fields and their values at \(e\). Note that we will frequently identify \(\mathfrak{g}\) with \(T_{e}G\) to simplify the terminology.
There is a natural mapping from \(\mathfrak{g}\) to \(G\), the so-called exponential map, of the form \(\exp:a\in\mathfrak{g}\mapsto\gamma_{a}(1)\in G\), where \(\gamma_{a}:\mathbb{R}\to G\) is the integral curve of the right-invariant vector field \(X_{a}^{\mathrm{R}}\) on \(G\) satisfying \(X_{a}^{\mathrm{R}}(e)=a\) and \(\gamma(0)=e\). If \(\mathfrak{g}=\mathfrak{gl}(n,\mathbb{K})\), where \(\mathfrak{gl}(n,\mathbb{K})\) is the Lie algebra of \(n\times n\) square matrices with entries in a field \(\mathbb{K}\) relative to the Lie bracket given by the commutator of matrices, then \(\mathfrak{gl}(n,\mathbb{K})\) can be considered as the Lie algebra of the Lie group \(\mathrm{GL}(n,\mathbb{K})\) of \(n\times n\) invertible matrices with entries in \(\mathbb{K}\). It can be proved that in this case \(\exp:X\in\mathfrak{gl}(n,\mathbb{K})\mapsto\exp(X)\in\mathrm{GL}(n,\mathbb{K})\) retrieves the standard expression of the exponential of a matrix [34], namely
\[\exp(X)=\mathrm{I}_{n}+X+\frac{X^{2}}{2}+\frac{X^{3}}{6}+\cdots=\sum_{k=0}^{ \infty}\frac{X^{k}}{k!},\]
where \(\mathrm{I}_{n}\) stands for the \(n\times n\) identity matrix.
From the definition of the exponential map \(\exp:T_{e}G\to G\), it follows that \(\exp(sa)=\gamma_{a}(s)\) for each \(s\in\mathbb{R}\) and \(a\in T_{e}G\). Let us show this. Indeed, given the right-invariant vector field \(X_{sa}^{\mathrm{R}}\), where \(sa\in T_{e}G\), then
\[X_{sa}^{\mathrm{R}}(g)=R_{g*,e}X_{sa}^{\mathrm{R}}(e)=R_{g*,e}(sa)=sR_{g*,e}( a),\qquad\forall g\in G.\]
In particular for \(s=1\), it follows that \(X_{a}^{\mathrm{R}}(g)=R_{g*,e}(a)\) and, for general \(s\), it follows that \(X_{sa}^{\mathrm{R}}=sX_{a}^{\mathrm{R}}\). Hence, if \(\gamma_{a},\gamma_{sa}:\mathbb{R}\to G\) are the integral curves of \(X_{a}^{\mathrm{R}}\) and \(X_{sa}^{\mathrm{R}}\) with initial condition \(e\), respectively, then it can be proved that, for \(u=ts\), one has that
\[\frac{d}{dt}\gamma_{a}(ts)=s\frac{d}{du}\gamma_{a}(u)=sX_{a}^{\mathrm{R}}( \gamma_{a}(ts)).\]
and \(t\mapsto\gamma_{a}(st)\) is the integral curve of \(X_{sa}^{\mathrm{R}}\) with initial condition \(e\). Hence, \(\gamma_{a}(st)=\gamma_{sa}(t)\). Therefore, \(\exp(sa)=\gamma_{sa}(1)=\gamma_{a}(s)\). It is worth stressing that Ado's theorem [2] shows that every Lie group admits a matrix representation close to its neutral element.
The exponential map establishes a diffeomorphism from an open neighborhood \(U_{\mathfrak{g}}\) of \(0\) in \(T_{e}G\) and \(\exp(U_{\mathfrak{g}})\). More in detail, every basis \(\mathcal{V}=\{v_{1},\ldots,v_{r}\}\) of \(T_{e}G\) gives rise to the so-called canonical coordinates of the second-kind related to \(\mathcal{V}\) defined by the local diffeomorphism
\[\begin{array}{ccc}U_{\mathfrak{g}}\subset T_{e}G&\longrightarrow&\exp(U_{ \mathfrak{g}})\subset G\\ (\lambda_{1},\ldots,\lambda_{r})&\mapsto&\prod_{\alpha=1}^{r}\exp(\lambda_{ \alpha}v_{\alpha})\,,\end{array}\]
for an appropriate open neighborhood \(U_{\mathfrak{g}}\) of \(0\) in \(T_{e}G\simeq\mathfrak{g}\).
In matrix Lie groups right-invariant vector fields take a simple useful form. In fact, let \(G\) be a matrix Lie group. It can be then considered as a Lie subgroup of \(\mathrm{GL}(n,\mathbb{K})\). Moreover, it can be proved that \(T_{A}G\), for any \(A\in G\), can be identified with the space of \(n\times n\) square matrices \(\mathcal{M}_{n}(\mathbb{K})\).
Since \(R_{A}:B\in G\mapsto BA\in G\), then \(R_{A*,e}(M)=MA\in T_{A}G\), for all \(M\in T_{e}G\), and \(A\in\mathrm{GL}(n,\mathbb{K})\). As a consequence, if \(X^{\mathrm{R}}(e)=M\) at the neutral element \(e\), namely the identity I, of the matrix Lie group \(G\), then \(X^{\mathrm{R}}(A)=R_{A*,\mathrm{I}}(X^{\mathrm{R}}(\mathrm{I}))=R_{A*, \mathrm{I}}(M)=MA\). It follows that, at any \(A\in G\), every tangent vector \(B\in T_{A}G\) can be written as \(B=CA\) for a unique \(C\in T_{I}G\)[20, 16].
Let us describe some basic facts on Lie group actions on manifolds induced by Lie algebras of vector fields. It is known [77] that every finite-dimensional Lie algebra, \(V\), of vector fields on a manifold \(N\) gives rise to a (local) Lie group action
\[\varphi:G\times N\to N, \tag{2}\]
whose fundamental vector fields are given by the elements of \(V\) and \(G\) is a connected and simply connected Lie group whose Lie algebra is isomorphic to \(V\). If the vector fields of \(V\) are complete, then the Lie group action (2) is globally defined. The Lie group action (2) will be crucial in the definition of our integrators since, as can be seen, it relates the Lie group \(G\) and the manifold \(N\), i.e., the manifold where we are going to define the time-evolution of our Lie systems. In fact, Lie group actions like \(\varphi\) are employed to reduce the integration of a Lie system on \(N\) to obtaining a particular solution of a Lie system on a Lie group. Let us show how to obtain \(\varphi\) from \(V\), which will be of crucial importance in this work.
Let us restrict ourselves to an open neighborhood \(U_{G}\) of the neutral element of \(G\), where we can use canonical coordinates of the second-kind related to a basis \(\{v_{1},\ldots,v_{r}\}\) of \(\mathfrak{g}\). Then, each \(g\in U_{G}\) can be expressed as
\[g=\prod_{\alpha=1}^{r}\exp(\lambda_{\alpha}v_{\alpha}), \tag{3}\]
for certain uniquely defined parameters \(\lambda_{1},\ldots,\lambda_{r}\in\mathbb{R}\). To determine \(\varphi\), we first compute the curves
\[\gamma_{x}^{\alpha}:\mathbb{R}\to N:t\mapsto\varphi(\exp(tv_{\alpha}),x), \qquad\alpha=1,\ldots,r, \tag{4}\]
where \(\gamma_{x}^{\alpha}\) must be the integral curve of \(X_{\alpha}\) for \(\alpha=1,\ldots,r\). Indeed, for any element \(g\in U_{G}\subset G\) expressed as in (3), using the intrinsic properties of a Lie group action,
\[\varphi(g,x)=\varphi\left(\prod_{\alpha=1}^{r}\exp(\lambda_{\alpha}v_{\alpha}),x\right)=\varphi\Big(\exp(\lambda_{1}v_{1}),\varphi\big(\exp(\lambda_{2}v_{2}),\ldots,\varphi(\exp(\lambda_{r}v_{r}),x)\big)\Big),\]
the action is completely defined for any \(g\in U_{G}\subset G\).
In this work we will deal with some particular matrix Lie groups, starting from the general linear matrix group \(\mathrm{GL}(n,\mathbb{K})\), where we recall that \(\mathbb{K}\) may be \(\mathbb{R}\) or \(\mathbb{C}\). As it is well known, any closed subgroup of \(\mathrm{GL}(n,\mathbb{K})\) is also a matrix Lie group [34, Theorem 15.29, pg. 392].
### Automorphic Lie systems
On a first approximation, a Lie system is a first-order system of ODEs that admits a superposition rule. A superposition rule for a system \(X\) on \(N\) (the manifold where \(X\) evolves) is a map \(\Phi:N^{m}\times N\to N\) such that the general solution \(x(t)\) of \(X\) can be written as \(x(t)=\Phi(x_{(1)}(t),\dots,x_{(m)}(t);\rho)\), where \(x_{(1)}(t),\dots,\)\(x_{(m)}(t)\) is a generic family of particular solutions and \(\rho\) is a point in \(N\) related to the initial conditions of \(X\).
A classic example of Lie system is the Riccati equation [38, Example 3.3], that is,
\[\frac{dx}{dt}=b_{1}(t)+b_{2}(t)x+b_{12}(t)x^{2},\qquad x\in\mathbb{R}, \tag{5}\]
with \(b_{1}(t),b_{2}(t),b_{12}(t)\) being arbitrary functions of \(t\). It is known then that the general solution, \(x(t)\), of the Riccati equation can be written as
\[x(t)=\frac{x_{(2)}(t)(x_{(3)}(t)-x_{(1)}(t))+\rho x_{(3)}(t)(x_{(1)}(t)-x_{(2) }(t))}{(x_{(3)}(t)-x_{(1)}(t))+\rho(x_{(1)}(t)-x_{(2)}(t))}, \tag{6}\]
where \(x_{(1)}(t),x_{(2)}(t),x_{(3)}(t)\) are three different particular solutions of (5) and \(\rho\in\mathbb{R}\) is an arbitrary constant. This implies that the Riccati equation admits a superposition rule \(\Phi:\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}\) such that
\[\Phi(x_{(1)},x_{(2)},x_{(3)},\rho)=\frac{x_{(2)}(x_{(3)}-x_{(1)})+\rho x_{(3)} (x_{(1)}-x_{(2)})}{(x_{(3)}-x_{(1)})+\rho(x_{(1)}-x_{(2)})}.\]
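Although the expression above can be checked analytically, it is instructive to verify it numerically: inverting (6) shows that the cross-ratio \(\rho\) built from any four solutions of (5) must remain constant in \(t\). The coefficients and initial data in the sketch below are arbitrary choices, picked so that no solution blows up on the integration interval.

```python
# Numerical check of the Riccati superposition rule: along any four solutions of
# dx/dt = b1(t) + b2(t) x + b12(t) x^2, the quantity obtained by inverting (6),
#   rho(t) = (x3 - x1)(x - x2) / ((x1 - x2)(x3 - x)),
# must be constant in t. Coefficients and initial conditions are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

b1  = lambda t: 1.0 + 0.5 * np.sin(t)
b2  = lambda t: 0.0
b12 = lambda t: -(1.0 + 0.5 * np.sin(t))

def riccati(t, x):
    return b1(t) + b2(t) * x + b12(t) * x**2

t_span, t_eval = (0.0, 3.0), np.linspace(0.0, 3.0, 200)
sol = solve_ivp(riccati, t_span, [0.0, 0.5, 2.0, 3.0],
                t_eval=t_eval, rtol=1e-10, atol=1e-12)
x1, x2, x3, x = sol.y          # three "particular" solutions and a fourth one

rho = (x3 - x1) * (x - x2) / ((x1 - x2) * (x3 - x))
print(rho[0], rho.max() - rho.min())   # rho ~ 10, spread negligible
```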
The conditions that guarantee the existence of a superposition rule are gathered in the Lie theorem [36, Theorem 44], which also provides a description of the underlying geometry of a Lie system. This theorem asserts that a first-order system \(X\) on \(N\),
\[\frac{dx}{dt}=X(t,x),\qquad x\in N,\qquad X\in\mathfrak{X}_{t}(N), \tag{7}\]
admits a superposition rule if and only if \(X\) can be written as
\[X(t,x)=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}(x),\qquad t\in\mathbb{R}, \qquad x\in N, \tag{8}\]
for a certain family \(b_{1}(t),\dots,b_{r}(t)\) of \(t\)-dependent functions and a family of vector fields \(X_{1},\dots,\)\(X_{r}\) on \(N\) that generate an \(r\)-dimensional Lie algebra of vector fields. This Lie theorem yields that every Lie system \(X\) is related to (at least) one Vessiot-Guldberg (VG) Lie algebra, \(V\), that satisfies that \(\mathrm{Lie}(\{X_{t}\}_{t\in\mathbb{R}})\subset V\). This implies that the minimal Lie algebra has to be finite-dimensional, and vice versa [12].
The \(t\)-dependent vector field on the real line associated with (5) is \(X=b_{1}(t)X_{1}+b_{2}(t)X_{2}+b_{12}(t)X_{3}\), where \(X_{1},X_{2},X_{3}\) are vector fields on \(\mathbb{R}\) given by
\[X_{1}=\frac{\partial}{\partial x},\qquad X_{2}=x\frac{\partial}{\partial x}, \qquad X_{3}=x^{2}\frac{\partial}{\partial x}.\]
Since the commutation relations are
\[[X_{1},X_{2}]=X_{1},\quad[X_{1},X_{3}]=2X_{2},\quad[X_{2},X_{3}]=X_{3}, \tag{9}\]
the vector fields \(X_{1},X_{2},X_{3}\) generate a VG Lie algebra isomorphic to \(\mathfrak{sl}(2,\mathbb{R})\). Then, the Lie theorem guarantees that (5) admits a superposition rule, which is precisely the one shown in (6).
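The commutation relations (9) can also be verified symbolically. The short sketch below (assuming SymPy is available; the helper name is ours) uses that the commutator of two vector fields \(a(x)\partial_{x}\) and \(b(x)\partial_{x}\) on \(\mathbb{R}\) is the vector field with coefficient \(ab'-ba'\).

```
import sympy as sp

x = sp.symbols('x')
# Coefficient functions of X1 = d/dx, X2 = x d/dx, X3 = x^2 d/dx
a1, a2, a3 = sp.Integer(1), x, x**2

def bracket(a, b):
    """Coefficient of the commutator of a(x) d/dx and b(x) d/dx."""
    return sp.simplify(a * sp.diff(b, x) - b * sp.diff(a, x))

print(bracket(a1, a2))   # 1     -> [X1, X2] = X1
print(bracket(a1, a3))   # 2*x   -> [X1, X3] = 2 X2
print(bracket(a2, a3))   # x**2  -> [X2, X3] = X3
```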
Furthermore, the general solution of a Lie system on \(N\) with a VG Lie algebra \(V\) can be obtained from a single particular solution of a Lie system on a Lie group \(G\) whose Lie algebra is isomorphic to \(V\). These are the so-called automorphic Lie systems [12, SS1.4]. As the automorphic Lie system notion is going to be central in our paper, let us study it in some detail (see [12]).
#### 2.2.1 Lie systems on Lie groups
**Definition 1**: _An automorphic Lie system is a \(t\)-dependent system of first-order differential equations on a Lie group \(G\) of the form_
\[\frac{dg}{dt}=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}^{R}(g),\qquad g\in G, \quad t\in\mathbb{R}, \tag{10}\]
_where \(\{X_{1}^{R},\ldots,X_{r}^{R}\}\) is a basis of the space of right-invariant vector fields on \(G\) and \(b_{1}(t),\ldots,b_{r}(t)\) are arbitrary \(t\)-dependent functions. Furthermore, we shall refer to the right-hand side of equation (10) as \(\widehat{X}_{R}^{G}(t,g)\), i.e., \(\widehat{X}_{R}^{G}(t,g)=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}^{\rm R}(g)\)._
Because they are defined through right-invariant vector fields, systems of the form \(\widehat{X}_{R}^{G}\) have the following important property.
**Proposition 1**: _(See [12, SS1.3]) Given a Lie group \(G\) and a particular solution \(g(t)\) of the Lie system defined on \(G\), as_
\[\frac{dg}{dt}=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}^{\rm R}(g)=\widehat{ X}_{R}^{G}(t,g), \tag{11}\]
_where \(b_{1}(t),\ldots,b_{r}(t)\) are arbitrary \(t\)-dependent functions and \(X_{1}^{\rm R},\ldots,X_{r}^{\rm R}\) are right-invariant vector fields, we have that \(g(t)h\) is also a solution of (11) for each \(h\in G\)._
An immediate consequence of Proposition 1 is that, once we know a particular solution of \(\widehat{X}_{R}^{G}\), any other solution can be obtained simply by multiplying the known solution on the right by any element in \(G\). More concretely, if we know a solution \(g(t)\) of (11), then the solution \(h(t)\) of (11) with initial condition \(h(0)=g(0)h_{0}\) can be expressed as \(h(t)=g(t)h_{0}\). This justifies that, henceforth, we only need to find one particular solution \(g(t)\) of \(\widehat{X}_{R}^{G}\), e.g. the one that fulfills \(g(0)=e\). The previous result can be understood in terms of the Lie theorem or via superposition rules. In fact, since (11) admits a superposition rule \(\Phi:(g,h)\in G\times G\mapsto gh\in G\), the system (11) must be a Lie system. Alternatively, the same result follows from the Lie Theorem and the fact that the right-invariant vector fields on \(G\) span a finite-dimensional Lie algebra of vector fields.
There are several reasons to study automorphic Lie systems. One is that they can be locally written, around the neutral element of their Lie group, in the form
\[\frac{dA}{dt}=B(t)A,\qquad A\in{\rm GL}(n,\mathbb{K}),\quad B(t)\in\mathcal{M }_{n}(\mathbb{K}),\]
where \(\mathcal{M}_{n}(\mathbb{K})\) is the set of \(n\times n\) matrices with coefficients in \(\mathbb{K}\), for every \(t\in\mathbb{R}\).
The main reason to study automorphic Lie systems is given by the following results, which show how they can be used to solve any Lie system on a manifold. Let us start with a Lie system \(X\) defined on \(N\). Hence, \(X\) can be written as
\[\frac{dx}{dt}=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}(x), \tag{12}\]
for certain \(t\)-dependent functions \(b_{1}(t),\ldots,b_{r}(t)\) and vector fields \(X_{1},\ldots,X_{r}\in\mathfrak{X}(N)\) that generate an \(r\)-dimensional VG Lie algebra. The VG Lie algebra \(V\) is always isomorphic to the Lie algebra \(\mathfrak{g}\) of a certain Lie group \(G\). The VG Lie algebra spanned by \(X_{1},\ldots,X_{r}\) gives rise to a (local) Lie group action \(\varphi:G\times N\to N\) whose fundamental vector fields are those of \(V\). In particular, there exists a basis \(\{v_{1},\ldots,v_{r}\}\) in \(\mathfrak{g}\) so that
\[\frac{d}{dt}\bigg{|}_{t=0}\varphi(\exp(tv_{\alpha}),x)=X_{\alpha}(x),\qquad\alpha=1,\ldots,r.\]
In other words, \(\varphi_{\alpha}:(t,x)\in\mathbb{R}\times N\mapsto\varphi(\exp(tv_{\alpha}),x)\in N\) is the flow of the vector field \(X_{\alpha}\) for \(\alpha=1,\ldots,r\). Note that if \([X_{\alpha},X_{\beta}]=\sum_{\gamma=1}^{r}c_{\alpha\beta}^{\gamma}X_{\gamma}\) for \(\alpha,\beta=1,\ldots,r\), then \([v_{\alpha},v_{\beta}]=-\sum_{\gamma=1}^{r}c_{\alpha\beta}^{\gamma}v_{\gamma}\) for \(\alpha,\beta=1,\ldots,r\) (cf. [8]).
To determine the exact form of the Lie group action \(\varphi:G\times N\to N\) as in (4), we impose
\[\varphi(\exp(\lambda_{\alpha}v_{\alpha}),x)=\varphi_{\alpha}(\lambda_{\alpha},x)\qquad\forall\,\alpha=1,\ldots,r,\qquad\forall x\in N, \tag{13}\]
where \(\lambda_{1},\ldots,\lambda_{r}\in\mathbb{R}\). While we stay in a neighborhood \(U\) of the origin of \(G\), where every element \(g\in U\) can be written in the form
\[g=\exp(\lambda_{1}v_{1})\cdot\ldots\cdot\exp(\lambda_{r}v_{r}),\]
then the relations (13) and the properties of \(\varphi\) allow us to determine \(\varphi\) on \(U\). If we fix \(x\in N\), the right-hand side of the equality turns into an integral curve of the vector field \(X_{\alpha}\), which is why (13) holds.
**Proposition 2**: _(see [8, 12] for details) Let \(g(t)\) be a solution to the system_
\[\frac{dg}{dt}=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}^{R}(g),\qquad\forall t\in\mathbb{R},\quad g\in G. \tag{14}\]
_Then, \(x(t)=\varphi(g(t),x_{0})\) is a solution of \(X=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}\), where \(x_{0}\in N\). In particular, if one takes the solution \(g(t)\) that satisfies the initial condition \(g(0)=e\), then \(x(t)\) is the solution of \(X\) such that \(x(0)=x_{0}\)._
Let us study a particularly relevant form of automorphic Lie systems that will be used hereafter. If \(\mathfrak{g}\) is a finite-dimensional Lie algebra, then Ado's theorem [2] guarantees that \(\mathfrak{g}\) is isomorphic to a matrix Lie algebra \(\mathfrak{g}_{M}\). Let \(\mathcal{V}=\{M_{1},\ldots,M_{r}\}\) be a basis of \(\mathfrak{g}_{M}\subset\mathcal{M}_{n}(\mathbb{R})\). As reviewed in Section 2.1, each \(M_{\alpha}\) gives rise to a right-invariant vector field \(X_{\alpha}^{R}(g)=M_{\alpha}g\), with \(g\in G\), on \(G\). These vector fields have commutation relations opposite in sign to those of the (matrix) elements of the basis.
In the case of matrix Lie groups, the system (11) takes a simpler form. Let \(Y(t)\) be the matrix associated with the element \(g(t)\in G\). Using the right invariance property of each \(X_{\alpha}^{\mathrm{R}}\), we have that
\[\frac{dY}{dt}=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}^{\mathrm{R}}(Y(t))= \sum_{\alpha=1}^{r}b_{\alpha}(t)R_{Y(t)*,e}\left(X_{\alpha}^{\mathrm{R}}(e) \right)=\sum_{\alpha=1}^{r}b_{\alpha}(t)R_{Y(t)*,e}(M_{\alpha}).\]
We can write the last term as
\[\sum_{\alpha=1}^{r}b_{\alpha}(t)R_{Y(t)*,e}(M_{\alpha})=\sum_{\alpha=1}^{r}b_ {\alpha}(t)M_{\alpha}Y(t),\]
in such a way that for matrix Lie groups, the system on the Lie group is
\[\frac{dY}{dt}=A(t)Y(t),\qquad Y(0)=I,\qquad\text{with}\quad A(t)=\sum_{\alpha =1}^{r}b_{\alpha}(t)M_{\alpha}, \tag{15}\]
where \(I\) is the identity matrix (which corresponds with the neutral element of the matrix Lie group) and the matrices \(M_{\alpha}\) form a finite-dimensional Lie algebra, which is anti-isomorphic to the VG Lie algebra of the system (by anti-isomorphic we mean that the two Lie algebras have the same structure constants up to an overall sign).
There exist various methods to solve system (11) analytically [59, SS2.2], such as the Levi decomposition [35] or the theory of reduction of Lie systems [10, Theorem 2]. In some cases, it is relatively
easy to solve it, as is the case where \(b_{1},\ldots,b_{r}\) are constants. Nonetheless, we are interested in a numerical approach, since we will try to solve the automorphic Lie system with adapted geometric integrators. The solutions on the Lie group can be straightforwardly translated into solutions on the manifold for the Lie system defined on \(N\) via the Lie group action (2). This is the main idea behind the numerical integrator that we begin to depict in the following 7 step method, which finally will lead us to numerically integrate Lie systems on the manifold \(N\), preserving its geometric properties.
**Definition 2** (The 7 step method: Reduction procedure to automorphic Lie systems): _The method can be itemized in the following seven steps:_
1. _Given a Lie system, we identify the_ \(r\)_-dimensional VG Lie algebra of vector fields_ \(X_{1},\ldots,X_{r}\) _associated with the Lie system on a manifold_ \(N\)_._
2. _We look for a Lie algebra_ \(\mathfrak{g}\) _generated by matrices_ \(\{M_{1},\ldots,M_{r}\}\subset\mathcal{M}_{n}(\mathbb{R})\) _that is isomorphic to the VG Lie algebra but with structure constants differing in one sign w.r.t. the VG structure constants._
3. _We integrate the vector fields_ \(X_{1},\ldots,X_{r}\) _to obtain their respective flows_ \(\Phi_{\alpha}:\mathbb{R}\times N\to N\) _with_ \(\alpha=1,\ldots,r\)_._
4. _Using canonical coordinates of the second kind and the previous flows we construct the Lie group action_ \(\varphi:G\times N\to N\) _using expressions (_13_)._
5. _We define an automorphic Lie system_ \(\widehat{X}_{R}^{G}\) _on the Lie group_ \(G\) _associated with_ \(\mathfrak{g}\) _as in (_11_)._
6. _We compute the solution of the system_ \(\widehat{X}_{R}^{G}\) _that fulfils_ \(g(0)=e\)_._
7. _Finally, we recover the solution for_ \(X\) _on the manifold_ \(N\) _by_ \(x(t)=\varphi(g(t),x_{0})\)_._
The 7-step method provides a solution of a Lie system defined on a Lie group and then a solution on \(N\) by means of the action (2). It is important to emphasize that \(x(t)\) obtained in the last step of the 7 step method "lives" on the manifold \(N\), and therefore carries all its geometric properties. In the next section we introduce how the implementation of the 7-step method is carried out numerically. The two main methods are the Magnus expansion and Runge-Kutta-Munthe-Kaas (RKMK).
## 3 Numerical methods on matrix Lie groups
This section adapts known numerical methods on Lie groups to automorphic Lie systems, which are defined by ordinary differential equations defined on Lie groups of the form (14). For this purpose, we start by reviewing briefly some fundamentals on numerical methods for ordinary differential equations and Lie groups [19, 26, 55], and later focus on two specific numerical methods on Lie groups, the Magnus expansion and RKMK methods [27, 28, 46, 47, 68]. We will rely on one-step methods with fixed time step. By that we mean that solutions of a dynamical system
\[\dot{x}=f(t,x),\quad x(a)=x_{0},\quad x(t)\in N,\quad f\in\mathfrak{X}_{t}(N), \tag{16}\]
are approximated by a sequence of points \(x_{k}\approx x(t_{k})\), \(x_{k}\in N\), with \(t_{k}=a+kh\), \(h=(b-a)/\mathcal{N}\), \(b>a\) and
\[\frac{x_{k+1}-x_{k}}{h}=f_{h}(t_{k},x_{k},x_{k+1}), \tag{17}\]
where \(\mathcal{N}\) is the number of steps our time interval is divided into. We emphasize here that the left-hand side of (17) symbolically represents a proper discretization of a tangent vector on a manifold (note that we cannot "subtract" elements of the manifold; if \(N\) is Euclidean, the minus sign recovers
its usual meaning). We call \(h\) the time step, which is fixed, while \(f_{h}:\mathbb{R}\times N\times N\to TN\) is a discrete vector field, which is a given approximation of \(f\) in (16). As usual, we shall denote the local truncation error by \(E_{h}\), where
\[E_{h}=||x_{k+1}-x(t_{k+1})||, \tag{18}\]
with \(\|\cdot\|\) a proper norm on \(N\). We say that the method is of order \(r\) if \(E_{h}=\mathcal{O}(h^{r+1})\) for \(h\to 0\), i.e. \(\lim_{h\to 0}|E_{h}/h^{r+1}|<\infty\). Regarding the global error
\[E_{\mathcal{N}}=||x_{\mathcal{N}}-x(b)||,\]
we shall say that the method is _convergent_ of order \(r\) if \(E_{\mathcal{N}}=\mathcal{O}(h^{r})\), when \(h\to 0\). As for the simulations, we pick the following norm in order to define the global error, that is
\[E_{\mathcal{N}}=\max_{k=1,\ldots,\mathcal{N}}||x(t_{k})-x_{k}||. \tag{19}\]
Our purpose is to numerically solve the initial condition problem for system (15) defined on a matrix Lie group \(G\) of the form
\[\frac{dY}{dt}=A(t)Y\qquad\text{with}\qquad Y(0)=I, \tag{20}\]
where \(Y\in G\) while \(A(t)\in\mathfrak{g}\cong T_{e}G\) is a given \(t\)-dependent matrix and \(I\) is the identity matrix in \(G\). That is, we are searching for a discrete sequence \(\{Y_{k}\}_{k=0,\ldots,\mathcal{N}}\) such that \(Y_{k}\in G\). In a neighborhood of the zero in \(T_{e}G\), the exponential map defines a diffeomorphism onto an open neighborhood of the neutral element of \(G\), and the problem is equivalent to searching for a curve \(\Omega(t)\) in \(\mathfrak{g}\) such that
\[Y(t)=\exp(\Omega(t)). \tag{21}\]
This ansatz helps us to transform (20), which is defined in a nonlinear space, into a new problem in a linear space, namely the Lie algebra \(\mathfrak{g}\simeq T_{e}G\). This is expressed in the classical result by Magnus [39].
**Theorem 3** (Magnus, 1954): _The solution of the system (20) on the matrix Lie group \(G\) can be written, for values of \(t\) close enough to zero, as \(Y(t)=\exp(\Omega(t))\), where \(\Omega(t)\) is the solution of the initial value problem_
\[\frac{d\Omega}{dt}=\operatorname{dexp}_{\Omega(t)}^{-1}(A(t)),\qquad\quad \Omega(0)=\mathbf{0}\,, \tag{22}\]
_where \(\mathbf{0}\) is the zero element in \(T_{e}G\)._
When we are dealing with matrix Lie groups and Lie algebras, the \(\operatorname{dexp}^{-1}\) is given by
\[\operatorname{dexp}_{\Omega}^{-1}(H)=\sum_{j=0}^{\infty}\frac{B_{j}}{j!} \operatorname{ad}_{\Omega}^{j}(H), \tag{23}\]
where the \(\{B_{j}\}_{j=0,\ldots,\infty}\) are the Bernoulli numbers and \(\operatorname{ad}_{\Omega}(H)=[\Omega,H]=\Omega\,H-H\,\Omega.\) The convergence of the series (23) is ensured as long as a certain convergence condition is satisfied [39].
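For later use, the truncated series (23) is straightforward to code. The sketch below (plain NumPy, with the first Bernoulli numbers hard-coded in the convention \(B_{1}=-1/2\) required by (23); the function names are ours) evaluates the sum up to a prescribed power of \(\mathrm{ad}_{\Omega}\).

```
import numpy as np

# First Bernoulli numbers (convention B_1 = -1/2, as needed in (23))
_BERNOULLI = [1.0, -0.5, 1.0 / 6, 0.0, -1.0 / 30, 0.0, 1.0 / 42]

def ad(Omega, H):
    """Matrix commutator ad_Omega(H) = Omega H - H Omega."""
    return Omega @ H - H @ Omega

def dexpinv(Omega, H, order=4):
    """Truncation of the series (23) after the ad_Omega^order term (order <= 6 here)."""
    term = H                      # ad_Omega^0 (H)
    out = _BERNOULLI[0] * H
    factorial = 1.0
    for j in range(1, order + 1):
        term = ad(Omega, term)    # ad_Omega^j (H)
        factorial *= j
        out = out + (_BERNOULLI[j] / factorial) * term
    return out
```

With `order=2` this reproduces exactly the truncation \(A-\tfrac{1}{2}[\Omega,A]+\tfrac{1}{12}[\Omega,[\Omega,A]]\) used later in (28).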
If we try to integrate (22) applying a numerical method directly (note that, now, we could employ one-step methods (17) safely), \(\Omega(t)\) might sometimes drift too much away from the origin and the exponential map would not work. This would be a problem, since we are assuming that \(\Omega(t)\) stays in a neighborhood of the origin of \(\mathfrak{g}\) where the exponential map defines a local diffeomorphism with the Lie group. Since we still do not know how to characterize this neighborhood, it is necessary to adopt a strategy that allows us to resolve (22) sufficiently close to the origin. The idea is to change the coordinate system in each iteration of the numerical method (or to keep the time step
\(h\) small enough, as we shall show when treating the Magnus methods). In the next lines we explain how this is achieved.
Consider now the restriction of the exponential map given by
\[\exp:U_{\mathfrak{g}}\subset\mathfrak{g} \rightarrow\exp(U_{\mathfrak{g}})\subset G,\] \[A \mapsto\exp(A)\]
so that this map establishes a diffeomorphism between an open neighborhood \(U_{\mathfrak{g}}\) around the origin in \(\mathfrak{g}\) and its image. Since the elements of the matrix Lie group are invertible matrices, the map \(U_{\mathfrak{g}}\rightarrow\exp(U_{\mathfrak{g}})Y_{0}\subset G:A\mapsto\exp( A)Y_{0}\) from \(U_{\mathfrak{g}}\subset\mathfrak{g}\) to the set
\[\exp(U_{\mathfrak{g}})Y_{0}=\{Y\in G\ :\ \exists\,A\in U_{\mathfrak{g}},\ Y=\exp(A)Y_{0}\}\]
is also a diffeomorphism. This map gives rise to the so-called first-order canonical coordinates centered at \(Y_{0}\).
As is well known, the solutions of (22) are curves in \(\mathfrak{g}\) whose images by the exponential map are solutions to (20). In particular, the solution \(\Omega^{(0)}(t)\) of system (22) such that \(\Omega^{(0)}(0)\) is the zero matrix in \(T_{I}G\), namely \(\mathbf{0}\), corresponds with the solution \(Y^{(e)}(t)\) of the system on \(G\) such that \(Y^{(e)}(0)=I\). Now, for a certain \(t=t_{k}\), the solution \(\Omega^{(t_{k})}(t)\) in \(\mathfrak{g}\) such that \(\Omega^{(t_{k})}(t_{k})=\mathbf{0}\), corresponds with \(Y^{(e)}(t)\) via first-order canonical coordinates centered at \(Y^{(e)}(t_{k})\in G\), since
\[\exp(\Omega^{(t_{k})}(t_{k}))Y^{(e)}(t_{k})=\exp(\mathbf{0})Y^{(e)}(t_{k})=Y^{ (e)}(t_{k}),\]
and the existence and uniqueness theorem guarantees \(\exp(\Omega^{(0)}(t))=\exp(\Omega^{(t_{k})}(t))Y^{(e)}(t_{k})\) around \(t_{k}\). In this way, we can use the curve \(\Omega^{(t_{k})}(t)\) and the canonical coordinates centered on \(Y^{(e)}(t_{k})\) to obtain values for the solution of (20) in the proximity of \(t=t_{k}\), instead of using \(\Omega^{(0)}(t)\). Whilst the curve \(\Omega^{(0)}(t)\) could be far from the origin of coordinates for \(t_{k}\), we know that \(\Omega^{(t_{k})}(t)\) will be close, by definition. Applying this idea in each iteration of the numerical method, we are changing the curve in \(\mathfrak{g}\) to obtain the approximate solution of (20) while we stay near the origin (as long as the time step is small enough).
Thus, what is left is defining proper numerical methods for (22) whose solution, i.e. \(\{\Omega_{k}\}_{k=0,\ldots,\mathcal{N}}\), via the exponential map, provides us with a numerical solution of (20) remaining in \(G\). In other words, the general Lie group method defined this way [28, 27] can be set by the recursion
\[Y_{k+1}=e^{\Omega_{k}}\,Y_{k}. \tag{24}\]
Next, we introduce two relevant families of numerical methods providing \(\{\Omega_{k}\}_{k=0,\ldots,\mathcal{N}}\).
#### The Magnus method
Based on the work by Magnus, the Magnus method was introduced in [28, 29]. The starting point of this method is to solve equation (22) by means of the Picard procedure, which ensures that a certain sequence of functions converges to the solution of (22) in a small enough neighborhood. Working out the successive iterations, one obtains the _Magnus expansion_
\[\Omega(t)=\sum_{k=0}^{\infty}H_{k}(t), \tag{25}\]
where each \(H_{k}(t)\) is a linear combination of iterated commutators. The first three terms are given by
\[H_{0}(t) =\int_{0}^{t}A(\xi_{1})d\xi_{1}\,,\] \[H_{1}(t) =-\frac{1}{2}\int_{0}^{t}\left[\int_{0}^{\xi_{1}}A(\xi_{2})d\xi_{2},A(\xi_{1})\right]d\xi_{1}\,,\] \[H_{2}(t) =\frac{1}{12}\int_{0}^{t}\left[\int_{0}^{\xi_{1}}A(\xi_{2})d\xi_{2},\left[\int_{0}^{\xi_{1}}A(\xi_{2})d\xi_{2},A(\xi_{1})\right]\right]d\xi_{1}\] \[\qquad\qquad+\frac{1}{4}\int_{0}^{t}\left[\int_{0}^{\xi_{1}}\left[\int_{0}^{\xi_{2}}A(\xi_{3})d\xi_{3},A(\xi_{2})\right]d\xi_{2},A(\xi_{1})\right]d\xi_{1}\,.\]
Note that the Magnus expansion (25) converges absolutely in a given norm for every \(t\geq 0\) such that [27, p. 48]
\[\int_{0}^{t}\|A(\xi)\|d\xi\leq\int_{0}^{2\pi}\frac{d\xi}{4+\xi[1-\cot(\xi/2)]}\approx 1.086868702.\]
In practice, if we work with the Magnus expansion we need a way to handle the infinite series and calculate the iterated integrals. Iserles and Norsett proposed a method based on binary trees [28, 29]. In [27, SS4.3] we can find a method to truncate the series in such a way that one obtains the desired order of convergence. Similarly, [27, SS5] discusses in detail how the iterated integrals can be integrated numerically. In our case, for practical reasons we will implement the Magnus method following the guidelines of Blanes, Casas & Ros [4], which is based on a Taylor series of \(A(t)\) in (20) around the point \(t=h/2\) (recall that, in the Lie group and Lie algebra equations we are setting the initial time \(t_{0}=a=0\)). With this technique one is able to achieve different orders of convergence. In particular, we will use the second and fourth order convergence methods [4, SS3.2], although one can build up to eighth order methods.
The second-order approximation is
\[\exp(\Omega(h))=\exp(ha_{0})+\mathcal{O}(h^{3})\]
and the fourth-order one reads
\[\exp(\Omega(h))=\exp\left(ha_{0}+\frac{1}{12}h^{3}a_{2}-\frac{1}{12}h^{3}[a_{ 0},a_{1}]\right)+\mathcal{O}(h^{5}),\]
where \(\Omega(0)=\mathbf{0}\) and
\[a_{i}=\frac{1}{i!}\left.\frac{d^{i}}{dt^{i}}A(t)\right|_{t=h/2}\qquad i=0,1,2.\]
As we see from the definition, the fourth-order method requires the first and second derivatives of the matrix \(A(t)\). Applying the coordinate change in each iteration (24), we can implement these methods through the following equations:
\[Y_{k+1}=\exp\left[hA\left(t_{k}+\frac{h}{2}\right)\right]Y_{k}.\qquad\text{[ Order 2]} \tag{26}\]
\[Y_{k+1}=\exp\left(ha_{0}+h^{3}(a_{2}-[a_{0},a_{1}])\right)Y_{k},\] \[t_{1/2}=t_{k}+\frac{h}{2},\quad a_{0}=A(t_{1/2}),\quad a_{1}= \frac{\dot{A}(t_{1/2})}{12},\quad a_{2}=\frac{\ddot{A}(t_{1/2})}{24},\;\right\} \qquad\text{[Order 4]} \tag{27}\]
where \(\dot{A}(t_{1/2}),\ddot{A}(t_{1/2})\) stand for the first and second derivatives of \(A(t)\) with respect to \(t\) at \(t_{1/2}\). Note that the convergence order is defined for the Lie group dynamics (20). That is, when we say that the above methods are convergent of order \(2\), for instance, that means \(E_{\mathcal{N}}=||Y_{\mathcal{N}}-Y(b)||=\mathcal{O}(h^{2})\), with \(h\to 0\), for a proper matrix norm. Moreover, it is quite apparent in this method that keeping \(h\) small enough ensures that the argument of the exponential stays in the neighborhood \(U_{\mathfrak{g}}\) where the exponential map is a diffeomorphism onto its image in the Lie group \(G\).
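A minimal implementation of the updates (26) and (27) could look as follows (a Python sketch using SciPy's `expm`; the function names are ours, and the derivative callables `dA` and `d2A` are assumed to be supplied by the user, e.g. analytically or by finite differences).

```
import numpy as np
from scipy.linalg import expm

def magnus2_step(A, Y, t, h):
    """One step of (26): Y_{k+1} = exp(h A(t_k + h/2)) Y_k."""
    return expm(h * A(t + 0.5 * h)) @ Y

def magnus4_step(A, Y, t, h, dA, d2A):
    """One step of (27); dA and d2A return the first and second derivative of A."""
    t12 = t + 0.5 * h
    a0 = A(t12)
    a1 = dA(t12) / 12.0       # a_1 = A'(t_{1/2})/12, as in (27)
    a2 = d2A(t12) / 24.0      # a_2 = A''(t_{1/2})/24, as in (27)
    Omega = h * a0 + h ** 3 * (a2 - (a0 @ a1 - a1 @ a0))
    return expm(Omega) @ Y
```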
#### The Runge-Kutta-Munthe-Kaas method
Changing the coordinate system in each step, as explained in previous sections, the classical RK methods applied to Lie groups give rise to the so-called Runge-Kutta-Munthe-Kaas (RKMK) methods [46, 47]. The equations that implement the method are
\[\left.\begin{aligned} \Theta_{j}&=h\sum_{l=1}^{s}a_{jl}F_{ l},\\ F_{j}&=\mathrm{dexp}_{\Theta_{j}}^{-1}(A(t_{k}+c_{ j}h)),\\ \Theta&=h\sum_{l=1}^{s}b_{l}F_{l},\\ Y_{k+1}&=\exp(\Theta)Y_{k}.\end{aligned}\right\}\qquad j =1,\ldots,s,\]
where the constants \(\{a_{jl}\}_{j,l=1}^{s}\), \(\{b_{l}\}_{l=1}^{s}\), \(\{c_{j}\}_{j=1}^{s}\) can be obtained from a Butcher's table [55, SS11.8] (note that \(s\) is the number of stages of the usual RK methods). Apart from this, we have the consistency condition \(\sum_{l=1}^{s}b_{l}=1\). As the equation that we want to solve comes in the shape of an infinite series, it is necessary to study how we evaluate the function \(\mathrm{dexp}_{\Omega(t)}^{-1}\). For this, we need to use truncated series up to a certain order in such a way that the order of convergence of the underlying classical RK is preserved. If the classical RK is of order \(p\) and the truncated series of (22) is up to order \(j\), such that \(j\geq p-2\), then the RKMK method is of order \(p\) (see [46, 47] and [18, Theorem 8.5, p. 124]). Again, this convergence order refers to the equation in the Lie group (20).
Let us now determine the RKMK method associated with the explicit Runge-Kutta whose Butcher's table is
\[\begin{array}{c|cccc}0&&&&\\ 1/2&1/2&&&\\ 1/2&0&1/2&&\\ 1&0&0&1&\\ \hline&1/6&1/3&1/3&1/6\end{array}\]
that is a Runge-Kutta of order \(4\) (RK4). This implies that we need to truncate the series \(\mathrm{dexp}_{\Omega(t)}^{-1}\) at \(j=2\):
\[\mathrm{dexp}_{\Omega}^{-1}(A)\approx A-\frac{1}{2}[\Omega,A]+\frac{1}{12}[ \Omega,[\Omega,A]]. \tag{28}\]
Then, the RKMK implementation for the given Butcher's table is
\[\left.\begin{aligned} F_{1}&=\mathrm{dexp}_{\mathbf{0}}^{-1}(A(t_{k}))=A(t_{k}),\\ F_{2}&=\mathrm{dexp}_{\frac{1}{2}hF_{1}}^{-1}\left(A\left(t_{k}+\tfrac{1}{2}h\right)\right),\\ F_{3}&=\mathrm{dexp}_{\frac{1}{2}hF_{2}}^{-1}\left(A\left(t_{k}+\tfrac{1}{2}h\right)\right),\\ F_{4}&=\mathrm{dexp}_{hF_{3}}^{-1}(A(t_{k}+h)),\\ \Theta&=\frac{h}{6}(F_{1}+2F_{2}+2F_{3}+F_{4}),\\ Y_{k+1}&=\exp(\Theta)Y_{k},\end{aligned}\right\} \tag{29}\]
where \(\mathrm{dexp}^{-1}\) is (28).
It is interesting to note that the method obtained in the previous section using the Magnus expansion (26) can be retrieved by a RKMK method associated with the following Butcher's table:
\[\begin{array}{c|cc}0&\\ 1/2&1/2&\\ \hline&0&1\end{array}\]
Since it is an order \(2\) method, for the computation of \(\mathrm{dexp}^{-1}\) one can use \(\mathrm{dexp}^{-1}_{\Omega}(A)\approx A\).
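For reference, one step of the RKMK scheme (29) can be sketched as follows (Python/NumPy with SciPy's `expm`; names are ours). It reuses the truncated `dexpinv` helper sketched earlier with `order=2`, which reproduces (28).

```
import numpy as np
from scipy.linalg import expm

def rkmk4_step(A, Y, t, h):
    """One step of the RKMK scheme (29) for dY/dt = A(t) Y, advancing Y_k to Y_{k+1}."""
    F1 = A(t)                                            # dexpinv at Omega = 0 is the identity
    F2 = dexpinv(0.5 * h * F1, A(t + 0.5 * h), order=2)
    F3 = dexpinv(0.5 * h * F2, A(t + 0.5 * h), order=2)
    F4 = dexpinv(h * F3, A(t + h), order=2)
    Theta = (h / 6.0) * (F1 + 2.0 * F2 + 2.0 * F3 + F4)
    return expm(Theta) @ Y
```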
## 4 Numerical methods for automorphic Lie systems
So far, we have established in Definition 2 how to construct an analytical solution of a Lie system on a Lie group, a construction based on the integration of the VG Lie algebra associated with the Lie system. Employing the action \(\varphi\) (2) we can transfer this solution to the manifold \(N\). On the other hand, in Section 3 we have reviewed some methods in the literature providing a numerical approximation of the solution of (20) remaining in the Lie group \(G\) (which accounts for their most remarkable geometrical property).
Now, let us explain how we combine these two elements to construct our new numerical methods and solve (12). Let \(\varphi\) be the Lie group action (13) and consider the solution of the system (20) such that \(Y(0)=I\). Numerically, we have shown that the solutions of (20) can be provided through the approximations \(\{\Omega_{k}\}_{k=0,...,\mathcal{N}}\) of the solution of (22) together with (24), as long as we stay close enough to the origin. As particular examples, we have picked the Magnus and RKMK methods in order to get \(\{\Omega_{k}\}_{k=0,...,\mathcal{N}}\) and, furthermore, the sequence \(\{Y_{k}\}_{k=0,...,\mathcal{N}}\). Next, we establish the scheme providing the numerical solution to Lie systems on Lie groups.
**Definition 3**: _Let us consider a Lie system evolving on a Lie group_
\[\frac{dg}{dt}=\sum_{\alpha=1}^{r}b_{\alpha}(t)X_{\alpha}^{R}(g),\qquad\forall t\in\mathbb{R},\quad g\in G. \tag{30}\]
_and let_
\[\frac{dY}{dt}=A(t)Y,\qquad A(t)=\sum_{\alpha=1}^{r}b_{\alpha}(t)M_{\alpha},\]
_be its associated automorphic Lie system. We define the numerical solution to the Lie system, i.e., \(\{x_{k}\}_{k=0,...,\mathcal{N}}\), via the algorithm given next._
```
1:Initial data:\(\mathcal{N},h\), \(A(t)\), \(Y_{0}=I\), \(\Omega_{0}=\mathbf{0}\).
2:Numerically solve \(\frac{\mathrm{d}\Omega}{dt}=\mathrm{dexp}^{-1}_{\Omega}A(t)\)
3:Output \(\{\Omega_{k}\}_{k=1,...,\mathcal{N}}\)
4:for\(k=1,\dots,\mathcal{N}-1\)do \[Y_{k+1} =e^{\Omega_{k}}Y_{k},\] \[x_{k+1} =\varphi(Y_{k+1},x_{k}),\]
5:endfor
6:Output:\((x_{1},x_{2},...,x_{\mathcal{N}})\).
```
**Algorithm 1** The Lie system on Lie groups
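A compact driver for Algorithm 1 could look as follows (a Python/NumPy sketch; names are ours, and `step` can be, e.g., `magnus2_step` or `rkmk4_step` from the sketches above). Instead of acting with the accumulated element \(Y_{k+1}\), it pushes the one-step increment \(\exp(\Omega_{k})\) to the manifold, \(x_{k+1}=\varphi(\exp(\Omega_{k}),x_{k})\); by the action property this is equivalent to the recovery \(x_{k+1}=\varphi(Y_{k+1},x_{0})\) of Proposition 2, but it keeps the group element that is fed to the action close to the identity.

```
import numpy as np

def solve_lie_system(A, action, x0, t0, h, n_steps, step, dim=3):
    """Sketch of Algorithm 1: Lie group increments pushed to N through the action."""
    x = np.asarray(x0, dtype=float)
    xs, t = [x], t0
    for _ in range(n_steps):
        # One-step increment exp(Omega_k), i.e. (24) restarted from the identity
        increment = step(A, np.eye(dim), t, h)
        # Push the increment to the manifold: x_{k+1} = varphi(exp(Omega_k), x_k)
        x = action(increment, x)
        xs.append(x)
        t += h
    return np.array(xs)
```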
At this point, we would like to highlight an interesting geometric feature of this method. On the one hand, the discretization is based on the numerical solution of the automorphic Lie system underlying the Lie system, which, itself, is founded upon the geometric structure of the latter. This numerical solution remains on \(G\), i.e., \(Y_{k}\in G\) for all \(k\), due to the particular design of the Lie group methods (as long as \(h\) is small). Given this, our construction respects as well the geometrical structure of the Lie system, since, in principle, it evolves on a manifold \(N\). We observe that the iteration
\[x_{k+1}=\varphi(Y_{k+1},x_{k})\]
leads to this preservation, since \(x_{k+1}\in N\) as long as \(Y_{k+1}\in G\) and \(x_{k}\in N\) (we recall that \(\varphi:G\times N\to N\)). Note as well that the direct application of a one-step method (17) on a general Lie system (12) would destroy this structure, even if applied to an ambient Euclidean space.
For future reference, regarding the Lie group methods (24), we shall refer to (26) as Magnus 2, to (27) as Magnus 4 and to (29) as, simply, RKMK (we recall that the last two methods are convergent of order 4).
## 5 Numerical integration on curved spaces
In this section we show the power of our numerical scheme by applying it to a specific example of a \((\kappa_{1},\kappa_{2})\)-parametric family of Lie systems on curved spaces along with their geometric invariants. Naturally, these spaces shall play the role of the manifold \(N\) where the Lie system evolves. Given that their intrinsic geometry is not trivial, they represent an optimal example of how the proposed method is better suited than others to preserve the geometry along the discrete solution. In this section and the following one we carry out the procedure of applying the 7-step method, together with the algorithm in Definition 3, to construct the geometry-preserving numerical method.
For this, we start by considering a two-parametric family of \(3D\) real Lie algebras, denoted by \(\mathfrak{so}_{\kappa_{1},\kappa_{2}}(3)\), which depends on two real parameters, \(\kappa_{1}\) and \(\kappa_{2}\). In the literature these Lie algebras are also known as CK Lie algebras [67, 69, 70, 71, 72, 73, 74] or quasisimple orthogonal algebras [76]. The structure constants of \(\mathfrak{so}_{\kappa_{1},\kappa_{2}}(3)\) in the basis \(\{P_{1},P_{2},J_{12}\}\) are given by
\[[J_{12},P_{1}]=P_{2},\qquad[J_{12},P_{2}]=-\kappa_{2}P_{1},\qquad[P_{1},P_{2} ]=\kappa_{1}J_{12}. \tag{31}\]
It is possible to rescale the basis of \(\mathfrak{so}_{\kappa_{1},\kappa_{2}}(3)\) and reduce each parameter \(\kappa_{a}\) (\(a=1,2\)) to either \(+1\), \(0\) or \(-1\). The vanishing of any \(\kappa_{a}\) is equivalent to applying an Inonu-Wigner contraction [75]. The Lie algebra \(\mathfrak{so}_{\kappa_{1},\kappa_{2}}(3)\) is isomorphic to the matrix Lie algebra of \(3\times 3\) real matrices \(M\) satisfying [72]
\[M^{T}\mathbf{I}_{\boldsymbol{\kappa}}\,+\mathbf{I}_{\boldsymbol{\kappa}}M=0, \qquad\mathbf{I}_{\boldsymbol{\kappa}}:=\mathrm{diag}(1,\kappa_{1},\kappa_{1} \kappa_{2}),\qquad\boldsymbol{\kappa}:=(\kappa_{1},\kappa_{2}). \tag{32}\]
If \(\mathbf{I}_{\boldsymbol{\kappa}}\) is not degenerate, then this space is indeed the so-called indefinite orthogonal Lie algebra \(\mathfrak{so}(p,q)\), where \(p\) and \(q\) are the number of positive and negative eigenvalues of the matrix \(\mathbf{I}_{\boldsymbol{\kappa}}\). In particular, the elements of the basis \(\{P_{1},P_{2},J_{12}\}\) can be identified with the matrices
\[P_{1}=-\kappa_{1}e_{01}+e_{10},\quad P_{2}=-\kappa_{1}\kappa_{2}e_{02}+e_{20},\quad J_{12}=-\kappa_{2}e_{12}+e_{21}, \tag{33}\]
where \(e_{ij}\) is the \(3\times 3\) matrix with a single non-zero entry \(1\) at row \(i\) and column \(j\) (\(i,j=0,1,2\)).
The elements of \(\mathfrak{so}_{\kappa_{1},\kappa_{2}}(3)\) generate, by matrix exponentiation, the so-called CK Lie group \(\mathrm{SO}_{\kappa_{1},\kappa_{2}}(3)\). The matrix exponentials of \(\{P_{1},P_{2},J_{12}\}\) lead to the following one-parametric subgroups
of the CK Lie group \(\mathrm{SO}_{\kappa_{1},\kappa_{2}}(3)\):
\[\mathrm{e}^{\lambda_{1}P_{1}}=\left(\begin{array}{ccc}\mathrm{C}_{\kappa_{1}}(\lambda_{1})&-\kappa_{1}\,\mathrm{S}_{\kappa_{1}}(\lambda_{1})&0\\ \mathrm{S}_{\kappa_{1}}(\lambda_{1})&\mathrm{C}_{\kappa_{1}}(\lambda_{1})&0\\ 0&0&1\end{array}\right),\qquad\mathrm{e}^{\lambda_{2}P_{2}}=\left(\begin{array}{ccc}\mathrm{C}_{\kappa_{1}\kappa_{2}}(\lambda_{2})&0&-\kappa_{1}\kappa_{2}\,\mathrm{S}_{\kappa_{1}\kappa_{2}}(\lambda_{2})\\ 0&1&0\\ \mathrm{S}_{\kappa_{1}\kappa_{2}}(\lambda_{2})&0&\mathrm{C}_{\kappa_{1}\kappa_{2}}(\lambda_{2})\end{array}\right),\qquad\mathrm{e}^{\lambda_{3}J_{12}}=\left(\begin{array}{ccc}1&0&0\\ 0&\mathrm{C}_{\kappa_{2}}(\lambda_{3})&-\kappa_{2}\,\mathrm{S}_{\kappa_{2}}(\lambda_{3})\\ 0&\mathrm{S}_{\kappa_{2}}(\lambda_{3})&\mathrm{C}_{\kappa_{2}}(\lambda_{3})\end{array}\right), \tag{34}\]
where the so-called \(\kappa\)-dependent cosine and sine functions read [72, 73, 74]:
\[\mathrm{C}_{\kappa}(\lambda):=\sum_{l=0}^{\infty}(-\kappa)^{l}\frac{\lambda^ {2l}}{(2l)!}=\left\{\begin{array}{ccc}\cos\sqrt{\kappa}\,\lambda&\kappa>0 \\ 1&\kappa=0\\ \mathrm{ch}\,\sqrt{-\kappa}\,\lambda&\kappa<0\end{array}\right.,\]
\[\mathrm{S}_{\kappa}(\lambda):=\sum_{l=0}^{\infty}(-\kappa)^{l}\frac{\lambda^ {2l+1}}{(2l+1)!}=\left\{\begin{array}{ccc}\frac{1}{\sqrt{\kappa}}\sin\sqrt{ \kappa}\,\lambda&\kappa>0\\ \lambda&\kappa=0\\ \frac{1}{\sqrt{-\kappa}}\mathrm{sh}\,\sqrt{-\kappa}\,\lambda&\kappa<0\end{array} \right..\]
From them, the \(\kappa\)-tangent and the \(\kappa\)-versed sine (or versine) take the form
\[\mathrm{T}_{\kappa}(\lambda):=\frac{\mathrm{S}_{\kappa}(\lambda)}{\mathrm{C}_ {\kappa}(\lambda)},\qquad\mathrm{V}_{\kappa}(\lambda):=\frac{1}{\kappa}\left( 1-\,\mathrm{C}_{\kappa}(\lambda)\right). \tag{35}\]
These \(\kappa\)-functions cover both the usual circular (\(\kappa>0\)) and hyperbolic (\(\kappa<0\)) trigonometric functions. In the case \(\kappa=0\), the previous functions reduce to the parabolic ones \(\,\mathrm{C}_{0}(\lambda)=1\), \(\,\mathrm{S}_{0}(\lambda)=\,\mathrm{T}_{0}(\lambda)=\lambda\), and \(\,\mathrm{V}_{0}(\lambda)=\lambda^{2}/2\).
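In code, the \(\kappa\)-functions are just a case distinction on the sign of \(\kappa\). The sketch below (NumPy; names are ours) will be reused in the examples that follow.

```
import numpy as np

def Ck(kappa, lam):
    """kappa-cosine C_kappa(lambda)."""
    if kappa > 0:
        return np.cos(np.sqrt(kappa) * lam)
    if kappa < 0:
        return np.cosh(np.sqrt(-kappa) * lam)
    return 1.0

def Sk(kappa, lam):
    """kappa-sine S_kappa(lambda)."""
    if kappa > 0:
        return np.sin(np.sqrt(kappa) * lam) / np.sqrt(kappa)
    if kappa < 0:
        return np.sinh(np.sqrt(-kappa) * lam) / np.sqrt(-kappa)
    return lam

def Tk(kappa, lam):
    """kappa-tangent T_kappa(lambda) = S_kappa / C_kappa."""
    return Sk(kappa, lam) / Ck(kappa, lam)

def Vk(kappa, lam):
    """kappa-versine; its kappa -> 0 limit is lambda^2 / 2."""
    return lam ** 2 / 2.0 if kappa == 0 else (1.0 - Ck(kappa, lam)) / kappa
```

A quick check of the identity \(\mathrm{C}_{\kappa}^{2}(\lambda)+\kappa\,\mathrm{S}_{\kappa}^{2}(\lambda)=1\) given below, e.g. `Ck(0.8, 1.3)**2 + 0.8 * Sk(0.8, 1.3)**2`, should return 1 up to round-off.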
Some relations for the above \(\kappa\)-functions read
\[\mathrm{C}_{\kappa}^{2}(\lambda)+\kappa\,\mathrm{S}_{\kappa}^{2}(\lambda)=1, \qquad\mathrm{C}_{\kappa}(2\lambda)=\,\mathrm{C}_{\kappa}^{2}(\lambda)-\kappa \,\mathrm{S}_{\kappa}^{2}(\lambda),\qquad\mathrm{S}_{\kappa}(2\lambda)=2\, \mathrm{S}_{\kappa}(\lambda)\,\mathrm{C}_{\kappa}(\lambda),\]
and their derivatives [74] are given by
\[\frac{\mathrm{d}}{\mathrm{d}\lambda}\,\mathrm{C}_{\kappa}(\lambda)=-\kappa\, \mathrm{S}_{\kappa}(\lambda),\ \ \ \ \frac{\mathrm{d}}{\mathrm{d}\lambda}\,\mathrm{S}_{\kappa}(\lambda)=\, \mathrm{C}_{\kappa}(\lambda),\ \ \ \ \frac{\mathrm{d}}{\mathrm{d}\lambda}\,\mathrm{T}_{\kappa}(\lambda)=\frac{1}{ \mathrm{C}_{\kappa}^{2}(\lambda)},\ \ \ \ \frac{\mathrm{d}}{\mathrm{d}\lambda}\,\mathrm{V}_{\kappa}(\lambda)=\, \mathrm{S}_{\kappa}(\lambda). \tag{36}\]
Let \(H_{0}:=\mathrm{SO}_{\kappa_{2}}(2)\) be the Lie subgroup of \(\mathrm{SO}_{\kappa_{1},\kappa_{2}}(3)\) obtained by matrix exponentiation of the Lie algebra \(\mathfrak{h}_{0}\). The CK family of \(2D\) homogeneous spaces is defined by the quotient
\[\mathbf{S}_{[\kappa_{1}],\kappa_{2}}^{2}:=\mathrm{SO}_{\kappa_{1},\kappa_{2}}( 3)/\mathrm{SO}_{\kappa_{2}}(2). \tag{37}\]
The (possibly degenerate) metric defined by \(\mathbf{I}_{\kappa}\) (32) on \(T_{\varepsilon}\mathrm{SO}_{\kappa_{1},\kappa_{2}}(3)\simeq\mathfrak{so}_{ \kappa_{1},\kappa_{2}}(3)\) can be extended by right translation to a metric on the whole \(SO_{\kappa_{1},\kappa_{2}}(3)\) and then projected onto \(\mathbf{S}_{[\kappa_{1}],\kappa_{2}}^{2}\). Then, the CK family becomes a symmetric space relative to the obtained metric. The contraction parameter \(\kappa_{1}\) becomes the constant (Gaussian) _curvature_ of the space. The second parameter \(\kappa_{2}\) determines the _signature_ of the metric through \(\mathrm{diag}(+,\kappa_{2})\).
The matrix realization (34) enables us to identify the elements of \(\mathrm{SO}_{\kappa_{1},\kappa_{2}}(3)\) with isometries of the bilinear form \(\mathbf{I}_{\kappa}\) (32). More specifically, given a \(3\times 3\) matrix \(g\), it follows that
\[g\in\mathrm{SO}_{\kappa_{1},\kappa_{2}}(3)\Rightarrow g^{T}\mathbf{I}_{\kappa} \,g=\mathbf{I}_{\kappa}.\]
This allows us to consider the Lie group action of \(\mathrm{SO}_{\kappa_{1},\kappa_{2}}(3)\) on \(\mathbb{R}^{3}\) as isometries of \(\mathbf{I}_{\boldsymbol{\kappa}}\).
The subgroup \(\mathrm{SO}_{\kappa_{2}}(2)=\langle\mathrm{e}^{\lambda_{3}J_{12}}\rangle\) is the isotropy subgroup of the point \(O:=(1,0,0)\), which is taken as the _origin_ in the space \(\mathbf{S}^{2}_{[\kappa_{1}],\kappa_{2}}\). Hence, \(\mathrm{SO}_{\kappa_{1},\kappa_{2}}(3)\) becomes an isometry group of the space \(\mathbf{S}^{2}_{[\kappa_{1}],\kappa_{2}}\), in such a manner that \(J_{12}\) is a rotation generator, while \(P_{1}\) and \(P_{2}\) move \(O\) along two basic geodesics \(l_{1}\) and \(l_{2}\), which are orthogonal at \(O\), so behaving as translation generators.
The orbit of \(O\) is contained in the submanifold given by \(\mathbf{I}_{\boldsymbol{\kappa}}\) of the form
\[\Sigma_{\boldsymbol{\kappa}}:=\{v:=(x_{0},x_{1},x_{2})\in\mathbb{R}^{3}: \mathbf{I}_{\boldsymbol{\kappa}}(v,v)=\ x_{0}^{2}+\kappa_{1}x_{1}^{2}+\kappa_ {1}\kappa_{2}x_{2}^{2}=1\}. \tag{38}\]
This orbit, namely the connected component of \(\Sigma_{\boldsymbol{\kappa}}\) containing the point \(O\), can be identified with the space \(\mathbf{S}^{2}_{[\kappa_{1}],\kappa_{2}}\). The coordinates \(\{x_{0},x_{1},x_{2}\}\) on \(\mathbb{R}^{3}\), satisfying the constraint (38) on \(\Sigma_{\boldsymbol{\kappa}}\), are called _ambient_. In these variables, the metric on \(\mathbf{S}^{2}_{[\kappa_{1}],\kappa_{2}}\) comes from the flat ambient metric in \(\mathbb{R}^{3}\) divided by the curvature \(\kappa_{1}\) and restricted to \(\Sigma_{\boldsymbol{\kappa}}\), namely
\[\mathrm{d}s^{2}_{\boldsymbol{\kappa}}:=\left.\frac{1}{\kappa_{1}}\left( \mathrm{d}x_{0}^{2}+\kappa_{1}\mathrm{d}x_{1}^{2}+\kappa_{1}\kappa_{2} \mathrm{d}x_{2}^{2}\right)\right|_{\Sigma_{\boldsymbol{\kappa}}}=\frac{ \kappa_{1}\left(x_{1}\mathrm{d}x_{1}+\kappa_{2}x_{2}\mathrm{d}x_{2}\right)^{2 }}{1-\kappa_{1}x_{1}^{2}-\kappa_{1}\kappa_{2}x_{2}^{2}}+\mathrm{d}x_{1}^{2}+ \kappa_{2}\mathrm{d}x_{2}^{2}. \tag{39}\]
It is worth noting that if \(\kappa_{1}=0\), then \(\Sigma_{\boldsymbol{\kappa}}\) is given by two connected components with \(x_{0}\in\{-1,1\}\) and \(\mathrm{d}s^{2}_{\boldsymbol{\kappa}}\) is well-defined.
The ambient coordinates can be parametrized on \(\Sigma_{\boldsymbol{\kappa}}\) in terms of two intrinsic variables in different ways (see e.g. [73]). In particular, let us introduce the so-called _geodesic parallel_\(\{x,y\}\) and _geodesic polar_\(\{r,\phi\}\) coordinates of a point \(Q:=(x_{0},x_{1},x_{2})\) in \(\mathbf{S}^{2}_{[\kappa_{1}],\kappa_{2}}\) which are obtained through the following action of the one-parametric subgroups (34) on \(O\)[73]:
\[(x_{0},x_{1},x_{2})^{T}=\exp(xP_{1})\exp(yP_{2})O^{T}=\exp(\phi J_{12})\exp(rP _{1})O^{T},\]
yielding
\[x_{0}=\,\mathrm{C}_{\kappa_{1}}(x)\,\mathrm{C}_{\kappa_{1}\kappa _{2}}(y)=\,\mathrm{C}_{\kappa_{1}}(r),\] \[x_{1}=\,\mathrm{S}_{\kappa_{1}}(x)\,\mathrm{C}_{\kappa_{1}\kappa _{2}}(y)=\,\mathrm{S}_{\kappa_{1}}(r)\,\mathrm{C}_{\kappa_{2}}(\phi),\] \[x_{2}=\,\mathrm{S}_{\kappa_{1}\kappa_{2}}(y)=\,\mathrm{S}_{ \kappa_{1}}(r)\,\mathrm{S}_{\kappa_{2}}(\phi). \tag{40}\]
By introducing these relations in the metric (39) and applying (36), we recover the usual (curved) metrics given by
\[\mathrm{d}s^{2}_{\boldsymbol{\kappa}}=\,\mathrm{C}^{2}_{\kappa_{1}\kappa_{2}} (y)\mathrm{d}x^{2}+\kappa_{2}\mathrm{d}y^{2}=\mathrm{d}r^{2}+\kappa_{2}\, \mathrm{S}^{2}_{\kappa_{1}}(r)\mathrm{d}\phi^{2}. \tag{41}\]
According to the different values of \((\kappa_{1},\kappa_{2})\), we can classify different spaces. For \(\kappa_{2}>0\) we have Riemannian spaces. If \(\kappa_{2}>0\) and \(\kappa_{1}<0\), it leads to a two-sheeted hyperboloid. The upper sheet of the hyperboloid is called \(\mathbf{H^{2}}\), namely the part with \(x_{0}\geq 1\), the Lobachevsky space. The contraction \(\kappa_{1}=0\) gives rise to two Euclidean planes \(x_{0}=\pm 1\). We will call Euclidean space \(\mathbf{E^{2}}\) the one with \(x_{0}=+1\). When \(\kappa_{2}<0\), we have pseudo-Riemannian spaces or Lorentzian spacetimes. In this case, for Gaussian curvature \(\kappa_{1}>0\) we obtain the \((1+1)\)D anti-de Sitter spacetime \(\mathbf{AdS^{1+1}}\); if \(\kappa_{1}<0\), we find the \((1+1)\)D de Sitter spacetime \(\mathbf{dS^{1+1}}\); or the flat case with \(\kappa_{1}=0\), aka the \((1+1)\)D Minkowskian spacetime \(\mathbf{M^{1+1}}\). In all cases for \(\kappa_{2}<0\), the \(J_{12}\), \(P_{1}\), and \(P_{2}\) correspond to the infinitesimal generators of boosts, time translations, and spatial translations, respectively. In the case that \(\kappa_{2}=0\) (\(c=\infty\)), we encounter Semi-Riemannian spaces or Newtonian spacetimes, in which the metric (39) is degenerate and the kernel of the metric gives rise to an integrable foliation of \(\mathbf{S}^{2}_{[\kappa_{1}],0}\) that is invariant under the action of the CK group \(\mathrm{SO}_{\kappa_{1},0}(3)\) on \(\mathbf{S}^{2}_{[\kappa_{1}],0}\). There appears a
well-defined subsidiary metric \(\mathrm{d}s^{\prime 2}:=\mathrm{d}s^{2}_{\mathbf{\kappa}}/\kappa_{2}\) restricted to each leaf, which in the coordinates \((x,y)\) read [73]
\[\mathrm{d}s^{2}=\mathrm{d}x^{2},\qquad\mathrm{d}s^{\prime 2}=\mathrm{d}y^{2} \quad\text{on}\quad x=\,\text{constant}.\]
For \(\kappa_{1}>0\) we find the \((1+1)\)D oscillating Newton-Hook (NH) spacetime \(\mathbf{NH^{1+1}_{+}}\), and for \(\kappa_{1}<0\) we obtain the \((1+1)\)D expanding NH spacetime \(\mathbf{NH^{1+1}_{-}}\). The flat space with \(\kappa_{1}=0\) is just the Galilean \(\mathbf{G^{1+1}}\).
### A class of Lie systems on curved spaces
We shall hereafter make extensive use of the shorthand notation \(\mathbf{\kappa}:=(\kappa_{1},\kappa_{2})\). Our procedure consists in defining a Lie system \(X_{\mathbf{\kappa}}\) possessing a Vessiot-Guldberg Lie algebra \(V_{\mathbf{\kappa}}\) consisting of infinitesimal symmetries of the metric of the CK space \(\mathbf{S}^{2}_{[\kappa_{1}],\kappa_{2}}\). The fundamental vector fields of the Lie group action of \(\mathrm{SO}_{\mathbf{\kappa}}(3)\) on \(\mathbb{R}^{3}\) by isometries of \(\mathbf{I}_{\mathbf{\kappa}}\) are Lie symmetries of \(\mathrm{d}s^{2}_{\mathbf{\kappa}}\). Since the action is linear, the fundamental vector fields can be obtained straightforwardly from the \(3D\) matrix representation (33). In ambient coordinates \((x_{0},x_{1},x_{2})\), they read [73],
\[P_{1}:=\kappa_{1}x_{1}\frac{\partial}{\partial x_{0}}-x_{0}\frac{\partial}{ \partial x_{1}},\qquad P_{2}:=\kappa_{1}\kappa_{2}x_{2}\frac{\partial}{ \partial x_{0}}-x_{0}\frac{\partial}{\partial x_{2}},\qquad J_{12}:=\kappa_{2} x_{2}\frac{\partial}{\partial x_{1}}-x_{1}\frac{\partial}{\partial x_{2}}. \tag{42}\]
Therefore, the most general Lie system in these ambient coordinates takes the form
\[X_{t}=b_{1}(t)P_{1}+b_{2}(t)P_{2}+b_{12}(t)J_{12}, \tag{43}\]
where the vector fields \(P_{1},P_{2},J_{12}\) correspond with those in (42), and the associated VG Lie algebra is (31). According to the Lie system's theory, the integral curves of the time-dependent vector field (43) are described by the system of ordinary differential equations
\[\left\{\begin{aligned} \frac{dx_{0}}{dt}&=\kappa_{1}b_{1}(t )x_{1}+\kappa_{1}\kappa_{2}b_{2}(t)x_{2},\\ \frac{dx_{1}}{dt}&=-b_{1}(t)x_{0}+\kappa_{2}b_{12}(t )x_{2},\\ \frac{dx_{2}}{dt}&=-b_{2}(t)x_{0}-b_{12}(t)x_{1}. \end{aligned}\right. \tag{44}\]
and it is easy to observe that
\[\frac{d(x_{0}^{2}+\kappa_{1}x_{1}^{2}+\kappa_{1}\kappa_{2}x_{2}^{2})}{dt}=2 \left(x_{0}\frac{dx_{0}}{dt}+\kappa_{1}x_{1}\frac{dx_{1}}{dt}+\kappa_{1}\kappa _{2}x_{2}\frac{dx_{2}}{dt}\right)=0,\]
which implies that \(I(x_{0},x_{1},x_{2})=x_{0}^{2}+\kappa_{1}x_{1}^{2}+\kappa_{1}\kappa_{2}x_{2}^{2}\) is an invariant of the system. This invariant will be of utmost importance in order to show the efficiency of our method at preserving geometric invariants under numerical integration.
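In ambient coordinates, the system (44) and its invariant translate directly into code (NumPy sketch, names ours); the `invariant` helper will be used below to monitor the conservation of \(I\) along the numerical solution.

```
import numpy as np

def ck_system(t, x, b1, b2, b12, k1, k2):
    """Right-hand side of (44) in ambient coordinates (x0, x1, x2)."""
    x0, x1, x2 = x
    return np.array([k1 * b1(t) * x1 + k1 * k2 * b2(t) * x2,
                     -b1(t) * x0 + k2 * b12(t) * x2,
                     -b2(t) * x0 - b12(t) * x1])

def invariant(x, k1, k2):
    """I(x) = x0^2 + k1 x1^2 + k1 k2 x2^2, conserved along (44)."""
    return x[0] ** 2 + k1 * x[1] ** 2 + k1 * k2 * x[2] ** 2
```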
Now, consider the following set of matrices
\[M_{P_{1}}=\begin{pmatrix}0&-\kappa_{1}&0\\ 1&0&0\\ 0&0&0\end{pmatrix},\qquad M_{P_{2}}=\begin{pmatrix}0&0&-\kappa_{1}\kappa_{2} \\ 0&0&0\\ 1&0&0\end{pmatrix}, \tag{45}\]
\[M_{J_{12}}=\begin{pmatrix}0&0&0\\ 0&0&-\kappa_{2}\\ 0&1&0\end{pmatrix}, \tag{46}\]
that have the same commutation relations as the vector fields in (31)
\[[M_{P_{1}},M_{P_{2}}]=\kappa_{1}M_{J_{12}},\qquad[M_{J_{12}},M_{P_{1}}]=M_{P_{2}}, \qquad[M_{J_{12}},M_{P_{2}}]=-\kappa_{2}M_{P_{1}}.\]
These are the matrices that we will use as a basis of our Lie algebra. If we calculate the exponential of each of these matrices, we obtain (34).
Now, we multiply these exponential matrices, and we obtain the expression of a group element in canonical coordinates of the second kind:
\[\exp(\lambda_{1}M_{P_{1}})\exp(\lambda_{2}M_{P_{2}})\exp(\lambda_{ 3}M_{J_{12}})=\\ \begin{pmatrix}\operatorname{C}_{\kappa_{1}}(\lambda_{1}) \operatorname{C}_{\kappa_{1}\kappa_{2}}(\lambda_{2})&*&*\\ \operatorname{S}_{\kappa_{1}}(\lambda_{1})\operatorname{C}_{\kappa_{1}\kappa _{2}}(\lambda_{2})&*&*\\ \operatorname{S}_{\kappa_{1}\kappa_{2}}(\lambda_{2})&\operatorname{C}_{\kappa _{1}\kappa_{2}}(\lambda_{2})\operatorname{S}_{\kappa_{2}}(\lambda_{3})& \operatorname{C}_{\kappa_{1}\kappa_{2}}(\lambda_{2})\operatorname{C}_{\kappa _{2}}(\lambda_{3})\end{pmatrix}, \tag{47}\]
(we have omitted some matrix entries that we won't need in our calculations). In this way, given a point on the group, we can work out the parameters \(\{\lambda_{1},\lambda_{2},\lambda_{3}\}\). First, we take the entries \(g_{11}\) and \(g_{21}\) and define \(g=g_{21}/g_{11}\). So, \(\lambda_{1}\) can be expressed as
\[\lambda_{1}=\left\{\begin{array}{ll}\dfrac{\arctan g\sqrt{\kappa_{1}}}{\sqrt {\kappa_{1}}}&\text{if }\kappa_{1}>0\\ g&\text{if }\kappa_{1}=0\\ \dfrac{1}{2\sqrt{-\kappa_{1}}}\log\left(\dfrac{1+g\sqrt{-\kappa_{1}}}{1-g \sqrt{-\kappa_{1}}}\right)&\text{if }\kappa_{1}<0.\end{array}\right. \tag{48}\]
With the entry \(g_{31}\) we can obtain \(\lambda_{2}\) as
\[\lambda_{2}=\left\{\begin{array}{ll}\dfrac{\arcsin g_{31}\sqrt{\kappa_{1}\kappa_{2}}}{\sqrt{\kappa_{1}\kappa_{2}}}&\text{if }\kappa_{1}\kappa_{2}>0\\ g_{31}&\text{if }\kappa_{1}\kappa_{2}=0\\ \dfrac{\log\left(g_{31}\sqrt{-\kappa_{1}\kappa_{2}}+\sqrt{-g_{31}^{2}\kappa_{1}\kappa_{2}+1}\right)}{\sqrt{-\kappa_{1}\kappa_{2}}}&\text{if }\kappa_{1}\kappa_{2}<0.\end{array}\right. \tag{49}\]
And lastly, analogously, defining \(g=g_{32}/g_{33}\) we can obtain \(\lambda_{3}\) as
\[\lambda_{3}=\left\{\begin{array}{ll}\dfrac{\arctan g\sqrt{\kappa_{2}}}{ \sqrt{\kappa_{2}}}&\text{if }\kappa_{2}>0\\ g&\text{if }\kappa_{2}=0\\ \dfrac{1}{2\sqrt{-\kappa_{2}}}\log\left(\dfrac{1+g\sqrt{-\kappa_{2}}}{1-g \sqrt{-\kappa_{2}}}\right)&\text{if }\kappa_{2}<0.\end{array}\right. \tag{50}\]
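Equations (48)-(50) amount to inverting a \(\kappa\)-tangent or \(\kappa\)-sine. A possible implementation reading the entries \(g_{11},g_{21},g_{31},g_{32},g_{33}\) of a group element (0-based indexing in NumPy; helper names are ours) is the following.

```
import numpy as np

def _lambda_from_tangent(g, kappa):
    """Invert T_kappa, i.e. the common pattern of (48) and (50) with g = T_kappa(lambda)."""
    if kappa > 0:
        return np.arctan(g * np.sqrt(kappa)) / np.sqrt(kappa)
    if kappa < 0:
        s = np.sqrt(-kappa)
        return np.log((1 + g * s) / (1 - g * s)) / (2 * s)
    return g

def canonical_coordinates(Y, k1, k2):
    """Second-kind canonical coordinates (lambda1, lambda2, lambda3) of Y, eqs. (48)-(50)."""
    lam1 = _lambda_from_tangent(Y[1, 0] / Y[0, 0], k1)
    g31 = Y[2, 0]                              # entry S_{k1 k2}(lambda2) in (47)
    if k1 * k2 > 0:
        lam2 = np.arcsin(g31 * np.sqrt(k1 * k2)) / np.sqrt(k1 * k2)
    elif k1 * k2 < 0:
        s = np.sqrt(-k1 * k2)
        lam2 = np.log(g31 * s + np.sqrt(g31 ** 2 * (-k1 * k2) + 1)) / s
    else:
        lam2 = g31
    lam3 = _lambda_from_tangent(Y[2, 1] / Y[2, 2], k2)
    return lam1, lam2, lam3
```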
The next step is to integrate the fields. To find the flow associated with \(P_{1}\) we have to solve the system of equations \(\{dx_{0}/dt=\kappa_{1}x_{1},\ dx_{1}/dt=-x_{0},\ dx_{2}/dt=0\}\) with initial conditions \(\{x_{0}(0),\ x_{1}(0),\ x_{2}(0)\}\). The solution is
\[\begin{cases}x_{0}(t)=x_{0}(0)\operatorname{C}_{\kappa_{1}}(t)+\kappa_{1}x_{1 }(0)\operatorname{S}_{\kappa_{1}}(t)\\ x_{1}(t)=x_{1}(0)\operatorname{C}_{\kappa_{1}}(t)-x_{0}(0)\operatorname{S}_{ \kappa_{1}}(t)\\ x_{2}(t)=x_{2}(0),\end{cases}\]
in such a way that \(\Phi_{P_{1}}:\mathbb{R}\times\mathbb{R}^{3}\to\mathbb{R}^{3}\) associated with \(P_{1}\) can be expressed in the following way:
\[\Phi_{P_{1}}(t,(x_{0}(0),x_{1}(0),x_{2}(0)))=(x_{0},x_{1},x_{2}),\ \text{with}\ \begin{cases}x_{0}=x_{0}(0) \operatorname{C}_{\kappa_{1}}(t)+\kappa_{1}x_{1}(0)\operatorname{S}_{\kappa_{ 1}}(t)\\ x_{1}=x_{1}(0)\operatorname{C}_{\kappa_{1}}(t)-x_{0}(0)\operatorname{S}_{\kappa_{ 1}}(t)\\ x_{2}=x_{2}(0).\end{cases} \tag{51}\]
Similarly, we calculate the flows for \(P_{2}\) and \(J_{12}\).
\[\Phi_{P_{2}}(t,(x_{0}(0),x_{1}(0),x_{2}(0)))=(x_{0},x_{1},x_{2}), \text{ with }\begin{cases}x_{0}=x_{0}(0)\operatorname{C}_{\kappa_{1}\kappa_{2}}(t)+\kappa_{1} \kappa_{2}x_{2}(0)\operatorname{S}_{\kappa_{1}\kappa_{2}}(t)\\ x_{1}=x_{1}(0)\\ x_{2}=x_{2}(0)\operatorname{C}_{\kappa_{1}\kappa_{2}}(t)-x_{0}(0) \operatorname{S}_{\kappa_{1}\kappa_{2}}(t),\end{cases} \tag{52}\] \[\Phi_{J_{12}}(t,(x_{0}(0),x_{1}(0),x_{2}(0)))=(x_{0},x_{1},x_{2}), \text{ with }\begin{cases}x_{0}=x_{0}(0)\\ x_{1}=x_{1}(0)\operatorname{C}_{\kappa_{2}}(t)+\kappa_{2}x_{2}(0)\operatorname{ S}_{\kappa_{2}}(t)\\ x_{2}=x_{2}(0)\operatorname{C}_{\kappa_{2}}(t)-x_{1}(0)\operatorname{S}_{\kappa_{2}} (t).\end{cases} \tag{53}\]
The last element we need to apply our scheme is the Lie group action. We briefly review how we construct it in our particular case. Given an element of the algebra, the canonical coordinates of the second kind permit us to obtain a point in the group (for coordinates near the origin of the algebra and group elements near the neutral element, respectively). That is, we have a correspondence between a point in the algebra \(M\in\mathfrak{g}\) determined by the coordinates \((\lambda_{1},\lambda_{2},\lambda_{3})\)
\[M=\lambda_{1}M_{P_{1}}+\lambda_{2}M_{P_{2}}+\lambda_{3}M_{J_{12}}\in\mathfrak{g}\]
and the point \(g\in G\) determined by the same coordinates
\[g=\exp(\lambda_{1}M_{P_{1}})\exp(\lambda_{2}M_{P_{2}})\exp(\lambda_{3}M_{J_{1 2}})\in G.\]
With all of this, by definition, the Lie group action \(\varphi:G\times\mathbb{R}^{3}\to\mathbb{R}^{3}\) in a point \(g\in G\) and \(\boldsymbol{x}(0)=(x_{0}(0),x_{1}(0),x_{2}(0))\in\mathbb{R}^{3}\) is computed as
\[\varphi(g,\boldsymbol{x}(0))=\varphi(\exp(\lambda_{1}M_{P_{1}}) \exp(\lambda_{2}M_{P_{2}})\exp(\lambda_{3}M_{J_{12}}),\boldsymbol{x}(0))=\\ \varphi(\exp(\lambda_{1}M_{P_{1}}),\varphi(\exp(\lambda_{2}M_{P_{ 2}}),\varphi(\exp(\lambda_{3}M_{J_{12}}),\boldsymbol{x}(0))))=\\ \Phi_{P_{1}}(\lambda_{1},\Phi_{P_{2}}(\lambda_{2},\Phi_{J_{12}}( \lambda_{3},\boldsymbol{x}(0)))).\]
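Putting the pieces together, the flows (51)-(53) and the composed action can be coded as below (a NumPy sketch that reuses the \(\kappa\)-functions `Ck`, `Sk` and the `canonical_coordinates` helper introduced above; all names are ours).

```
import numpy as np

def flow_P1(lam, x, k1, k2):
    """Flow (51) of P1 in ambient coordinates."""
    x0, x1, x2 = x
    return np.array([x0 * Ck(k1, lam) + k1 * x1 * Sk(k1, lam),
                     x1 * Ck(k1, lam) - x0 * Sk(k1, lam),
                     x2])

def flow_P2(lam, x, k1, k2):
    """Flow (52) of P2."""
    x0, x1, x2 = x
    k = k1 * k2
    return np.array([x0 * Ck(k, lam) + k * x2 * Sk(k, lam),
                     x1,
                     x2 * Ck(k, lam) - x0 * Sk(k, lam)])

def flow_J12(lam, x, k1, k2):
    """Flow (53) of J12."""
    x0, x1, x2 = x
    return np.array([x0,
                     x1 * Ck(k2, lam) + k2 * x2 * Sk(k2, lam),
                     x2 * Ck(k2, lam) - x1 * Sk(k2, lam)])

def action(Y, x, k1, k2):
    """Group action varphi(Y, x) as the composition of the three flows."""
    lam1, lam2, lam3 = canonical_coordinates(Y, k1, k2)
    return flow_P1(lam1, flow_P2(lam2, flow_J12(lam3, x, k1, k2), k1, k2), k1, k2)
```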
At this point it is interesting to observe something. It is easy to spot that the three vector fields \(P_{1}\), \(P_{2}\) and \(J_{12}\) share the same invariant with the system (44). That is,
\[I(x_{0},x_{1},x_{2})=I\left(\Phi_{i}(t,(x_{0}(0),x_{1}(0),x_{2}(0)))\right) \qquad\forall t\in\mathbb{R},\quad\forall(x_{0}(0),x_{1}(0),x_{2}(0))\in \mathbb{R}^{3}, \tag{54}\]
for each of the flows \(\Phi_{i}\) associated with \(\{P_{1},P_{2},J_{12}\}\). As we have just depicted, the action from the Lie group to the manifold is constructed as the composition of these three flows. Moreover, although we are using the "Euclidean-like" notation \(\mathbb{R}^{3}\) for our manifold \(N\) in this example, it carries a nontrivial geometric structure, as we have already shown. Therefore, our numerical scheme preserves the invariant.
Given all these elements, we can implement our numerical scheme. Instead of solving (44), we will solve the following differential equation on the Lie group
\[\frac{dY}{dt}=A(t)Y(t),\qquad Y(0)=I, \tag{55}\]
with \(A(t)=b_{1}(t)M_{P_{1}}+b_{2}(t)M_{P_{2}}+b_{12}(t)M_{J_{12}}\).
Let us assume that we are in the \(k\)-th iteration. This means that we know \(\boldsymbol{x}_{k}\) and \(Y_{k}\). To calculate the next point, we apply the numerical scheme (55) with the initial condition \(Y(0)=Y_{k}\), obtaining \(Y_{k+1}\), and we are then able to compute \(\boldsymbol{x}_{k+1}=\varphi(Y_{k+1},\boldsymbol{x}_{k})\).
### Numerical integration
Let us apply our method to (44) with the following coefficients
\[b_{1}(t)=t^{2},\qquad b_{2}(t)=\sin t,\qquad b_{12}(t)=\log(t+1),\]
and constants with values (\(\kappa_{1}=0.8,\kappa_{2}=-0.5\)), and initial condition \(\mathbf{x}_{0}=(1,1,1)\), for the interval \([3,4]\) and step size \(h=0.1\).
With these parameters our scheme provides the following solution, which is shown overlapped with another solution calculated with a very small step. We also show the solution obtained with a classical 4th-order Runge-Kutta applied directly to the system, i.e. (44).
[Figure: numerical solution of (44) on \([3,4]\) computed with the proposed scheme, overlapped with a reference solution obtained with a much smaller step and with the solution of a classical fourth-order Runge-Kutta applied directly to (44).]
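The experiment can be reproduced, up to plotting, with the helpers sketched in the previous sections (a sketch under our naming; it assembles \(A(t)\) from the matrices (45)-(46), advances it with the RKMK step through the Algorithm 1 driver and monitors the invariant \(I\), which should stay at the level of round-off).

```
import numpy as np

k1, k2 = 0.8, -0.5
b1, b2, b12 = (lambda t: t ** 2), (lambda t: np.sin(t)), (lambda t: np.log(t + 1.0))

# Basis matrices (45)-(46)
M_P1 = np.array([[0.0, -k1, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
M_P2 = np.array([[0.0, 0.0, -k1 * k2], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
M_J12 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -k2], [0.0, 1.0, 0.0]])

def A(t):
    return b1(t) * M_P1 + b2(t) * M_P2 + b12(t) * M_J12

x0, t0, h, n_steps = np.array([1.0, 1.0, 1.0]), 3.0, 0.1, 10

xs = solve_lie_system(A, lambda Y, x: action(Y, x, k1, k2),
                      x0, t0, h, n_steps, step=rkmk4_step, dim=3)

I0 = invariant(x0, k1, k2)
drift = max(abs(invariant(x, k1, k2) - I0) for x in xs)
print("invariant drift of the Lie group scheme:", drift)
```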
## Conclusions
Given the wide range of spaces and geometries where mathematical and physical dynamical systems evolve, it is always worth taking care of their intrinsic properties when passing to the "discrete" side in order to obtain an approximate solution. As extensively shown in the literature, this results in computational and dynamical benefits. This is the spirit of geometric integration, and the one we uphold in this article, where we take advantage of the geometric structure of Lie systems in order to propose a 7-step method to solve them analytically (mainly, the possibility of reducing such systems to equivalent ones on a Lie group), together with a geometric numerical integrator. We have demonstrated its geometric properties on a wide class of Lie systems evolving on curved spaces. As for future work, it will be worth studying the numerical features of the integrator, such as consistency and convergence, besides finding new examples which may be of interest in mathematics, physics or other applied sciences.
## Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
|
2310.15901 | Enhancing Energy Efficiency for Reconfigurable Intelligent Surfaces with
Practical Power Models | Reconfigurable intelligent surfaces (RISs) are widely considered a promising
technology for future wireless communication systems. As an important indicator
of RIS-assisted communication systems in green wireless communications, energy
efficiency (EE) has recently received intensive research interest as an
optimization target. However, most previous works have ignored the different
power consumption between ON and OFF states of the PIN diodes attached to each
RIS element. This oversight results in extensive unnecessary power consumption
and reduction of actual EE due to the inaccurate power model. To address this
issue, in this paper, we first utilize a practical power model for a
RIS-assisted multi-user multiple-input single-output (MU-MISO) communication
system, which takes into account the difference in power dissipation caused by
ON-OFF states of RIS's PIN diodes. Based on this model, we formulate a more
accurate EE optimization problem. However, this problem is non-convex and has
mixed-integer properties, which poses a challenge for optimization. To solve
the problem, an effective alternating optimization (AO) algorithm framework is
utilized to optimize the base station and RIS beamforming precoder separately.
To obtain the essential RIS beamforming precoder, we develop two effective
methods based on maximum gradient search and SDP relaxation respectively.
Theoretical analysis shows the exponential complexity of the original problem
has been reduced to polynomial complexity. Simulation results demonstrate that
the proposed algorithm outperforms the existing ones, leading to a significant
increase in EE across a diverse set of scenarios. | Zhiyi Li, Jida Zhang, Jieao Zhu, Shi Jin, Linglong Dai | 2023-10-24T15:03:41Z | http://arxiv.org/abs/2310.15901v1 | # Enhancing Energy Efficiency for Reconfigurable Intelligent Surfaces with Practical Power Models
###### Abstract
Reconfigurable intelligent surfaces (RISs) are widely considered a promising technology for future wireless communication systems. As an important indicator of RIS-assisted communication systems in green wireless communications, energy efficiency (EE) has recently received intensive research interest as an optimization target. However, most previous works have ignored the different power consumption between ON and OFF states of the PIN diodes attached to each RIS element. This oversight results in extensive unnecessary power consumption and reduction of actual EE due to the inaccurate power model. To address this issue, in this paper, we first utilize a practical power model for a RIS-assisted multi-user multiple-input single-output (MU-MISO) communication system, which takes into account the difference in power dissipation caused by ON-OFF states of RIS's PIN diodes. Based on this model, we formulate a more accurate EE optimization problem. However, this problem is non-convex and has mixed-integer properties, which poses a challenge for optimization. To solve the problem, an effective alternating optimization (AO) algorithm framework is utilized to optimize the base station and RIS beamforming precoder separately. To obtain the essential RIS beamforming precoder, we develop two effective methods based on maximum gradient search and SDP relaxation respectively. Theoretical analysis shows the exponential complexity of the original problem has been reduced to polynomial complexity. Simulation results demonstrate that the proposed algorithm outperforms the existing ones, leading to a significant increase in EE across a diverse set of scenarios.
Reconfigurable intelligent surface (RIS), energy efficiency (EE), non-convex mixed-integer programming, semi-definite programming (SDP).
## I Introduction
Recently, a new concept called reconfigurable intelligent surface (RIS) has attracted enormous attention and academic interest in wireless communications society. Specifically, RIS is a large reflection array composed of numerous nearly passive elements. By controllably tuning the phase-shifts of the incident signals, these elements are capable of cooperatively reflecting the signals towards desired directions with high beamforming gain [2, 3, 4]. Due to its unique characteristics, RIS is expected to provide various performance improvements in wireless communications, including overcoming blockages, enhancing spectrum efficiency, and reducing energy consumption [5, 6, 7]. Among these, the reduction of energy consumption in RIS-assisted communication systems has been gaining increasing research interest, especially considering the growing demands for low-power massive connections and green radio in future wireless communications [8].
To realize energy-efficient RIS-aided communications, it is of practical importance to study the power consumption of RIS's hardware. Usually, RISs are manufactured with massive number of nearly passive elements, such as PIN diodes [9, 10], varactors [11], electrically controlled microelectromechanical systems (MEMS), and liquid crystals [12], which makes RIS-assisted systems energy-efficient. Among these hardware choices, PIN diodes have become the most prevailing tunable components, and have been widely applied to RISs due to their ability to serve as high-speed microwave switches with low insertion loss and low control voltage [13]. It is worth noting that, previous studies have pointed out the importance of considering the _dynamic power consumption_ while constructing the power model for PIN diodes [14]. Specifically, each PIN diode consumes a typical power of around \(10\,\mathrm{mW}\) when it is ON, and the power consumption varies based on its configuration [14, 15, 16]. For an RIS equipped with 512 elements, by assuming half of the PIN diodes are ON, the power dissipation will be \(2.56\,\mathrm{W}\). Compared to less than \(10\,\mathrm{W}\) for base station (BS) transmit power and \(10\,\mathrm{mW}\) for each user [17], the power dissipation of PIN diodes accounts for a significant proportion of the total power consumption in RIS-assisted communication systems. Therefore, when optimizing RIS configurations to meet energy-saving requirements in RIS-assisted systems, accurately modeling the power consumption attributed to PIN diodes is a crucial prerequisite.
### _Prior Works_
The traditional communication performance indicator is the _spectral efficiency_ (SE). The SE optimization of RIS-assisted systems has been extensively studied, and various optimization schemes have been proposed [18, 19]. To further reduce the power consumption of RIS-assisted systems, a comprehensive performance indicator called the _energy efficiency_ (EE) has been studied in the literature [20, 21]. The EE is defined as the ratio of SE to total power consumption. Thus, different from SE optimization, EE optimization is generally more complicated due to the additional fractional structure. Therefore,
SE optimization algorithms cannot be directly applied to EE optimization problems. To address the fractional structure that appears in the objective function of EE, effective algorithms such as sequential fractional programming (SFP) method [17] and quadratic transformation method [22] have been proposed.
However, the power consumption model used in EE optimization in previous works is highly inaccurate. Most of the existing EE optimization algorithms assume the RIS power dissipation to be constant, i.e., independent of RIS configurations [17, 22]. As mentioned earlier, for real-world phase-tuning components, the power consumed by PIN diodes occupies a considerable portion of the total power consumption in RIS-assisted communication systems. Furthermore, the power consumption of PIN diodes in RIS varies significantly when they are configured to different states. Specifically, when configured to the ON state, i.e., the equivalent microwave switches admit the microwave signals, the diodes become considerably more power-hungry than in the OFF state [14, 16]. Consequently, although the power consumption model employed in prior studies simplifies the algorithm design, it introduces severe inaccuracy for actual PIN diode-controlled RIS elements, leading to high additional power consumption when designing algorithms to optimize EE. This issue becomes more pronounced in scenarios involving a large number of RIS elements [23], for instance, 1100 [24] and 2304 elements [10].
Thus, if we fail to consider the impact of the ON-OFF power difference, it can result in a significant increase in power consumption and a substantial reduction in EE, especially when the number of RIS elements increases. In order to acquire a high EE, an effective design of the ON-OFF states of RIS elements is required to achieve high beamforming gain with fewer ON-state elements. Prior to designing algorithms for optimal EE, it is necessary to introduce an actual model of RIS power consumption with the consideration of the ON-OFF power difference. Unfortunately, existing works mentioned above have neglected this crucial point, leading to a severe deviation from realistic scenarios. Therefore, _how to accurately model the RIS power consumption and design its configuration_ is a critical aspect of achieving optimal EE in RIS-assisted communication systems.
### _Our Works_
In this paper, we employ a more realistic power model for RIS-assisted systems, based on which we propose an effective algorithm to obtain optimal EE1. Specifically, the contributions of this paper are summarized as follows:
Footnote 1: Simulation codes will be provided to reproduce the results in this paper: [http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.html](http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.html).
* First, we introduce a realistic power dissipation model for downlink 1-bit RIS-assisted multi-user multiple-input single-output (MU-MISO) communication systems, which models the ON-OFF power difference of each RIS element. The proposed model better fits the actual RIS-assisted communication systems. Based on this model, we re-formulate the EE optimization problem.
* Next, to solve the formulated EE optimization problem, we adopt an alternating optimization (AO) algorithm framework to optimize the BS and RIS precoder separately. However, obtaining the RIS precoder is NP-hard due to the non-convex mixed-integer property. To solve this problem, we apply two different methods based on maximum gradient search and SDP relaxation, respectively. The former method has lower computational complexity, while the latter one achieves better performance.
* Finally, we analyze the convergence and computational complexity of the proposed algorithms. Analysis results reveal that the exponential complexity of the original problem has been reduced to polynomial complexity. Simulation results verify the effectiveness of the proposed AO algorithmic framework with both methods, leading to a significant EE improvement in various scenarios.
### _Organization and Notation_
_Organization:_ The paper is structured as follows. In Section II, we establish the signal model and the definition of EE for downlink RIS-assisted MU-MISO systems, and then formulate the more accurate EE optimization problem with the consideration of RIS element ON-OFF power difference. In Section III, we introduce the AO algorithm framework to decouple the original problem into two subproblems, i.e., the power allocation problem, and the RIS analog beamforming problem. The analytical solution to the power allocation problem and the analysis of the computational complexity and convergence are also discussed. In Section IV, we focus on the non-convex mixed-integer RIS beamforming subproblem and provide two effective methods to acquire the near-optimal solution. The complexity and convergence analysis are also provided. In Section V, simulation results are provided to verify the performance and effectiveness of the proposed algorithm. Section VI concludes this paper.
_Notation:_\(\mathbb{R}\) and \(\mathbb{C}\) represent the sets of real and complex numbers, respectively. \(\mathbf{A}^{*}\), \(\mathbf{A}^{-1}\), \(\mathbf{A}^{\mathrm{T}}\), and \(\mathbf{A}^{\mathrm{H}}\) indicate the conjugate, inverse, transpose, and conjugate transpose of matrix \(\mathbf{A}\), respectively. \(\mathcal{CN}\left(\mu,\sigma^{2}\right)\) refers to the complex univariate Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\). \(\|\cdot\|_{n}\) denotes the \(\mathcal{L}_{n}\)-norm of its argument. \(\text{diag}(\cdot)\) is the diagonal operation. \(\mathbf{1}_{M\times N}\) and \(\mathbf{0}_{M\times N}\) are \(M\times N\) matrices with all elements equal to \(1\) and \(0\), respectively. \(\text{tr}\left(\mathbf{X}\right)\) refers to the trace of the matrix \(\mathbf{X}\). \(\mathbf{X}\succeq 0\) denotes a positive semi-definite matrix. \(\otimes\) and \(\odot\) denote the Kronecker and Hadamard products of two matrices, respectively. \(\simeq\) indicates the equivalence of computational complexity order.
## II System Model
In this section, we will first specify the signal model in Subsection II-A. Then, a more realistic power model and EE will be introduced in Subsection II-B with the consideration of ON-OFF power difference. Based on this, the EE optimization problem is formulated in Subsection II-C.
### _Signal Model_
We consider a downlink RIS-assisted MU-MISO system, where \(K\) single-antenna users are served by an \(M\)-antenna BS.
The direct BS-user link is assumed to be blocked as shown in Fig. 1. To ensure signal coverage and user experience, communication from BS to users is assisted by a RIS. The RIS comprises \(N_{1}\) reflecting elements in the horizontal direction and \(N_{2}\) reflecting elements in the vertical direction, resulting in a total of \(N=N_{1}\times N_{2}\) reflecting elements. Each RIS element is assumed to be binary-controlled, i.e., the diagonal phase-shift matrix \(\mathbf{\Theta}\) of the RIS only takes two possible values
\[\mathbf{\Theta}=\text{diag}(e^{\mathrm{j}\boldsymbol{\theta}})=\text{diag}\left(\left[e^{\mathrm{j}\theta_{1}},e^{\mathrm{j}\theta_{2}},...,e^{\mathrm{j}\theta_{N}}\right]\right),\ \theta_{n}\in\{0,\pi\}. \tag{1}\]
The RIS-BS channel is denoted as \(\boldsymbol{G}\in\mathbb{C}^{N\times M}\), and the channel from RIS to the \(k\)-th user is denoted as \(\boldsymbol{f}_{k}^{\text{H}}\in\mathbb{C}^{1\times N}\). Then, the signal received by the users can be represented as
\[\boldsymbol{y}=\boldsymbol{F}^{\text{H}}\mathbf{\Theta}\boldsymbol{G} \boldsymbol{W}\boldsymbol{s}+\boldsymbol{n}, \tag{2}\]
where \(\boldsymbol{F}=[\boldsymbol{f}_{1},\boldsymbol{f}_{2},...,\boldsymbol{f}_{K}]\) represents the equivalent channel from RIS to each user, \(\boldsymbol{s}=[s_{1},s_{2},...,s_{K}]^{\text{T}}\) represents the signal transmitted to each user, \(\boldsymbol{W}\) represents the digital precoding from BS with the power constraint \(\text{tr}(\boldsymbol{W}^{\text{H}}\boldsymbol{W})\leq P_{\text{max}}\), and \(\boldsymbol{n}\sim\mathcal{CN}(\boldsymbol{0},\sigma_{n}^{2}\boldsymbol{I}_{K})\) represents the additive white Gaussian noise (AWGN) imposed at each receiver.
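For concreteness, the signal model (1)-(2) can be simulated directly. The following Python sketch is not part of the original paper; the dimensions, noise level, random channels, and the arbitrary power-normalized precoder are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 16, 64, 4        # BS antennas, RIS elements, users (illustrative values)
sigma_n = 0.1              # noise standard deviation
P_max = 1.0                # BS transmit power budget

# Placeholder channels: G (RIS-BS, N x M) and F (RIS-user, N x K)
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
F = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

# 1-bit RIS configuration, cf. (1): theta_n in {0, pi} -> diagonal entries in {+1, -1}
theta = rng.choice([0.0, np.pi], size=N)
Theta = np.diag(np.exp(1j * theta))

# Arbitrary BS precoder scaled so that tr(W^H W) <= P_max
W = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
W *= np.sqrt(P_max / np.trace(W.conj().T @ W).real)

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # data symbols
n = sigma_n / np.sqrt(2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

# Received signal, cf. (2): y = F^H Theta G W s + n
y = F.conj().T @ Theta @ G @ W @ s + n
print(y.shape)   # (K,)
```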
### _Energy Efficiency_
The total power consumption can be modeled as follows
\[P_{\text{all}}=P_{\text{static}}+P_{\text{RIS}}+\nu^{-1}P_{\text{ transmit}}, \tag{3}\]
in which \(P_{\text{static}}\) represents the overall static power consumption at the BS, users, and RIS, \(P_{\text{transmit}}=\text{tr}\left(\boldsymbol{W}^{\text{H}}\boldsymbol{W}\right)\) represents the BS transmit power, and \(\nu\) represents the efficiency of the transmit power amplifiers, which is set to 1 in the remainder of this paper. It is noteworthy that the power consumption of RIS elements is considered as a fixed value in most of the prior works [15, 17, 22], which is not in accord with the actual situation discussed in Section I. Thus, a binary-control RIS is considered here, taking into account the difference in ON-OFF power consumption of the PIN diode on each RIS element [14], i.e.,
\[P_{\text{RIS}}=\left\|\boldsymbol{\theta}\right\|_{0}P_{0},\ \theta_{n}\in\{0,\pi\}, \tag{4}\]
where \(P_{0}\) is the power dissipation of each RIS element when the corresponding PIN diode is turned ON2 by applying a bias current [9], and the \(n\)-th element \(\theta_{n}\) of \(\boldsymbol{\theta}\) is the phase-shift configuration of the \(n\)-th RIS element. With the discussions above, the SE and EE can be written as follows [25],
Footnote 2: In this paper, we assume that the reflection coefficient of each RIS element is tuned by only one PIN diode.
\[\text{SE}\left(\mathbf{\Theta},\boldsymbol{W}\right)=\sum_{k=1}^{K}\log_{2} \left(1+\frac{\left|\boldsymbol{f}_{k}^{\text{H}}\mathbf{\Theta}\boldsymbol{G} \boldsymbol{w}_{k}\right|^{2}}{\sum_{k^{\prime}\neq k}\left|\boldsymbol{f}_{k}^ {\text{H}}\mathbf{\Theta}\boldsymbol{G}\boldsymbol{w}_{k^{\prime}}\right|^{2}+ \sigma_{n}^{2}}\right), \tag{5}\]
\[\text{EE}\left(\mathbf{\Theta},\boldsymbol{W}\right)=\frac{\text{BW}\times \text{SE}\left(\mathbf{\Theta},\boldsymbol{W}\right)}{P_{\text{static}}+P_{0} \left\|\boldsymbol{\theta}\right\|_{0}+\text{tr}\left(\boldsymbol{W}^{\text{ H}}\boldsymbol{W}\right)}. \tag{6}\]
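For reference, the SE in (5) and the ON-OFF-aware EE in (6) can be evaluated with a few lines of code. The Python helper below is a minimal sketch (the function name and argument list are ours, not the paper's); it simply counts the elements with \(\theta_{n}\neq 0\) to obtain \(P_{0}\left\|\boldsymbol{\theta}\right\|_{0}\).

```python
import numpy as np

def se_and_ee(F, G, theta, W, sigma_n2, P_static, P_0, BW):
    """SE per (5) and EE per (6); theta holds the phase shifts theta_n in {0, pi}."""
    Theta = np.diag(np.exp(1j * theta))
    H_eff = F.conj().T @ Theta @ G            # K x M effective channel F^H Theta G
    K = F.shape[1]
    se = 0.0
    for k in range(K):
        gains = np.abs(H_eff[k, :] @ W) ** 2  # |f_k^H Theta G w_j|^2 for all users j
        sinr = gains[k] / (gains.sum() - gains[k] + sigma_n2)
        se += np.log2(1.0 + sinr)
    p_ris = np.count_nonzero(theta) * P_0     # only ON elements (theta_n = pi) draw P_0
    p_tx = np.trace(W.conj().T @ W).real      # BS transmit power
    ee = BW * se / (P_static + p_ris + p_tx)
    return se, ee
```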
### _Problem Formulation_
In this paper, our target is to acquire the maximum EE by jointly designing the digital beamforming matrix \(\boldsymbol{W}\) and the RIS analog beamforming vector \(\boldsymbol{\theta}\), which can be expressed as
\[\mathcal{P}_{0}:\max_{\mathbf{\Theta},\boldsymbol{W}} \text{EE}\left(\mathbf{\Theta},\boldsymbol{W}\right),\] (7a) s.t. \[\text{tr}\left(\boldsymbol{W}^{\text{H}}\boldsymbol{W}\right)\leq P _{\text{max}}, \tag{7b}\] \[\text{SE}_{k}\geq\text{SE}_{\text{min}},\ \forall k\in\mathcal{K},\] (7c) \[\theta_{n}\in\{0,\pi\},\ \forall n\in\mathcal{N}. \tag{7d}\]
As shown in (7c), we set a minimum spectrum efficiency value \(\text{SE}_{\text{min}}\) for each user as a basic communication requirement.
The EE optimization problem \(\mathcal{P}_{0}\) is a non-convex mixed-integer program due to the non-concave objective function (7a) and the binary constraint (7d), which makes it extremely difficult to solve. In the remainder of this paper, we develop an effective optimization algorithm with acceptable computational complexity.
## III Algorithms
In this section, we will present an algorithm to address the EE optimization problem \(\mathcal{P}_{0}\). Firstly, we will introduce an alternating optimization procedure in order to acquire the optimal phase-shifts \(\boldsymbol{\theta}\) and BS precoders \(\boldsymbol{W}\) in Subsection III-A. Then, the solution of the power allocation problem, i.e., finding the optimal \(\boldsymbol{W}\) with fixed \(\boldsymbol{\theta}\), will be discussed in Subsection III-B. The convergence and the computational complexity of the algorithm will be discussed in Subsection III-C. For clarity, the algorithm for the RIS analog beamforming problem is designed in the next section, since its difficulty and complexity warrant a separate discussion.
### _Alternating Optimization_
In order to fully eliminate the inter-user interference, the Zero-Forcing (ZF) digital precoder [26] can be utilized here to obtain a feasible solution, i.e.,
\[\boldsymbol{W}=\boldsymbol{H}\left(\boldsymbol{H}^{\text{H}}\boldsymbol{H} \right)^{-1}\boldsymbol{P}^{\frac{1}{2}}, \tag{8}\]
Fig. 1: RIS-assisted MU-MISO downlink system.
where \(\mathbf{H}^{\rm H}=\mathbf{F}^{\rm H}\mathbf{\Theta}\mathbf{G}\) represents the cascade channel from BS to users, the diagonal matrix \(\mathbf{P}\) represents the power allocation of each user, whose \(k\)-th diagonal element \(p_{k}\) represents the signal power received by the \(k\)-th user. With the ZF precoder, the signal received by users can be rewritten as follows:
\[\mathbf{y}=\mathbf{P}^{\frac{1}{2}}\mathbf{s}+\mathbf{n}, \tag{9}\]
and the corresponding transmit power constraints (7b) of \(\mathcal{P}_{0}\) can be rewritten as
\[\text{tr}\left(\mathbf{W}^{\rm H}\mathbf{W}\right)=\text{tr}\left(\mathbf{P}^{\frac{1}{2}} \left(\mathbf{H}^{\rm H}\mathbf{H}\right)^{-1}\mathbf{P}^{\frac{1}{2}}\right)\leq P_{\text {max}}. \tag{10}\]
Thus, the spectrum efficiency can also be rewritten as
\[\text{SE}\left(\mathbf{\Theta},\mathbf{P}\right)=\text{SE}\left(\mathbf{P}\right)=\sum_{k =1}^{K}\log_{2}\left(1+\frac{p_{k}}{\sigma^{2}}\right), \tag{11}\]
where \(p_{k}\) represents the \(k\)-th diagonal element of \(\mathbf{P}\), i.e. the received signal power of the \(k\)-th user. Then, the problem \(\mathcal{P}_{0}\) can be expressed as
\[\mathcal{P}_{0}^{\prime}: \max_{\mathbf{\Theta},\mathbf{P}}\ \frac{\text{BW}\sum_{k=1}^{K}\log_{2}\left(1+\frac{p_{k}}{ \sigma_{n}^{2}}\right)}{P_{\text{static}}+P_{0}\left\|\mathbf{\theta}\right\|_{0}+ \text{tr}\left(\mathbf{P}^{\frac{1}{2}}\left(\mathbf{H}^{\rm H}\mathbf{H}\right)^{-1}\mathbf{ P}^{\frac{1}{2}}\right)},\] (12a) s.t. \[\text{tr}\left(\mathbf{P}^{\frac{1}{2}}\left(\mathbf{H}^{\rm H}\mathbf{H} \right)^{-1}\mathbf{P}^{\frac{1}{2}}\right)\leq P_{\text{max}}, \tag{12b}\] \[p_{k}\geq p_{\text{min}},\ \forall k\in\mathcal{K}\] (12c) \[\theta_{n}\in\left\{0,\pi\right\},\ \forall n\in\mathcal{N}, \tag{12d}\]
where \(p_{\text{min}}=\sigma^{2}\left(2^{\text{SE}_{\text{min}}}-1\right)\) represents the minimum received power requirement of each user in order to ensure the basic communication spectrum efficiency \(\text{SE}_{\text{min}}\).
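The ZF reformulation (8)-(12) can be sanity-checked numerically. The sketch below uses random channels and powers purely for illustration; it confirms that the effective channel reduces to \(\boldsymbol{P}^{\frac{1}{2}}\) as in (9), and that the transmit power equals \(\text{tr}\left(\boldsymbol{P}^{\frac{1}{2}}\left(\boldsymbol{H}^{\rm H}\boldsymbol{H}\right)^{-1}\boldsymbol{P}^{\frac{1}{2}}\right)\) as in (10).

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 16, 64, 4
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
F = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
Theta = np.diag(rng.choice([1.0, -1.0], size=N).astype(complex))  # 1-bit RIS

Hh = F.conj().T @ Theta @ G                  # H^H = F^H Theta G (K x M cascade channel)
HhH_inv = np.linalg.inv(Hh @ Hh.conj().T)    # (H^H H)^{-1}, a K x K matrix
p = rng.uniform(0.5, 1.5, size=K)            # received powers p_k (diagonal of P)
P_sqrt = np.diag(np.sqrt(p))

W = Hh.conj().T @ HhH_inv @ P_sqrt           # ZF precoder, cf. (8)

# Effective channel becomes P^{1/2}, so y = P^{1/2} s + n as in (9)
assert np.allclose(Hh @ W, P_sqrt, atol=1e-8)

# Transmit power matches (10)
lhs = np.trace(W.conj().T @ W).real
rhs = np.trace(P_sqrt @ HhH_inv @ P_sqrt).real
assert np.isclose(lhs, rhs)
```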
With the discussions above, the original optimization problem \(\mathcal{P}_{0}\) is transformed to \(\mathcal{P}_{0}^{\prime}\) with the optimization variables \(\mathbf{\Theta}\) and \(\mathbf{P}\). Considering the difficulty of acquiring the optimal \(\mathbf{\Theta}\) and \(\mathbf{P}\) simultaneously, an alternating optimization (AO) algorithm can be applied to solve the problem. Firstly, the power allocation matrix \(\mathbf{P}\) can be optimized with fixed \(\mathbf{\Theta}\), and then the RIS analog beamforming matrix \(\mathbf{\Theta}\) can be optimized with fixed \(\mathbf{P}\). Thus, \(\mathbf{P}\) and \(\mathbf{\Theta}\) are updated alternately until convergence.
### _Solution of the Power Allocation Problem_
According to the AO algorithm of problem \(\mathcal{P}_{0}^{\prime}\), the first step is to find the optimal power allocation matrix \(\mathbf{P}\) with the fixed RIS configuration \(\mathbf{\Theta}\), which leads to a power allocation problem as follows:
\[\mathcal{P}_{1}: \max_{\mathbf{P}}\ \frac{1}{P_{1}+\sum_{k=1}^{K}p_{k}t_{k}}\sum_{k=1}^{K} \log\left(1+\frac{p_{k}}{\sigma_{n}^{2}}\right),\] (13a) s.t. \[\sum_{k=1}^{K}p_{k}t_{k}\leq P_{\text{max}}, \tag{13b}\] \[p_{k}\geq p_{\text{min}}, \tag{13c}\]
where \(P_{1}\triangleq P_{\text{static}}+P_{0}\left\|\mathbf{\theta}\right\|_{0}\) and \(t_{k}\) is the \(k\)-th diagonal element of \(\left(\mathbf{H}^{\rm H}\mathbf{H}\right)^{-1}\). Although the constraints (13b) and (13c) are both affine, the problem \(\mathcal{P}_{1}\) remains difficult due to the non-concave objective function (13a). One feasible approach is to introduce an auxiliary variable that converts the fractional objective into a parametric subtractive form. Thus, the problem above can be solved by Dinkelbach's method as follows [17]:
\[\mathbf{P}^{(i)}=\text{arg}\max_{\mathbf{P}} \ \sum_{k=1}^{K}\log_{2}\left(1+\frac{p_{k}}{\sigma_{n}^{2}}\right)\] \[-\lambda^{(i-1)}\left(P_{1}+\sum_{k=1}^{K}p_{k}t_{k}\right), \tag{14a}\] \[\text{s.t.}\sum_{k=1}^{K}p_{k}t_{k}\leq P_{\text{max}},\ p_{k}\geq p _{\text{min}}\] \[\lambda^{(i)}=\frac{1}{P_{1}+\sum_{k=1}^{K}p_{k}^{(i)}t_{k}}\sum _{k=1}^{K}\log_{2}\left(1+\frac{p_{k}^{(i)}}{\sigma_{n}^{2}}\right). \tag{14b}\]
The optimization problem (14a) is convex, whose analytical solution will be given in **Appendix A**. Then, the procedures to solve \(\mathcal{P}_{1}\) can be summarized in **Algorithm 1**.
```
0: Numbers of users \(K\); Power fading coefficients \(t_{1},...,t_{K}\); Variance of the AWGN \(\sigma_{n}^{2}\).
0: Power allocation matrix \(\mathbf{P}=\text{diag}(p_{1},p_{2},\cdots,p_{K})\).
1: Find \(\zeta\) such that: \(\sum_{k}\max\big{\{}\zeta-t_{k}\sigma_{n}^{2},t_{k}p_{\text{min}}\big{\}}=P_{\text{max}}\)
2:for\(i=1,2,...,N_{\text{iter}}\)do
3:\(\xi^{(i)}\leftarrow\min\big{\{}\zeta,1/\left(\lambda^{(i-1)}\log 2\right)\big{\}}\).
4:\(p_{k}^{(i)}\leftarrow\max\big{\{}\big{(}\xi-t_{k}\sigma_{n}^{2}\big{)}/t_{k},p_{ \text{min}}\big{\}}\).
5:\(\lambda^{(i)}\leftarrow\sum_{k}\log_{2}\left(1+p_{k}^{(i)}/\sigma_{n}^{2}\right)/\left(P_{1}+\sum_{k}p_{k}^{(i)}t_{k}\right)\).
6:endfor
7:return Optimized \(\mathbf{P}\)
```
**Algorithm 1** Power Allocation Problem
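A possible Python implementation of **Algorithm 1** is sketched below. It follows the closed form (31a)-(31c) derived in Appendix A together with the \(\lambda\)-update (14b); the bisection used to find \(\zeta\), the initialization \(\lambda^{(0)}=0\), and the iteration count are implementation choices of this sketch rather than specifications from the paper, and feasibility of the minimum-rate constraint is assumed.

```python
import numpy as np

def power_allocation(t, sigma_n2, P1, P_max, p_min, n_iter=20):
    """Dinkelbach-style power allocation (Algorithm 1).

    t  : array of t_k, the diagonal elements of (H^H H)^{-1}
    P1 : P_static + P_0 * ||theta||_0 for the current RIS configuration
    """
    t = np.asarray(t, dtype=float)

    def used_power(zeta):
        # sum_k max{zeta - t_k sigma^2, t_k p_min}, cf. (31a)
        return np.maximum(zeta - t * sigma_n2, t * p_min).sum()

    # Bisection for the water level zeta solving used_power(zeta) = P_max
    lo, hi = 0.0, P_max + float((t * (sigma_n2 + p_min)).max())
    while used_power(hi) < P_max:
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if used_power(mid) < P_max else (lo, mid)
    zeta = 0.5 * (lo + hi)

    lam = 0.0                                                           # lambda^(0)
    p = np.full_like(t, p_min)
    for _ in range(n_iter):
        xi = zeta if lam <= 0 else min(zeta, 1.0 / (lam * np.log(2)))   # (31b)
        p = np.maximum((xi - t * sigma_n2) / t, p_min)                  # (31c)
        lam = np.log2(1.0 + p / sigma_n2).sum() / (P1 + (p * t).sum())  # (14b)
    return p
```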
### _Complexity & Convergence Analysis_
Firstly, the computational complexity will be derived as follows. According to **Algorithm 1**, the complexity of the power allocation problem mainly comes from the iteration step. For each step, the values of \(\xi\), \(p_{k}\), and \(\lambda\) are calculated in turn, which leads to a linear complexity \(\mathcal{O}(K)\). Thus, the computational complexity of the power allocation problem is \(\mathcal{O}(N_{\text{iter}}K)\), where \(N_{\text{iter}}\) represents the number of iterations of the algorithm. The computational complexity of the RIS beamforming problem will be discussed in Subsection IV-C.
We focus on the convergence of the power allocation problem. According to the optimality of \(p_{k}^{(i)}\) in (14a), we have
\[\begin{split}\mathcal{P}\left(p_{k}^{(i)}\right)&=\sum_{k=1}^{K}\log_{2}\left(1+\frac{p_{k}^{(i)}}{\sigma_{n}^{2}}\right)-\lambda^{(i-1)}\left(P_{1}+\sum_{k=1}^{K}p_{k}^{(i)}t_{k}\right)\\ &\geq\sum_{k=1}^{K}\log_{2}\left(1+\frac{p_{k}^{(i-1)}}{\sigma_{n}^{2}}\right)-\lambda^{(i-1)}\left(P_{1}+\sum_{k=1}^{K}p_{k}^{(i-1)}t_{k}\right)=0.\end{split}\]
Therefore, it is obvious that \(\lambda^{(i)}\geq\lambda^{(i-1)}\), which proves the convergence of \(\mathcal{P}_{1}\).
As for the convergence of the AO algorithm, consider the \(j\)-th iteration, where \(\text{EE}_{1}^{(j)}\) denotes the EE after power allocation (optimizing \(\mathbf{P}\) with fixed \(\mathbf{\Theta}\)) and \(\text{EE}_{2}^{(j)}\) denotes the EE after RIS beamforming (optimizing \(\mathbf{\Theta}\) with fixed \(\mathbf{P}\)). From the discussion on the convergence of \(\mathcal{P}_{1}\), it is ensured that \(\text{EE}_{1}^{(j)}\geq\text{EE}_{2}^{(j-1)}\), since the power allocation step never decreases the EE for the current RIS configuration. As long as the RIS beamforming problem can be solved effectively, i.e., the corresponding algorithm satisfies \(\text{EE}_{2}^{(j)}\geq\text{EE}_{1}^{(j)}\), which will be explained in detail in Section IV, we have \(\text{EE}_{2}^{(j)}\geq\text{EE}_{1}^{(j)}\geq\text{EE}_{2}^{(j-1)}\). Since the EE is bounded above, this monotone sequence proves the convergence of the AO algorithm.
## IV RIS Analog Beamforming
In this section, we focus on the discussion about the algorithm of RIS analog beamforming. According to the AO algorithm in Section III, the next step is to optimize \(\mathbf{\Theta}\) with fixed \(\mathbf{P}\), which can be expressed as follows,
\[\mathcal{P}_{2}:\min_{\mathbf{\Theta}}\ P_{0}\left\|\mathbf{\theta}\right\|_{0}+\sum_{k=1}^{K}p_{k}t_{k},\] (16a) s.t. \[\sum_{k=1}^{K}p_{k}t_{k}\leq P_{\text{max}}, \tag{16b}\] \[\theta_{n}\in\{0,\pi\},\ \forall n\in\mathcal{N}. \tag{16c}\]
We denote \(\mathbf{q}=e^{\mathrm{j}\mathbf{\theta}}\) (element-wise). Since \(\theta_{n}\in\{0,\pi\}\), we have \(q_{n}\in\{-1,1\}\) and \(\left\lVert\mathbf{\theta}\right\rVert_{0}=-\frac{1}{2}\mathbf{1}^{\text{T}}\mathbf{q}+N/2\). Then \(\mathcal{P}_{2}\) can be rewritten as
\[\mathcal{P}_{2}^{\prime}: \min_{\mathbf{q}}\ -\frac{1}{2}P_{0}\text{1}^{\text{T}}\mathbf{q}+\sum_{k=1}^{K}p _{k}t_{k},\] (17a) s.t. \[\sum_{k=1}^{K}p_{k}t_{k}\leq P_{\text{max}}, \tag{17b}\] \[q_{n}\in\{-1,1\},\ \forall n\in\mathcal{N}. \tag{17c}\]
It should be noted that \(t_{k}\) is a function of \(\mathbf{q}\) in \(\mathcal{P}_{2}^{\prime}\). As we have mentioned, the non-convexity of the target function (16a) and the discrete constraint (16c) make \(\mathcal{P}_{2}\) difficult to solve. Generally, it is an NP-hard problem due to the integer constraint (16c), so it is almost impossible to acquire an optimal solution within acceptable computational time. However, some effective methods such as heuristic search and proper relaxations are helpful to obtain a computationally feasible solution. In Subsection IV-A we propose an algorithm based on maximum gradient search, which has a low computational cost and thus can solve the problem efficiently. Due to the fact that gradient-based algorithms may converge to a local optimum in some situations, an alternative algorithm based on SDP relaxation is proposed in Subsection IV-B, and an appropriate solution of problem \(\mathcal{P}_{2}\) will be given. The computational complexity and convergence will be analyzed in Subsection IV-C.
### _Search with the Maximum Gradient_
One of the most popular ways to solve the mixed-integer program \(\mathcal{P}_{2}^{\prime}\) is based on searching methods, such as the branch-and-bound method. The fatal drawback of this type of method is that it usually has unacceptable computational complexity, e.g., \(\mathcal{O}(2^{N})\). Although gradient descent methods from continuous-variable optimization cannot be applied to discrete cases directly, they still provide useful insights. In the discrete case, the direction with the maximum gradient value can be regarded as the direction along which the objective function declines fastest. Accordingly, we can design a searching method guided by the maximum gradient.
Firstly, the gradient of the objective function in (17a) (defined as \(g(\mathbf{q})\)) can be expressed as
\[\frac{\partial g(\mathbf{q})}{\partial q_{n}} =-\frac{1}{2}P_{0}+\sum_{k=1}^{K}p_{k}\left[\frac{\partial\left( \mathbf{H}^{\text{H}}\mathbf{H}\right)^{-1}}{\partial q_{n}}\right]_{(k,k)} \tag{18}\] \[=-\frac{1}{2}P_{0}-\sum_{k=1}^{K}p_{k}\left[\left(\mathbf{H}^{\text{H }}\mathbf{H}\right)^{-1}\frac{\partial\mathbf{H}^{\text{H}}\mathbf{H}}{\partial q_{n}} \left(\mathbf{H}^{\text{H}}\mathbf{H}\right)^{-1}\right]_{(k,k)}\] \[=-\frac{1}{2}P_{0}-\sum_{k=1}^{K}p_{k}\left[\left(\mathbf{H}^{\text{ H}}\mathbf{H}\right)^{-1}\left(\mathbf{F}^{\text{H}}\text{diag}(\mathbf{e}_{n})\mathbf{GH}\right.\right.\] \[\left.\left.+\mathbf{H}^{\text{H}}\mathbf{G}^{\text{H}}\text{diag}(\mathbf{e }_{n})\mathbf{F}\right)\left(\mathbf{H}^{\text{H}}\mathbf{H}\right)^{-1}\right]_{(k,k)}.\]
Based on (18), we can design the searching method. The entries \(q_{n}\) are tentatively flipped in descending order of \(\partial g/\partial q_{n}\times q_{n}\), and for each flip the value of \(g(\mathbf{q})\) and the feasibility of \(\mathbf{q}\) are verified. If the updated \(\mathbf{q}^{*}\) is a feasible and no-worse solution than \(\mathbf{q}\), i.e., \(g(\mathbf{q}^{*})\leq g(\mathbf{q})\), the flip is kept and the procedure moves on to the next entry. Details of the method are summarized in **Algorithm 2**.
```
0: Number of RIS elements \(N\); Channel matrix \(\mathbf{F},\mathbf{G}\); Power consumption of each ON-state RIS element \(P_{0}\); Initial RIS state \(\mathbf{q}^{(0)}\); Ratio of each epoch \(\rho\); Threshold \(\varepsilon\).
0: RIS beamforming state \(\mathbf{q}\).
1:while\(\left\lVert\mathbf{q}^{(i)}-\mathbf{q}^{(i-1)}\right\rVert_{0}\geq\varepsilon\)do
2: Calculate the gradient of \(g(\mathbf{q}^{(i-1)})\) according to (18).
3: Sort the product of \(\mathbf{q}_{n}\) and \(\partial g(\mathbf{q})/\partial q_{n}\) in descending order \(\mathbf{d}\).
4:\(\mathbf{q}^{(i)}\leftarrow\mathbf{q}^{(i-1)}\)
5:for\(j=1,2,...,\text{round}(\rho N)\)do
6:\(\mathbf{q}_{d_{j}}^{(i)}\leftarrow-\mathbf{q}_{d_{j}}^{(i)}\)
7:if the solution \(\mathbf{q}^{(i)}\) is infeasible or leads to higher energy consumption then
8:\(\mathbf{q}_{d_{j}}^{(i)}\leftarrow-\mathbf{q}_{d_{j}}^{(i)}\)
9:endif
10:endfor
11:\(i\gets i+1\)
12:endwhile
13:return\(\mathbf{q}\)
```
**Algorithm 2** Search with the Maximum Gradient
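A Python sketch of **Algorithm 2** is given below. It evaluates the gradient (18) explicitly and tentatively flips the entries of \(\boldsymbol{q}\) in descending order of \(q_{n}\,\partial g/\partial q_{n}\). The all-OFF initialization \(\boldsymbol{q}^{(0)}=\mathbf{1}\), the stopping rule, and the helper names are assumptions of this sketch rather than specifications from the paper.

```python
import numpy as np

def obj_and_power(q, F, G, p, P_0):
    """g(q) from (17a) and the transmit power sum_k p_k t_k."""
    Hh = F.conj().T @ np.diag(q.astype(complex)) @ G          # H^H = F^H Theta G
    t = np.real(np.diag(np.linalg.inv(Hh @ Hh.conj().T)))     # diag of (H^H H)^{-1}
    power = float((p * t).sum())
    return -0.5 * P_0 * q.sum() + power, power

def gradient(q, F, G, p, P_0):
    """Entry-wise gradient of g(q), following (18)."""
    Hh = F.conj().T @ np.diag(q.astype(complex)) @ G
    A = np.linalg.inv(Hh @ Hh.conj().T)                       # (H^H H)^{-1}
    grad = np.empty(q.size)
    for n in range(q.size):
        e = np.zeros(q.size); e[n] = 1.0
        D = F.conj().T @ np.diag(e) @ G @ Hh.conj().T         # F^H diag(e_n) G H
        dHH = D + D.conj().T                                  # derivative of H^H H w.r.t. q_n
        grad[n] = -0.5 * P_0 - p @ np.real(np.diag(A @ dHH @ A))
    return grad

def max_gradient_search(F, G, p, P_0, P_max, rho=0.2, max_iter=50):
    """Algorithm 2: flip entries of q guided by the maximum gradient."""
    q = np.ones(G.shape[0])                    # q^(0): all elements OFF (an assumption)
    g_val, _ = obj_and_power(q, F, G, p, P_0)
    for _ in range(max_iter):
        order = np.argsort(-(q * gradient(q, F, G, p, P_0)))  # descending q_n * dg/dq_n
        changed = False
        for n in order[: int(round(rho * q.size))]:
            q_try = q.copy(); q_try[n] = -q_try[n]
            g_try, pow_try = obj_and_power(q_try, F, G, p, P_0)
            if pow_try <= P_max and g_try <= g_val:            # feasible and no worse
                q, g_val, changed = q_try, g_try, True
        if not changed:                                        # ||q^(i) - q^(i-1)||_0 = 0
            break
    return q
```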
### _SDP Relaxation_
In the previous subsection, the RIS phase-shift vector \(\mathbf{q}\) is obtained by applying an effective heuristic search to the problem \(\mathcal{P}_{2}^{\prime}\) in **Algorithm 2**, which does not guarantee the optimality. A more reasonable approach is to construct a solution to \(\mathcal{P}_{2}^{\prime}\) by analyzing the inherent mathematical structure of this optimization problem. Note that the constraints (17b) (or (16b)) are applied to the trace of the inverse matrix of \(\mathbf{H}^{\mathrm{H}}\mathbf{H}\), which is further a quadratic function of \(\mathbf{q}\) according to the definition of \(\mathbf{H}\). Thus, if we replace \(\mathbf{q}\) by a quadratic variable \(\mathbf{X}\in\mathbb{R}^{(N+1)\times(N+1)}\) defined as
\[\mathbf{X}=\begin{bmatrix}\mathbf{qq}^{\mathrm{T}}&\mathbf{q}\\ \mathbf{q}^{\mathrm{T}}&1\end{bmatrix}, \tag{19}\]
then the constraint (17b) can be expressed as \(\text{tr}(\mathbf{Y}^{-1})\), where the elements of \(\mathbf{Y}\) are linear combinations of the elements in \(\mathbf{X}\). Since linear transform does not alter convexity, \(\text{tr}(\mathbf{Y}^{-1})\) is a convex function of the new matrix argument \(\mathbf{X}\). Through this change of variable, i.e., \(\mathbf{q}\rightarrow\mathbf{X}\), the target function (17a) of \(\mathcal{P}_{2}^{\prime}\) is made convex simultaneously. Thus, \(\mathcal{P}_{2}^{\prime}\) can be rewritten as
\[\mathcal{P}_{3}: \min_{\mathbf{X}}-\frac{1}{4}P_{\text{RIS}}\text{tr}\left(\mathbf{E}_{0} \mathbf{X}\right)+\text{tr}\left(\mathbf{F}_{0}^{\mathrm{H}}\left(\mathbf{X}\odot\mathbf{G}_ {0}\right)\mathbf{F}_{0}\mathbf{P}^{-1}\right)^{-1},\] (20a) s.t. \[\text{tr}\left(\left(\mathbf{F}_{0}^{\mathrm{H}}\left(\mathbf{X}\odot\bm {G}_{0}\right)\mathbf{F}_{0}\mathbf{P}^{-1}\right)^{-1}\right)\leq P_{\text{max}}, \tag{20b}\] \[\text{tr}\left(\mathbf{E}_{i,i}\mathbf{X}\right)=1,\ i\in 1,2,...,N+1,\] (20c) \[\mathbf{X}\succeq 0,\] (20d) \[\text{rank}\left(\mathbf{X}\right)=1, \tag{20e}\]
where \(\mathbf{E}_{0}\in\mathbb{R}^{(N+1)\times(N+1)}\), \(\mathbf{E}_{i,i}\in\mathbb{R}^{(N+1)\times(N+1)}\), \(\mathbf{F}_{0}\in\mathbb{C}^{(N+1)\times K}\), and \(\mathbf{G}_{0}\in\mathbb{R}^{(N+1)\times(N+1)}\) is defined as
\[\mathbf{E}_{0} =\begin{bmatrix}\mathbf{0}_{N\times N}&\mathbf{1}_{N\times 1}\\ \mathbf{1}_{1\times N}&0\end{bmatrix},\mathbf{E}_{i,i}=\text{diag}(\mathbf{e}_{i}), \tag{21}\] \[\mathbf{G}_{0} =\begin{bmatrix}\mathbf{GG}^{\mathrm{H}}&\mathbf{0}_{N\times 1}\\ \mathbf{0}_{1\times N}&0\end{bmatrix},\mathbf{F}_{0}=\begin{bmatrix}\mathbf{F}\\ \mathbf{0}_{1\times K}\end{bmatrix}.\]
The equivalence between \(\mathcal{P}_{2}^{\prime}\) and \(\mathcal{P}_{3}\) is proved in **Appendix B**.
After the conversion of the optimization variables, the objective function (20a) and the constraints (20b), (20c), and (20d) are all convex. Therefore, the only challenge is the non-convexity of (20e). If we relax the constraint (20e), which is called SDR [18, 27], the optimization problem will be expressed as a convex SDP problem
\[\mathcal{P}_{3}^{\prime}: \min_{\mathbf{X}}-\frac{1}{4}P_{\text{RIS}}\text{tr}\left(\mathbf{E}_{0} \mathbf{X}\right)+\text{tr}\left(\mathbf{F}_{0}^{\mathrm{H}}\left(\mathbf{X}\odot\mathbf{G}_ {0}\right)\mathbf{F}_{0}\mathbf{P}^{-1}\right)^{-1},\] (22a) s.t. \[\text{tr}\left(\left(\mathbf{F}_{0}^{\mathrm{H}}\left(\mathbf{X}\odot \mathbf{G}_{0}\right)\mathbf{F}_{0}\mathbf{P}^{-1}\right)^{-1}\right)\leq P_{\text{max}}, \tag{22b}\] \[\text{tr}\left(\mathbf{E}_{i,i}\mathbf{X}\right)=1,\ i\in 1,2,...,N+1,\] (22c) \[\mathbf{X}\succeq 0. \tag{22d}\]
The problem \(\mathcal{P}_{3}^{\prime}\) is a standard convex semi-definite programming, which can be solved with general procedures like the interior-point method by CVX solvers [28].
The next step is to acquire the low-rank solution \(\mathbf{X}\) of \(\mathcal{P}_{3}\) (i.e. \(\mathbf{q}\) of \(\mathcal{P}_{2}^{\prime}\)) from the solution \(\tilde{\mathbf{X}}\) of \(\mathcal{P}_{3}^{\prime}\). According to the positive semi-definiteness of \(\tilde{\mathbf{X}}\), we can define \(\mathbf{V}=\left[\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{N}\right]\in\mathbb{R}^{N\times N}\) as \(\mathbf{V}^{T}\mathbf{V}=\tilde{\mathbf{X}}(1:N,1:N)\). Randomly select \(\mathbf{u}\in\mathbb{R}^{N}\) from the uniform distribution on the \(N\)-dimensional sphere (an implementation is to construct \(\tilde{\mathbf{u}}\) where \(\tilde{u}_{i}\sim\mathcal{N}(0,1)\), then \(\mathbf{u}=\tilde{\mathbf{u}}/\|\tilde{\mathbf{u}}\|\)). Then, the estimate \(\tilde{q}_{i}\) is the sign of the projection of \(\mathbf{v}_{i}\) onto \(\mathbf{u}\), i.e. \(\tilde{q}_{i}=\text{sgn}\left(\mathbf{u}^{\mathrm{T}}\mathbf{v}_{i}\right)\). We can repeat the procedure several times and select the best candidate. The method based on SDP relaxation can be summarized in **Algorithm 3**.
```
0: Number of RIS elements \(N\); Channel matrix \(\mathbf{F},\mathbf{G}\); Power consumption of each ON-state RIS element \(P_{0}\).
0: RIS beamforming state \(\mathbf{q}\).
1: Calculate \(\mathbf{F}_{0}\), \(\mathbf{G}_{0}\).
2: Solve the problem \(\mathcal{P}_{3}^{\prime}\) with the standard SDP procedures and acquire the optimal \(\tilde{\mathbf{X}}\).
3:\(\mathbf{V}\leftarrow\left(\tilde{\mathbf{X}}\left(1:N,1:N\right)\right)^{\frac{1}{2}}\)
4:for\(i=1,2,...,N_{\text{SDP}}\)do
5: Randomly choose \(\tilde{\mathbf{u}}^{(i)}\) with \(\tilde{u}_{n}^{(i)}\sim\mathcal{N}(0,1)\).
6:\(\mathbf{u}^{(i)}\leftarrow\tilde{\mathbf{u}}^{(i)}/\left\|\tilde{\mathbf{u}}^{(i)}\right\|\)
7:\(\hat{q}_{n}^{(i)}\leftarrow\text{sgn}\left(\mathbf{v}_{n}^{\mathrm{T}}\mathbf{u}^{(i)}\right)\)
8:\(g^{(i)}\leftarrow-\frac{1}{2}P_{0}\mathbf{1}^{\mathrm{T}}\mathbf{q}^{(i)}+\sum_{k=1}^{K}p_{k}t_{k}^{(i)}\)
9:endfor
10: Select the \(\mathbf{q}^{(i)}\) as \(\mathbf{q}\) corresponding to the minimum \(g^{(i)}\).
11:return\(\mathbf{q}\)
```
**Algorithm 3** SDP Relaxation
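The Gaussian-randomization recovery in steps 3-10 of **Algorithm 3** can be sketched as follows. Solving the SDP \(\mathcal{P}_{3}^{\prime}\) itself (e.g., with a CVX-type solver) is outside this snippet, so the relaxed solution \(\tilde{\mathbf{X}}\) is treated as an input; the eigen-decomposition used to form \(\mathbf{V}\) and the added feasibility check on (17b) are choices of this sketch, not of the paper.

```python
import numpy as np

def randomization_rounding(X_tilde, F, G, p, P_0, P_max, n_rounds=100, seed=0):
    """Recover a binary q from the relaxed SDP solution X_tilde (Algorithm 3, steps 3-10)."""
    rng = np.random.default_rng(seed)
    N = X_tilde.shape[0] - 1

    # V with V^T V = X_tilde(1:N, 1:N); built from an eigen-decomposition for robustness
    w, U = np.linalg.eigh(X_tilde[:N, :N])
    V = np.diag(np.sqrt(np.maximum(w, 0.0))) @ U.T            # columns v_1, ..., v_N

    def g_and_power(q):
        Hh = F.conj().T @ np.diag(q.astype(complex)) @ G
        t = np.real(np.diag(np.linalg.inv(Hh @ Hh.conj().T)))
        return -0.5 * P_0 * q.sum() + (p * t).sum(), (p * t).sum()

    best_q, best_g = None, np.inf
    for _ in range(n_rounds):
        u = rng.standard_normal(N)
        u /= np.linalg.norm(u)                                # uniform direction on the sphere
        q = np.sign(V.T @ u)                                  # q_n = sgn(u^T v_n)
        q[q == 0] = 1.0
        g, power = g_and_power(q)
        if power <= P_max and g < best_g:                     # keep the best feasible draw
            best_q, best_g = q, g
    return best_q                                             # None if no feasible draw found
```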
### _Complexity & Convergence Analysis_
The original RIS analog beamforming problem \(\mathcal{P}_{2}\) or \(\mathcal{P}_{2}^{\prime}\) is NP-hard, which means the computational complexity for the optimal solution is at least \(\mathcal{O}\left(2^{N}\right)\). This is unacceptable, especially for an RIS with a large number of reflecting elements. Here, the computational complexity of the two methods proposed in Subsections IV-A and IV-B will be discussed, and it will be shown that these methods are computationally acceptable.
Firstly, the complexity of **Algorithm 2** is derived as follows. The complexity for computing (18) is \(\mathcal{O}\left(K^{3}\left(N+K\right)\right)\), which is approximately \(\mathcal{O}\left(NK^{3}\right)\) since \(K\ll N\) in general. The complexity for the sorting procedure is \(\mathcal{O}\left(N\log N\right)\), which can be ignored. The complexity for determining the feasibility of the solution and the value of (17a) is \(\mathcal{O}\left(NKM+K^{2}M+K^{3}\right)\simeq\mathcal{O}\left(NKM\right)\). Thus, assuming \(I_{\text{iter}}\) iterations, the computational complexity of **Algorithm 2** is \(\mathcal{O}\left(I_{\text{iter}}\left(NK^{3}+N^{2}KM\right)\right)\simeq\mathcal{O}\left(I_{\text{iter}}N^{2}KM\right)\).
Next, we will consider **Algorithm 3**. The worst-case complexity for solving the SDP optimization \(\mathcal{P}_{3}^{\prime}\) with interior-point algorithm is \(\mathcal{O}(((N+1)^{2}+1)^{4.5})\simeq\mathcal{O}(N^{9})\)[29]. Compared with this, all others can be ignored. Thus, the computational complexity of **Algorithm 3** is \(\mathcal{O}(N^{9})\).
From the discussion above, both methods have polynomial computational complexity, in contrast to the exponential complexity of finding the optimal solution. Furthermore, the computational complexity of **Algorithm 3** is much higher than that of **Algorithm 2**.
Now, the convergence will be analyzed. **Algorithm 3** contains no iterative refinement step, so its convergence does not need to be discussed here. According to **Algorithm 2**, we define \(\boldsymbol{q}^{(i,j)}\) as the value of \(\boldsymbol{q}\) during the \(i\)-th iteration and before the tentative flip of \(\boldsymbol{q}_{d_{j}}^{(i)}\). Since \(g(\boldsymbol{q}^{(i,j)})\leq g(\boldsymbol{q}^{(i,j-1)})\), as discussed in Subsection IV-A, and \(g\) is bounded below, this method is proved to converge.
## V Simulation Results
### _Simulation Model_
In this section, simulation results will be provided to verify the effectiveness of the proposed algorithm for maximizing energy efficiency in an MU-MISO communication scenario. In our simulation, the signal models and channel models are consistent with the discussion in Section II. In Subsection V-B, the convergence of the algorithm will be shown for different BS transmit powers. Then, the performance of the proposed algorithm is tested under a wide range of BS transmit power, as shown in Subsection V-C. Finally, the impact of the number of RIS elements on EE will be shown in Subsection V-D.
In this subsection, we will introduce the channel model and simulation parameters. We assume a Rician fading channel that consists of both line-of-sight (LoS) and non-line-of-sight (NLoS) components. Specifically, the channel matrix \(\boldsymbol{F}\) (and similarly \(\boldsymbol{G}\)) can be expressed as follows:
\[\boldsymbol{F}=z\left(\sqrt{\frac{\kappa}{\kappa+1}}\boldsymbol{F}_{\mathrm{LoS }}+\sqrt{\frac{1}{\kappa+1}}\boldsymbol{F}_{\mathrm{NLoS}}\right), \tag{23}\]
where \(\kappa\) is the Rician factor, \(z\) is the path loss related to the distance, \(\boldsymbol{F}_{\mathrm{LoS}}\) represents the LoS component of the channel \(\boldsymbol{F}\), and \(\boldsymbol{F}_{\mathrm{NLoS}}\) represents the NLoS component of \(\boldsymbol{F}\), which follows the distribution \([\boldsymbol{F}_{\mathrm{NLoS}}]_{k\ell}\sim\) i.i.d. \(\mathcal{CN}\left(0,1\right)\). The LoS component \(\boldsymbol{F}_{\mathrm{LoS}}\) admits an SV channel structure, i.e.,
\[\boldsymbol{F}_{\mathrm{LoS}}=\sqrt{NM}\boldsymbol{a}_{N}\left(\theta,\varphi \right)\boldsymbol{a}_{M}^{\mathrm{H}}\left(\theta^{\prime},\varphi^{\prime} \right), \tag{24}\]
where \(\left(\theta,\varphi\right),\left(\theta^{\prime},\varphi^{\prime}\right)\) are the azimuth and elevation angles of the beam in the coordinate system of the RIS and the BS, respectively. The steering vectors \(\boldsymbol{a}_{N},N=N_{1}\times N_{2}\) and \(\boldsymbol{a}_{M},M=M_{1}\times M_{2}\) are defined as
\[\begin{split}\boldsymbol{a}_{\boldsymbol{N}}(\theta,\varphi)&=\frac{1}{\sqrt{N}}\left[1,e^{\mathrm{j}\pi\sin\theta\sin\varphi/\lambda},\cdots,e^{\mathrm{j}\pi\left(N_{1}-1\right)\sin\theta\sin\varphi/\lambda}\right]_{N_{1}}^{\mathrm{T}}\\ &\quad\otimes\left[1,e^{\mathrm{j}\pi\cos\varphi/\lambda},\cdots,e^{\mathrm{j}\pi\left(N_{2}-1\right)\cos\varphi/\lambda}\right]_{N_{2}}^{\mathrm{T}},\\ \boldsymbol{a}_{\boldsymbol{M}}(\theta^{\prime},\varphi^{\prime})&=\frac{1}{\sqrt{M}}\left[1,e^{\mathrm{j}\pi\sin\theta^{\prime}\sin\varphi^{\prime}/\lambda},\cdots,e^{\mathrm{j}\pi\left(M_{1}-1\right)\sin\theta^{\prime}\sin\varphi^{\prime}/\lambda}\right]_{M_{1}}^{\mathrm{T}}\\ &\quad\otimes\left[1,e^{\mathrm{j}\pi\cos\varphi^{\prime}/\lambda},\cdots,e^{\mathrm{j}\pi\left(M_{2}-1\right)\cos\varphi^{\prime}/\lambda}\right]_{M_{2}}^{\mathrm{T}}.\end{split}\tag{25}\]
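The Rician channel (23) with the SV line-of-sight structure (24)-(25) can be generated as follows. This Python sketch is illustrative only: the wavelength placeholder \(\lambda=1\), the path loss \(z\), the Rician factor \(\kappa\), and the angles are arbitrary assumed values.

```python
import numpy as np

def steering_vector(n1, n2, theta, phi, lam=1.0):
    """UPA steering vector as in (25); lam is a placeholder wavelength."""
    a1 = np.exp(1j * np.pi * np.arange(n1) * np.sin(theta) * np.sin(phi) / lam)
    a2 = np.exp(1j * np.pi * np.arange(n2) * np.cos(phi) / lam)
    return np.kron(a1, a2) / np.sqrt(n1 * n2)

def rician_channel(n1, n2, m1, m2, kappa, z, angles_rx, angles_tx, rng):
    """Channel per (23)-(24): z * (sqrt(k/(k+1)) F_LoS + sqrt(1/(k+1)) F_NLoS)."""
    N, M = n1 * n2, m1 * m2
    a_rx = steering_vector(n1, n2, *angles_rx)       # (theta, varphi) at the RIS side
    a_tx = steering_vector(m1, m2, *angles_tx)       # (theta', varphi') at the BS side
    F_los = np.sqrt(N * M) * np.outer(a_rx, a_tx.conj())
    F_nlos = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    return z * (np.sqrt(kappa / (kappa + 1)) * F_los + np.sqrt(1.0 / (kappa + 1)) * F_nlos)

rng = np.random.default_rng(0)
G = rician_channel(8, 8, 4, 4, kappa=3.0, z=1e-3,
                   angles_rx=(0.3, 0.5), angles_tx=(0.2, 0.4), rng=rng)
print(G.shape)   # (64, 16): an 8x8 RIS array and a 4x4 BS array
```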
system EE as the iteration proceeds. These figures show that after the first few iterations, the SE and EE of the system both maintain stable values, which verifies the convergence of the AO algorithm. Simulation results have demonstrated that the AO algorithm framework achieves convergence in only two or three iterations. Moreover, this convergence rate is almost independent of the transmit power, thus showing that the AO algorithm framework is computationally stable and widely applicable to different communication conditions.
### _EE with Different BS Transmit Power_
In this subsection, we mainly focus on the energy efficiency optimized by the proposed algorithm as a function of the transmitted power. Note that since the thermal noise power \(\sigma_{n}^{2}\) at the receiver is fixed, the transmitted power admits a constant dB difference from the transmitter signal-to-noise ratio (SNR). The simulation parameters are consistent with those in Subsection V-B, and the BS transmit power varies from \(-10\,\mathrm{dBW}\) to \(10\,\mathrm{dBW}\).
The spectrum efficiency and energy efficiency curves are provided in Fig. 4 and Fig. 5. The blue curves "Search with Gradient" in both subgraphs represent the results with the RIS analog beamforming algorithm designed in Subsection IV-A, and the red curves "SDP Relaxation" represent those in Subsection IV-B. It should be noted that both the curves "Search with Gradient" and "SDP Relaxation" belong to the AO framework. Their only difference is that the former employs the gradient search RIS analog beamforming method, while the latter utilizes the SDP relaxation-based RIS analog beamforming method. The black curves "Random RIS" and the green curves "All-OFF RIS" are baselines, which are given by random RIS analog beamforming and the RIS beamforming as an identity matrix (i.e. all the elements of the RIS are configured to the OFF state). We also plot the performance of an existing method proposed in [31], which solves \(\mathcal{P}_{2}\) by successively setting the phase shifts of all elements in order from \(n=1\) to \(n=N\), and updating each \(\theta_{n}\) while fixing all other \(\theta_{k}\)'s, \(\forall k\neq n\)
Fig. 4: SE curves based on proposed algorithm, existing method [31], and baselines; \(x\)-axis denotes the BS transmit power; \(y\)-axis denotes the spectrum efficiency (bps/Hz).
Fig. 5: EE curves based on proposed algorithm, existing method [31], and baselines; \(x\)-axis denotes the BS transmit power; \(y\)-axis denotes the energy efficiency (bits/Joule).
The process does not stop until convergence.
Fig. 4 and Fig. 5 show that both methods achieve better SE and EE performance in typical scenarios than the two baselines, as well as the existing method. The SDP relaxation method may perform worse when the BS transmit power is extremely low, but such scenarios are rare in real-world settings. The reason why "Search with Gradient" outperforms the existing method is that the additional information provided by the gradient is fully utilized, so a larger decrease in \(\mathcal{P}_{2}\)'s objective function can be achieved in each iteration. A noteworthy phenomenon is that both the SE and EE curves eventually reach a plateau, where the values no longer grow as the BS transmit power increases. This is because, in order to maximize energy efficiency, the power the BS consumes to transmit messages is expected to be low. Thus, even if the available transmit power is relatively high, the optimization algorithm tends to transmit with less than the full power budget, which causes the SE and EE curves to flatten once the BS transmit power budget is high enough.
We can learn from the figures that the SDP relaxation method proposed in Subsection IV-B is generally better than the gradient searching method proposed in Subsection IV-A. This is reasonable because only the SDP relaxation approach approximates the optimal solution of \(\mathcal{P}_{2}^{\prime}\) in a principled manner, while the gradient search merely provides an effective heuristic for finding a better solution. However, in Fig. 4, we notice that the gains seem less significant when the BS transmit power is relatively high, which is due to the trade-off between increasing the SE and decreasing the BS power consumption. In order to obtain a higher EE, the lower SE that accompanies lower BS power consumption is hard to avoid, and this is particularly prominent when the allowed BS power consumption is high.
### _Impact of the Number of RIS Elements_
Here, we mainly show the impact of the number of RIS reflecting elements on the SE and EE. The simulation parameters are consistent with those in Subsection V-B, and the number of RIS reflecting elements varies from \(4\times 4\) to \(13\times 13\).
Fig. 6 and Fig. 7 show the SE and EE curves of the communication system obtained by both methods. The trends of the five curves, which represent the performance of the proposed algorithms and the baselines respectively, are similar to those in Subsection V-C. From Fig. 6 and Fig. 7, it is clear that the SE and EE of the communication system increase as the number of RIS reflecting elements grows, owing to the stronger beamforming gain provided by the RIS. Also, the results show that the SDP relaxation method outperforms the gradient searching method, and both outperform the baselines, which accords with the previous results and demonstrates the generality of the proposed algorithm.
For future works, the 1-bit RIS power dissipation model can be generalized to more practical multi-bit versions for improved accuracy. The corresponding joint optimization algorithms can be re-designed to adapt to this enhanced RIS manipulation capability.
## Appendix A Analytical solution of the problem (14a)
Here, we will analytically solve the optimization problem (14a), which is
\[\mathcal{P}_{A}:\mathbf{P}=\text{arg}\max_{\mathbf{P}} \sum_{k=1}^{K}\log_{2}\left(1+\frac{p_{k}}{\sigma_{n}^{2}}\right)- \lambda\left(P_{1}+\sum_{k=1}^{K}p_{k}t_{k}\right),\] (26a) s.t. \[\sum_{k=1}^{K}p_{k}t_{k}\leq P_{\text{max}}, \tag{26b}\] \[p_{k}\geq p_{\text{min}},\ \forall k\in\mathcal{K}. \tag{26c}\]
As shown above, the target is to maximize a concave function (26a) with affine constraints (26b) and (26c). Firstly, the Lagrange function of \(\mathcal{P}_{A}\) can be expressed as
\[\mathcal{L}\left(p_{k},\mathbf{\mu}\right) =\lambda\left(P_{1}+\sum_{k=1}^{K}p_{k}t_{k}\right)-\sum_{k=1}^{K }\log_{2}\left(1+\frac{p_{k}}{\sigma_{n}^{2}}\right)\] \[\quad+\mu_{0}\left(\sum_{k=1}^{K}p_{k}t_{k}-P_{\text{max}}\right) -\sum_{k=1}^{K}\mu_{k}\left(p_{k}-p_{\text{min}}\right). \tag{27}\]
Then, the KKT condition of \(\mathcal{P}_{A}\) can be written as
\[\left(\lambda+\mu_{0}\right)t_{k}-\mu_{k}-\frac{1}{\left(p_{k}+ \sigma_{n}^{2}\right)\log 2}=0,\ \forall k\in\mathcal{K}, \tag{28a}\] \[\mu_{0}\left(\sum_{k=1}^{K}p_{k}t_{k}-P_{\text{max}}\right)=0,\] (28b) \[\mu_{k}\left(p_{k}-p_{\text{min}}\right)=0,\ \forall k\in \mathcal{K},\] (28c) \[\mu_{0},\mu_{k}\geq 0,\ \forall k\in\mathcal{K}. \tag{28d}\]
These are similar to water-filling solutions in form, but the relaxation term with \(\lambda\) makes some difference. Here, we substitute (28c) into (28a), and then we obtain
\[p_{k}=\max\left\{\frac{1}{\log 2}\frac{1}{\left(\lambda+\mu_{0}\right)t_{k}}- \sigma_{n}^{2},p_{\text{min}}\right\}. \tag{29}\]
With the equation (28b), we have
\[\mu_{0}\left(\sum_{k=1}^{K}\max\left\{\frac{1}{\log 2}\frac{1}{\lambda+\mu_{0} }-t_{k}\sigma_{n}^{2},p_{\text{min}}t_{k}\right\}-P_{\text{max}}\right)=0. \tag{30}\]
Then, after introducing two auxiliary variables, the analytical solution of \(\mathcal{P}_{A}\) can be written as
\[\zeta:\sum_{k=1}^{K}\max\left\{\zeta-t_{k}\sigma_{n}^{2},t_{k}p_{ \text{min}}\right\}=P_{\text{max}}, \tag{31a}\] \[\xi=\min\left\{\zeta,\frac{1}{\lambda\log 2}\right\},\] (31b) \[p_{k}=\max\left\{\frac{1}{t_{k}}\left(\xi-t_{k}\sigma_{n}^{2} \right),p_{\text{min}}\right\}. \tag{31c}\]
## Appendix B Proof of the equivalence between \(\mathcal{P}_{2}^{\prime}\) and \(\mathcal{P}_{3}\)
In order to simplify the problem, we define the variable matrix \(\mathbf{X}\) in order to replace \(\mathbf{q}\), which can be expressed as
\[\mathbf{X}=\begin{bmatrix}\mathbf{q}\\ 1\end{bmatrix}\begin{bmatrix}\mathbf{q}^{\mathrm{T}}&1\end{bmatrix}=\begin{bmatrix} \mathbf{q}\mathbf{q}^{\mathrm{T}}&\mathbf{q}\\ \mathbf{q}^{\mathrm{T}}&1\end{bmatrix}, \tag{32}\]
where \(\mathbf{q}\mathbf{q}^{\mathrm{T}}\) can replace the quadratic term, and \(\mathbf{q}\) or \(\mathbf{q}^{\mathrm{T}}\) can replace the linear term. The expression of \(\mathbf{H}^{\mathrm{H}}\mathbf{H}\) can be derived as follows
\[\mathbf{H}^{\mathrm{H}}\mathbf{H} =\mathbf{F}^{\mathrm{H}}\mathbf{\Theta}\mathbf{G}\mathbf{G}^{\mathrm{H}}\mathbf{ \Theta}\mathbf{F} \tag{33}\] \[=\mathbf{F}^{\mathrm{H}}\text{diag}\left(\mathbf{q}\right)\mathbf{G}\mathbf{G}^{ \mathrm{H}}\text{diag}\left(\mathbf{q}\right)\mathbf{F}.\]
Then, we consider the \(\left(i,j\right)\)-element of matrix \(\text{diag}\left(\mathbf{q}\right)\mathbf{G}\mathbf{G}^{\mathrm{H}}\text{diag}\left(\mathbf{q }\right)\), which is
\[\begin{split}&\left[\text{diag}\left(\mathbf{q}\right)\mathbf{G}\mathbf{G}^{\mathrm{H}}\text{diag}\left(\mathbf{q}\right)\right]_{\left(i,j\right)}\\ &=\sum_{k=1}^{N}\sum_{l=1}^{N}\left[\text{diag}\left(\mathbf{q}\right)\right]_{\left(i,k\right)}\left[\mathbf{G}\mathbf{G}^{\mathrm{H}}\right]_{\left(k,l\right)}\left[\text{diag}\left(\mathbf{q}\right)\right]_{\left(l,j\right)}\\ &=\sum_{k=1}^{N}\sum_{l=1}^{N}q_{i}q_{j}\left[\mathbf{G}\mathbf{G}^{\mathrm{H}}\right]_{\left(k,l\right)}\delta_{ik}\delta_{lj}\\ &=q_{i}q_{j}\left[\mathbf{G}\mathbf{G}^{\mathrm{H}}\right]_{\left(i,j\right)},\end{split} \tag{34}\]
so the matrix above can be expressed as the Hadamard product \(\mathbf{q}\mathbf{q}^{\mathrm{T}}\odot\mathbf{G}\mathbf{G}^{\mathrm{H}}\), which can be written as a linear combination of \(\mathbf{X}\). Then, the matrix \(\mathbf{H}^{\mathrm{H}}\mathbf{H}\) can be expressed as
\[\mathbf{H}^{\mathrm{H}}\mathbf{H}=\mathbf{F}_{0}^{\mathrm{H}}\left(\mathbf{X}\odot\mathbf{G}_{0} \right)\mathbf{F}_{0}, \tag{35}\]
where the \(\mathbf{F}_{0}\) and \(\mathbf{G}_{0}\) are defined in (21). Then, the constraint (17b) can be written as
\[\begin{split}&\text{tr}\left(\mathbf{P}^{\frac{1}{2}}\left(\mathbf{H}^{ \mathrm{H}}\mathbf{H}\right)^{-1}\mathbf{P}^{\frac{1}{2}}\right)\\ &=\text{tr}\left(\left(\mathbf{F}_{0}^{\mathrm{H}}\left(\mathbf{X}\odot\mathbf{G }_{0}\right)\mathbf{F}_{0}\mathbf{P}^{-1}\right)^{-1}\right)\leq P_{\text{max}}. \end{split} \tag{36}\]
The other constraint (17c) of \(\mathcal{P}_{2}^{\prime}\) can also be written as
\[q_{n}^{2}=1,\ \forall n\in\mathcal{N}, \tag{37}\]
which means the diagonal of \(\mathbf{X}\) (i.e. \(q_{n}^{2}\)) is always \(1\), which leads to \(N+1\) constraints
\[\text{tr}\left(\mathbf{E}_{i,i}\mathbf{X}\right)=1. \tag{38}\]
In addition, according to (32), the semi-definite constraint \(\mathbf{X}\succeq 0\) and the rank-one constraint \(\text{rank}\left(\mathbf{X}\right)=1\) should be considered in order to ensure the equivalence. This completes the proof.
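The key identity (35) used above can also be confirmed numerically, as in the short check below (random complex channels and a random binary \(\mathbf{q}\) are assumed purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 8, 16, 3
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
F = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
q = rng.choice([1.0, -1.0], size=N)

# Left-hand side, cf. (33): H^H H = F^H diag(q) G G^H diag(q) F
Dq = np.diag(q.astype(complex))
lhs = F.conj().T @ Dq @ G @ G.conj().T @ Dq @ F

# Right-hand side, cf. (35), with the lifted variable X from (32)
X = np.outer(np.append(q, 1.0), np.append(q, 1.0))   # (N+1) x (N+1), rank one
G0 = np.zeros((N + 1, N + 1), dtype=complex)
G0[:N, :N] = G @ G.conj().T
F0 = np.vstack([F, np.zeros((1, K))])
rhs = F0.conj().T @ (X * G0) @ F0                    # X * G0 is the Hadamard product

assert np.allclose(lhs, rhs)
```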
|
2308.06289 | Fibonacci-like property of partition function | The main result of the paper is the Fibonacci-like property of the partition
function. The partition function $p(n)$ has a property: $p(n) \leq p(n-1) +
p(n-2)$. Our result shows that if we impose certain restrictions on the
partition, then the inequality becomes an equality. Furthermore, we extend this
result to cases with a greater number of summands. | Qi-Yang Zheng | 2023-08-10T18:33:07Z | http://arxiv.org/abs/2308.06289v1 | # Fibonacci-like property of partition function
###### Abstract.
The main result of the paper is the Fibonacci-like property of the partition function. The partition function \(p(n)\) has a property: \(p(n)\leq p(n-1)+p(n-2)\). Our result shows that if we impose certain restrictions on the partition, then the inequality becomes an equality. Furthermore, we extend this result to cases with a greater number of summands.
## 1. Introduction
In number theory and combinatorics, a partition of a non-negative integer \(n\), also called an integer partition, is a way of representing \(n\) as a sum of positive integers. Two sums that differ only in the order of their summands are considered to be the same partition. We denote the partition function by \(p(n)\).
One can employ elementary methods to demonstrate that \(p(n)\leq p(n-1)+p(n-2)\) for \(n\geq 2\) (see [2, (3.8)]). Our result provides an explicit formulation of this equation when certain restrictions are imposed. The theorem is outlined below:
**Theorem 1.1**.: _For \(n\geq 2\), we have_
\[p(n\mid\text{parts}\not\equiv 12,15,27\pmod{27})\] \[=p(n-1\mid\text{parts}\not\equiv 6,21,27\pmod{27})\] \[+p(n-2\mid\text{parts}\not\equiv 3,24,27\pmod{27}).\]
For example, let \(n=29\). We find that \(p(27)=3010\), \(p(28)=3718\), and \(p(29)=4565\). However,
\[p(29\mid\text{parts}\not\equiv 12,15,27\pmod{27})=4133,\]
\[p(28\mid\text{parts}\not\equiv 6,21,27\pmod{27})=2701,\]
\[p(27\mid\text{parts}\not\equiv 3,24,27\pmod{27})=1432.\]
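The identity in Theorem 1.1 can be checked by direct computation for the example \(n=29\) above. The short Python sketch below counts the restricted partitions with a standard dynamic program over the allowed part sizes; it is a verification aid, not part of the paper.

```python
def restricted_partitions(n, forbidden, modulus=27):
    """Number of partitions of n into parts not congruent to any forbidden residue (mod modulus)."""
    banned = {r % modulus for r in forbidden}
    parts = [m for m in range(1, n + 1) if (m % modulus) not in banned]
    count = [0] * (n + 1)
    count[0] = 1
    for part in parts:                    # coin-change style dynamic program
        for total in range(part, n + 1):
            count[total] += count[total - part]
    return count[n]

n = 29
lhs = restricted_partitions(n, {12, 15, 27})
rhs = restricted_partitions(n - 1, {6, 21, 27}) + restricted_partitions(n - 2, {3, 24, 27})
print(lhs, rhs)   # per the values above, both sides should equal 4133 = 2701 + 1432
```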
In fact, we prove something more. Recall that the partition function possesses a recurrence formula,
\[p(n) =\sum_{k\in\mathbb{Z}\setminus\{0\}}(-1)^{k+1}p(n-k(3k-1)/2)\] \[=p(n-1)+p(n-2)-p(n-5)-p(n-7)+\cdots\]
Our result shows that we can truncate the formula at any even position, provided certain restrictions are applied to the partition. The precise statement is as follows:
**Theorem 1.2**.: _For every integer \(m\geq 1\) and \(n\geq m(3m+1)/2\), we have_
\[0=p\left(n\mid\operatorname{parts}\not\equiv 0,(2m+1)(3m+1),(2m+1)(3m+2 )\right)+\sum_{i=1}^{m}(-1)^{i}\] \[\left[p\left(n-\frac{i(3i-1)}{2}\,\right|\,\operatorname{parts} \not\equiv 0,(2m+1)(3m-3i+2),(2m+1)(3m+3i+1)\right)\right.\] \[\left.+p\left(n-\frac{i(3i+1)}{2}\,\right|\,\operatorname{parts} \not\equiv 0,(2m+1)(3m-3i+1),(2m+1)(3m+3i+2)\,\right)\right],\]
_where all three congruences are taken modulo \(3(2m+1)^{2}\)._
In fact, if we agree that \(p(n)=0\) for \(n<0\), then the theorem is satisfied for all \(n\).
Note that Theorem 1.1 corresponds to the special case \(m=1\). Additionally, the case \(m=0\) is trivial. If we set \(m=2\), then we acquire
**Corollary 1.3**.: _For \(n\geq 7\), we have_
\[p(n\mid\operatorname{parts}\not\equiv 35,40,75\pmod{75})\] \[=p(n-1\mid\operatorname{parts}\not\equiv 25,50,75\pmod{75})\] \[+p(n-2\mid\operatorname{parts}\not\equiv 20,55,75\pmod{75})\] \[-p(n-5\mid\operatorname{parts}\not\equiv 10,65,75\pmod{75})\] \[-p(n-7\mid\operatorname{parts}\not\equiv 5,70,75\pmod{75}).\]
## 2. Proofs of theorems
As mentioned before, it is sufficient to prove Theorem 1.2.
Proof of Theorem 1.2.: First, we recall the well-known Euler's Pentagonal Number Theorem
\[\prod_{n=1}^{\infty}(1-q^{n})=\sum_{n=-\infty}^{\infty}(-1)^{n}q^{\frac{n(3n+1 )}{2}}.\]
Substitute \(q\) with \(q^{1/(2m+1)}\), resulting in
\[\prod_{n=1}^{\infty}(1-q^{\frac{n}{2m+1}})=\sum_{n=-\infty}^{\infty}(-1)^{n}q^{ \frac{n(3n+1)}{2(2m+1)}}.\]
Now we divide the summation according to residue classes modulo \(2m+1\),
\[\prod_{n=1}^{\infty}(1-q^{\frac{n}{2m+1}}) =\sum_{n=-\infty}^{\infty}(-1)^{n}q^{\frac{n(3n+1)}{2(2m+1)}}\] \[=\sum_{i=-m}^{m}\sum_{n=i\pmod{2m+1}}^{\infty}(-1)^{n}q^{\frac{ n(3n+1)}{2(2m+1)}}.\]
For each value of \(i\), we have
\[\sum_{n=-\infty\atop n=i\pmod{2m+1}}^{\infty}(-1)^{n}q^{\frac{n(3n+ 1)}{2(2m+1)}} =\sum_{n=-\infty}^{\infty}(-1)^{(2m+1)n+i}q^{\frac{[(2m+1)n+i](3(2m +1)n+3i+1]}{2(2m+1)}}\] \[=(-1)^{i}q^{\frac{i(3i+1)}{2(2m+1)}}\sum_{n=-\infty}^{\infty}(-1) ^{n}q^{\frac{(2m+1)3n^{2}+n(6i+1)}{2}}\] \[=(-1)^{i}q^{\frac{i(3i+1)}{2(2m+1)}}\sum_{n=-\infty}^{\infty}(-1) ^{n}q^{3(2m+1)\frac{n(n+1)}{2}-(3m-3i+1)n}.\]
We now introduce a lemma to address this type of summation (cf. [1, Corollary 2.9]).
**Lemma 2.1**.: _For \(|q|<1\),_
\[\sum_{n=-\infty}^{\infty}(-1)^{n}q^{(2k+1)n(n+1)/2-in}\] \[=\prod_{n=0}^{\infty}(1-q^{(2k+1)(n+1)})(1-q^{(2k+1)n+i})(1-q^{( 2k+1)(n+1)-i}).\]
Therefore,
\[\sum_{n=i\pmod{2m+1}}^{\infty}(-1)^{n}q^{\frac{n(3n+1)}{2(2m+1)}}\] \[=(-1)^{i}q^{\frac{i(3i+1)}{2(2m+1)}}\prod_{n=0}^{\infty}(1-q^{3(2 m+1)(n+1)})(1-q^{3(2m+1)n+3m-3i+1})(1-q^{3(2m+1)n+3m+3i+2}).\]
It is worth noting that \(0<3m-3i+1,3m+3i+2<3(2m+1)\) for all \(-m\leq i\leq m\).
Next, we substitute \(i\) with \(-i\), resulting in
\[\sum_{n=-i\pmod{2m+1}}^{\infty}(-1)^{n}q^{\frac{n(3n+1)}{2(2m+1)}}\] \[=(-1)^{i}q^{\frac{i(3i-1)}{2(2m+1)}}\prod_{n=0}^{\infty}(1-q^{3(2 m+1)(n+1)})(1-q^{3(2m+1)n+3m-3i+2})(1-q^{3(2m+1)n+3m+3i+1}).\]
Hence
\[\sum_{i=-m}^{m}\sum_{n=-\infty\atop n\equiv i\pmod{2m+1}}^{\infty}(-1)^{n}q^{\frac{n(3n+1)}{2(2m+1)}}\] \[=\left[\sum_{n=-\infty\atop n\equiv 0\pmod{2m+1}}^{\infty}+\sum_{i=1}^{m}\left(\sum_{n=-\infty\atop n\equiv-i\pmod{2m+1}}^{\infty}+\sum_{n=-\infty\atop n\equiv i\pmod{2m+1}}^{\infty}\right)\right](-1)^{n}q^{\frac{n(3n+1)}{2(2m+1)}}.\]
Now, we use the product formulas obtained earlier to yield the following:
\[\prod_{n=1}^{\infty}(1-q^{\frac{n}{2m+1}})\] \[=\prod_{n=0}^{\infty}(1-q^{3(2m+1)(n+1)})(1-q^{3(2m+1)n+3m+1})(1-q^ {3(2m+1)n+3m+2})+\sum_{i=1}^{m}(-1)^{i}\] \[\left[q^{\frac{i(3i-1)}{2(2m+1)}}\prod_{n=0}^{\infty}(1-q^{3(2m+1 )(n+1)})(1-q^{3(2m+1)n+3m-3i+2})(1-q^{3(2m+1)n+3m+3i+1})\right.\] \[+\left.q^{\frac{i(3i+1)}{2(2m+1)}}\prod_{n=0}^{\infty}(1-q^{3(2m+1 )(n+1)})(1-q^{3(2m+1)n+3m-3i+1})(1-q^{3(2m+1)n+3m+3i+2})\right].\]
Then, substitute \(q\) with \(q^{2m+1}\), resulting in
\[\prod_{n=1}^{\infty}(1-q^{n})\] \[=\prod_{n=1}^{\infty}(1-q^{3(2m+1)^{2}(n+1)})(1-q^{3(2m+1)^{2}n+ (2m+1)(3m+1)})(1-q^{3(2m+1)^{2}n+(2m+1)(3m+2)})\] \[+\sum_{i=1}^{m}(-1)^{i}\left[q^{\frac{i(3i-1)}{2}}\prod_{n=0}^{ \infty}(1-q^{3(2m+1)^{2}(n+1)})(1-q^{3(2m+1)^{2}n+(2m+1)(3m-3i+2)})\right.\] \[\left.(1-q^{3(2m+1)^{2}n+(2m+1)(3m+3i+1)})\right.\] \[+q^{\frac{i(3i+1)}{2}}\prod_{n=0}^{\infty}(1-q^{3(2m+1)^{2}(n+1)} )(1-q^{3(2m+1)^{2}n+(2m+1)(3m-3i+1)})\] \[\left.(1-q^{3(2m+1)^{2}n+(2m+1)(3m+3i+2)})\right].\]
Divide both sides by \(\prod_{n=1}^{\infty}(1-q^{n})\) and compare the coefficients on both sides, which yields the desired result.
## 3. Musings
Is it possible to find a combinatorial explanation for Theorem 1.1?
|
2310.03434 | Synergy of machine learning with quantum computing and communication | Machine learning in quantum computing and communication provides intensive
opportunities for revolutionizing the field of Physics, Mathematics, and
Computer Science. There exists an aperture of understanding behind this
interdisciplinary domain and a lack of core understanding renders an
opportunity to explore the machine learning techniques for this domain. This
paper gives a comprehensive review of state-of-the-art approaches in quantum
computing and quantum communication in the context of Artificial Intelligence
and machine learning models. The paper reviews the classical ML models that
have been employed in various ways for quantum computation such as quantum
error correction, quantum communication, quantum cryptography, and mapping
quantum algorithms to the existing hardware. The paper also illustrates how the
relevant current challenges can be transformed into future research avenues. | Debasmita Bhoumik, Susmita Sur-Kolay, Latesh Kumar K. J., Sundaraja Sitharama Iyengar | 2023-10-05T10:18:39Z | http://arxiv.org/abs/2310.03434v1 | # Synergy of machine learning with quantum computing and communication
###### Abstract
Machine learning in quantum computing and communication provides intensive opportunities for revolutionizing the field of Physics, Mathematics, and Computer Science. There exists an aperture of understanding behind this interdisciplinary domain and a lack of core understanding renders an opportunity to explore the machine learning techniques for this domain. This paper gives a comprehensive review of state-of-the-art approaches in quantum computing and quantum communication in the context of Artificial Intelligence and machine learning models. The paper reviews the classical ML models that have been employed in various ways for quantum computation such as quantum error correction, quantum communication, quantum cryptography, and mapping quantum algorithms to the existing hardware. The paper also illustrates how the relevant current challenges can be transformed into future research avenues.
## 1 Introduction
One of the most notable developments at the intersection of Computer Science and Physics is quantum computation. It can provide faster solutions to some real-life problems, such as factoring a large integer [1], searching for a key in a large unsorted database [2], and simulating the Hamiltonian of a complex system [3]. With \(N\) quantum bits (qubits), a quantum computer represents \(2^{N}\) states. This exponential increase in the state space holds promise for many optimization problems which quickly become intractable even on modern supercomputers.
About three decades ago, quantum computation was primarily of interest to theoretical physicists and computer scientists. However, the field has made rapid progress in the last few years, both in theory and in practice, and a particularly interesting question is how quantum computing may influence the domain of machine learning. The capability of quantum computers over classical computers was demonstrated by Google [4], whose 53-qubit quantum computer was shown to execute random operations on a circuit within a few seconds, a task for which modern supercomputers would have required years. IBM showed the reliable execution of the largest quantum circuit to date, namely a Trotterized circuit corresponding to a nearest-neighbour Ising model [5] with 127 qubits and a two-qubit gate depth of 60. IBM has since moved on [6] to a 433-qubit device, with an announcement of a new quantum device with 1,121 qubits planned for 2023.
It was only a matter of time for quantum computation to cross paths with another domain of utmost interest and application in modern times: machine learning (ML), where each benefits from the other. ML essentially trains a computer to learn information from data without explicit programming. ML can be broadly classified into three categories: (i) _supervised machine learning_ (learning from labelled training data to predict outputs), such as Support Vector Machine (SVM), Neural Network (NN), Naive Bayes, etc.; (ii) _unsupervised machine learning_ (acting on unlabelled data), such as the k-means clustering algorithm; and (iii) _reinforcement learning_ (learning from the environment and from mistakes through an agent). A few popular examples of applications of classical ML are self-driving cars, fraud detection, natural language processing, and online recommendations (from Amazon, Walmart, Hulu, Netflix).
In the recent literature, classical ML models have been applied to different parts of quantum computation such as quantum error correction, quantum communication, quantum cryptography, and mapping quantum algorithms to the existing hardware. On the other hand, quantum counterparts of existing ML algorithms have been designed, such as the Quantum Neural Network (QNN) and the Quantum Support Vector Machine (QSVM), which are expected to perform better than their classical counterparts due to the exponential increase in the search space. Moreover, near-term quantum-classical hybrid algorithms have been designed which, in some sense, mimic the working principle of ML algorithms: they search for the global optimum of a cost function, but use quantum computers to do so. There is already an interesting survey paper about how machine learning meets quantum foundations (the mathematical and conceptual understanding of quantum theory) [7]. In [8], [9], [10] and [11], the authors provided detailed reviews of recent advancements in quantum machine learning. In this article, we mainly focus on how machine learning meets quantum computation (a branch of computer science based on the principles of quantum mechanics).
We briefly review each of the topics mentioned above. This is not an in-depth review of quantum computation and machine learning. Rather, the aim is to provide the readers with a broad overview of the various directions in which these two domains have interlaced. We list relevant references for readers to delve deeper into some of the directions covered in this article, which does not assume prior knowledge of either quantum computation or machine learning.
Section 1.1 introduces quantum computation briefly and Section 1.2 presents the basic concepts of machine learning. In Section 2 we discuss how classical machine learning can be applied effectively for efficient design of quantum computing circuits, particularly logic synthesis, physical mapping, and error decoding. Section 3 addresses the techniques adopted for the present-day technology having small noisy quantum devices, such as variational algorithms and quantum approximate optimization algorithms (QAOA). In Section 4, we discuss the key aspects of quantum machine learning. Next, Section 5 outlines the essence of quantum communication systems, quantum cryptography protocols and application of classical machine learning to design these. The concluding remarks appear in Section 6.
### A layman's introduction to quantum computation
In classical computers, the basic unit of information is a bit (value = 0 or 1), whereas a quantum bit (or qubit) is its counterpart in quantum computing. It is represented as a unit vector in a 2-dimensional Hilbert space which is a complex vector space with two orthogonal basis states. These two are represented as \(\left|0\right\rangle=(1\quad 0)^{T}\) and \(\left|1\right\rangle=(0\quad 1)^{T}\). The state of a qubit may also be any linear _superposition_ of the two basis states. Thus, the state of a qubit can be of the form \(\left|\psi\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle\), where \(\alpha,\beta\in\mathbb{C}\) are termed as probability amplitudes and \(\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\).
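For concreteness, here is a minimal numerical sketch (plain NumPy; the particular amplitudes are arbitrary choices of ours) of a qubit state as a normalized vector in the two-dimensional Hilbert space described above.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)          # |0>
ket1 = np.array([0, 1], dtype=complex)          # |1>

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # any pair with |alpha|^2 + |beta|^2 = 1
psi = alpha * ket0 + beta * ket1                # superposition state |psi>

print(np.isclose(np.linalg.norm(psi), 1.0))     # normalization check -> True
print(abs(psi[0])**2, abs(psi[1])**2)           # probabilities of measuring 0 and 1
```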
A _quantum computer_ employs operations which involve _superposition_, _entanglement_ and _interference_, which are not observed in the macroscopic classical domain.
* A classical bit is always in exactly one of the two states, 0 or 1, i.e., a binary phenomenon. Similarly, the quantum superposition of a state is lost upon measurement, as the outcome is exactly one of the two basis states. Therefore, any operation on a superposition state can be considered as a simultaneous operation on all the basis states that form the superposition. This property leads to speedup in quantum algorithms.
* Entanglement is another interesting phenomenon in quantum systems. Two qubits are said to be entangled if the measurement of one qubit disturbs the state of the other, irrespective of the spatial distance between them [12]. This property is valuable for quantum cryptography.
* Interference is usually observed in waves. But since quantum states have wave-particle duality, i.e., a qubit can behave both as a wave and a particle at the same time, two qubits can interact either constructively or destructively. When two qubits interact constructively, the corresponding probability amplitudes increase beyond the normal additive value. On the other hand, when they interact destructively, the probability amplitudes cancel each other out [13]. The design of a quantum algorithm exploits this phenomenon: the states corresponding to the correct solution(s) are made to interact constructively, whereas the other outcomes are made to interact destructively.
A brief summary of operations (gates) and the notion of a quantum circuit is given in Appendix.
Since the early days of quantum algorithms, two quantum algorithms which demonstrated superiority over their classical counterparts have been of utmost interest due to their applicability. Grover's search [2] exploits interference to search faster for an element (called the marked state) in an unordered database. In particular, a classical computer would require \(\mathcal{O}(N)\) queries to search for a marked state among \(N\) states, whereas Grover's search on a quantum computer suffices with \(\mathcal{O}(\sqrt{N})\) queries, providing a quadratic speedup. For example, in order to find an item in a list of one trillion items, where checking each item requires one microsecond, a classical computer would take approximately a week, but a quantum computer only about a second.
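The quadratic speedup can be illustrated with a small classical simulation of Grover's iteration (oracle followed by inversion about the mean). This is a toy NumPy sketch; the database size and the marked index are arbitrary assumptions.

```python
import numpy as np

n = 3                                    # 3 qubits -> N = 8 database entries
N = 2 ** n
marked = 5                               # index of the marked item (an arbitrary choice)

state = np.full(N, 1 / np.sqrt(N))       # uniform superposition over all N entries
oracle = np.eye(N)
oracle[marked, marked] = -1              # oracle flips the sign of the marked state
s = np.full(N, 1 / np.sqrt(N))
diffusion = 2 * np.outer(s, s) - np.eye(N)   # inversion about the mean

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~O(sqrt(N)) Grover iterations
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

print("P(marked) after", iterations, "iterations:", abs(state[marked]) ** 2)
```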
The second quantum algorithm, due to Shor [1], that showcases the power of quantum computing is for factorizing an integer. Till now, there is no known classical algorithm that can achieve this task in polynomial time. The security of cryptosystems such as RSA relies on the classical hardness of factorization. Shor's algorithm thus renders such classical cryptosystems vulnerable in a quantum world.
These two algorithms, and many others designed later on [14], have intensified the interest in quantum computing and the impetus to build the required hardware.
### Machine learning algorithms relevant to quantum computation
For the sake of completeness, before discussing the amalgamation of quantum computing and machine learning, we briefly discuss a few machine learning (ML) algorithms which are used extensively in the domain of quantum computation. As stated before, the goal of machine learning is to train a computer to learn certain properties from a given dataset without explicit coding or a set of rules, and then use the outcome to study those properties in new data for prediction or classification purposes. For example, we expect that a computer that has previously seen a large number of pictures of tumors, and has been trained on the malignant ones, would be able to identify an unseen photo of a tumor correctly. This type of ML algorithm is called supervised learning, where the machine is previously trained with some labeled data. Other forms of learning, such as unsupervised and reinforcement learning, have also been studied widely and applied to various domains.
Three of the most widely used machine learning algorithms related to quantum computing are (a) the Neural Network (NN) [15], used for quantum logic synthesis, physical mapping and quantum error decoding in Section 2.1, the QKD protocol in Section 5.1.1, the quantum accelerator of ML in Section 4.6, and the quantum neural network in Section 4.2; (b) Reinforcement Learning (RL) [16], used for quantum error decoding in Section 2.1; and (c) the Support Vector Machine (SVM) [17], used for quantum machine learning in Section 4.3. The study also discusses various other ML models, including the random forest method, for quantum communication in Section 2.1.
#### 1.2.1 Neural Network
The human brain contains 200 billion neurons and each neuron consists of four parts: dendrites, soma, axon, and synapses. Signals are collected by neurons through dendrites, and then all the signals collected are summed up by soma. After reaching the threshold, the signal is passed to the other neurons through the axon. The power of the connection between the neurons is indicated by the synapses.
Similarly, an Artificial Neural Network (ANN, or sometimes just NN) mimics this biological neural network. The ANN was proposed in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts [15]. In an ANN there are multiple layers of nodes representing the neurons. Simpler ANN algorithms have no feedback between the layers and are called feed-forward neural networks (FFNN). A single-layer FFNN consists of an input layer of neurons and an output layer of neurons. In a multi-layer feed-forward network, the first layer is the input layer which receives an input signal and the last layer is the output layer. In between these two layers, there can be multiple hidden layers. The signal from the input layer passes through these hidden layers to the output layer. The connection between a pair of nodes (neurons) in two adjacent layers has an associated weight, which indicates the connectivity strength between them. The input to a particular layer is multiplied by the weights to create the internal value, which is adjusted by a threshold value before being fed to an activation function to obtain the output of that layer. That output is passed on to the next layer as its input. The final layer provides the outcome of the network. In each iteration, the weights and the threshold values are updated to produce a more accurate value. Figure 1 (a) shows a neural network that has an input layer with \(m\) nodes, one hidden layer with \(l\) nodes, and an output layer with \(n\) nodes. The inputs are \(x_{1},x_{2},...,x_{m}\), and the weights on connections between the input layer and the hidden layer are \(w_{11}^{h},w_{12}^{h},...,w_{lm}^{h}\). If the outputs can be fed back as inputs to the same layer or previous layers, then the result is a feedback neural network such as the Recurrent Neural Network (RNN) [18].
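The following is a minimal NumPy sketch of one forward pass through a single-hidden-layer feed-forward network as described above; the layer sizes, random weights and sigmoid activation are illustrative assumptions, and the training loop that updates the weights from a loss is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
m, l, n = 4, 3, 2                      # input, hidden and output layer sizes
x = rng.normal(size=m)                 # input features x_1 ... x_m
W_h = rng.normal(size=(l, m))          # weights between input and hidden layer
b_h = rng.normal(size=l)               # hidden-layer thresholds (biases)
W_o = rng.normal(size=(n, l))          # weights between hidden and output layer
b_o = rng.normal(size=n)               # output-layer biases

hidden = sigmoid(W_h @ x + b_h)        # weighted sum followed by the activation function
output = sigmoid(W_o @ hidden + b_o)   # final layer gives the outcome of the network
print(output)                          # training would iteratively update weights and biases
```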
#### 1.2.2 Reinforcement learning
Reinforcement learning (RL) is a sub-field of machine learning that trains an agent to choose an action from the action space (where the environment is fixed), in order to maximize rewards over a specific time [16]. It is neither supervised nor unsupervised. Rather it receives a reward or penalty based on its choice of action. The algorithm learns to choose those actions which maximize its reward. There are four important elements in RL:
* _Agent_: a program which is trained to do a specific job;
* _Environment_: the real or virtual world where the agent lives;
* _Action_: a move which is made by the agent causing a change of status in the environment;
* _Reward_: the evaluation after an action made by the agent, which may be either positive or negative.
While building an optimal strategy, an agent can run into a dilemma between exploring new states and maximizing its overall reward at the same time. For example, it is impossible for the agent to know whether it has already reached a good enough reward, in which case further exploration of new states will simply reduce its reward. This is known as _Exploration vs Exploitation_. The best overall strategy may involve short-term sacrifices to arrive at the best overall decision. Reinforcement learning finds applications in multiple domains where training data may not be readily available.
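A minimal tabular Q-learning sketch illustrates the agent/environment/action/reward loop and the epsilon-greedy handling of exploration vs exploitation; the toy one-dimensional environment and the hyperparameters are our own assumptions.

```python
import numpy as np

# Tabular Q-learning on a toy 1-D chain: states 0..4, reward +1 for reaching state 4.
n_states, n_actions = 5, 2                 # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2          # learning rate, discount factor, exploration rate
rng = np.random.default_rng(1)

for episode in range(500):
    s = 0
    while s != n_states - 1:                                   # episode ends at the goal state
        # epsilon-greedy: explore with probability eps, otherwise exploit current estimates
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0             # reward from the environment
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))   # learned policy: action 1 (move right) from every state
```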
#### 1.2.3 Support Vector Machines
Support Vector Machine (SVM) is a popular supervised ML algorithm. In this algorithm, the training set is of the form \((x_{1},y_{1})\), \((x_{2},y_{2})\), \(\ldots\), \((x_{n},y_{n})\) where \(x_{i}\in\mathbb{R}^{d}\), the \(d\)-dimensional feature space, and \(y_{i}\in\{-1,+1\}\), the class label, with \(i=1\ldots n\) [17]. An optimal separating hyperplane is built by the algorithm, based on a kernel function \(K\). Depending on the feature vector, the data which lies on one side of the hyperplane belongs to class \(-1\), and the rest belongs to class \(+1\) (Fig. 2). If a straight line is unable to separate all the data points, then we need a nonlinear SVM classifier. It uses kernel functions (\(\phi\)) such as the linear, polynomial, radial basis function (RBF), and sigmoid kernels. These functions project the data to a higher dimensional space so that they become linearly separable in that space. For multi-class SVM, two techniques have been used: (i) one-against-one, which integrates several binary classifiers, and (ii) one-against-all, which examines all the data at once.
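A short scikit-learn sketch of a non-linear SVM with an RBF kernel on a toy dataset; the dataset and hyperparameters are illustrative assumptions only.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Non-linearly separable toy data; the RBF kernel implicitly projects it to a
# higher-dimensional space where a separating hyperplane exists.
X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # compare with kernel="linear"
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```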
In the next section, we study the application of various ML methods for different aspects of quantum computation.
## 2 Classical machine learning in quantum computing
Classical machine learning approaches find applications in different aspects of quantum computing such as logic synthesis, mapping a quantum algorithm to the underlying hardware, and decoding a quantum error correcting code. In the following subsections, we briefly touch upon each of these application domains, and show how machine learning leads to enhanced performance.

Figure 1: Schematic diagram of (a) Artificial Neural Network [Image source: [https://medium.com/swlh/neural-networks-4b6f719f9d75](https://medium.com/swlh/neural-networks-4b6f719f9d75)], and (b) Reinforcement Learning [Image source: [https://www.inwinstack.com/blog-en/blog_ai-en/6262/](https://www.inwinstack.com/blog-en/blog_ai-en/6262/)]
### In efficient design of quantum computing systems
A quantum circuit has a certain number of qubits, and the algorithm is executed as a sequence of quantum gate operations on these qubits. After a circuit is designed, it has to be mapped to quantum hardware (also termed a device), which realizes the qubits as physical quantum systems and the gates by the application of electromagnetic pulses. The device layout constrains which pairs of qubits can participate in two-qubit gates. This problem is solved by layout synthesis. It produces an initial mapping from the circuit qubits to the physical qubits of the quantum computing device. It adjusts the mapping by legalizing two-qubit gates, inserting a chain of SWAP gates as needed to bring the two qubits under operation into close proximity for proper gate operation. It schedules all the gates under the constraint that the original functionality remains invariant and the circuit is executable on the quantum computer with minimal quantum resources and execution time.
Mapping a quantum circuit on hardware typically consists of the following steps:
1. _Virtual circuit optimization_: In this step the input quantum circuit is optimized by applying various logic identities. For example, \(HXH=Z\), so these three gates in series can be replaced by a single gate. Furthermore, if a gate \(U\) is followed by \(U^{\dagger}\), then those two can be replaced by the identity.
2. _Decomposition of gates with 3 or more qubits_: Most current quantum hardware is designed to execute only one- and two-qubit gates. Therefore, all gates involving three or more qubits are decomposed into a cascade of single- and two-qubit gates.
3. _Placement_: In this step each virtual qubit from the quantum circuit is allocated to a physical qubit of the hardware.
4. _Routing_: The gates are scheduled for each qubit so that the depth of the circuit is minimized while maintaining their temporal ordering in the original circuit. This step takes the underlying hardware connectivity constraints into consideration and inserts the necessary SWAP gates. Routing and initial qubit allocation are interdependent, and both optimizations are computationally hard problems.
5. _Translation to basis gates_: Each gate of the circuit is translated to the basis gate set of the underlying hardware. For IBM Quantum devices, the basis gate set is \(\{X,SX,RZ,CX\}\).
6. _Physical optimization_: Finally, the rewritten (_transpiled_) circuit obtained so far is further optimized for resources and depth.
The flow of quantum circuit mapping is shown in Fig. 3 [19]. Most of the placement algorithms resort to heuristic methods [20, 21, 22], including ML [23]. With quantum devices expected to grow to thousands of qubits in the future for meaningful computation, it may be important to design ML algorithms that can accomplish this task within a reasonable time. This can employ either reinforcement learning or supervised learning, where the model is trained on smaller devices and is applied for placement on larger devices.
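The mapping flow above can be exercised with Qiskit's transpiler; the sketch below (assuming Qiskit is installed, with a hypothetical linear coupling map and an IBM-style basis gate set) shows how placement, routing and translation are triggered through `transpile`.

```python
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

# A toy 3-qubit circuit with a CNOT between qubits that are not adjacent on the
# assumed device, so routing must insert SWAPs.
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 2)
qc.measure_all()

cmap = CouplingMap([(0, 1), (1, 0), (1, 2), (2, 1)])   # hypothetical linear connectivity 0-1-2

mapped = transpile(
    qc,
    coupling_map=cmap,
    basis_gates=["rz", "sx", "x", "cx"],   # IBM-style basis gate set
    optimization_level=3,                  # heaviest optimization preset
    seed_transpiler=7,
)
print(mapped.count_ops(), "depth:", mapped.depth())
```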
Figure 2: Support vector machine for classification. (a) Linear SVM. (b) Non-linear SVM and use of Kernel function to project the data in higher dimensions. Here \(\phi\) denotes a Kernel function
#### 2.1.1 Optimization in quantum circuit mapping
In the early days, quantum circuits comprised only a dozen gates and up to 5 qubits. Recently, experiments have been carried out on circuits having 127 qubits and a gate depth of 60 [5], with single-qubit gates numbering in the thousands and two-qubit gates in the hundreds [4], as well as on circuits with 433 qubits. The number of possible initial mappings for the former circuit is 127!, and the subsequent scheduling and legalization steps have a huge search space.
A circuit mapping method based on the \(A^{*}\) algorithm was proposed which intends to find the best placements by comparing them to the desired outcome [24]. A general drawback of this method is that the desired outcome is often unknown. The optimality of a layout method can be expressed in many ways, for example, by optimizing the depth of the circuits [23]. Quantum circuit layout (QCL) takes an input circuit \(C_{in}\) and transforms it into a functionally equivalent circuit \(C_{out}\). The output circuit depth may be greater than the input circuit depth, and the ratio between the depths of the output and the input circuits is the benchmark of QCL methods. The minimization of this ratio is known as QCL optimization. In [23] the authors introduced a layout method QXX, including a configurable Gaussian function for estimating the depth of the generated circuits and for determining the circuit region that impacts the depth the most. The parameters of the QXX model are optimized using an improvised weighted random search. They also introduced QXX-MLP, a multi-layer perceptron neural network for predicting the depth of the circuit generated by QXX. After comparing QXX and QXX-MLP with the baseline, they conclude that QXX performs on par with state-of-the-art layout methods [25].
There is a gap between the quantum resources required for the execution of modern quantum algorithms and the resources available in current Noisy Intermediate-Scale Quantum (NISQ) devices, because the processors consist of a sparsely interconnected coupling map, thereby limiting the interactions among the qubits. In Fig.4(a), we show the coupling map of the 5 qubit Quito IBM Q processor. This coupling map defines the set (0,1),(1,0),(1,2),(2,1),(1,3),(3,1),(3,4),(4,3) where the tuples are target and control respectively in a possible CNOT operation. Therefore, eight of the twenty (i.e., \(n^{2}-n\)) pairs can be used for CNOT operation in a circuit. This limits the opportunities offered by quantum devices. Hence efficient circuit mapping techniques are needed for SWAP gate minimization to perform CNOT operations between other pairs of qubits.
In [26] the authors have proposed a method employing deep neural networks (DNN) for quantum circuit mapping to improve the performance of current heuristic cost based methods. Their method comprises three steps:
* formulating the circuit mapping problem as a classification problem;
* training the DNN for the classification task of circuit mapping with an appropriate dataset;
* applying fine-tuning to make the predictions consistent with the logical constraints which characterize the quantum processors.
Figure 3: Mapping of quantum circuit on hardware by QisKit transpiler [19]

Their formulation of the circuit mapping problem [26] is as follows. Let \(C_{n}=\{c_{i}\}_{i\in\mathbb{N}}\) be the set of all \(n\)-qubit quantum circuits operating on the qubit set \(Q_{C}=\{q_{1}^{c},q_{2}^{c},...,q_{n}^{c}\}\), and let \(P\) be an \(m\)-qubit (\(m\geq n\)) NISQ processor with qubit set \(Q_{P}=\{q_{1},q_{2},...,q_{m}\}\) and corresponding coupling map \(M_{P}=\{(q_{i},q_{j})\,|\,q_{i},q_{j}\in Q_{P}\text{ and }i\neq j\}\). The initial circuit mappings consist of a set of functions \(F=\{f_{k}:C_{n}\rightarrow\wp(Q_{C}\times Q_{P})\}_{k\in\mathbb{N}}\), where each \(f_{k}\) associates the qubits of a circuit in \(C_{n}\) to the qubits of the processor \(P\). The final solution of the optimization problem is a function \(f^{*}\in F\) which minimizes the number of swap operations needed. For example, consider an \(n=5\)-qubit circuit to be mapped on a processor composed of \(m=20\) qubits; such a mapping \(f\) is shown in Figure 4(b).
A classification problem is defined as the task of assigning a class label \(y\in Y=\{L_{1},L_{2},...L_{Q}\}\) to a \(K\)-dimensional input vector \(\mathbf{x}\in X\subseteq R^{K}\). For a given \(n\)-qubit quantum circuit \(c\in C_{n}\) and an \(m\)-qubit quantum processor \(P\) with qubit set \(Q_{P}\), characterized by a specific coupling map \(M_{P}\), the quantum circuit mapping problem can be framed as classification in the following manner. It is implemented by a function \(\phi\) which takes as input a vector \(\mathbf{x}\in X\subseteq R^{K}\) containing a set of features of the circuit \(c\) and of the processor \(P\) on which the circuit is to be mapped and executed, and estimates an array \(\mathbf{y}\) composed of \(m\) elements, each belonging to the mapping label set \(Y=\{-1\}\cup\{1,2,...,n\}\). The function \(\phi\) is learnt, by a DNN-based method, from a training set of \(N\) points consisting of pairs of feature sets and mapping labels representing mappings from \(c\) to \(P\). This DNN is composed of one input layer, a few hidden layers, and one output layer.
Along with minimizing the number of SWAP gates, the authors also considered the error rate and the latency of each two-qubit gate, with the aim of minimizing both. The data set for training and testing the DNN is composed of \(\sim 40000\) random quantum circuits, operating on 5 qubits and characterized by at most 10 CNOT gates. From each circuit, 22 features are extracted, and an output label is attached to each instance of the data set. They have demonstrated experimentally that this DNN can speed up the state-of-the-art circuit mapping algorithms used by the IBM Qiskit transpiler [19] for performing circuit mapping on 5-qubit IBM quantum processors, although popular algorithms available in IBM Qiskit, such as Dense Layout and Noise Adaptive Layout, can produce the best circuit mapping with a smaller number of SWAP gates. The DNN-based method also outperforms other popular machine learning techniques: the overall accuracy of the best alternative classifier, Random Forest, is 15% lower than that of the DNN.
### In decoding error syndrome of a quantum error correcting code (QECC)
The fragility of the qubits is the main obstacle in realising a large-scale quantum computer. The suggested solution to this problem is quantum error correction (QEC). Like classical error correction, quantum error correction also has an encoding process, where the information of a single qubit is distributed into more than one qubit, followed by a decoding process that identifies the error and corrects the noise that has entered the quantum system. Classically, this is easily achieved by the simplest 3-bit repetition code, where the encoder maps bit \(0\to 000\) and \(1\to 111\). The encoded bit-strings \(000\) and \(111\) are termed the logical code-words. If the message incurs a single bit-flip error during transmission, then the receiver may get \(010\). Hence, the receiver can interpret (decode) that the original code word was \(000\) via majority voting. But if the code word is subject to more than one bit-flip error, the majority voting leads to an incorrect code word. If it incurs 3 bit flips, then \(000\) becomes \(111\), which is also a valid code word, thus rendering error detection impossible. The distance \(d\) of a code is defined as the minimum number of errors that can change a valid code word to another valid one; hence it is \(3\) here. It can be proved that the relation between the distance \(d\) and the number of correctable errors \(e\) is \(e=\lfloor(d-1)/2\rfloor\). Decoding an error syndrome implies mapping it to the error so that the appropriate correction can be applied to eliminate the error.

Figure 4: (a) Coupling map of a 5 qubit IBM-quantum processor Quito, (b) graphical representation of a circuit mapping [26] where a 5-qubit circuit is mapped on a 20-qubit quantum processor with a given coupling map
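A two-line sketch of the classical 3-bit repetition code and its majority-vote decoder described above; purely illustrative.

```python
def encode(bit):
    return [bit, bit, bit]             # 3-bit repetition code

def decode(received):
    return int(sum(received) >= 2)     # majority voting

print(decode([0, 1, 0]))   # single bit-flip on 000 -> correctly decoded as 0
print(decode([1, 1, 0]))   # two bit-flips on 000   -> wrongly decoded as 1
```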
### Shor's QECC
For quantum systems however, if \(\ket{\psi}\) denotes a general qubit in superposition, encoding of the form \(\ket{\psi\psi\psi}\) is prohibited by the no-cloning theorem. Hence it is necessary to design other forms of encoding to distribute the information of a single qubit into multiple qubits, forming a logical qubit that is less prone to error. But errors can still occur at the physical level. Therefore, detection of those errors, often called decoding, is necessary to eliminate them and keep the state error free. However, the encoded state cannot be measured directly to detect the presence of an error without destroying the information content in it. Therefore, extra qubits, called ancilla qubits, are required. They do not play a role in the actual encoding and computation, but are required to store the error information, also called the syndrome, to be obtained via measurement and then decoded.
An error on a qubit may be represented as an operator acting on it. In [27], Shor proved that a quantum error which can be expressed as a unitary operator is a linear combination of the Pauli matrices, defined by the two-dimensional matrices given by:

\[I=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\qquad X=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\qquad Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\qquad Y=i\cdot X\cdot Z\]
If a quantum error correcting code (QECC) can correct the Pauli errors, then any unitary error on the system can be corrected by it. A bit-flip error on a qubit, or a Pauli \(X\) error, is given by \(X\ket{0}=\ket{1}\), \(X\ket{1}=\ket{0}\). Similarly, a phase-flip on a qubit, or a Pauli \(Z\) error, maps \(Z\ket{0}=\ket{0}\), \(Z\ket{1}=-\ket{1}\). The creation of a logical qubit can be explained with a simple example. Let us consider a three-qubit code designed to detect a single bit-flip error.
Let the quantum state \(\ket{\psi}=\alpha\ket{0}+\beta\ket{1}\) be encoded as
\[\ket{\psi}=\alpha\ket{0}+\beta\ket{1}\xrightarrow{three-qubit\ encoder}\ket{ \psi}_{L}=\alpha\ket{000}+\beta\ket{111}=\alpha\ket{0}_{L}+\beta\ket{1}_{L}.\]
Therefore, \(\ket{0}_{L}=\ket{000}\) and \(\ket{1}_{L}=\ket{111}\). This does not violate the no-cloning theorem as \(\ket{\psi}_{L}=\alpha\ket{000}+\beta\ket{111}\neq\ket{\psi}\otimes\ket{\psi}\). Suppose there is a bit-flip error on the first physical qubit of the logical qubit which gives the state \(X_{1}\ket{\psi}_{L}=\alpha\ket{100}+\beta\ket{011}\), where \(X_{1}\) is a bit-flip error on the first qubit. In the circuit of Figure 5, the first part is for encoding, then there is an error channel, followed by the decoding part and finally the measurement of the ancilla.
In the decoding circuit, the two ancilla qubits are the target qubits of the CNOT gates. Thus, \(\alpha\ket{100}\ket{00}+\beta\ket{011}\ket{00}\xrightarrow{\text{CNOTs 3,4}}\alpha\ket{100}\ket{10}+\beta\ket{011}\ket{10}\xrightarrow{\text{CNOTs 5,6}}\alpha\ket{100}\ket{10}+\beta\ket{011}\ket{10}=\left(\alpha\ket{100}+\beta\ket{011}\right)\ket{10}\).
Figure 5: Encoding and decoding circuit for error correction using Qiskit [28], where the bit-flip error acts on the first qubit
From Table 1, we can see that the decoder can predict the location of the error by observing the ancilla outcomes.
For correction, we can simply apply the bitflip again at the detected position. Similar to the classical coding, the distance \(d\) of a quantum code is given by the minimum Hamming distance between two logical qubits [29]. A logical Pauli operator transforms a code-word state to another one. Shor first proposed the 9-qubit code [27] that corrects a single unitary error. Later on, a 7-qubit QECC by Steane [30] and a 5-qubit QECC by Laflamme [31] were proposed and the latter was shown to be optimal in the number of qubits for correcting a single unitary error.
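The bit-flip code of Figure 5 can be reproduced with a few lines of Qiskit (a sketch assuming Qiskit is installed; we use a Hadamard to prepare an arbitrary \(\alpha,\beta\) and read the syndrome off the final statevector instead of measuring).

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(5)                 # qubits 0-2: data, qubits 3-4: ancillas
qc.h(0)                                # prepare some alpha|0> + beta|1> to be protected
qc.cx(0, 1); qc.cx(0, 2)               # encoder: |psi> -> alpha|000> + beta|111>
qc.x(0)                                # error channel: bit-flip on the first data qubit
qc.cx(0, 3); qc.cx(1, 3)               # ancilla 3 records the parity of data qubits 0 and 1
qc.cx(1, 4); qc.cx(2, 4)               # ancilla 4 records the parity of data qubits 1 and 2

print(Statevector.from_instruction(qc).probabilities_dict())
# Both surviving basis states ('01001' and '01110'; Qiskit orders qubits q4...q0 left to right)
# have ancilla values q3 = 1 and q4 = 0, i.e. the syndrome "10": an error on the first data
# qubit, consistent with Table 1.
```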
### Topological QECCs
The design of the encoder and decoder circuits for these QECCs often involves operations between non-adjacent qubits. This is costly as it requires one or more swaps among the neighbouring qubits, thus making the process slower and more error prone. This is called the Nearest Neighbour (NN) problem. In order to overcome this drawback, topological codes [32, 33] came into play. The simplest topological code is the toric code, where the qubits are placed on a square lattice on the surface of a torus (Figure 6 (a)) having periodic boundaries.
Consider an \(L\times L\) square lattice on the surface of a torus, consisting of edges, vertices (points where edges meet) and plaquettes (individual square tiles enclosed by a set of edges, or 4-cycle faces). A qubit is associated with every edge on the lattice (indicated by circles in Figure 6 (b)). On an \(L\times L\) lattice with periodic boundary conditions (the right-most edge is wrapped around and identified with the left-most edge, and the upper edge with the lower edge), there are \(2L^{2}\) edges.
Later, the toroidal structure developed by Kitaev was simplified to a planar version by Bravyi and Kitaev [33], and by Freedman and Meyer [34]. This gave us **surface codes**. A schematic diagram of the surface code [35, 36, 37, 38, 39] is shown in Figure 7 (a).
But with surface codes, the mapping from syndromes to error configurations is not one-to-one. Among the syndrome decoding approaches, union-find [41], belief propagation [42], tensor network [43] and minimum weight perfect matching (MWPM) [44, 35, 45, 46] are used widely. For topological codes, MWPM achieves good decoding performance, which is assessed in terms of its _threshold_, the physical error probability beyond which increasing the distance of the code leads to poorer accuracy [35]. For MWPM, this threshold for the surface code is \(\sim 6\times 10^{-3}\) (for logical \(X\) errors).
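The matching step of an MWPM decoder can be sketched with NetworkX by negating edge weights and asking for a maximum-weight perfect matching; the defect coordinates below are hypothetical and this is not a full surface-code decoder.

```python
import networkx as nx

# One matching step of an MWPM-style decoder: pair up syndrome defects so that the
# total length of the correction chains is minimal.
defects = [(0, 0), (0, 3), (2, 1), (3, 3)]      # hypothetical defect coordinates on the lattice

G = nx.Graph()
for i, (r1, c1) in enumerate(defects):
    for j, (r2, c2) in enumerate(defects):
        if i < j:
            w = abs(r1 - r2) + abs(c1 - c2)     # Manhattan distance on the lattice
            G.add_edge(i, j, weight=-w)         # negate: max-weight matching = min-weight pairing

matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)    # pairs of defects to be joined by correction chains
```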
| Ancilla | Location of bit-flip error |
| --- | --- |
| 00 | No error |
| 01 | \(Physical_{2}\) |
| 10 | \(Physical_{0}\) |
| 11 | \(Physical_{1}\) |

Table 1: Error detection after measuring the two ancilla qubits
Figure 6: (a) A torus and a schematic lattice on it, (b) Lattice with \(L=5\) for a Toric code
#### 2.4.1 ML based decoding of surface codes
The worst case time complexity of MWPM scales as \(O(N^{3}\log N)\), where \(N\) is the number of physical qubits for a logical qubit (i.e., \(O(d^{2})\), \(d\) being the distance of the QECC). For example, \(N=9\) when \(d=3\). A faster alternative is to apply machine learning techniques for identifying errors, as the decoding time scales linearly with the number of qubits [47] once the ML model has been trained and validated. These provide at least asymptotically similar decoding performance to traditional decoding algorithms [48, 49, 47, 46, 50, 51, 52]. In [47] the threshold for the surface code was \(\sim 3.2\times 10^{-3}\) using a Recurrent Neural Network (RNN). Hence the Recurrent Neural Network shows a better trade-off between decoding performance and execution time.
For the depolarizing noise model, where the gates \(X\), \(Y\) or \(Z\) are applied to the data qubits each with probability \(p/3\), and a feed-forward neural network based decoder, the threshold of the rotated surface code was reported as 0.146, whereas that of the Blossom (MWPM) decoder is 0.142; so the ML model performs better for this noise model as well.
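To make the idea of a learned decoder concrete, here is a toy scikit-learn sketch that trains a small feed-forward network to map syndromes of the 3-qubit bit-flip code to the most likely error pattern; the code distance, noise rate and network size are illustrative assumptions, and the example is far simpler than the surface-code decoders discussed here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Learn syndrome -> most likely error for the 3-qubit bit-flip code from noisy samples.
rng = np.random.default_rng(0)
p, n_samples = 0.1, 20000
errors = (rng.random((n_samples, 3)) < p).astype(int)            # i.i.d. bit-flips on 3 data qubits
syndromes = np.stack([errors[:, 0] ^ errors[:, 1],
                      errors[:, 1] ^ errors[:, 2]], axis=1)      # the two parity checks
labels = errors @ np.array([4, 2, 1])                            # error pattern as an integer

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(syndromes, labels)

# The trained network reproduces the lookup table of Table 1 (e.g. syndrome 10 -> flip qubit 0).
for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(s, "->", format(int(clf.predict([s])[0]), "03b"))
```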
There are a number of machine learning based decoders, and the major differences lie in the structure of the ML model and its training algorithm; their decoding performance and execution time differ depending on the size of the training data set available and other related aspects. In [53], the authors present a decoding algorithm suitable for topological codes where the decoder employs the simplest type of stochastic neural network to learn structures which can make the approximate decoding problem easier than the general NP-hard decoding problem. They employ the restricted Boltzmann machine [54] for unsupervised learning and test the decoder numerically on a simple two-dimensional surface code with phase-flip errors. Given an error chain \(e_{0}\) with syndrome \(S_{0}\), the Boltzmann machine generates an error chain compatible with \(S_{0}\) which can be used for the recovery. For this purpose, the network is trained on different datasets obtained for various values of \(p_{err}\), the probability of error. Their Boltzmann machine based neural decoder has achieved an error threshold of 0.109.
In [47], the error decoder has two levels. The aim of the low level decoder (LLD) is to correct the errors in the individual physical data qubits, which makes the process extremely granular owing to the large number of data and measurement qubits. The high level decoder (HLD) tries to fix the errors in the logical qubit as a whole without considering the errors in the individual physical qubits. This eases the training processes for neural networks and thus provides better results. For the high-level decoder, a neural network and a non-neural network based _simple_ decoder run in parallel to make an accurate prediction of the error correction required to rectify the errors due to noise. The authors considered two kinds of errors for creating the dataset to train and test the decoders on. First is the depolarizing noise and second is the circuit noise where both the gates and measurements are considered to be noisy. The results demonstrated in the paper suggest that the HLDs perform better than the LLD decoders. Furthermore, the RNN based HLDs perform better than the feed forward NNs HLDs in terms of decoding accuracy, but are slower than the FFNNs due to a larger number of parameters. The authors finally concluded that the RNN based HLDs create the best balance between decoding accuracy and prediction time for moderate variances in error probability.
A few papers have introduced reinforcement learning (RL) frameworks for topological codes [55, 56, 57]. In [55], the goal of the RL is to optimize and fault-tolerantly adapt the surface code. In [56], the authors apply deep RL techniques to design decoders with a high threshold for the toric code, which can operate under uncorrelated noise. They have found near-optimal performance around the theoretically optimal threshold of 0.11.
Figure 7: Surface code: (a) Distance 3 rotated surface code [40], (b) quantum circuit for one surface code cycle for a measure-Z and measure-X qubit [35]
#### 2.4.2 ML based decoding of heavy hexagonal codes
This recent QECC encodes a logical qubit over a hexagonal lattice. As qubits are present on both the vertices and edges of the lattice, the term heavy is used. This is a combination of degree-2 and degree-3 qubits; hence there is a huge improvement in terms of average qubit degree in comparison with the surface code structure, which has qubits of degree 4 [46]. Fig. 8 shows the lattice for a distance-3 heavy hexagonal code encoding one logical qubit.
The heavy hexagonal code is a combination of surface code and subsystem code (Bacon Shor code) [46]. A subsystem code is defined by \(G\), a set of gauge operators where \(\forall\ g\in G\), \(\left|\psi\right\rangle\equiv g\left|\psi\right\rangle\)[2]. A gauge operator takes a codeword to an equivalent subsystem. In other words, a codespace in a subsystem code consists of multiple equivalent subsystems. It is to be noted that the gauge operators are not necessarily commutative. The product of two or more gauge operators forms a stabilizer, which keeps the codeword unchanged.
In [58], the authors propose a feed-forward network based decoder and show that it can decode a topological code, namely the heavy hexagonal code, efficiently in terms of threshold and pseudo-threshold for different error models. Their machine learning based decoding method achieves \(\sim 5\times\) higher values of the threshold than MWPM. The novelty of their work is exploiting the property of subsystem codes to define gauge equivalence of errors, which leads to a reduction in the number of error classes. They have proposed two methods to improve the threshold further by another 14% by obtaining a quadratic reduction in the number of error classes for bit-flip and phase-flip errors.
Since training in machine learning can become expensive for higher distance codes, it may be possible to use a divide-and-conquer method to divide the error-correcting code lattice into multiple smaller (and possibly overlapping) sublattices so that each of them can be trained efficiently. However, this method has the challenge of _knitting_ the results from these sublattices into the final result corresponding to the lattice.
## 3 Classical ML in Noisy Intermediate Scale Quantum (NISQ) era
Intermediate-scale quantum computers, having fewer than 1000 qubits, are available at many industry research labs such as IBM, Google, and IonQ. These devices are characterized by a small number of qubits, noisy gates and short coherence times, and have been termed Noisy Intermediate-Scale Quantum (NISQ) devices [59]. It is not possible to achieve fault-tolerance with these devices as (i) the number of qubits is not large enough, and (ii) the noise profile of the device is still too high for concatenation to reduce the noise in the system [60]. In Fig. 9 we show the noise profile of a 27-qubit IBM Quantum device. The threshold of the surface code is 1% [35], and that of the heavy-hexagonal code [46], which is designed specifically for the heavy hexagonal structure of IBM Quantum hardware, is \(\sim 0.4\%\). As evident from Fig. 9, the noise profile of current quantum devices is much higher than the thresholds of these quantum error correcting codes, and hence fault-tolerance using concatenation will not be useful as of now.

Figure 8: Distance 3 heavy hexagonal code encoding one logical qubit: (a) the hexagonal structure, (b) the circuit illustration of the heavy hexagonal code with the CNOT gates. Here yellow, white and black circles represent data, flag and ancilla qubits respectively; black ancilla qubits are for measuring the \(X\) (red face or plaquette) and \(Z\) (blue face or strip) gauge generators. The product of two \(Z\) gauge generators at each white plaquette forms a \(Z\) stabilizer [46].
While these quantum computers are not yet capable of general-purpose large-scale computing, researchers have studied hybrid quantum-classical algorithms which can be executed on these devices. These algorithms are, in general, divided into modules, some of which are outsourced to classical processing units (CPUs). Thus, the quantum circuits have shallow depth and are less susceptible to noise. In other words, these algorithms can still produce acceptable outcomes under noise. They find applications in Quantum Chemistry, Combinatorial Optimization, Quantum Machine Learning, etc. In Fig. 10 we show a blueprint of the working principle of quantum-classical hybrid algorithms.
Interestingly, we find the application of classical ML models on these algorithms, as well as some of these algorithms themselves working as ML models. In the following subsections, we briefly touch upon some of these approaches.
Figure 10: An overview of Quantum-Classical Hybrid Algorithms [61]
Figure 9: Noise profile of a 27-qubit IBM Quantum device
#### 3.0.1 ML in Variational Quantum Eigensolver (VQE)
Quantum-classical hybrid algorithms have been shown to have applications in Quantum Chemistry, especially for finding the ground state energy of a system of molecules. These algorithms, which find an approximate solution to the ground state energy of molecular systems, are termed Variational Quantum Eigensolvers (VQE). The parameterized quantum circuit is termed the ansatz. The ideal requirement is that the ansatz should contain the solution to the problem. However, this is rarely the case in practice. The search space of the problem is usually exponentially large in the number of qubits. Therefore, an ideal ansatz that would contain the perfect solution would have a significant circuit depth and a large number of parameters. Increasing the number of parameters slows down the classical optimization, and hence the entire algorithm. Using a highly complicated quantum circuit makes it more susceptible to noise. Some standard ansatz circuits such as the Unitary Coupled Cluster (UCC) ansatz are widely used for problems in Quantum Chemistry [62]. This ansatz is designed from knowledge of the problem domain, but does not consider the hardware connectivity of the qubits. A few later studies have looked into hardware-efficient (HE) ansatzes [63] for finding the ground state energy. These ansatzes ensure that the mapping of the circuit on the hardware does not result in too many SWAP gates.
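A minimal, classically simulated VQE sketch: a one-parameter \(R_{Y}\) ansatz is optimized to approximate the ground-state energy of a toy single-qubit Hamiltonian (the Hamiltonian, ansatz and optimizer are illustrative assumptions; on hardware the energy would be estimated from measurements).

```python
import numpy as np
from scipy.optimize import minimize

# Classically simulated VQE: a one-parameter ansatz Ry(theta)|0> is tuned to
# approximate the ground-state energy of a toy single-qubit Hamiltonian.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X                                   # hypothetical Hamiltonian

def energy(theta):
    c, s = np.cos(theta[0] / 2), np.sin(theta[0] / 2)
    psi = np.array([c, s], dtype=complex)         # Ry(theta)|0>
    return float(np.real(psi.conj() @ H @ psi))   # <psi|H|psi>

res = minimize(energy, x0=[0.1], method="COBYLA")  # the classical optimizer loop
print("VQE energy:", res.fun, " exact ground energy:", np.linalg.eigvalsh(H)[0])
```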
The ideal scenario is to have a low-depth ansatz that provides a good approximate solution to the problem at hand. In [64], the authors used reinforcement learning (RL) to incrementally obtain better ansatzes that have low depth but can still provide a good approximate solution. In Table 2, the authors showed that the RL-based ansatz has a lower gate cost and depth compared to both the HE and UCC ansatzes. The authors took a trial over 10 steps, and the average is over all the trials. They showed that the energy estimate obtained is at par with the other ansatzes, and in 2 out of 10 trials they even obtained perfect chemical accuracy.
#### 3.0.2 ML in QAOA design
VQE mostly finds applications in Quantum Chemistry. The Quantum Approximate Optimization Algorithm (QAOA) is another family of hybrid quantum-classical algorithms that is aimed at finding good approximate solutions to combinatorial optimization problems [65]. QAOA is essentially a subclass of VQE where the ansatz design is governed by the Quadratic Unconstrained Binary Optimization (QUBO) formulation of the combinatorial optimization problem. These algorithms are characterized by a problem Hamiltonian \(H_{P}\), which encodes the problem to be solved (e.g., Max-Cut, minimum vertex cover), and a mixer Hamiltonian \(H_{M}\), which should not commute with \(H_{P}\). A depth-\(p\) QAOA is represented as:
\[|\gamma\beta\rangle=\Pi_{l=1}^{p}e^{-i\cdot\beta_{l}\cdot H_{M}}e^{-i\cdot \gamma_{l}\cdot H_{P}}\;|\psi_{0}\rangle \tag{1}\]
where \(\psi_{0}\) is the initial state (usually an equal superposition state), and \(\gamma=\{\gamma_{1},\gamma_{2},\ldots,\gamma_{p}\}\) and \(\beta=\{\beta_{1},\beta_{2},\ldots,\beta_{p}\}\) are the set of parameters. The objective of the algorithm is to maximize (or minimize) the expectation of \(\langle\gamma\beta|H_{P}|\gamma\beta\rangle\). Upon obtaining the value of \(\langle\gamma\beta|H_{P}|\gamma\beta\rangle\), the classical optimizer suggests a new set of parameters, and the algorithm is repeated with the new state \(|\gamma\beta\rangle\). This process is repeated till convergence.
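A depth-1 QAOA sketch for MaxCut on a toy triangle graph, simulated with NumPy state vectors; the graph, initial parameters and optimizer are illustrative assumptions.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (0, 2)]          # MaxCut on a triangle; the optimal cut value is 2
n, dim = 3, 2 ** 3

# Diagonal of the problem Hamiltonian H_P: the number of cut edges for each bit string.
bits = np.array(list(product([0, 1], repeat=n)))
h = np.array([sum(b[u] != b[v] for u, v in edges) for b in bits], dtype=float)

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def mixer(beta):
    # e^{-i beta X} on every qubit, assembled as a Kronecker product
    rx = np.cos(beta) * I2 - 1j * np.sin(beta) * X
    U = np.array([[1.0]], dtype=complex)
    for _ in range(n):
        U = np.kron(U, rx)
    return U

def neg_expectation(params):
    gamma, beta = params
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)    # |+>^n
    state = np.exp(-1j * gamma * h) * state                  # phase separation by H_P
    state = mixer(beta) @ state                              # mixing by H_M
    return -np.real(np.sum(np.abs(state) ** 2 * h))          # -<H_P>

res = minimize(neg_expectation, x0=[0.4, 0.4], method="COBYLA")
print("best <H_P> found at p = 1:", -res.fun)
```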
The time complexity of the algorithm is determined by the value of \(p\). However, recent results show that the time taken by the classical optimizer is non-negligible [66]. In fact, for larger problem sizes, it is the classical optimizer that takes up the majority of the absolute runtime of the algorithm. In order to avoid this issue, in [67] the authors proposed an ML-based technique for faster convergence of the classical optimizer. The ML model aims to learn the correlation between QAOA parameters at smaller and larger values of \(p\). After exhaustively finding the optimal parameters for lower depth, their ML model could predict a good starting point for the classical optimizer for higher \(p\), so that the optimizer could converge quickly. The authors used different ML methods, of which Gaussian Process Regression (GPR) provided the best result. With this ML model, they could reduce the runtime of the classical optimizer by 44.9% on average. Their result is over 330 different graphs and 6 QAOA instances for each graph.
| Ansatz | Avg Depth | Min Depth | Avg # Gates | Min # Gates |
| --- | --- | --- | --- | --- |
| RL | 14 | 12 | 36 | 29 |
| HE | 17 | 17 | 63 | 63 |
| UCC | 377 | 377 | 610 | 610 |

Table 2: Comparison of gate cost and depth for different VQE ansatz
This finding was later supported by [68], who showed that the optimal parameters of a QAOA instance concentrate. In other words, if the optimal parameters for problem instances with \(n\) and \(n+1\) qubits are \(\{\gamma_{n},\beta_{n}\}\) and \(\{\gamma_{n+1},\beta_{n+1}\}\) respectively, then:

\[\exists\,l>0:\quad|\beta_{n+1}-\beta_{n}|^{2}+|\gamma_{n+1}-\gamma_{n}|^{2}=\mathcal{O}\left(\tfrac{1}{n^{l}}\right)\]
This paper indicates that the ML model in [67] is indeed learning this concentration of parameters. The original QAOA proposed by Farhi et al. [65] was for unconstrained optimization problems. Later on, Hadfield proposed a variant of QAOA for constrained optimization problems as well [69]. This approach is similar to the original QAOA, except for the mixer Hamiltonian, which now becomes more complicated in order to ensure that a valid solution is mapped into a superposition of valid solutions only. Modifications of the initial state [70], the mixer Hamiltonian [71] and the problem Hamiltonian [72] have been proposed to obtain faster convergence or reduce the effect of noise on QAOA. However, the ML method to predict the parameters holds good for these variants as well.
#### 3.0.3 Variational Approach to Error Correction
Previously, we have briefly touched upon classical ML methods to further optimize the VQE and QAOA algorithms. However, now we show that these ansatzes have the ability to act as ML models themselves. Near-term quantum devices cannot afford quantum error correction due to the requirement of a large number of qubits to achieve error correction and fault tolerance. Therefore, error mitigation techniques have been proposed in the literature. These methods cannot nullify, but can minimize, the effect of errors.
There are different sources of errors, and error mitigation techniques are usually aimed at reducing the effect of a few of them. In [73], the authors proposed a method using hybrid quantum-classical algorithms to reduce the overall effect of error on the quantum circuit. Their motivation was not to reduce each error individually, but rather to learn the overall effect of error on the system, and then minimize it. Therefore, this leads to a model-free method of reducing the effect of noise. In Fig. 11, we show the schematic diagram of the variational QECC proposed in [73]. The system is initialized with a 2-design circuit, which is a random set of Clifford gates [2]. This is followed by an encoding by a parameterized circuit \(V_{\vec{p}}\). The system undergoes a noisy evolution \(W_{\vec{q}}\), and the fidelity \(\langle 0^{\otimes n}|S^{\dagger}V_{\vec{p}}^{\dagger}W_{\vec{q}}V_{\vec{p}}S|0^{\otimes n}\rangle\) is computed. The classical optimizer tunes the parameters to maximize the fidelity. Fig. 12 depicts how the authors establish that this proposed variational model provides a performance which is almost the same as the optimal recovery technique, e.g., the 5-qubit QECC [74]. Therefore, this variational model readily demonstrates that it can learn the underlying noise model of the channel (which is unknown) and tune the parameters to propose a recovery scheme that is comparable to the optimal QECC.
Figure 11: A schematic diagram of the variational QECC proposed in [73]

Figure 12: Average Fidelity as a function of time for the variational QECC, optimal QECC and no error correction

Machine learning can play a big role in QECC by predicting the best QECC for use under a current noise scenario. An approach for this has been explored in [75] for biased Pauli noise. It will be interesting to see whether it is possible to further fine-tune this method by learning the noise in the system [76].
## 4 Quantum Machine Learning
Both machine learning and quantum computing have progressed hand-in-hand, each benefiting from the other. In this section, we briefly touch upon the usage of quantum computation to search for more powerful and efficient machine learning models. Quantum Machine Learning (QML) is a contemporary theoretical field that resides at the intersection of quantum computing (QC) and machine learning (ML).
In classical machine learning, to model any problem our target is to find a function \(f\), given \(x\) and \(y\), such that \(y=f(x)\). We provide the input and the expected output (label) to an ML model, which learns the rules without being explicitly told how to solve the problem; the program learns to do so itself. A loss function (a mathematical expression which calculates the amount by which the algorithm has missed the accurate target) checks how correct a machine learning solution is. During ML model training, not all predictions are correct at the beginning. QML also aims to minimize the loss function, using a property called Quantum Tunneling (QT). QT searches through the loss-function landscape to find the value with minimum loss. There are some popular methods that QC uses to solve ML problems, such as the Quantum Neural Network (QNN), Quantum Principal Component Analysis (QPCA), Quantum Support Vector Machine (QSVM), Quantum Reinforcement Learning (QRL), Quantum Optimization (QO), etc.
Ideally, a QML algorithm may work on classical or quantum data. When a QML algorithm works on classical data, it becomes necessary to encode the data into qubits, which are fed to the QML algorithm. Therefore, we first discuss a few widely used encoding methods that are utilized to encode the information of classical data into qubits. Then we follow up with some algorithms such as QNN and QSVM, and talk about two practical usages of QML in classifying facial expressions [77] and handwriting from the MNIST dataset [78].
### Encoding classical data into qubits
A quantum system cannot directly read from classical data. So it is necessary to encode the information of the data into qubits. Note that this idea of _encoding_ is not to be confused with the _encoding_ in quantum error correction. Here, encoding simply implies uploading the classical data on the qubits. While there is no steadfast rule on the nature of encoding, the following three methods are used most widely.
1. **Basis Encoding**: The classical data is represented as a sequence of bits (similar to One Hot Encoding in classical machine learning). For example, if \(x\in\{0,1\}^{n}\) is the One Hot Encoding of a particular classical data, its corresponding quantum encoding is \(|x\rangle\). For two distinct \(|x\rangle\neq|x^{\prime}\rangle\), we have \(\langle x|x^{\prime}\rangle=0\). Let \(x_{1},\ldots,x_{m}\) be the \(m\) classical training inputs. Then the corresponding quantum data is of the form \[\left|\psi\right\rangle_{init}=\frac{1}{\sqrt{m}}\sum_{i=1}^{m}\left|x_{i}\right\rangle.\] This method is perhaps the simplest form of encoding classical data in a quantum device. Nevertheless, it often requires a significant number of qubits, especially if fractions are to be encoded, and thus defeats the purpose of using fewer qubits.
2. **Amplitude Encoding**: The values of the classical data are encoded as the probability amplitudes of the basis states. For example, if a classical data point is represented as the vector \(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\}\), \(\alpha_{i}\in\mathbb{R}\ \forall\ i\), then the corresponding quantum input is \[\left|\psi\right\rangle=\sum_{i=1}^{k}\frac{\alpha_{i}}{\sqrt{\sum_{j=1}^{k}|\alpha_{j}|^{2}}}\left|x_{i}\right\rangle\text{ where }x_{i}\in\{0,1\}^{n},\] i.e., the feature vector is normalized to unit norm.
3. **Angle Encoding**: The values of the features can also be encoded into the rotation angles of the qubits. Consider a dataset with \(2n\) features. This method requires only \(n\) qubits to encode the \(2n\) features, and the qubits can be prepared with low-depth circuits. The values \(x_{i}\) and \(x_{i+1}\) for features \(i\) and \(i+1\) respectively can be encoded into a single qubit as \[\left|\psi\right\rangle_{i,i+1}=\cos(x_{i})\left|0\right\rangle+e^{i\cdot x_{i+1}}\sin(x_{i})\left|1\right\rangle.\] All three encodings are illustrated in the small sketch after this list.
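A small NumPy sketch of the three encodings applied to a hypothetical 4-dimensional feature vector; the thresholding used for basis encoding and the feature values themselves are arbitrary assumptions.

```python
import numpy as np

x = np.array([0.3, 0.8, 0.5, 0.1])               # a toy 4-dimensional classical feature vector

# Basis encoding: a bit string (here obtained by thresholding) selects one basis state.
bits = (x > 0.5).astype(int)
basis_state = np.zeros(2 ** len(bits))
basis_state[int("".join(map(str, bits)), 2)] = 1.0

# Amplitude encoding: the normalized feature vector becomes the 4 amplitudes of a 2-qubit state.
amplitude_state = x / np.linalg.norm(x)

# Angle encoding: two features per qubit, as cos(x_i)|0> + e^{i x_{i+1}} sin(x_i)|1>.
def angle_qubit(a, b):
    return np.array([np.cos(a), np.exp(1j * b) * np.sin(a)])

angle_state = np.kron(angle_qubit(x[0], x[1]), angle_qubit(x[2], x[3]))

for name, s in [("basis", basis_state), ("amplitude", amplitude_state), ("angle", angle_state)]:
    print(name, "encoding, norm =", round(float(np.linalg.norm(s)), 6))
```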
Till now, no encoding system provably outperforms the others for QML. Researchers have used one or more of these encoding methods according to the problem at hand. After encoding the data into qubits, the _QML algorithm_ tries to learn from that data to generate the desired results. Next, we discuss two broad classes of QML algorithms, namely Quantum Neural Network, and Quantum Support Vector machine.
### Quantum Neural Network (QNN)
QNN is a class of ML models that run on quantum computers. These deploy the quantum properties of entanglement, superposition, and interference for the computation. In [79] the authors study the trade-offs, i.e., whether QNNs are more "powerful" than classical NNs. They found that well-designed QNNs are able to achieve a higher capacity, captured by the effective dimension, and a faster training ability, reflected in Fisher information-theoretic properties, than comparable classical feed-forward NNs. But how do we define the term "powerful"?
In a classical NN, we can naively count the number of parameters of our model. A higher number of parameters can capture more information about the relationship between the data and the variables of the model. On the other hand, not all the parameters may be useful. Another popular method to estimate the power is the VC (Vapnik-Chervonenkis) dimension [17]. It measures the model capacity, expressibility and complexity, and yields error bounds on how well a model generalises (i.e., performs on unseen data). Although it has attractive properties in theory, in practice determining the VC dimension is difficult, as the generalisation bound is loose in the case of deep NNs. There is a third metric, called the effective dimension, which captures the size of the ML model in a higher dimensional space, rather than simply counting the number of parameters. It estimates the size which a model occupies in model space, where the Fisher information matrix [80] serves as the metric (instead of concentrating on the parameter space as in classical ML). This shift from the parameter space to the model space depends on the Fisher information \(F(\theta)\), which gives us a notion of distance in the model space, and \(\sqrt{\det F(\theta)}\) gives the volume in the model space. Fisher information is a way of quantifying the amount of information that an observable random variable \(R\) carries about an unknown parameter \(\theta\) of a distribution that models \(R\); in other words, it is the variance of the score, the derivative of the log-likelihood. Hence the effective dimension is used to look into the size of the model space. Moreover, the effective dimension is a data-dependent notion; more data means that the model space can be observed more clearly.
In classical ML, the error of the model on new data is called the generalization error, and a capacity measure provides a bound on it. In [79], the authors proved that this generalization error can be bounded via the effective dimension as \(P(\sup\limits_{\theta\in\Theta}|error_{true}-error(n)_{empirical}|\geq\epsilon) \leq Capacity(|\Theta|,n)\), where \(error_{true}\) is the inherent true error and \(error(n)_{empirical}\) is the empirical error approximated from the data. The difference between the two is the generalization gap, which is bounded by the effective dimension through \(Capacity(|\Theta|,n)\); this bound depends on the size of the model space \(|\Theta|\) and on the number of data points \(n\). It was also shown that the effective dimension increases with increasing noise (randomness) in the data. Hence it accurately captures the generalization behaviour of an ML model.
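The sketch below illustrates how an effective dimension could be estimated in practice from empirical Fisher information matrices. The helper names and the toy logistic-regression score function are hypothetical stand-ins for a (Q)NN's log-likelihood gradients, and the formula follows the general form used in [79] only up to normalization conventions (an assumption of this sketch).

```python
import numpy as np

def empirical_fisher(score_fn, theta, data):
    """F(theta) ~ average outer product of per-sample score vectors g = grad_theta log p(x|theta)."""
    g = np.stack([score_fn(theta, sample) for sample in data])
    return g.T @ g / len(data)

def effective_dimension(fishers, n, gamma=1.0):
    """Monte-Carlo estimate over sampled parameters theta in Theta."""
    d = fishers[0].shape[0]
    f_hat = [d * F / np.mean([np.trace(Fk) for Fk in fishers]) for F in fishers]  # normalized Fisher
    kappa = gamma * n / (2 * np.pi * np.log(n))
    integrand = [np.sqrt(np.linalg.det(np.eye(d) + kappa * F)) for F in f_hat]
    return 2 * np.log(np.mean(integrand)) / np.log(kappa)

# Toy model: logistic-regression scores as a placeholder for the model's log-likelihood gradients.
rng = np.random.default_rng(0)
data = [(rng.normal(size=3), rng.integers(0, 2)) for _ in range(200)]
def score(theta, sample):
    x, y = sample
    return (y - 1.0 / (1.0 + np.exp(-theta @ x))) * x

thetas = rng.normal(size=(25, 3))                     # samples from the parameter space Theta
fishers = [empirical_fisher(score, t, data) for t in thetas]
print(effective_dimension(fishers, n=200))            # effective dimension of the toy model
```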
QNNs are a subclass of variational quantum algorithms and consist of quantum circuits that contain parameterized gate operations. A schematic comparison of classical and quantum NNs is shown in Figure 13; the two are comparable when they have the same input and output sizes and the same number of trainable parameters.
In a QNN, \(\phi(x)\) is the feature map that encodes the data [81]. It is followed by the variational part of the circuit, which consists of \(CNOT\) and \(R_{Y}\) gates. Finally, in the post-processing step, every qubit is measured in the \(\sigma_{Z}\) basis and the parity of the output bit strings is checked to map the corresponding probabilities to class labels. In [79], the authors showed that, for a classical and a quantum neural network with the same input, output, and parameter-space sizes, the effective dimension of the QNN is much higher than that of its classical counterpart.
A very simple quantum model has a straightforward feature map and data-encoding strategy: if the feature vector has four features \((x_{1},x_{2},x_{3},x_{4})\), these are encoded with a Hadamard gate followed by \(R_{Z}\) rotations with angles \((x_{1},x_{2},x_{3},x_{4})\), respectively. In a proper QNN, after mapping the feature values with Hadamard and \(R_{Z}\) rotations, the qubits are also entangled, and higher orders of the data (products of feature values) are encoded as well. The authors showed that the effective dimension of such a QNN is much higher than that of the simple quantum model.
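A minimal two-qubit statevector sketch of such a feature map is given below, assuming one common convention: Hadamards, single-qubit \(R_Z(x_i)\) rotations, and a ZZ phase carrying the product \(x_1 x_2\) as the higher-order term. The exact angles and gate ordering used in [79, 81] may differ; the sketch is only meant to make the construction concrete.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
RZ = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def feature_map_state(x1, x2):
    """2-qubit |phi(x)>: Hadamards, RZ(x_i) rotations, then a ZZ phase ~ x1*x2."""
    psi = np.zeros(4, dtype=complex); psi[0] = 1.0      # start in |00>
    psi = np.kron(H, H) @ psi                           # uniform superposition
    psi = np.kron(RZ(x1), RZ(x2)) @ psi                 # first-order feature terms
    zz_diag = np.kron(np.diag(Z), np.diag(Z))           # diagonal of Z (x) Z: [1, -1, -1, 1]
    psi = np.diag(np.exp(-1j * x1 * x2 * zz_diag)) @ psi  # second-order (product) term
    return psi

phi = feature_map_state(0.4, 1.1)
print(np.round(phi, 3), np.isclose(np.linalg.norm(phi), 1.0))   # data lives in the phases
```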
In ML, the Hessian matrix is the matrix of second derivatives of the cost function to be minimized; it describes the curvature of the loss landscape and helps locate the minima. Under certain conditions this Hessian coincides with the Fisher information matrix. An inherently quantum phenomenon, the barren plateau, is a trainability problem that occurs in machine-learning-based optimization when the search space becomes flat: the algorithm cannot find a downward slope in the landscape and there is no clear path to the minimum because the gradients vanish. As a result, the entries of the Hessian matrix also go to zero. This idea can be extended
Figure 14: Quantum Neural Network: an overview [79]
Figure 13: Classical Neural Network vs Quantum Neural Network
to the Fisher information matrix as well. The authors showed that a model suffering from a barren plateau has a Fisher information spectrum with an increasing concentration of eigenvalues near zero as the number of qubits grows, which is what happens for the classical NN. Conversely, a model whose Fisher information spectrum is not concentrated around zero is unlikely to experience a barren plateau, which is the case for the QNN. Lastly, via numerical simulations they showed that the loss for the QNN is much lower (by 39%) than for the classical counterpart.
For efficient Artificial Neural Network (ANN) architectures, newly designed hardware devices that exploit the inherent parallelism of NNs are necessary; these are known as Hardware Neural Networks (HNNs) [82]. For classical NNs, significant hardware development has taken place over a long period. One of the most widely used hardware geometries is the reservoir network, in which the internal connection weights belong to a large set of randomly connected nodes called the reservoir. The input is fed into the reservoir through random connections, and only the weights of the output layer are trained. To create QNNs, the classical reservoir nodes are replaced with quantum nodes, so the reservoir becomes a quantum physical system while the input and output data can be either classical or quantum information. Here an RNN is used to form the feature space, or reservoir. Only the connection between the reservoir output and the final output is used for training (minimization of the cost function); the training therefore happens entirely outside the reservoir, making physical implementation easy. Quantum systems have a huge Hilbert space, which can give the quantum reservoir a high memory capacity. Currently, quantum reservoir computers are used to couple quantum states either at the input or at the output; whether both input and output can be quantum states is still an open problem.
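The defining feature of reservoir computing, namely that only the linear readout is trained while the (here classical) reservoir stays fixed, can be illustrated in a few lines of NumPy. The reservoir size, scaling factors, and the toy memory task below are arbitrary choices for illustration; a quantum reservoir would replace `run_reservoir` with the dynamics of a physical quantum system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W_res = rng.normal(scale=0.05, size=(n_res, n_res))   # fixed random reservoir couplings
W_in = rng.normal(size=(n_res, n_in))                 # fixed random input connections

def run_reservoir(u_seq):
    """Collect reservoir states; the reservoir itself is never trained."""
    x = np.zeros(n_res); states = []
    for u in u_seq:
        x = np.tanh(W_res @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout (ridge regression), as in reservoir computing.
u = rng.normal(size=500)
y = np.roll(u, 3)                       # toy target: recall the input from 3 steps ago
X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ y)
print(np.corrcoef(X @ W_out, y)[0, 1])  # readout quality on the training sequence
```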
Using this reservoir technique, the authors of [83] showed that QNNs can be used for quantum information processing (QIP) tasks (e.g., quantum cryptography, quantum secret sharing, quantum memory) in a novel way. QIP requires exotic quantum states as a basic resource, and these are usually created with bespoke methods tailored to a specific set of resource states. The authors proposed an adaptable, integrated scheme for state preparation based on a driven quantum network made of randomly coupled fermionic nodes. The output of such a system is combined by a linear mixer, and the model trains the phases and weights of this mixer to obtain the desired output quantum states. They showed that the method is robust and can create near-perfect maximally entangled states as well as GHZ [84], W [85], and NOON [86] states. The authors also considered noisy environments with energy decay, dephasing, and depolarization, and still obtained the target state with high fidelity (\(\sim\) 0.999) up to a permissible noise limit. Beyond that limit, they proposed a method for concentrating entanglement by mixing with other states present in a bigger network. This system uses the quantum network described above as a quantum reservoir, created from a few fermionic nodes (e.g., quantum dots) that interact with each other with random coupling strengths.
### Quantum Support Vector Machine
As mentioned earlier, for classification with a support vector machine the kernel is an important concept, because data often cannot be separated by a hyperplane in the original data space; non-linear transformation functions are therefore needed. There are classification problems that require
Figure 15: Quantum Reservoir Network [82]
a feature map whose kernel cannot be computed efficiently in a classical setup, as it needs computational resources that grow exponentially with the size of the problem.
In [81], the authors showed that this issue can be resolved on a quantum processor by estimating the kernel in the feature space. For classification on a quantum computer, a classical data point \(X\) is first mapped to a quantum data point \(\ket{\phi(X)}\) by a circuit \(V_{\phi(X)}\), where \(\phi(X)\) can be any classical function applied to the classical data \(X\). Then a parameterized quantum circuit \(W(\theta)\), with parameters \(\theta\), processes the data. Finally, a measurement circuit returns a classical binary value for each classical input \(X\) to identify its class label. The authors propose two SVM-type classifiers; to obtain a quantum advantage, the quantum state space itself is used as the feature space, by mapping the data non-linearly to a quantum state.
One approach uses a variational circuit to generate the separating hyperplane. The other uses a quantum processor to estimate the kernel function and then implements a conventional support vector machine. The authors use artificially generated data, constructed so that it can be classified perfectly by the feature map, with 20 data points per class for both training and testing. They show that even for noisy datasets a success rate of 100% can be achieved. The main open question about these quantum-enhanced feature spaces, however, is the amount of _enhancement_: the fact that a task is hard on classical computers does not by itself make the quantum version advantageous. The open problem is to find cases that prove the advantage of this (or any other QML) protocol. If a quantum feature map is chosen that is hard to simulate on a classical computer, a quantum advantage might be obtained.
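The classical post-processing of such a kernel-based scheme can be sketched as follows: a quantum processor would estimate the pairwise state overlaps, and the resulting Gram matrix is handed to an ordinary SVM. In the sketch, the `embed` function is a purely classical stand-in for the quantum feature map, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

def fidelity_kernel(states_a, states_b):
    """K[i, j] = |<phi(x_i)|phi(x_j)>|^2, the overlap a quantum processor would estimate."""
    return np.abs(states_a.conj() @ states_b.T) ** 2

def embed(x):                                     # classical stand-in for the quantum feature map
    v = np.array([np.cos(x[0]), np.sin(x[0]) * np.exp(1j * x[1])])
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
X_train = rng.uniform(0, np.pi, size=(40, 2)); y_train = (X_train[:, 0] > np.pi / 2).astype(int)
X_test = rng.uniform(0, np.pi, size=(10, 2));  y_test = (X_test[:, 0] > np.pi / 2).astype(int)
S_train = np.array([embed(x) for x in X_train]); S_test = np.array([embed(x) for x in X_test])

clf = SVC(kernel="precomputed").fit(fidelity_kernel(S_train, S_train), y_train)
print(clf.score(fidelity_kernel(S_test, S_train), y_test))
```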
In [87], the authors proposed a quantum support vector machine based on amplitude estimation (AE-QSVM) which is not constrained by a repetitive training process and thus saves quantum resources. Although extensive research is ongoing in quantum machine learning, many questions about the prospects of machine learning on quantum computers remain unanswered, for example: do current QML algorithms work better than classical ones in practice?
In [88], the authors survey how QML can be used to solve small hands-on problems. They present an experimental analysis of kernel-based quantum SVMs and QNNs on five different datasets. They show that the quantum SVM outperforms its classical counterpart by 4% in accuracy on average, both on a simulator and on a real quantum machine. The QNN executed on a quantum computer outperforms the quantum SVM by up to 5% on average and classical neural networks by 7%.
In [89], the authors propose a general-purpose framework that combines a classical support vector classifier with quantum kernel estimation for ligand-based virtual screening (LB-VS) on real-world databases, aimed at discovering new drugs faster and more cost-effectively, especially for emerging diseases such as COVID-19. They show that it performs 13% better than the classical counterpart.
### Quantum Reinforcement Learning
Classical reinforcement learning models are sensitive to errors during training [90]; hence a robust reinforcement learning framework that enables agents to learn in noisy environments is needed. The authors of [91] presented the idea of Quantum Reinforcement Learning (QRL), inspired by the state superposition principle and quantum parallelism. In [92], the authors give a formal QRL algorithmic framework and demonstrate, through simulated experiments, the advantages of QRL in faster learning and in obtaining a good trade-off between exploration and exploitation.
Reinforcement learning with VQAs has been proposed in [93] for an error-free environment, with results similar to those of neural networks on small classical benchmark tasks [94]. In [95], the authors address the effect of training quantum reinforcement learning models under hardware-induced noise and report its impact on the performance of the agents and on the robustness of the learned policies.
### Quantum Image Processing
Nowadays classical image processing is widely used in commercial sectors such as autonomous vehicles [96], facial recognition [97], motion detection, and object recognition [98, 99]. Quantum image processing (QIP) employs quantum mechanical properties to represent the pixels of an image in a quantum computer. Depending on the image format used in the quantum computer, various image operations can be implemented. QIP can have advantages over classical image processing because it can exploit the quantum parallelism that inherently comes from superposition and entanglement, and it holds the promise of substantial speed-ups for common operations such as edge detection [100].
Facial expression identification is an important task for human-computer interaction in different scenarios. It is a classification problem that categorizes face images with different expressions
such as happy, sad, angry, scared, etc. The dataset of human faces is extremely heterogeneous due to facial features, different poses, and the background. Classical facial expression identification consists of image pre-processing followed by feature extraction and classification of the expression. In [77], the authors describe the corresponding steps on a quantum computer. The first step, image pre-processing, is done classically using the FFHQ dataset [101]. During feature extraction, the pre-processed images are mapped into graphs. The classification step is the quantum part: the features are mapped into the amplitudes of a quantum state, which forms the input to the quantum circuit (using a technique such as the nearest centroid method) representing the facial-expression classifier, based on the Euclidean distance between the graphs. The authors compare the results with the classical algorithm. With four vertices, the classical method achieves 99% accuracy while the quantum counterpart achieves 88% for a complete graph, a gap of 11%. As the number of vertices increases to 20, both the classical and quantum methods achieve 100% accuracy for complete graphs, and this gap gradually diminishes to 0%.
They also mention that if the graph dimension becomes much larger than that of a complete graph, classification may not be feasible; instead, a meshed-graph strategy can be used, with a trade-off in accuracy.
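A minimal classical version of the nearest-centroid step is sketched below; in [77] the distance estimation itself is performed on the quantum processor, whereas here it is computed directly with NumPy on hypothetical feature vectors.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """One centroid per expression label; classification is by Euclidean distance."""
    labels = np.unique(y)
    return labels, np.array([X[y == c].mean(axis=0) for c in labels])

def nearest_centroid_predict(X, labels, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(3, 1, (30, 8))])   # toy graph features
y = np.array([0] * 30 + [1] * 30)                                       # two expression classes
labels, centroids = nearest_centroid_fit(X, y)
print(np.mean(nearest_centroid_predict(X, labels, centroids) == y))     # training accuracy
```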
### Quantum Accelerator for ML
There has been substantial progress over the decades in accelerating neural networks on classical processors, e.g., CPUs, GPUs, ASICs, and FPGAs. But with the increasing scale of applications, a memory bottleneck, known as the memory wall, arises. This is where quantum computing comes in as a solution. The development of machine learning with classical hardware accelerators has proceeded in two phases:
* Neural-network-tailored hardware design: FPGA-based DNN accelerators [102], FPGA-based cost-optimal design for timing-constrained CNNs [103], etc.
* Neural architecture search (NAS), which is capable of learning architectures for large-scale target tasks and target hardware platforms directly (without any proxy).
In [78], the authors state that neural-network co-design and quantum-circuit design must be carried out together to fully utilize the potential of the quantum computer (QC). The full acceleration system consists of three units:
* Pre- and post-processing of the data on a classical computer
* NN accelerator on the quantum circuit including quantum state preparation (\(U_{P}\))
* QC-based neural computation (\(U_{N}\)).
Figure 16: Quantum Computing Accelerator [78]
Encoding the classical data into a quantum state is the initial step. If the first column of a unitary matrix \(U\) encodes the vector \(U_{0}\) of \(2^{N}\) data values, then \(U_{0}=U\left|\psi\right\rangle\), where \(\left|\psi\right\rangle=\left|0\right\rangle^{\otimes N}\) is the initial state. The quantum-state preparation step can significantly affect the complexity of the whole circuit. The authors propose the use of a quantum memory (qRAM) in which a binary-tree-like structure stores the vector \(U_{0}\); it can be queried in quantum superposition and generates the required states efficiently. They then use the popular MNIST dataset [106] as a case study, where the image data (16 inputs) are encoded onto 4 qubits. In the neural-computation part, which is the primary element of the QML implementation, the weighted sum with a quadratic activation function is calculated using the binary weights \(W\).
The computation of the hidden layer consists of two parts. The first is to multiply inputs and weights (quantum gates such as the \(X\) gate and the 3-controlled-\(Z\) gate, accompanied by three trigger qubits, apply the weights to the inputs). The second is to apply the quadratic function to the weighted sum: Hadamard (\(H\)) gates are applied to each qubit to accumulate all states onto state 0, the amplitudes of state 0 and state 1 are swapped, and then an N-controlled-\(X\) gate extracts the amplitude onto a single output qubit \(O\), whose probability of being \(\left|1\right\rangle\) is the square of the weighted sum. The output layer then produces the final results. These \(N\) output qubits are used directly to compute the outputs; however, the fundamental computation must be modified to a multiplication of random variables, because the data represented by a qubit is associated with the probability of that qubit being in state \(\left|0\right\rangle\). Running a simulation or an execution on the IBM Quantum processors measures the output qubits, and the classification results are finally obtained.
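Numerically, the neural computation described above reduces to an amplitude-encoded inner product with binary weights followed by a quadratic read-out. The sketch below mimics that arithmetic classically; the \(1/\sqrt{N}\) normalization attributed to the Hadamard layer is an assumption about conventions, and the random pixels and weights are placeholders rather than data from [78].

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(16)                       # 16 down-sampled image pixels
amps = pixels / np.linalg.norm(pixels)        # U_P: amplitude encoding on 4 qubits
w = rng.choice([-1.0, 1.0], size=16)          # binary weights, realized with X / controlled-Z gates

# U_N: sign flips, then a Hadamard layer concentrates the weighted sum onto one basis state;
# the output qubit's probability of |1> is the squared (normalized) weighted sum.
weighted_sum = np.dot(w, amps) / np.sqrt(len(amps))
p_output_one = weighted_sum ** 2              # quadratic "activation" read out by measurement
print(p_output_one)
```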
## 5 Classical machine learning in quantum communications and cryptography
Quantum communication is the transmission of classical or quantum information from one location to another through a quantum channel. The information is usually transmitted as photons along optical fibre between sender and receiver; photons travel at close to the speed of light and interact only weakly with the environment. Photonic communication is nevertheless challenged by imperfect optical light sources, and even in nonlinear materials two photons do not interact strongly. Fig. 17 illustrates the communication of classical bits and quantum qubits. The main applications include cloud computing, cryptography-related tasks, secure storage, and other secure communications [107], as well as secure quantum networks for distributing quantum resources such as randomness, entanglement, and non-locality between connected but remote locations. Quantum communication [108] protects data by employing quantum mechanics and allowing particles to be placed in superposition.
There are various techniques for implementing quantum communication. Some of the approaches currently being researched include the universal quantum gate model, analog quantum computing, and quantum annealing. Several sectors are making progress in quantum communications; for example, the first commercial quantum annealing device was developed by D-Wave [109]. Technologies for quantum communication systems rely on the development of diverse core protocols and schemes, such as quantum cryptography [110, 111] and teleportation [112], and on handling fundamental problems like channel-noise accumulation and decoherence with quantum repeaters [113] and entanglement purification, to enable scalable long-distance quantum communication. In [114] the authors show how ML can be used to address the key areas of teleportation, entanglement purification, quantum repeaters, and quantum protocols.
Low data rates and the security of the quantum channel remain central issues of quantum communication to date, and classical machine learning can be applied to discover the properties of free-space quantum channels. The authors of [115] proposed a supervised ML technique to estimate the atmospheric strength of a free-space quantum channel in the form of the Strehl ratio. The study revealed that the random forest method predicts the Strehl ratio of a quantum channel with a small mean error.
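As a concrete but hypothetical illustration of this kind of approach, the sketch below fits a random forest regressor to synthetic atmospheric features and a synthetic Strehl ratio; the feature choices and the functional form of the target are invented for the example and are not the data of [115].

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical atmospheric features (e.g. turbulence strength, wind speed, elevation) vs. Strehl ratio.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
strehl = np.exp(-3 * X[:, 0]) * (1 - 0.2 * X[:, 1]) + 0.05 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, strehl, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(mean_absolute_error(y_te, model.predict(X_te)))   # mean error of the Strehl-ratio prediction
```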
### In quantum cryptographic protocols
Quantum Key Distribution (QKD) is a secure mechanism for sharing secret keys among communicating parties. QKD provides a strong, secure key-exchange system whose keys are then used to encrypt and decrypt information. Classical key-distribution techniques depend on public-key ciphers that rely on convoluted mathematical computations and hence demand a large amount of processing power to break. These ciphers also face numerous challenges, such as weak random-number generators and continuously evolving attack strategies. Unlike classical, mathematically based key distribution, QKD relies on basic properties of quantum mechanics to protect data. According to the no-cloning theorem [116], an identical copy of an arbitrary unknown quantum state cannot be created, which makes it very difficult to intercept or copy the data exchanged between the two ends. In the sifting phase, Alice and Bob publicly announce their detection time slots and the bases used to prepare and measure the polarization states. Whenever Bob measures in the basis Alice used for preparation, his outcome agrees with hers; whenever an attacker eavesdrops or attempts to disturb the system, the quantum states are altered, so the intrusion is revealed and the intruder fails to obtain the key.
The transmitted photon reaches the destination through a beam splitter that directs the photon along a random path to the photon collector. After receiving the photons, the receiver confirms the photon sequence to the sender, and the sequence is verified between sender and receiver. Photons received from the wrong beam splitter are discarded; otherwise, the secret key is exchanged using the bit sequence. The final secret key is exchanged safely by applying privacy amplification, a post-processing method that erases the information of any eavesdropper who has acquired partial knowledge about the key.
Figure 18 shows a basic QKD system and the mechanism for exchanging secure information. In general, a QKD system is equipped with two channels, a public interaction channel (PICh) and a quantum signal channel (QSCh), along with encryption and decryption blocks and a QKD protocol. The QKD protocol is used to initiate a secure interconnection between Alice and Bob, to generate the secret keys, and to quantify the information shared between sender and receiver while generating the keys.
Currently, there are many kinds of QKD protocols; some are listed in Table 3. Discrete-Variable (DV-QKD) protocols use photon polarization states to encode the bits of the confidential keys generated between sender and receiver. They also rely on post-processing methods and single-photon counting techniques to develop the secret keys [129]. BB84 [130] is the first QKD protocol of this family. A basic QKD illustration is shown in Figure 19.
Typically, a single bit of information is encoded on a single photon. The bit can be stored in any basis. The two most commonly used bases are (i) the vertical (V) and horizontal (H) polarization states, corresponding to the \(\ket{0},\ket{1}\) basis, and (ii) the \(+45^{\circ}\) and \(-45^{\circ}\) polarization states, corresponding to the \(\ket{+},\ket{-}\)
Figure 17: (a) Classical and (b) Quantum Communication Systems
basis. Alice and Bob each choose one of these two bases at random for preparation and measurement, respectively. If Bob measures in a basis different from the one Alice used for preparation, his result is discarded; when they use the same basis, the outcome is perfectly correlated. This requires Alice and Bob to communicate their basis choices publicly. The QKD protocol is designed so that this public discussion does not leak any information to a potential eavesdropper. Multiple variations of this basic method have been studied.
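The prepare-measure-sift logic described above can be captured in a short toy simulation (ideal channel, no eavesdropper); the basis labels and the sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
alice_bits = rng.integers(0, 2, n)
alice_basis = rng.integers(0, 2, n)      # 0: H/V basis, 1: +/-45 degree basis
bob_basis = rng.integers(0, 2, n)

# Measurement: same basis -> Bob reads Alice's bit; different basis -> random outcome.
bob_bits = np.where(bob_basis == alice_basis, alice_bits, rng.integers(0, 2, n))

# Sifting: publicly compare bases (not bits) and keep only the matching rounds.
keep = alice_basis == bob_basis
sifted_alice, sifted_bob = alice_bits[keep], bob_bits[keep]
print(keep.sum(), np.array_equal(sifted_alice, sifted_bob))   # ~n/2 bits, perfectly correlated
```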
The Continuous-Variable (CV-QKD) protocol was introduced fifteen years after the DV-QKD protocol with a different approach, known as CV coding, by Ralph [131], to ensure more secure transmission of data. Implementing a DV-QKD protocol requires a single-photon source and detector, whereas CV-QKD uses a standard positive-intrinsic-negative (PIN) photodiode, so the photon-detection techniques of the two protocol families differ. CV-QKD replaces photon counting with a coherent detection technique known as homodyne detection, which is fast, efficient, and economical. The first squeezed-state variants of the BB84 protocol [132, 133, 134] used Gaussian and discrete modulation methods. More recently, practical CV-QKD demonstrations with coherent states of light were performed in [135, 136].
Distributed Phase Reference (DPR-QKD) protocols are a family of QKD protocols that include the Differential Phase Shift (DPS) and Coherent One-Way (COW) protocols [137], which were developed more recently. In both protocols, a sequence of coherent states of weak laser pulses is transmitted. In DPS the pulses are phase-modulated while the intensity is kept unchanged, whereas in COW the intensities are varied while the pulses themselves are kept unchanged.
Quantum communication is prone to various kinds of attacks: quantum attacks (beam splitting, photon-number splitting) and classical attacks (man-in-the-middle, denial-of-service, Trojan horse, etc.). In this paper, we limit our attention to photon-number-splitting and denial-of-service attacks on the DV-QKD, CV-QKD, and DPR-QKD protocols. Table 4 lists the attacks and countermeasures.
#### 5.1.1 Machine learning techniques on quantum key distribution protocol
Recently, a CV-QKD protocol with an attack-detection system was presented in [151]. According to the simulation data, the recommended technique can effectively recognize most known attacks for recall values greater than 99%, at the cost of a small reduction in the secret keys and the transmission distance. The authors proposed that several properties of the pulses be collected into a feature vector, which is supplied as input to an artificial neural network (ANN) model for detecting and classifying attacks. To train the ANN model, the authors considered the effects of current attack techniques on measurable properties such as the local oscillator (LO) pulses and signals. Additionally, they created a set of feature vectors, labelled by attack type, as input to the ANN model. The datasets for training, testing, and performance evaluation are created by considering
Figure 18: Basic QKD System (Image source: [117])
real-time attacks. The trained ANN model was able to identify irregular feature vectors and categorize them into the different attack types. The authors then established a general attack-detection model that can identify the most common attacks with an accelerated computation. As Bob receives the secret keys, they are sequentially fed into the ANN; whenever abnormal data is detected, the transmission is aborted automatically, so Bob does not have to wait for the remaining key before verifying whether the system is under attack. Simulations showed that the trained ANN can automatically recognize and categorize attacks with recall and accuracy above 99%. An interesting point is that the performance of the trained ANN model depends on the number of neurons in the hidden layer, so selecting a suitable number of neurons plays a vital role in real-time deployment. The proposed method marginally reduces the transmission distance and secret key rate, but it provides a general defense against most known attack tactics and markedly improves the security of the system.
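A classical stand-in for such an attack classifier is sketched below with scikit-learn; the feature vectors, labels, and the way "attacks" shift the feature distribution are synthetic placeholders rather than the data of [151].

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pulse features (e.g. LO intensity, shot-noise level, arrival statistics),
# labelled 0 = normal and 1..3 = known attack types; real labels would come from staged attacks.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = rng.integers(0, 4, size=2000)
X[y > 0] += y[y > 0, None] * 0.8          # attacks shift the feature distribution

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))              # abort key distribution whenever an attack class is predicted
```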
The paper [152] proposes an error-reconciliation strategy based on ANNs, more specifically the Tree Parity Machine (TPM). TPMs are a classical neural-network-based cryptographic technique for agreeing on secret keys. The idea is that both parties create a neural network with the same structure (number of inputs, hidden layers, hidden nodes, etc.). The network has a single output bit, which is either
| Name of the Attack | Target | Countermeasures |
| --- | --- | --- |
| Photon Number Splitting (PNS) | Source | Decoy states [138, 139, 140, 141], SARG04 [145, 146, 147] |
| Denial of Service | Any | BB84 [146, 147, 148, 149], Software Defined Networks [150] |

Table 4: QKD Protocols: Attacks and Countermeasures
| Family | Name | Scheme | PNS | DoS | Salient Feature |
| --- | --- | --- | --- | --- | --- |
| DV-QKD [118] | BB84 [119], 1984 | P-M | V | V | First QC protocol, with 4 polarization states |
| DV-QKD [118] | E91 [120], 1991 | E | V | V | First entanglement-based protocol, with 2 non-orthogonal states |
| DV-QKD [118] | B92 [121], 1992 | P-M | V | V | Similar to BB84, uses 2 non-orthogonal states |
| DV-QKD [118] | SSP [122], 1998 | E | V | V | Uses 6 polarization states |
| DV-QKD [118] | SARG04 [145], 2004 | P-M | V | R | Differs from BB84 in the classical phase |
| CV-QKD [118] | Discrete modulation protocol [124] – BB84 | P-M | R | R | Updated BB84, includes squeezed states and discrete modulation |
| CV-QKD [118] | Gaussian protocol [125] – BB84 | P-M | R | R | Updated discrete modulation protocol with Gaussian modulation |
| DPR-QKD | DPS [127], 2003 | P-M | R | R | First DPR-based QKD protocol, uses a one-bit delay circuit to create and measure qubits |
| DPR-QKD | COW [128], 2004 | P-M | R | V | Similar to DPS, but photons are created from pulses and bit encoding uses a sequence with one non-empty (\(\mu\)) pulse |

SSP: Six State Protocol; P-M: Prepare and Measure; E: Entanglement-based; V: Vulnerable; R: Robust; PNS: Photon Number Splitting; DoS: Denial of Service

Table 3: QKD Protocols: Attacks and Salient Features
+1 or -1. Both Alice and Bob start with random initial weights and then feed the same input to their networks. Whenever the outputs at both ends are the same, the training reinforces the weights of the hidden units that agree with the output. Whenever the outputs of the networks differ, another input is chosen. Once both parties obtain the same outputs for many consecutive inputs, the two networks are synchronized and thus have roughly equal weights, which can then be used for cryptographic purposes.
The concept of the TPM was applied to quantum cryptography because, after QKD, the two parties already hold very similar bit strings, with an average error of around 5-7%. The bit strings are translated into TPM weights and then trained as usual. Since the difference is already very small, the training completes much faster. Furthermore, since Alice and Bob no longer communicate parity information (as is done in conventional error-reconciliation techniques such as Cascade), an eavesdropper cannot extract as much information from the reconciliation data exchanged between the two parties. This improves both the performance and the security of the protocol.
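The synchronization mechanism of a TPM can be reproduced in a few lines of NumPy, as sketched below. The network sizes (K, N, L) and the Hebbian-style update rule are standard textbook choices; in the QKD variant of [152] the initial weights would be seeded from the already similar sifted keys rather than drawn at random.

```python
import numpy as np

K, N, L = 3, 10, 3                             # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(0)

def tpm_output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    for k in range(K):
        if sigma[k] == tau:                    # only units agreeing with the output learn
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, (K, N)); wB = rng.integers(-L, L + 1, (K, N))
steps = 0
while not np.array_equal(wA, wB) and steps < 200000:
    x = rng.choice([-1, 1], (K, N))            # common public input
    sA, tA = tpm_output(wA, x); sB, tB = tpm_output(wB, x)
    if tA == tB:                               # update only when the public outputs agree
        hebbian_update(wA, x, sA, tA); hebbian_update(wB, x, sB, tB)
    steps += 1
print("synchronized after", steps, "exchanged inputs")
```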
One major drawback of this approach is that it works only when the Quantum Bit Error Rate (QBER) is very low. For high-QBER systems the synchronization process is prolonged, just as for a regular TPM. It therefore cannot help distinguish whether the noise in the transmission is caused by an eavesdropper or by inherent errors in the apparatus or channel noise. That would require a scheme that does not treat the QKD process as a black box but instead considers the inner workings of the apparatus and the kind of noise it introduces into the key exchange.
#### 5.1.2 Future Scope of ML in Quantum Communications
A potential focus of quantum communication is the search for efficient ways to transfer information through quantum channels at the nano-scale. This includes a next-generation approach based on advanced electronic and photonic devices designed to transfer information with minimal losses. A promising project, TOCHA [153], funded by the European Commission, is already underway with the objective of guiding quantum particles of electromagnetic radiation, such as photons, with the smallest possible energy waste by means of novel topological waveguides. The current major communication challenges are relevant to both quantum-assisted classical and purely quantum communication, because quantum networks generally work by distributing single photons of light through free space or optical fibre. Hence, the main challenges are:
1. Appropriate encoding schemes
2. Quantum state generation
Figure 19: Quantum Cryptography Public Key Using BB84 Protocol. (Image source: UNS Nice (France), Department of Physics)
3. Quantum state transmission
4. Quantum state detection
Further, linear optics gives hope for meeting quantum communication challenges when more than a single bit of quantum information is used per carrier. Such complex quantum states are noise-resistant in numerous configurations and allow more data to be encoded into a single photon. In the short to medium term, numerous quantum communication techniques are being targeted; in the long term, building a Very Wide Area Network (VWAN) across continents with quantum processors remains to be addressed. In the near future, QKD will probably be demonstrated over vast distances across trusted nodes, high-altitude platform systems, test-bed networks, or satellites, as well as in intra-city networks with many nodes or switchable networks, all of which would necessitate large-scale investments in infrastructure [154].
The engineering and operational efforts target high-speed electronics and optoelectronics, such as Field Programmable Gate Arrays / Application-Specific Integrated Circuits (FPGA/ASIC) [155], packaging, coupled photonics, and compact cryptosystems that can be scaled and mass-produced in large quantities, in order to deliver solutions that operate with existing communication networks. This includes combining classical and quantum encryption techniques to provide comprehensive security solutions while also expanding the market for new applications.
## 6 Concluding Remarks
The field of quantum computing is witnessing exponential growth in many computational applications. Employing quantum computing for the efficient processing of massive volumes of data is the ultimate goal. This paper is a brief survey of the various applications of ML-enabled quantum computing. Machine learning is in its prime, with many techniques being used across computational applications. We have focused here on how machine learning techniques can be harnessed for various applications in the domain of quantum computing and communications. More specifically, the paper begins with a review of quantum computing and machine learning, surveys the various ways in which these two domains have crossed paths, and shows how the idea of machine-learning-enabled quantum computing has become a topic of global importance.
This article also sheds some light on the challenges associated with implementing these techniques on near-term devices. The domain of AI-enabled quantum computing is still nascent, with a lot of potential and many research opportunities.
In a nutshell, machine-learning-enabled quantum computing is a new and fast-growing area, yet many challenges remain before it can be on par with classical ML. Our aim is to provide a basic understanding, trigger interest, and supply ample references for further in-depth study, for both beginners and experts in the various domains that involve both QC and ML.
## Acknowledgments
We would like to acknowledge the fruitful discussion with Ritajit Majumdar, a former Senior Research Fellow at the Indian Statistical Institute and presently a Research Scientist at IBM India Research Lab.
## Appendix A Appendix
### Quantum Circuit
A quantum circuit serves as the schematic depiction of a quantum algorithm or quantum program. Every line within the quantum circuit denotes a qubit, and the operations, specifically quantum gates, are depicted as distinct blocks placed along the line [156]. Table 5 provides a summary of frequently used quantum logic gates. |
2307.06183 | Experimental Single Electron 4D Tracking in IOTA | This paper presents the results of the first experiments on 4D tracking of a
single electron using a linear multi-anode photomultiplier tube. The reported
technology makes it is possible to fully track a single electron in a storage
ring, which requires tracking of amplitudes and phases for both, slow
synchrotron and fast betatron oscillations. Complete tracking of a point-like
object enabled the first direct measurements of single-particle dynamical
properties, including dynamical invariants, amplitude-dependent oscillation
frequencies, and chaotic behavior. | A. Romanov, J. Santucci, G. Stancari | 2023-07-12T14:16:27Z | http://arxiv.org/abs/2307.06183v1 | # Experimental Single Electron 4D Tracking in Iota 1
###### Abstract
This paper presents the results of the first experiments on 4D tracking of a single electron using a linear multi-anode photomultiplier tube. The reported technology makes it possible to fully track a single electron in a storage ring, which requires tracking the amplitudes and phases of both the slow synchrotron and the fast betatron oscillations. Complete tracking of a point-like object enabled the first direct measurements of single-particle dynamical properties, including dynamical invariants, amplitude-dependent oscillation frequencies, and chaotic behavior.
## 1 Introduction
Complete tracking of a charged particle in a circular accelerator will enable a new class of diagnostics capabilities. It will allow measurements of important single-particle dynamical properties, including dynamical invariants, amplitude-dependent oscillation frequencies, and chaotic behavior. The true single-particle measurements can be employed for benchmarking of long-term tracking simulations, for training of AI/ML algorithms, and ultimately for precise predictions of dynamics in present and future accelerators.
Observation of a single electron in storage rings has a long history that goes back to experiments at AdA, the first electron-positron collider [1, 2]. Several experiments using various instruments were performed in the past to track single-electron dynamics in storage rings, with the goal of tracking the relatively slow synchrotron oscillations [3, 4, 5] or all three mode amplitudes [6].
The goal of the study presented here was to demonstrate, for the first time, complete 6-dimensional tracking of a single particle, an electron in our case, in a storage ring. Unfortunately, due to long delivery times, only one coordinate-sensitive photon detector was available. This allowed us to track the electron in 4 dimensions of phase space, covering the longitudinal and horizontal planes.
## 2 Experimental Setup
Each of the 8 main dipoles in IOTA [7] is equipped with a synchrotron light station installed on top of the magnet itself. The light from the dipole is deflected upwards and back to the horizontal plane by two 90-degree mirrors. After the second mirror, the light enters a dark box instrumented with customizable diagnostics, as shown in Figure 1. A focusing achromatic lens with a 40 cm focal length and an iris are installed in the vertical insulation tube that connects to the mirror holders. This experiment used one such diagnostics stage, located at the M3L dipole.
The PML-16 detector from the Becker & Hickl company, based on a multianode photomultiplier tube (PMT), was used for the presented experiment. Figure 2 shows a general view of the detector and its dimensions, including the geometry of the sensitive area. The PML-16 detector has an active area of 16x16 mm with 16 individual cathodes arranged in a linear array. To fully utilize this relatively large area, a defocusing lens was added to the optical system so that the larger beam sigma is around 2 mm when focused on the sensitive area of the PMT (in either the horizontal or the vertical plane).
The PML-16 has a preamplifier and channel-encoding electronics attached to the PMT, forming a single unit. This minimizes noise and time jitter. Control of the detector's high voltage is performed by the DCC-100 card. The SPC-130 card measures the time of arrival and the position of the segment that detected a photon. Figure 3 shows the connection layout of the PML-16 detector.
A modification of the existing optical and mechanical systems was made to match the beam and detector sizes. Figure 4 shows the layout of the instruments. The setup
Figure 1: Photograph of the optical diagnostics setup at one of the IOTA’s main dipoles (left) and corresponding schematic diagram (right).
Figure 2: General view of the PML-16 multianode PMT (left) and geometry of its sensitive area (right).
allows keeping the existing operational modes while enabling single-electron tracking with two magnification factors: the nominal 88% and the 400% necessary to effectively use the large aperture of the PML-16 detector.
The reported measurements were done concurrently with other experiments that were ongoing in the IOTA ring, without any special modifications of the lattice parameters. The only modification was a small change of the horizontal betatron tune to move the working point off the coupling resonance and obtain a flat beam. The resulting IOTA parameters are listed in Table 1. The lattice was well characterized using the LOCO method. The following tolerances are expected at the observation points of the optical instruments:
* Beta functions accuracy of 5%
* Dispersion functions error smaller than 1 cm
* Betatron tunes within 0.001
Figure 5 shows horizontal beam size and dispersion for the used IOTA configuration.
## 4 Results
The data set used for the presented results consists of a set of 3 numbers for each detected photon: the turn number at which the detection happened, the intra-turn time, and the number of the segment that detected the photon. A total of 102767 photons related to the electron were detected over a 10-second time window.
Figure 6 shows an example of tracking the electron in 4D phase space over about 60000 turns using 80 photons detected during that time. Table 2 contains the corresponding trajectory parameters, assuming harmonic oscillations in
\begin{table}
\begin{tabular}{l r} \hline \hline Parameter & Value \\ \hline Perimeter & 39.96 m \\ Momentum & 150 MeV/c \\ Bunch intensity & 1 \(e^{-}\) \\ RF frequency & 30 MHz \\ RF voltage & 350 V \\ Betatron tunes, \((\nu_{x},\nu_{y})\) & (5.2965, 5.3) \\ Synchrotron tune, \(\nu_{s}\) & 3.5 \(\times\) 10\({}^{-4}\) \\ Damping times, \((\tau_{x},\tau_{y},\tau_{x})\) & (2.08, 0.65, 0.24) s \\ Horizontal emittance, \(\epsilon_{x}\) & 127 nm \\ Momentum spread, \(\Delta p/p\), RMS & 1.3 \(\times\) 10\({}^{-4}\) \\ Momentum compaction, \(\alpha_{p}\) & 0.083 \\ Natural chromaticity \(C_{x}\), \(C_{y}\) & -10.9, -9.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: IOTA parameters during the experiment.
Figure 4: Schematic diagram of the opto-mechanical setup for electron tracking. Both the 16-channel PMT and a digital camera (blue shaded) are located on a stack of movable stages. The focusing stage can move the insertion stage (grey shaded) to position the sensors in the focal planes. The insertion stage can position either one of the sensors on the axis of the light beam or let the light pass through to other detectors. An additional insertion stage can move a defocusing lens in and out of the photon path, changing the magnification factor from 88% to 400%, which matches the beam size to the size of the 16-channel PMT.
Figure 5: Horizontal beam size and horizontal dispersion of the IOTA lattice. The second half of the ring is shown, ending at the injection straight section. The vertical green line shows the location of the M3L monitor.
Figure 3: General connection scheme of the PML-16 detector. The DCC-100 is a voltage control and overload protection unit that can control two detectors. SPC modules are used to record the intra-cycle time, the cycle number, and the position of the segment for the detected photons. Each detector requires one SPC module.
both the longitudinal and horizontal directions. Uncertainties of the trajectory parameters were calculated using the bootstrap method.
Because of random recoil kicks from the emitted synchrotron-radiation photons, the oscillation amplitudes change in time, which naturally scans the phase space. Figure 7 shows the dependence of the horizontal betatron tune on the horizontal betatron amplitude extracted from the analyzed data set. A Fourier transform was used to extract the spectrum from sets of 20 photons, with fine peak detection in the tune range between 5.296 and 5.297. This simplified algorithm speeds up the analysis but results in higher noise in the reconstructed parameters. An additional complication for the analysis is jitter from the power supplies, which varies the betatron tunes and has been filtered out for the presented amplitude-dependence plot.
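One simple way to extract a fractional tune from such sparse, irregularly sampled data is a direct scan of the discrete Fourier amplitude over candidate tunes, as sketched below on synthetic photon data; the numbers mirror those quoted in the text, but the data themselves are simulated, not the measured set.

```python
import numpy as np

# Synthetic turn numbers n_k and horizontal positions x_k of the detected photons.
rng = np.random.default_rng(3)
turns = np.sort(rng.choice(60000, size=80, replace=False))
true_tune = 5.2964325
x = 5.0 * np.cos(2 * np.pi * true_tune * turns + 0.7) + 0.3 * rng.normal(size=80)

# Scan candidate fractional tunes and pick the one maximizing the Fourier amplitude.
candidates = np.linspace(0.296, 0.297, 20001)
amps = np.abs(np.exp(-2j * np.pi * np.outer(candidates, turns)) @ x)
print(candidates[np.argmax(amps)])   # ~0.2964325; only the fractional tune is observable
```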
## 4 Summary
The presented results are the first experimental simultaneous tracking of the betatron and synchrotron oscillations of a single electron in a storage ring. This 4D tracking shows that, with the addition of a second coordinate-sensitive single-photon detector, it will be possible to fully track an electron in a storage ring.
As an example of practical use, the betatron tune was measured with an exceptional precision of \(5\times 10^{-7}\), as well as the dependence of the horizontal betatron tune on the oscillation amplitude.
\begin{table}
\begin{tabular}{l c} \hline \hline Parameter & Value \\ \hline Horizontal betatron tune & 5.2964325(5) \\ Horizontal betatron phase & \(2\pi\,\cdot\,0.12(2)\) \\ Horizontal betatron amplitude & 5.0(2) mm \\ Synchrotron tune & 0.0003456(7) \\ Synchrotron phase & \(2\pi\,\cdot\,0.09(2)\) \\ Synchrotron amplitude & 1.20(8) ns \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameters of the 4D electron trajectory measured using 80 photons over 60000 turns.
Figure 6: (Top) Horizontal positions of an electron measured (red circles) and reconstructed on the same turns (black triangles) assuming harmonic oscillations. (Bottom) Reconstruction of the synchrotron oscillations (solid line) compared to the measured delays of the arrival time (red circles). Same photon detection events were used for both plots.
Figure 7: Dependence of the betatron tune on the amplitude of the horizontal oscillations at the image plane. |
2303.14070 | ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model
Meta-AI (LLaMA) Using Medical Domain Knowledge | The primary aim of this research was to address the limitations observed in
the medical knowledge of prevalent large language models (LLMs) such as
ChatGPT, by creating a specialized language model with enhanced accuracy in
medical advice. We achieved this by adapting and refining the large language
model meta-AI (LLaMA) using a large dataset of 100,000 patient-doctor dialogues
sourced from a widely used online medical consultation platform. These
conversations were cleaned and anonymized to respect privacy concerns. In
addition to the model refinement, we incorporated a self-directed information
retrieval mechanism, allowing the model to access and utilize real-time
information from online sources like Wikipedia and data from curated offline
medical databases. The fine-tuning of the model with real-world patient-doctor
interactions significantly improved the model's ability to understand patient
needs and provide informed advice. By equipping the model with self-directed
information retrieval from reliable online and offline sources, we observed
substantial improvements in the accuracy of its responses. Our proposed
ChatDoctor, represents a significant advancement in medical LLMs, demonstrating
a significant improvement in understanding patient inquiries and providing
accurate advice. Given the high stakes and low error tolerance in the medical
field, such enhancements in providing accurate and reliable information are not
only beneficial but essential. | Yunxiang Li, Zihan Li, Kai Zhang, Ruilong Dan, Steve Jiang, You Zhang | 2023-03-24T15:29:16Z | http://arxiv.org/abs/2303.14070v5 | # ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge
###### Abstract
Recent large language models (LLMs) in the general domain, such as ChatGPT, have shown remarkable success in following instructions and producing human-like responses. However, such language models have not yet been adapted for the medical domain, resulting in poor accuracy of responses and an inability to provide sound advice on medical diagnoses, medications, etc. To address this problem, we fine-tuned our ChatDoctor model on 100k real-world patient-physician conversations from an online medical consultation site. In addition, we added autonomous knowledge-retrieval capabilities to ChatDoctor, using, for example, Wikipedia or a disease database as a knowledge brain. By fine-tuning the LLM on these 100k patient-physician conversations, our model showed significant improvements in understanding patients' needs and providing informed advice. The autonomous ChatDoctor model based on the Wikipedia and database brain can access real-time and authoritative information and answer patient questions based on this information, significantly improving the accuracy of the model's responses, which shows extraordinary potential for the medical field, where the tolerance for error is low. To facilitate the further development of dialogue models in the medical field, we make all source code, datasets, and model weights available at: [https://github.com/Kent0n-Li/ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor).
## 1 Introduction
The development of instruction-following large-scale language models (LLMs) such as ChatGPT[1] has gained significant attention due to their remarkable success in instruction understanding and human-like response generation. These auto-regressive LLMs [2] are pre-trained on web-scale natural language by predicting the next token and then fine-tuned to follow large-scale human instructions. At the same time, they show robust performance on a wide range of natural language processing (NLP) tasks and generalize to unseen tasks, demonstrating their potential as unified solutions to various problems in natural language understanding, text generation, and conversational artificial intelligence. However,
the application of such general-domain LLMs to the medical domain remains relatively unexplored [3], despite their great potential to transform medical communication and decision-making [4]. The reason is that existing models have not learned the medical domain specifically or in detail, so they often give incorrect medical responses.
By fine-tuning large language dialogue models on data from doctor-patient conversations, the models' ability to understand patients' needs can be significantly improved. Furthermore, to improve the model's credibility, we designed a knowledge brain based on Wikipedia and medical-domain databases, which can access real-time and authoritative information and answer patients' questions based on this reliable information; this is vital for the medical field, which has a low tolerance for error. Through extensive experiments, we found that the model fine-tuned on doctor-patient dialogues outperforms ChatGPT in terms of precision, recall, and F1. In addition, the autonomous ChatDoctor model can answer questions about the latest medical topics, such as Mpox. Since large language models such as ChatGPT are not open source, we used Meta's open-source LLaMA. We first trained a generic conversation model using the 52K instruction-following data from Stanford University's Alpaca [5], and then fine-tuned the model on our collected dataset of doctor-patient conversations. Our approach has three main contributions:
1. We designed a framework for fine-tuning large language models in the medical domain.
2. We collected and open-sourced a dataset with 100k patient-physician conversations for fine-tuning the large language model. The dataset contains extensive medical expertise for the medical application of LLMs.
3. Based on an external knowledge brain, we propose an autonomous ChatDoctor model capable of retrieving and analyzing novel medical information online.
Figure 1: Overview of the physician and patient conversation dataset collection pipeline and the training procedure of ChatDoctor.
## 2 Method
### Patient-physician Conversation Dataset
The first step in fine-tuning is to collect a dataset of patient-physician conversations. In such conversations, patients' descriptions of disease symptoms are often colloquial and cursory. Manually constructing a synthesized patient-physician conversation dataset often leads to insufficient diversity and overly specialized descriptions that are far removed from real scenarios. Collecting real patient-physician conversations is a better solution. We therefore collected about 100k real doctor-patient conversations from the online medical consultation website HealthCareMagic1. We filtered these data both manually and automatically, removed the identity information of the doctors and patients, used language tools to correct grammatical errors, and named the resulting dataset HealthCareMagic-100k, as shown in Fig. 1. In addition, we collected approximately 10k patient-physician conversations from the online medical consultation website iCliniq2 to evaluate the performance of our model.
Figure 2: Overview of the Autonomous ChatDoctor based on Knowledge Brain.
Figure 3: Some samples in our disease database consist of symptoms, clinical test approaches, and medication suggestions.
### External Knowledge Brain
The auto-regressive prediction of the next word by a large language model often leads the model to give wrong answers to uncertain questions, and its output is often uncontrollable and random, which is unacceptable in the medical field. The accuracy of the model would be greatly improved if it could answer based on given authoritative and reliable knowledge, as illustrated in Fig. 2. For Q&A in medical scenarios, we collected and compiled a database, partly sampled in Fig. 3, which includes about 700 diseases with their associated symptoms, further medical tests or measures, and recommended medications, serving as a gold standard for the medical profession. The database can be updated at any time without retraining the model and can, in principle, be tailored to a specific disease database depending on the department or target application. In addition to disease databases, authoritative information sources can also serve as an external knowledge brain for the autonomous model, such as Wikipedia, a free multilingual online encyclopedia that is the largest and most widely read reference work in history. In summary, we refer to the disease database and Wikipedia (or any other reliable information source) as the external knowledge brain of our ChatDoctor.
### Autonomous ChatDoctor based on Knowledge Brain
Equipped with the external knowledge brain, i.e., Wikipedia or our constructed database encompassing over 700 diseases, ChatDoctor can retrieve the corresponding knowledge and reliable sources to answer patients' inquiries more accurately. After constructing the external knowledge brain, we need to let ChatDoctor retrieve the knowledge it needs autonomously, which can generally be achieved in a large language model by constructing appropriate prompts. To automate this process, we design keyword-mining prompts (Fig. 4) for ChatDoctor to extract key terms for relevant knowledge seeking. Then, the top-ranked
Figure 4: Autonomous Wikipedia retrieval through the prompt to ChatDoctor.
relevant passages are retrieved from the knowledge brain with a term-matching retrieval system. As for the disease database, since the model cannot read all the data at once, we first let the model read the data in batches and select for itself the data entries that might help answer the patient's question (Fig. 5). Finally, all the data entries selected by the model are given to the model for a final answer, as shown in Fig. 6. This approach better ensures that patients receive well-informed and precise responses backed by credible references.
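The term-matching retrieval step can be as simple as counting keyword hits per knowledge-brain entry, as in the toy sketch below; the two database entries and the extracted keywords are invented examples, not entries from the actual disease database.

```python
def term_match_ranking(keywords, passages):
    """Rank knowledge-brain passages by how many extracted keywords they contain."""
    scores = [sum(kw.lower() in text.lower() for kw in keywords) for text in passages]
    return sorted(range(len(passages)), key=lambda i: scores[i], reverse=True)

disease_db = [                                    # toy entries standing in for the database
    "Mpox: symptoms include rash and fever; test: PCR of a lesion swab.",
    "Otitis media: ear pain and fever; test: otoscopy; treatment: antibiotics.",
]
keywords = ["mpox", "test"]                       # e.g. extracted by the keyword-mining prompt
top = term_match_ranking(keywords, disease_db)[:1]
context = "\n".join(disease_db[i] for i in top)   # passed back to the LLM for the final answer
print(context)
```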
### Training of the model
We build our ChatDoctor utilizing Meta's LLaMA model [6], a publicly accessible LLM. Notably, in spite of its mere 7 billion parameters, LLaMA reportedly achieves superior efficacy and competitive performance compared with the considerably larger GPT-3 (175 billion parameters) on several NLP benchmarks. This improvement was achieved by amplifying the diversity of the training data rather than the parameter count. Specifically, LLaMA was trained on 1.0 trillion tokens procured from publicly accessible data repositories such as CommonCrawl and arXiv documents.
We utilize conversations from HealthCareMagic-100k to fine-tune the LLaMA model [7], in accordance with the Stanford Alpaca [5] training methodology; our model was first fine-tuned on Stanford Alpaca's data to acquire some basic conversational capabilities.
Figure 5: Autonomous disease database retrieval
Figure 6: Answer based on retrieved knowledge, and we can determine whether the ChatDoctor needs to incorporate its own prior knowledge or not.
The fine-tuning process on HealthCareMagic-100k was conducted using 6 A100 GPUs for a duration of 3 hours. The hyperparameters employed in the training process were as follows: a total batch size of 192, a learning rate of \(2\times 10^{-5}\), a total of 3 epochs, a maximum sequence length of 512 tokens, and a warmup ratio of 0.03, with no weight decay.
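As a rough illustration (not the authors' released training script), the reported hyperparameters can be expressed with Hugging Face `TrainingArguments`; the per-device batch size below assumes the total batch of 192 is split evenly across the 6 GPUs rather than obtained through gradient accumulation.

```python
# Sketch of the reported fine-tuning hyperparameters; the maximum sequence
# length (512) is enforced at tokenization time, not via TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="chatdoctor-healthcaremagic",  # hypothetical output path
    per_device_train_batch_size=32,           # 6 GPUs x 32 = total batch size 192
    num_train_epochs=3,
    learning_rate=2e-5,
    warmup_ratio=0.03,
    weight_decay=0.0,
)
```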
## 3 Results
To test the capability of the autonomous ChatDoctor model based on knowledge brains, we asked the model some recent medical questions, such as Mpox (monkeypox) in Fig. 8, a term newly adopted by the World Health Organization (WHO) on 28 November 2022. Since it is a new term, ChatGPT is completely unable to answer questions about it, while our autonomous ChatDoctor can autonomously retrieve the Wikipedia content on Mpox and give an accurate answer. For more general medical questions, such as otitis, ChatDoctor provides a very reliable answer after knowledge retrieval. As for Daybue in Fig. 10, which was approved as a drug
Figure 8: New knowledge test comparison between the ChatGPT and our ChatDoctor with knowledge brain. The ChatGPT cannot recognize the word Mpox (old name: Monkeypox), while our ChatDoctor can provide the precise answer for the medical test of Mpox.
Figure 7: Let the ChatDoctor read the retrieved domain knowledge and provide a reliable answer.
by the Food and Drug Administration (FDA) in March 2023, our model also provided accurate answers after autonomous information retrieval.
To quantitatively evaluate the performance of ChatDoctor, we use questions from iCliniq as input to ChatDoctor, take the corresponding real doctors' answers from iCliniq as ground truth, and give the same input to ChatGPT and record its responses. We use BERTScore [8] to calculate Precision, Recall and F1 scores for ChatDoctor and ChatGPT, respectively. Comparing the results in Fig. 11, we find that the fine-tuned ChatDoctor model outperforms ChatGPT in Precision, Recall and F1; some dialogue examples are shown in Figs. 12-16.
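The BERTScore comparison can be reproduced along the lines of the sketch below, where the candidate and reference lists are placeholders for the model responses and the corresponding iCliniq physician answers.

```python
# Minimal BERTScore evaluation sketch using the bert-score package.
from bert_score import score

candidates = ["Rest, fluids, and an over-the-counter analgesic should help."]  # model answers
references = ["I would advise rest, plenty of fluids, and a mild analgesic."]  # iCliniq answers

P, R, F1 = score(candidates, references, lang="en")
print(f"Precision {P.mean():.4f}, Recall {R.mean():.4f}, F1 {F1.mean():.4f}")
```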
Figure 10: Comparison between ChatGPT and autonomous ChatDoctor with knowledge brain. The ChatGPT is unfamiliar with the "Daybue" medication, while our ChatDoctor accurately pointed out the purpose of Daybue (trofinetide).
Figure 9: Comparison between ChatGPT and autonomous ChatDoctor with knowledge brain. Due to the lack of domain knowledge, the ChatGPT provided a common answer about otitis, while our ChatDoctor provided a professional response to the otitis treatment.
## 4 Limitations
We would like to emphasize that ChatDoctor is for academic research only; any commercial or clinical use is strictly prohibited. First, we have not designed sufficient security measures, and the current model cannot guarantee the full correctness of medical diagnoses and recommendations. Second, our model is not licensed for healthcare-related purposes [9]. Third, ChatDoctor is based on LLaMA, which has a non-commercial license, so we necessarily inherit these restrictions.
## 5 Discussion and conclusion
ChatDoctor, a chatbot obtained by fine-tuning a large language model on medical domain knowledge, has a wide range of potential applications. However, due to the unique characteristics of the medical domain, latent language errors in diagnosis and medical advice can have serious consequences, and large language models often generate incorrect and harmful statements (hallucinations) about knowledge they do not possess, which may result in malpractice. Our ChatDoctor is first fine-tuned with data from real patient-physician conversations, allowing the model to better understand patients' questions and make more informed responses; the model also has the ability to autonomously retrieve from the knowledge brain and then provide answers, further enhancing the credibility of its responses. In practical applications, the potential benefits of ChatDoctor are substantial, including improved accuracy and efficiency in medical diagnosis and a reduced workload for medical professionals, while increasing access to medical consultations, especially for patients in underserved hospitals and developing countries. We believe that ChatDoctor can be an invaluable aid in improving patient outcomes and advancing medical research.
Figure 11: Quantitative Comparison between ChatDoctor and ChatGPT. |
2309.01340 | MDSC: Towards Evaluating the Style Consistency Between Music and Dance | We propose MDSC(Music-Dance-Style Consistency), the first evaluation metric
that assesses to what degree the dance moves and music match. Existing metrics
can only evaluate the motion fidelity and diversity and the degree of rhythmic
matching between music and dance. MDSC measures how stylistically correlated
the generated dance motion sequences and the conditioning music sequences are.
We found that directly measuring the embedding distance between motion and
music is not an optimal solution. We instead tackle this through modeling it as
a clustering problem. Specifically, 1) we pre-train a music encoder and a
motion encoder, then 2) we learn to map and align the motion and music
embedding in joint space by jointly minimizing the intra-cluster distance and
maximizing the inter-cluster distance, and 3) for evaluation purposes, we
encode the dance moves into embedding and measure the intra-cluster and
inter-cluster distances, as well as the ratio between them. We evaluate our
metric on the results of several music-conditioned motion generation methods;
combined with a user study, we find that our proposed metric is a robust
evaluation metric in measuring the music-dance style correlation. | Zixiang Zhou, Weiyuan Li, Baoyuan Wang | 2023-09-04T03:55:41Z | http://arxiv.org/abs/2309.01340v3 | # MDSC: Towards Evaluating the Style Consistency Between Music and Dance
###### Abstract
We propose **MDSC** (Music-Dance-Style Consistency), the first evaluation metric that assesses to what degree the dance moves and music match. Existing metrics can only evaluate the motion fidelity and diversity and the degree of rhythmic matching between music and dance. **MDSC** measures how stylistically correlated the generated dance motion sequences and the conditioning music sequences are. We found that directly measuring the embedding distance between motion and music is not an optimal solution. We instead tackle this by modeling it as a clustering problem. Specifically, 1) we pre-train a music encoder and a motion encoder, then 2) we learn to map and align the motion and music embeddings in a joint space by jointly minimizing the intra-cluster distance and maximizing the inter-cluster distance, and 3) for evaluation purposes, we encode the dance moves into embeddings and measure the intra-cluster and inter-cluster distances, as well as the ratio between them. We evaluate our metric on the results of several music-conditioned motion generation methods [25][38][55]; combined with a user study, we find that our proposed metric is a robust evaluation metric for measuring the music-dance style correlation.
## 1 Introduction
Synthesizing realistic human motion sequences has made remarkable progress in recent years. It is now possible to synthesize human motion sequences from natural language descriptions [33][16][42][55] or from music [38][55][43]. Despite these achievements in motion synthesis, less progress has been made on proper evaluation metrics. Further improvements in motion generation can hardly be made without comprehensive and fine-grained evaluation metrics. Therefore, it is vital for the community to develop proper metrics for evaluating the outcomes of human motion synthesis.
Since conditioned human motion generation is a typical one-to-many mapping problem, there are multiple aspects critical for the evaluation metrics to take into consideration. [51] summarized that there are four major categories to be considered when evaluating motion generation: 1) **fidelity**: it measures the quality and smoothness of the generated motion sequences, 2) **diversity**: it measures how diverse the synthesized motions are given the same driving source, 3) **condition consistency**: it measures how correlated the generated motion sequences and the driving
sources are in terms of semantic meaning, rhythmic pattern, or style, and 4) **user study**: it measures the motion generation results from a human perspective, which is more subjective than the other three categories.
Various metrics have been proposed to evaluate synthesized motion sequences in the text-conditioned scenario [1][12][17][14][28][23][42][6][52][55], and these methods cover the four major categories of evaluation. Specifically, 1) [1][12][28] focus on evaluating the motion fidelity, 2) [19][17][15][42][40] propose to evaluate the motion diversity, 3) [14][28][23][42][6][52][55] propose metrics measuring how semantically consistent the generated motion sequences and the conditioning text descriptions are, and 4) [34][41][55] propose protocols for evaluating the quality of motion synthesis from a subjective perspective.
While the metrics for assessing motion fidelity and diversity are condition-independent, the metrics for condition consistency are condition-specific, and the definition of consistency for the music-conditioned case is quite different from the text-conditioned one. The music-motion consistency is twofold: on one hand, rhythmic consistency plays a vital role in evaluating music-driven motion quality [22][19][2][43][3][25][38]; on the other hand, whether the dance motion style is consistent with the music style is also critical. Unlike the definition of text-conditioned consistency, music-conditioned consistency is much more relaxed. Specifically, text-to-motion is a one-to-many mapping. For example, the text description _'a person is running.'_ can be mapped to various motion sequences, as long as they demonstrate the same semantic meaning, but these motion sequences cannot be mapped to the description _'a person is walking.'_. Music-to-motion, however, is a many-to-many mapping, which means a ballet-style piece of music can be mapped to various ballet-style dances, and conversely these _ballet style_ dances can also be mapped to various _ballet style_ pieces of music. These dances can be choreographically different, and the music arrangement styles can also vary a lot.
To measure the consistency between music and dance, we do not measure the embedding similarity between a music clip and a motion sequence, as [31] does for the text-to-motion scenario. Instead, we model it as an embedding clustering problem. We use two encoders to obtain embeddings from music and motion sequences, respectively, and cluster the embeddings of stylistically consistent music-motion pairs into the same cluster. Meanwhile, we push the cluster centers apart from each other to maximize the inter-cluster embedding distance. At evaluation time, 1) we can encode only the dance moves and measure the intra-cluster and inter-cluster distances between the encoded embeddings and the cluster-center embeddings, or 2) we can encode both music and dance and measure the distance between their embeddings.
Our contributions are threefold: 1) We define music-to-motion style consistency and model it as a quantifiable problem. 2) We propose, to the best of our knowledge, the first music-to-motion style consistency evaluation model as a metric, and conduct comprehensive experimental analysis to validate the effectiveness of our method. 3) We provide baselines as measurements of music-to-motion consistency for future research.
## 2 Related Work
**Music Representation Learning.** Music representation learning has been widely studied in music auto-tagging and classification [9][8], music retrieval [46][10] and music understanding [30].
For music auto-tagging and classification, the task is to obtain various attributes from music streams, including the music genre, rhythmic traits, musical mood, etc. Typically, a music stream is likely to be categorized into multiple classes, which makes it difficult both to define the categories of music attributes and to classify the music streams [48]. There are two major learning paradigms in auto-tagging and classification, namely supervised and self-supervised learning. 1) In the supervised paradigm, various neural network architectures have been studied and proven effective in learning features from labeled datasets [36][44][45]. However, labeled datasets are costly to obtain, and the categories are unlimited, making it an open-vocabulary problem. 2) As an alternative, the self-supervised learning paradigm has numerous advantages over its supervised counterpart. Instead of learning from labeled data, the self-supervised paradigm attempts to learn patterns from tremendous amounts of unlabeled data. Contrastive learning [32] is an effective learning technique, and multiple studies have proven it effective in learning musical representations from unlabeled data [4][39][30].
Music retrieval and understanding normally involves multimodal representation learning. For example, text-based music retrieval [47][5][18][10] attempts to learn a joint representation space between music streams and natural language descriptions. For a semantically aligned pair of music and description, the embeddings are pulled together in the joint space, while for misaligned pairs the embeddings are pushed apart. Similarly, image-based or video-based music retrieval and understanding is designed to learn a joint representation space between acoustic and visual modalities. For example, [50][13] attempt to correlate visual and acoustic content using a contrastive learning paradigm. The learnt representations can be effectively employed for image-based music retrieval and, in reverse, audio-based image retrieval.
**Motion Representation Learning.** Motion representation learning can be categorized into motion recognition
[26][7][53], and motion understanding [35][20][11].
Motion recognition, or action recognition, is the task of estimating the category of a query motion sequence. Typically, given a query motion sequence, which is normally represented by 3D skeleton joints [7] or a parametric rotation prior [41], one or multiple action categories are estimated to describe the motion. Mostly, these models are trained on a pre-defined set of action categories using a supervised paradigm [26][49][7][54][53]. However, predefined action categories are limited and normally only able to describe short and simple actions; open-vocabulary action recognition is an alternative solution to this [41][35][31]. Instead of learning a direct mapping between actions and labels, these methods attempt to learn a joint representation space that aligns actions and descriptions, and a retrieval-based strategy is employed for open-vocabulary recognition.
Motion understanding is a more open problem than the recognition task, and typically applies to complex and long action sequences. In addition to estimating the action categories, these tasks also attempt to reason from the action. For example, [20][11] attempt to understand the action sequence from the global level down to the local level. This requires not only alignment between action representations and description representations, but also alignment between pose or body-part representations and word or phrase representations.
**Music-Motion Consistency.** Current music-motion consistency studies focus on assessing the rhythmic consistency between music and motion. These studies [22][19][40][24][3][38] assess rhythmic consistency via a beat alignment score, which measures the degree to which motion kinetic beats and musical beats are aligned. Although motion kinetic beats are defined in different ways, these works assume that high music-motion consistency means better alignment between kinetic beats and musical beats, regardless of the music style and dance choreography. However, the evaluation of music-motion consistency is a non-trivial problem that can hardly be defined from a single aspect, and existing studies are insufficient to evaluate the correlation between music and dance motion objectively and comprehensively.
**Summary.** We found that although numerous research efforts have been made in understanding music and human motion, respectively, few works bring them together. Although a few works measure consistency in terms of rhythmic matching, these methods are insufficient for evaluating the style consistency between music and dance moves. Hence, we propose a novel approach to align music and motion semantically, and show that it can be used as an evaluation metric for the music-conditioned motion generation task.
## 3 Method
We show an overview of the pipeline of our method in Fig. 2, which contains a pretrained motion encoder \(E_{M}\), a pretrained music encoder \(E_{A}\), and two light-weight MLPs. The details of each module are described in the following sections.
### Music Encoder
We use the pretrained music encoder of [31], which is a modified version of the music tagging transformer [45]. The pretraining scheme is shown in Fig. 2(b), where a pretrained text encoder and a modified audio encoder are adopted to obtain music and text representations, respectively. Following the typical CLIP training paradigm [37], the encoders are trained to maximize the similarity of the embeddings of aligned music-text pairs and to minimize the similarity of misaligned pairs. Readers are referred to [31] for details.
### Motion Encoder
As shown in Fig. 2(a), we train a motion auto-encoder and adopt its encoder part as a motion representation encoding prior. Given a motion sequence denoted as \(x\in\mathbb{R}^{T\times c}\), where \(T\) and \(c\) are the temporal length and the dimension per frame, we use an encoder \(\mathcal{E}_{M}(\cdot)\) to obtain an embedding \(z_{M}\) from the input motion sequence as \(z_{M}=\mathcal{E}_{M}(x)\), and a decoder \(\mathcal{D}_{M}(\cdot)\) to reconstruct the motion sequence \(\tilde{x}\) from \(z_{M}\) as \(\tilde{x}=\mathcal{D}_{M}(z_{M})\). Therefore, the motion auto-encoding process is modeled as Eq. 1:
\[\tilde{x}=\mathcal{D}_{M}(\mathcal{E}_{M}(x)) \tag{1}\]
The motion auto-encoder is trained by minimizing the reconstruction error in Eq. 2:
\[\mathcal{L}_{rc}=\|\tilde{x}-x\| \tag{2}\]
After the auto-encoder is trained, we adopt its encoder part as our motion encoder.
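A minimal PyTorch sketch of this auto-encoder is given below; it follows the 6-layer, 4-head, 768-dimensional transformer configuration reported in the implementation details but simplifies the decoder interface, so it should be read as an illustration rather than the exact architecture.

```python
# Sketch of the motion auto-encoder trained with the reconstruction loss (Eq. 2);
# x has shape (batch, T, c) with c = 75 for the SMPL-based pose representation.
import torch
import torch.nn as nn

class MotionAutoEncoder(nn.Module):
    def __init__(self, c=75, d_model=768, nhead=4, num_layers=6):
        super().__init__()
        self.in_proj = nn.Linear(c, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out_proj = nn.Linear(d_model, c)

    def encode(self, x):                       # z_M = E_M(x)
        return self.encoder(self.in_proj(x))

    def forward(self, x):                      # x_tilde = D_M(E_M(x))
        z = self.encode(x)
        return self.out_proj(self.decoder(tgt=z, memory=z))

model = MotionAutoEncoder()
x = torch.randn(2, 160, 75)                    # a window of 160 frames
loss_rc = (model(x) - x).norm(dim=-1).mean()   # reconstruction error (Eq. 2)
```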
### Music-Dance Style Alignment
Given a pair of a dance motion sequence \(x_{M}\) and a music clip \(x_{A}\), as shown in Fig. 2(c), we first obtain their embeddings \(z_{M}\in\mathbb{R}^{1\times c_{M}}\) and \(z_{A}\in\mathbb{R}^{1\times c_{A}}\) using the pretrained motion encoder and audio encoder, respectively, where \(c_{M}\) and \(c_{A}\) are the dimensions of the motion embedding and the music embedding. This is denoted as \(z_{M}=\mathcal{E}_{M}(x_{M})\) for motion encoding and \(z_{A}=\mathcal{E}_{A}(x_{A})\) for music encoding.
Because the audio encoder \(\mathcal{E}_{A}(\cdot)\) and the motion encoder \(\mathcal{E}_{M}(\cdot)\) are adopted from pretrained models that were trained on different datasets and under different settings, their output embeddings are not necessarily aligned in latent space. To align the embeddings from the different encoders in latent space, there are three possible design options, shown in Fig. 3, which we discuss in detail as follows:
### Align Motion Embedding to Music Embedding
We assume the audio encoder is powerful enough to extract stylistically meaningful feature embeddings from the music sequence. Therefore, as long as we can align the paired motion embedding to the music embedding, the music-to-dance style consistency can be measured. This is shown in Fig. 3(a), where both the motion encoder \(\mathcal{E}_{M}(\cdot)\) and the audio encoder \(\mathcal{E}_{A}(\cdot)\) are fixed. To align the motion embedding \(z_{M}\) to the music embedding \(z_{A}\), we adopt an MLP to project the motion embedding into the audio embedding space as in Eq. 3.
\[f_{M\to A}(z_{M}):\mathbb{R}^{1\times c_{M}}\rightarrow\mathbb{R}^{1\times c_ {A}} \tag{3}\]
### Align Music Embedding to Motion Embedding
Similarly, we assume the motion encoder obtains dance motion embeddings with rich style information. In this case, aligning the music embedding to the motion embedding will, presumably, suffice for measuring the dance-music style consistency. Again, both the motion and music encoders are kept fixed during training, while an MLP is injected to project the music embedding into the motion embedding space as in Eq. 4. A visual illustration is given in Fig. 3(b).
\[f_{A\to M}(z_{A}):\mathbb{R}^{1\times c_{A}}\rightarrow\mathbb{R}^{1 \times c_{M}} \tag{4}\]
### Align Music and Motion Embedding in Joint Space
For this design option, we assume that neither the pretrained motion encoder nor the audio encoder produces stylistically representative embeddings for style consistency evaluation. Therefore, we attempt to learn a joint embedding space that is representative for the style consistency measurement. In this case, as shown in Fig. 3(c), we adopt two MLPs, one for the motion embedding and one for the audio embedding, to project the motion and audio embeddings into the joint space. We denote this process as Eq. 5 for the audio embedding and Eq. 6 for the motion embedding.
\[f_{A\to J}(z_{A}):\mathbb{R}^{1\times c_{A}}\rightarrow\mathbb{R}^{1 \times c_{J}} \tag{5}\]
\[f_{M\to J}(z_{M}):\mathbb{R}^{1\times c_{M}}\rightarrow\mathbb{R}^{1 \times c_{J}} \tag{6}\]
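The three variants differ only in which light-weight projection heads are trained; a sketch is given below, where the 2-layer MLP and the 256-D joint dimension follow the implementation details reported later, while the music embedding dimension and the hidden width are assumptions.

```python
# Sketch of the projection heads f_{M->A}, f_{A->M}, f_{M->J}, f_{A->J};
# the frozen encoders E_M and E_A are assumed to be defined elsewhere.
import torch.nn as nn

def projection_mlp(in_dim, out_dim, hidden=512):      # 2-layer MLP mapping layer
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

c_M, c_A, c_J = 768, 512, 256      # motion / music / joint dims (c_A assumed)
f_M_to_A = projection_mlp(c_M, c_A)    # variant (a): motion -> music space
f_A_to_M = projection_mlp(c_A, c_M)    # variant (b): music -> motion space
f_M_to_J = projection_mlp(c_M, c_J)    # variant (c): both modalities are
f_A_to_J = projection_mlp(c_A, c_J)    #              projected to the joint space
```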
### Learning Objectives
Proper design of the learning objectives plays a vital role in representation learning. After the audio embedding and the motion embedding have been aligned to the same dimension, there are two design options for the learning objectives, discussed in the following:
**Contrastive-Based Objectives.** Contrastive-based losses are widely adopted in representation learning [32][37]. In our case, given aligned motion and audio pairs, the objective aims to reduce
Figure 3: **Variants of Design in Aligning Cross-Modality Embedding.** Alignment of motion embedding and music embedding in three different approaches. (a) Fix the music embedding and align the motion embedding. (b) Fix the motion embedding and align the music embedding. (c) Align both music and motion embeddings to a joint space. The legend distinguishes fixed from trainable modules.
Figure 2: **Pipeline of Music-Dance Style Consistency** (a) We train a motion auto-encoder supervised by reconstruction loss, and use the encoder as \(E_{M}\). (b) We use the pretrained music encoder in [31] as our \(E_{A}\). (c) Given batch of motion sequence and music streams as input, our method uses pretrained motion encoder \(E_{M}\) and music encoder \(E_{A}\) to obtain their embedding. Instead of pulling paired motion embedding and audio embedding closer and push unpaired apart, we attempt to cluster style-consistent motion embedding and music embedding into same cluster, while inconsistent embedding are clustered into different clusters. At this stage, only the light-weight **MLPs** are trainable. The **dotted arrow** means no back-propagation is applied, while **solid arrow** means back-propagation is applied.
the distance between their embeddings or, equivalently, increase the similarity between them. On the contrary, for misaligned pairs, it attempts to increase the distance or reduce the similarity between the embeddings. During training, we construct mini-batch samples containing \(N\)+1 pairs of motion and music, where 1 pair is stylistically aligned and denoted as positive, and the other \(N\) pairs are misaligned and denoted as negatives. We optimize the trainable MLPs (\(f_{M\to A}\), \(f_{A\to M}\), \(f_{A\to J}\), \(f_{M\to J}\)) by minimizing the InfoNCE loss [32] as:
\[\mathcal{L}_{M\to A}=-\log\frac{\exp{(z_{i}^{M}\cdot z_{i}^{A}/\tau)}}{\sum_{j=1}^{N}\exp{(z_{i}^{M}\cdot z_{j}^{A}/\tau)}} \tag{7}\]
The final loss is: \(\mathcal{L}_{M\leftrightarrow A}^{contr}=(\mathcal{L}_{M\to A}+ \mathcal{L}_{A\to M})/2\).
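For concreteness, a batch-level sketch of this symmetric objective is given below; the L2 normalisation and the temperature value are assumptions rather than reported settings.

```python
# Batch version of Eq. (7): row i of z_M and z_A is the only positive pair.
import torch
import torch.nn.functional as F

def info_nce(z_M, z_A, tau=0.07):
    z_M, z_A = F.normalize(z_M, dim=-1), F.normalize(z_A, dim=-1)
    logits = z_M @ z_A.t() / tau                      # (N, N) similarity matrix
    labels = torch.arange(z_M.size(0))                # positives on the diagonal
    loss_m2a = F.cross_entropy(logits, labels)        # L_{M -> A}
    loss_a2m = F.cross_entropy(logits.t(), labels)    # L_{A -> M}
    return 0.5 * (loss_m2a + loss_a2m)                # L^{contr}_{M <-> A}
```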
**Clustering-Based Objectives.** Unlike contrastive-based objectives, clustering-based objectives assume a relaxed pairwise alignment. We assume the stylistic representations of music and motion lie in a joint high-dimensional latent space. The embeddings of motions or music of the same style are close to each other, while those of different styles are far apart. Therefore, the embeddings of motion and music of the same style form one latent subspace, i.e., the same cluster in latent space. As stated previously, the music-dance mapping follows a relaxed assumption: it is not necessary for the embeddings of a stylistically paired music clip and motion sequence to be very close to each other in the latent space; instead, their embeddings should fall into the same subspace or cluster. Therefore, we do not construct positive and negative pairs for training. Instead, we attempt to group the embeddings of aligned music and dance sequences into the same cluster, while misaligned embeddings should be clustered into different clusters. We assume as a prior that music streams can be categorized into \(\mathcal{C}^{N}\) genres. Therefore, for a music embedding \(z_{A}^{c_{j}}\) of genre \(c_{j}\) and a motion embedding \(z_{M}^{c_{j}}\) corresponding to the same music style, we optimize the mappings \(f_{a\to b}(z_{a})\) so that the mapped embeddings \(f_{M\to b}(z_{M})\) and \(f_{A\to b}(z_{A})\) belong to the same cluster \(c_{j}\). Similarly, for a music embedding \(z_{A}^{c_{j}}\) and a motion embedding \(z_{M}^{c_{k}}\) belonging to different styles \(c_{j}\) and \(c_{k}\), the mapped embeddings are optimized to belong to different clusters. Defining \(K\) learnable embeddings \(\hat{c}_{k}\) representing the cluster centers, we train the MLPs by optimizing the following objectives:
\[\mathcal{L}_{intra}^{a}=\frac{1}{K}\sum_{i=1}^{K}{(1-\langle\tilde{z}_{a}^{c_ {i}},\hat{c}_{i}\rangle)} \tag{8}\]
\[\mathcal{L}_{inter}^{a}=\frac{1}{K(K-1)}\sum_{i=1}^{K}\sum_{j=1,j\neq i}^{K}{\langle\tilde{z}_{a}^{c_{i}},\hat{c}_{j}\rangle} \tag{9}\]
\[\mathcal{L}_{reg}=\frac{1}{K(K-1)}\sum_{i=1}^{K}\sum_{j=1,j\neq i}{\langle \hat{c}_{i},\hat{c}_{j}\rangle} \tag{10}\]
where \(\langle\cdot,\cdot\rangle\) is the similarity between two embeddings, \(K\) is the number of styles, and \(a\) denotes either music or motion. The final loss is: \(\mathcal{L}_{M\leftrightarrow A}^{cluster}=\lambda_{1}\mathcal{L}_{intra}^{M}+\lambda_{2}\mathcal{L}_{inter}^{M}+\lambda_{3}\mathcal{L}_{intra}^{A}+\lambda_{4}\mathcal{L}_{inter}^{A}+\lambda_{5}\mathcal{L}_{reg}\).
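A per-batch sketch of Eqs. (8)-(10) with cosine similarity and learnable centers is given below; writing the intra/inter terms per sample rather than per cluster, and the default \(\lambda\) weights, are simplifying assumptions.

```python
# Clustering-based objective: pull each embedding towards its own style center,
# push it away from the other centers, and keep the centers apart (Eqs. 8-10).
import torch
import torch.nn.functional as F

def clustering_loss(z, labels, centers, lambdas=(1.0, 1.0, 1.0)):
    # z: (B, d) projected embeddings of one modality; labels: (B,) style indices;
    # centers: (K, d) learnable cluster-center embeddings c_hat.
    z, c = F.normalize(z, dim=-1), F.normalize(centers, dim=-1)
    sim = z @ c.t()                                          # (B, K) similarities
    B, K = sim.shape
    intra = (1.0 - sim[torch.arange(B), labels]).mean()      # Eq. (8)
    own = F.one_hot(labels, K).bool()
    inter = sim[~own].mean()                                 # Eq. (9)
    reg = (c @ c.t())[~torch.eye(K, dtype=torch.bool)].mean()  # Eq. (10)
    l1, l2, l3 = lambdas
    return l1 * intra + l2 * inter + l3 * reg

centers = torch.nn.Parameter(torch.randn(10, 256))           # K = 10 styles
loss = clustering_loss(torch.randn(8, 256), torch.randint(0, 10, (8,)), centers)
```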
**Classification Objectives.** In addition, we adopt a classification loss as an auxiliary objective. We use a linear layer to project the mapped embeddings in the joint space to a class probability distribution, and a cross-entropy loss is employed as supervision.
## 4 Experiments
We evaluate our method on the widely adopted, publicly available datasets AIST++ [25] and AIOZ-GDANCE [21]. We conduct thorough quantitative and qualitative analyses of several music-driven motion generation methods [25][38][55] to validate that our method is an appropriate design for music-dance style consistency assessment. We also build a benchmark (Tab. 2) using our method for future research.
### Implementation Details
**Data Preprocessing.** For motion sequences, we adopt the SMPL representation [27]. We represent each pose frame as a 75-D vector, where the first 3 dimensions are the root trajectory, dimensions 4-6 are the root orientation in rotation-vector format, and the remaining 69 dimensions are the rotation vectors of each joint relative to its parent. For music sequences, we read the raw acoustic waveform data and resample it to 16 kHz. The window size of each training sequence is 160. We follow the standard protocol to split the data into training and validation sets.
**Motion Encoder.** For the motion auto-encoder, we adopt a transformer encoder as the encoder architecture and a transformer decoder as the decoder architecture. For both the encoder and decoder, the number of layers is 6, the number of attention heads is 4, and the hidden dimension is 768. We train the auto-encoder on AMASS [29], AIST++ [25] and AIOZ-GDANCE [21].
**Music-Dance Style Alignment.** For the mapping layer, we adopt a 2-layer MLP and project the input embedding to a 256-D embedding.
### Evaluation Metrics
To evaluate the effectiveness of our method, we define the following metrics: 1) _Style Classification Accuracy_ (**Acc.**): we estimate the style class of the music and motion embeddings in the joint space with a logits head. 2) _Style Retrieval_ (**Retr.**): we assume a good style evaluation model maps input sequences to embeddings that are well clustered in the latent space. Consequently, for either music streams or motion sequences, we encode them and calculate the distance between their embeddings and each cluster-center embedding; if the correct center embedding is among the \(k\) closest, the sample counts toward the Top-_k_ retrieval accuracy. 3) _Intra-Cluster Distance_ (**Intra.**): we measure the distance between the input embedding and the correct center embedding.
It is expected that the more stylistically consistent the input sequence is, the smaller this distance is. 4) _Inter-
Figure 4: **Visual Comparison of Music-Dance Style Consistency.** We compare the generated motion sequences conditioned on different music styles with the GTs. (a) **JB** means _Ballet Jazz_ style, (b) **JS** means _Street Jazz_ style, (c) **LH** means _LA Hiphop_ style, (d) **BR** means _Break_ style, (e) **LO** means _Lock_ style, (f) **MH** means _Middle Hiphop_ style, (g) **KR** means _Krump_ style, (h) **WA** means _Waacking_ style, (i) **HO** means _House_ style, (j) **PO** means _Pop_ style. For each dance, we take a 10-sec motion segment and evenly sample 10 frames. The dashed boxes indicate poses that are style-inconsistent with the GTs.
Cluster Distance_ (**Inter.**): in contrast to _Intra._, we expect the distance between the input sequence embedding and the incorrect center embeddings to be as large as possible; this metric measures how stylistically inconsistent the input is with misaligned styles. 5) _Intra-to-Inter Ratio_ (**I2I**): this metric relates _Intra._ and _Inter._ and is calculated as \(I2I=\frac{Intra.}{Inter.}\); it uses a single scalar to measure how consistent an input sequence is with the target style. The smaller the \(I2I\), the more stylistically consistent the input sequence. 6) _Embedding Similarity_ (**Simi.**): this metric measures the cosine similarity between the audio embedding \(z_{A}\) and the motion embedding \(z_{M}\); a higher score indicates higher style consistency.
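The sketch below illustrates how the distance-based metrics can be computed for a single generated dance, assuming cosine distance to the learned cluster centers as the distance measure.

```python
# Intra., Inter. and I2I for one motion embedding against the K style centers.
import torch
import torch.nn.functional as F

def style_consistency_metrics(z_motion, centers, target_style):
    # z_motion: (d,) dance embedding; centers: (K, d); target_style: int
    dist = 1.0 - F.cosine_similarity(z_motion.unsqueeze(0), centers)   # (K,)
    intra = dist[target_style]                            # distance to correct center
    K = centers.size(0)
    inter = dist[torch.arange(K) != target_style].mean()  # mean distance to the rest
    return {"Intra.": intra.item(), "Inter.": inter.item(),
            "I2I": (intra / inter).item()}

print(style_consistency_metrics(torch.randn(256), torch.randn(10, 256), 3))
```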
### Results
**Evaluation Results on GT.** We apply our method with the three different design variants on the test sets of AIST++ [25] and AIOZ-GDANCE [21]. The quantitative results are reported in Tab. 1. As we can see, if we choose to align the motion embedding \(z_{M}\) to the audio embedding \(z_{A}\), the model is not able to understand the music's style. This is because the assumption that the pretrained music encoder extracts semantically rich information for style understanding does not hold. Similarly, if we choose to map the audio embedding \(z_{A}\) to align with the motion embedding \(z_{M}\), the motion style is not understood well, because the assumption that the pretrained motion encoder captures style-representative embeddings does not hold either. The design that maps the audio embedding \(z_{A}\) and the motion embedding \(z_{M}\) into a joint embedding space, however, achieves high scores on both music and dance motion style prediction, retrieval, and consistency evaluation. This indicates that jointly learning the mappings \(f_{M\to J}\) and \(f_{A\to J}\) aligns \(z_{M}\) and \(z_{A}\) in a representative and informative latent space, and outperforms the other two cross-modality alignment variants.
**Evaluation Results on Generated Motions.** We apply our method to the generated dance motion sequences of three different methods [25][38][55], and conduct both quantitative and visual analyses to validate that our method is able to evaluate the music-dance style consistency. Fig. 5 shows the intra-cluster distances, inter-cluster distances, and intra-to-inter ratios of the generated motions of each method, organized by music style. Visual results of the motions generated by each method and the GTs are selected and shown in Fig. 4. For each dance sequence, we randomly crop a 10-sec segment at FPS=30, which is in
\begin{table}
[Table 1: Quantitative evaluation of the three alignment variants (\(f_{M\to A}\), \(f_{A\to M}\), and \(f_{M\to J}\)+\(f_{A\to J}\)) on the ground-truth test sets: music and motion style classification accuracy, Top-1/Top-3 retrieval, Intra., Inter., I2I, and Simi.]
\end{table}
total 300 frames, and evenly sample 10 frames for visualization. We use red dashed boxes to indicate the poses that are style-inconsistent with the driving music.
For example, the intra-to-inter ratio (**I2I**) for the **JB** style indicates that the MDSC (music-dance style consistency) among the three methods follows UDE [55] \(>\) Bailando [38] \(>\) FACT [25], but all methods generate highly consistent results. Correspondingly, Fig. 4(a) shows that although a few poses exhibit artifacts and style inconsistency, most of the poses are style-consistent. Another example is the **JS** style. In Fig. 5(c), the _I2I_ for **JS** shows that the style consistency among the three methods is Bailando [38] \(>\) FACT [25] \(>\) UDE [55]; among the three, the consistency of Bailando [38] is much higher than that of the other two methods, and the consistency of UDE [55] is relatively low. This conclusion can be drawn from the visual analysis in Fig. 4(b) as well. As we can see in Fig. 4(b), the dance style for **JS** music involves stretching both arms out in a T-pose and spreading the legs gently. For FACT [25], artifacts and inconsistent poses are identified by red dashed boxes; the inconsistent pose looks like a **JB** style rather than a **JS** style. The dance motion of UDE [55] presents more inconsistent poses; as shown in the figure, the last 3 poses present a swing style, which is obviously inconsistent with the desired **JS** style. These comparisons show that our method serves as a better metric for assessing the music-motion consistency in terms of music and dance style.
The evaluation results in Tab. 2 show a discrepancy with the user study reported in [55], in which the dances generated by [55] were preferred over those of [25, 38] by the participants. We argue that this is due to different evaluation focuses. In [55], participants were expected to pay more attention to motion quality, specifically whether the motion is smooth and natural, whereas the primary concern in Tab. 2 is the style consistency between the dances and the music. This also justifies the necessity of proposing a style consistency metric.
### Ablation
We conduct an ablation study to explore the effectiveness of the terms \(\mathcal{L}_{intra}\), \(\mathcal{L}_{inter}\), and \(\mathcal{L}_{reg}\). We report the quantitative comparison in Tab. 3. We design three experiments with different combinations of the clustering terms to validate their effectiveness; all experiments adopt the same setting: \(f_{M\to J}+f_{A\to J}\) with \(\mathcal{L}^{cluster}\). The first experiment uses only the \(\mathcal{L}_{intra}\) term, the second adopts \(\mathcal{L}_{intra}+\mathcal{L}_{inter}\), and the third takes the full \(\mathcal{L}^{cluster}\) as its training objective. As we can see, the model trained with only \(\mathcal{L}_{intra}\) does not learn a representative encoding capability: its estimation of music style is comparatively poor, the style retrieval accuracy for both music and motion is much lower than in the other experiments, and it attains the worst _I2I_ and _Simi._ scores. The \(\mathcal{L}_{inter}\) term affects the model's capability considerably: the model trained with \(\mathcal{L}_{intra}+\mathcal{L}_{inter}\) is much better at estimating the style, retrieving the correct style from the embedding, and clustering an input sequence into the correct style cluster. The impact of \(\mathcal{L}_{reg}\) is not as large as that of \(\mathcal{L}_{inter}\), but it still improves the performance of the model.
We also run an ablation to explore the effectiveness of the learning strategy. As we adopt a clustering-based objective, there are two alternatives: 1) the number of clusters is unknown, or 2) the number of clusters is known. For option 2), the cluster centers \(\hat{c}\) are learned jointly to facilitate the learning of our method. We evaluate the models trained with the two strategies on AIST++ [25] and report the results in Tab. 4. As we can see, learning without knowing the number of clusters performs worse.
## 5 Conclusion
In this paper, we propose **MDSC**, the first method for measuring music-motion style consistency. We adopt pretrained encoders for the music and motion embeddings, and use MLPs to align them in a joint latent space. We learn the mapping using a clustering-based objective instead of a contrastive-based objective. We conduct thorough experiments to validate that our method is able to assess the music-dance style consistency, and we provide benchmarks in Tab. 2 on three different music-driven methods.
\begin{table}
\begin{tabular}{c|c c c} \hline Method & Music Acc. \(\uparrow\) & Motion Acc. \(\uparrow\) & Simi. \(\uparrow\) \\ \hline w/o \(\hat{c}\) & 53.40\% & 92.20\% & 0.46 \\ w/ \(\hat{c}\) & 57.60\% & 94.20\% & 0.47 \\ \hline \end{tabular}
\end{table}
Table 4: **Ablation on Learning Strategy.** We train our method using design option \(f_{M\to J}\)+\(f_{A\to J}\) with different learning strategies. 1) w/o \(\hat{c}\) means we train the model without learnable cluster-center embeddings \(\hat{c}\), assuming the number of clusters is unknown. 2) w/ \(\hat{c}\) assumes the number of clusters is known. Best results are highlighted.
\begin{table}
\begin{tabular}{c c c|c c c c c c|c c c c c c|c} \hline \multicolumn{3}{c|}{Losses} & \multicolumn{6}{c|}{Music} & \multicolumn{6}{c|}{Motion} & \\ \hline \(\mathcal{L}_{intra}\) & \(\mathcal{L}_{inter}\) & \(\mathcal{L}_{reg}\) & Acc. \(\uparrow\) & Top-1 Retr. \(\uparrow\) & Top-3 Retr. \(\uparrow\) & Intra. \(\downarrow\) & Inter. \(\uparrow\) & I2I \(\downarrow\) & Acc. \(\uparrow\) & Top-1 Retr. \(\uparrow\) & Top-3 Retr. \(\uparrow\) & Intra. \(\downarrow\) & Inter. \(\uparrow\) & I2I \(\downarrow\) & Simi. \(\uparrow\) \\ \hline ✓ & & & 45.60\% & 13.60\% & 27.20\% & 1.42 & 1.41 & 1.00 & 91.00\% & 0.40\% & 1.04\% & 1.42 & 1.42 & 1.00 & 0.11 \\ ✓ & ✓ & & 59.20\% & 59.20\% & 66.40\% & 0.75 & 1.32 & 0.99 & 92.80\% & 92.40\% & 99.60\% & 0.23 & 1.39 & 0.17 & 0.42 \\ ✓ & ✓ & ✓ & 57.60\% & 58.40\% & 76.00\% & 0.83 & 1.43 & 0.99 & 94.20\% & 93.40\% & 99.80\% & 0.24 & 1.48 & 0.16 & 0.47 \\ \hline \end{tabular}
\end{table}
Table 3: **Ablation on loss terms.** We train our method using design option \(f_{M\to J}\)+\(f_{A\to J}\) with the learning objective \(\mathcal{L}^{cluster}\), and compare the quantitative results of training with 1) \(\mathcal{L}_{intra}\) only, 2) \(\mathcal{L}_{intra}\)+\(\mathcal{L}_{inter}\), and 3) the full clustering-based loss \(\mathcal{L}_{intra}\)+\(\mathcal{L}_{inter}\)+\(\mathcal{L}_{reg}\).
2310.14148 | The Boosted DC Algorithm for Clustering with Constraints | This paper aims to investigate the effectiveness of the recently proposed
Boosted Difference of Convex functions Algorithm (BDCA) when applied to
clustering with constraints and set clustering with constraints problems. This
is the first paper to apply BDCA to a problem with nonlinear constraints. We
present the mathematical basis for the BDCA and Difference of Convex functions
Algorithm (DCA), along with a penalty method based on distance functions. We
then develop algorithms for solving these problems and computationally
implement them, with publicly available implementations. We compare old
examples and provide new experiments to test the algorithms. We find that the
BDCA method converges in fewer iterations than the corresponding DCA-based
method. In addition, BDCA yields faster CPU running-times in all tested
problems. | Tuyen Tran, Kate Figenschou, Phan Tu Vuong | 2023-10-22T02:08:18Z | http://arxiv.org/abs/2310.14148v1 | # The Boosted DC Algorithm for Clustering with Constraints
###### Abstract
This paper aims to investigate the effectiveness of the recently proposed Boosted Difference of Convex functions Algorithm (BDCA) when applied to clustering with constraints and set clustering with constraints problems. This is the first paper to apply BDCA to a problem with nonlinear constraints. We present the mathematical basis for the BDCA and Difference of Convex functions Algorithm (DCA), along with a penalty method based on distance functions. We then develop algorithms for solving these problems and computationally implement them, with publicly available implementations. We compare old examples and provide new experiments to test the algorithms. We find that the BDCA method converges in fewer iterations than the corresponding DCA-based method. In addition, BDCA yields faster CPU running-times in all tested problems.
keywords: Clustering, DC Programming, Difference of Convex Functions Algorithm, Boosted Difference of Convex Functions Algorithm Msc: [2020] 65K05, 65K10, 90C26, 49J52
## 1 Introduction
In the field of mathematical optimization, the properties of convex functions and convex sets have allowed the development of numerical algorithms that solve problems efficiently. Generally, the aim of an optimization problem is to minimize an objective function with respect to some constraints in order to find the best possible (and hence smallest) objective values. A local minimum is at least as good as any nearby feasible point, while a global minimum is at least as good as every feasible point. Many objective functions have several local minima, which makes identifying global minima difficult, but the properties of convex functions are particularly useful in this context: for a convex function, any local minimum is also a global minimum.
However, while convex functions can be useful modeling tools, most real-world problems are non-convex. Such problems are generally more complicated and difficult; indeed, most non-convex optimization problems are NP-hard. Non-convexity presents many challenges, in particular the presence of both local and global minima, and the lack of identifiable characteristics for global minima greatly increases the computational complexity. Various methods have been developed to tackle these types of problems, which can broadly be split into global and local approaches. Global approaches, such as branch and bound, are very expensive (especially for large-scale problems) but are able to guarantee the globality of the solution. Local approaches, meanwhile, are faster and cheaper, but their solutions cannot generally be proven to be global. Even local approaches struggle to be effective at large scale, so the challenge of developing algorithms which balance quality and scalability is complex.
DC programming is a class of non-convex optimization problems whose objective function is a Difference of Convex (DC) function. The approach uses the convexity of the DC components and duality to make solving the non-convex problem easier. It has been shown to be robust and efficient in many applications, including large-scale problems, and is relatively simple to use and implement [1, 2]. For the last 30 years, the DC Algorithm (DCA) has been the method of choice for solving DC programs. The computed solutions cannot be guaranteed to be global, as the DCA converges to a local solution; however, in experiments the DCA often converges to a global solution [2]. The method was initially introduced by Pham Dinh Tao in 1985, following on from his work on subgradient algorithms for convex maximization programming, and was further developed by Le Thi Hoai An and Pham Dinh Tao in the 1990s [3]. In the following years, the DCA was applied to many different topics, especially machine learning and data mining problems, becoming increasingly popular.
Following on from the success and popularity of DC programming, new algorithms based on the DCA have been proposed. The Boosted Difference of Convex functions Algorithm (BDCA) is one of these new methods, introduced to accelerate the convergence of the classical DCA [4]. More importantly, the BDCA can escape from bad local solutions thanks to its line search step with arbitrarily large trial stepsizes. Numerical experiments have shown that the BDCA is able to outperform the DCA in problems such as Minimum Sum-of-Squares Clustering (MSSC) and \(l_{\infty}\)-trust-region subproblems [5]. Whether the BDCA can be successfully applied in other settings is a topic of ongoing research, and the aim of this work is to investigate the use of the BDCA when applied to clustering problems with constraints.
Clustering is a common statistical data analysis method which aims to group similar objects together into clusters. There are many different approaches to defining the similarity of objects and how they are assigned to clusters, and generally no single algorithm will be correct for a given task. Centroid-based algorithms represent clusters by a central vector, assigning points to clusters based on proximity to the cluster centers under a proximity metric. The k-means algorithm, a centroid-based approach with a fixed number of \(k\) clusters, uses the squared Euclidean distance and is one of the most widely known
methods. However, though it is simple and easy to implement, it suffers from certain weaknesses, including a high dependence on the initial choice of cluster centers as well as on the proximity measure, and the algorithm has no guarantee of convergence to a globally optimal solution. Much research has focused on alternative algorithms to k-means which alleviate its drawbacks, including DC programming based approaches.
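For reference, a minimal sketch of the standard Lloyd iteration underlying k-means is given below; it is included purely to illustrate the baseline that the DC-programming-based approaches discussed next aim to improve upon.

```python
# Lloyd's k-means: assign points to the nearest center under squared Euclidean
# distance, then recompute each center as the mean of its assigned points.
import numpy as np

def kmeans(A, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = A[rng.choice(len(A), size=k, replace=False)]
    labels = np.zeros(len(A), dtype=int)
    for _ in range(iters):
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (m, k)
        labels = d2.argmin(axis=1)
        new_centers = np.array([A[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

centers, labels = kmeans(np.random.default_rng(1).normal(size=(100, 2)), k=3)
```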
_The development of DC programming and DCA_
Pham Dinh Tao and Le Thi Hoai An, who have extensively developed DC programming and the DCA, have published many papers on the topic. Their 1997 paper "DC programming and DCA: Theory, Algorithm, Applications" presented the most complete study of DC programming and the DCA at that point [2]. It described key components of the topic, including DC duality, local optimality conditions, convergence properties of the DCA, and its theoretical basis. Alongside new and significant mathematical results, the paper presented numerical experiments on real-life problems which proved the effectiveness of the DCA compared to known algorithms. The extensive content of the paper made it a highly important source for further work on the topic.
Following on from the development of DC programming in the late 20\({}^{\text{th}}\) century, the early 2000s saw increasing applications of the DCA, especially in machine learning and related fields. The first paper to investigate the use of the DCA for clustering focused on a K-median clustering problem and the K-means algorithm [6]. When tested on real-world databases, both of the DCA methods presented in the paper achieved better objective values than the classic clustering algorithms and were faster, too. The success and efficiency of the methods showed that clustering was an area of interest for further research into the DCA and led to further papers looking into different clustering types, including minimum sum-of-squares clustering (MSSC), fuzzy clustering and hierarchical clustering.
The following years saw DC programming and the DCA become classical, with applications across a wide variety of fields, and investigation into the use of the DCA to solve clustering problems continued. The clustering and multifacility location problems with constraints described in this paper were originally developed in [7]. That paper presented the mathematical basis for the DC decompositions of each model, including the use of a penalty method based on the squared distance function. Numerical examples were presented for the models, though extensive testing on the effectiveness of the methods was not included.
As the DCA became increasingly popular, research was performed on methods to accelerate its performance, and recently the BDCA was proposed. The addition of a line search step in the BDCA has been proven to accelerate the convergence of the DCA. Moreover, it has been observed that the BDCA can escape from bad local solutions, which cannot happen with the DCA. More details about the increased performance, even in high dimensions, can be found in [4]. Further developments on the use of the BDCA for linearly constrained problems [8] and nonsmooth functions [5] have continued to show the effectiveness of the BDCA in various
applications. The BDCA has been applied to Minimum Sum-of-Squares Clustering, where it was on average 16 times faster than the DCA [5]. Further extensions of BDCA can be found in [9; 10]. The aim of this paper is to explore whether the BDCA is also effective for constrained clustering problems.
In this paper, we investigate the performance of the BDCA against the DCA for solving clustering problems with constraints and set clustering problems. The DCA methods for solving these problems were originally developed in [7], and this work follows on from the work there. The work is split between studying the mathematical basis for the algorithms presented and testing implementations of these algorithms in MATLAB.
The paper is structured as follows. First, section 2 presents some basic mathematical tools of convex analysis, followed by an explanation of the DCA and BDCA in section 3. Next, the penalty method is reviewed in section 4. Section 5 lays out the first clustering with constraints problem, while in section 6 we study a model of clustering with constraints involving sets. The numerical tests performed are explained in section 7. Finally, section 8 summarizes the findings of the paper as well as future research directions.
## 2 Preliminaries
In this section, we present basic tools of analysis and optimization. The readers are referred to [6; 11; 12; 13] for more details and proofs of the presented results.
Let us define \(\overline{\mathbb{R}}\coloneqq\mathbb{R}\cup\{+\infty\}=(-\infty,\infty]\) and let \(f\colon\mathbb{R}^{d}\to\overline{\mathbb{R}}\) be a convex function. An element \(v\in\mathbb{R}^{d}\) is called a _subgradient_ of \(f\) at \(\bar{x}\in\operatorname{dom}\left(f\right)=\{x\in\mathbb{R}^{d}\mid f(x)<\infty\}\) if it satisfies
\[\langle v,x-\bar{x}\rangle\leq f(x)-f(\bar{x})\;\text{for all}\;x\in\mathbb{R }^{d}. \tag{2.1}\]
The set of all such elements \(v\) is called the _subdifferential_ of \(f\) at \(\bar{x}\) and is denoted by \(\partial f(\bar{x})\). If \(\bar{x}\not\in\operatorname{dom}\left(f\right)\), we set \(\partial f(\bar{x})=\emptyset\). Subdifferentials possess many calculus rules that are important in practice. In particular, for a finite number of convex functions \(f_{i}\colon\mathbb{R}^{d}\to\overline{\mathbb{R}}\), \(i=1,\ldots,m\), we have the following sum rule:
\[\partial(f_{1}+\cdots+f_{m})(\bar{x})=\partial f_{1}(\bar{x})+\cdots+\partial f _{m}(\bar{x})\;\text{for all}\;\bar{x}\in\mathbb{R}^{d} \tag{2.2}\]
provided that \(\bigcap_{i=1}^{m}\operatorname{ri}(\operatorname{dom}\left(f_{i}\right))\neq\emptyset\), where \(\operatorname{ri}(\Omega)\) denotes the _relative interior_ of \(\Omega\); see, e.g., [12, Definition 1.68].
If \(f=\max_{i=1,\ldots,m}f_{i}\) and each \(f_{i}\) is continuous at \(\bar{x}\in\mathbb{R}^{d}\), then we have the following maximum rule:
\[\partial f(\bar{x})=\operatorname{conv}\left(\bigcup_{i\in I(\bar{x})} \partial f_{i}(\bar{x})\right), \tag{2.3}\]
where \(I(\bar{x})=\{i\mid f_{i}(\bar{x})=f(\bar{x})\}\), and \(conv\) is the _convex hull_.
Given a nonempty closed convex subset \(\Omega\) of \(\mathbb{R}^{d}\) with \(\bar{x}\in\Omega\), the _normal cone_ to \(\Omega\) at \(\bar{x}\) is defined by
\[N(\bar{x};\Omega)=\big{\{}v\in\mathbb{R}^{d}\ \big{|}\ \langle v,x-\bar{x}\rangle \leq 0\text{ for all }x\in\Omega\big{\}}. \tag{2.4}\]
If \(\bar{x}\not\in\Omega\), we set \(N(\bar{x},\Omega)=\emptyset\). It is well-known that an element \(\bar{x}\in\mathbb{R}^{d}\) is an absolute minimizer of a convex function \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) on \(\Omega\) if and only if \(\bar{x}\) is a local minimizer of \(f\) on \(\Omega\). Moreover, this happens if and only if the following optimality condition holds:
\[0\in\partial f(\bar{x})+N(\bar{x};\Omega).\]
Let \(\Theta\subset\mathbb{R}^{d}\) be a nonempty set (not necessarily convex). The _distance function_ to \(\Theta\) is defined by
\[d(x;\Theta)=\inf\big{\{}\|x-w\|\ \big{|}\ w\in\Theta\big{\}}\,,\quad x\in\mathbb{R}^{d}.\]
The _Euclidean projection_ from \(x\in\mathbb{R}^{d}\) to \(\Theta\) is the set
\[P(x;\Theta)=\big{\{}w\in\Theta\ \big{|}\ d(x;\Theta)=\|x-w\|\big{\}}.\]
There are two important properties of the Euclidean projection. First, if \(\Theta\) is a nonempty closed set, then \(P(x;\Theta)\) is nonempty and is a singleton if \(\Theta\) is also convex. Second, if \(\Theta\) is a convex set and \(w\in P(x;\Theta)\), then \(x-w\in N(w;\Theta)\).
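Since the constraint sets used later in this paper are balls and boxes, whose Euclidean projections have simple closed forms, the following minimal NumPy sketch (our own illustration; the function names are not from [7]) shows how such projections can be evaluated.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection of x onto the closed ball B(center, radius)."""
    v = x - center
    n = np.linalg.norm(v)
    return x.copy() if n <= radius else center + radius * v / n

def project_box(x, lower, upper):
    """Euclidean projection of x onto the box [lower, upper] (componentwise clipping)."""
    return np.clip(x, lower, upper)

# Example: project (50, 70) onto the box [20, 40] x [40, 60] used later in example 1.
x = np.array([50.0, 70.0])
print(project_box(x, np.array([20.0, 40.0]), np.array([40.0, 60.0])))   # -> [40. 60.]
```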
Another tool we will use is the notion of _Fenchel conjugates_. Let \(g\colon\mathbb{R}^{d}\to\mathbb{R}\) be a function (not necessarily convex). The Fenchel conjugate of \(g\) is defined by
\[g^{*}(y)=\sup\big{\{}\langle y,x\rangle-g(x)\ \big{|}\ x\in\mathbb{R}^{d} \big{\}}\,,\ y\in\mathbb{R}^{d}.\]
Note that \(g^{*}\colon\mathbb{R}^{d}\to\overline{\mathbb{R}}\) is an extended-real-valued convex function. Suppose further that \(g\) is itself convex; then the _Fenchel-Moreau theorem_ states that \((g^{*})^{*}=g\). Based on this theorem, we have the following relation between the subgradients of \(g\) and its Fenchel conjugate:
\[x\in\partial g^{*}(y)\iff y\in\partial g(x). \tag{2.5}\]
The notions of subgradient and Fenchel conjugate provide the mathematical foundation for the DCA introduced in the next section. The following proposition gives us a two-sided relationship between the Fenchel conjugates and subgradients of convex functions.
**Proposition 1**.: _Let \(\varphi\colon\mathbb{R}^{d}\to\overline{\mathbb{R}}\) be a proper, lower semicontinuous, and convex function. Then \(v\in\partial\varphi^{*}(y)\) if and only if_
\[v\in\operatorname{argmin}\big{\{}\varphi(x)-\langle y,x\rangle\ \big{|}\ x\in\mathbb{R}^{d}\big{\}}.\]
_Furthermore, \(w\in\partial\varphi(x)\) if and only if_
\[w\in\operatorname{argmin}\big{\{}\varphi^{*}(y)-\langle x,y\rangle\ \big{|}\ y\in\mathbb{R}^{d}\big{\}}.\]
The proof of this proposition can be found in [14, Proposition 2.1].
Throughout this paper, we denote \(\mathbf{A}\in\mathbb{R}^{m\times d}\) as the _data matrix_. The \(i^{th}\) row is denoted \(a^{i}\in\mathbb{R}^{d}\) for \(i=1,\ldots,m\). Similarly, \(\mathbf{X}\in\mathbb{R}^{k\times d}\) is defined as the _variable matrix_ and the \(\ell^{th}\) row is denoted \(x^{\ell}\in\mathbb{R}^{d}\) for \(\ell=1,\ldots,k\). The linear space \(\mathbb{R}^{k\times d}\) is equipped with the inner product \(\left\langle\mathbf{X},\mathbf{Y}\right\rangle=\operatorname{trace}( \mathbf{X}^{T}\mathbf{Y})\).
Recall that the _Frobenius norm_ on \(\mathbb{R}^{k\times d}\) is defined by
\[\left\|\mathbf{X}\right\|_{F}=\left\langle\mathbf{X},\mathbf{X}\right\rangle^{ 1/2}=\left(\sum_{\ell=1}^{k}\langle x^{\ell},x^{\ell}\rangle\right)^{1/2}= \left(\sum_{\ell=1}^{k}\|x^{\ell}\|^{2}\right)^{1/2}.\]
Notice that the squared Frobenius norm is differentiable with the following representation
\[\nabla\left\|\mathbf{X}\right\|_{F}^{2}=2\mathbf{X}\text{ for }\mathbf{X}\in \mathbb{R}^{k\times d}.\]
For the constraint sets \(\Omega^{\ell}\) used in this paper, we adopt the same notation and assumptions as in [7]: the sets \(\Omega^{\ell}\subset\mathbb{R}^{d}\) for \(\ell=1,\ldots,k\) are nonempty closed convex sets, and their Cartesian product is defined as \(\mathbf{\Omega}=\Omega^{1}\times\Omega^{2}\times\ldots\times\Omega^{k}.\) For \(\mathbf{X}\in\mathbb{R}^{k\times d}\), the projection of \(\mathbf{X}\) onto \(\mathbf{\Omega}\) is the matrix \(\mathbf{Y}\) whose \(\ell^{th}\) row is \(y^{\ell}=P(x^{\ell};\Omega^{\ell}).\) The relationship between the distance function and the Frobenius norm can be written as
\[[d(\mathbf{X};\mathbf{\Omega})]^{2}=\|\mathbf{X}-\mathbf{Y}\|_{F}^{2}=\sum_{ \ell=1}^{k}\|x^{\ell}-y^{\ell}\|^{2}=\sum_{\ell=1}^{k}d(x^{\ell};\Omega^{\ell })^{2}.\]
## 3 DCA and BDCA
### The DCA
The notions of subgradients and Fenchel conjugates from the previous section provide the mathematical foundation for the DCA introduced below. Consider the difference of two convex functions \(g-h\) on a finite-dimensional space and assume that \(g\colon\mathbb{R}^{d}\to\overline{\mathbb{R}}\) is extended-real-valued while \(h\colon\mathbb{R}^{d}\to\mathbb{R}\) is real-valued on \(\mathbb{R}^{d}\). Then a general problem of _DC optimization_ is defined by:
\[\text{minimize }f(x):=g(x)-h(x),\quad x\in\mathbb{R}^{d}.\] ( \[\mathcal{P}\] )
The DCA introduced by Tao and An is a simple but effective algorithm for minimizing the function \(f\); see [2, 15].
```
procedure DCA\((x_{1},N\in\mathbb{N})\)
  for \(p=1,\ldots,N\) do
    Find \(y_{p}\in\partial h(x_{p})\)
    Find \(x_{p+1}\in\partial g^{*}(y_{p})\)
  output \(x_{N+1}\)
```
**Algorithm 1** N Iteration DCA
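For concreteness, a minimal Python sketch of Algorithm 1 is given below, assuming that an element of \(\partial g^{*}(y)\) can be evaluated in closed form; the callables `subgrad_h` and `grad_g_conj` are problem-specific placeholders (our naming), and the stopping rule mirrors the tolerance used later in section 7.

```python
import numpy as np

def dca(x, subgrad_h, grad_g_conj, n_iter=100, tol=1e-6):
    """Generic DCA loop (Algorithm 1) for f = g - h.

    subgrad_h(x)   : returns some y in the subdifferential of h at x
    grad_g_conj(y) : returns some x in the subdifferential of g* at y,
                     i.e. a minimizer of g(.) - <y, .>  (proposition 1)
    """
    for _ in range(n_iter):
        y = subgrad_h(x)
        x_new = grad_g_conj(y)
        if np.linalg.norm(x_new - x) < tol:   # stopping rule as used in section 7
            return x_new
        x = x_new
    return x
```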
To proceed further, recall that a function \(\varphi\colon\mathbb{R}^{d}\to\overline{\mathbb{R}}\) is \(\gamma\)-_convex_ with a given modulus \(\gamma\geq 0\) if the function \(\psi(x):=\varphi(x)-\frac{\gamma}{2}\|x\|^{2}\), \(x\in\mathbb{R}^{d}\), is convex on \(\mathbb{R}^{d}\). If there exists \(\gamma>0\) such that \(\varphi\) is \(\gamma\)-convex, then \(\varphi\) is called _strongly convex_ on \(\mathbb{R}^{d}\).
We also recall that a vector \(\bar{x}\in\mathbb{R}^{d}\) is a _stationary point/critical point_ of the DC function \(f\) from eq. (\(\mathcal{P}\)) if
\[\partial g(\bar{x})\cap\partial h(\bar{x})\neq\emptyset.\]
The next result, which can be derived from [2; 15], summarizes some convergence results of the DCA. Deeper studies of the convergence of this algorithm and its generalizations involving the Kurdyka-Lojasiewicz (KL) inequality are given in [16; 17].
**Theorem 1**.: _Let \(f\) be a DC function taken from (\(\mathcal{P}\)), and let \(\{x_{k}\}\) be an iterative sequence generated by Algorithm 1. The following assertions hold:_
1. _The sequence_ \(\{f(x_{k})\}\) _is always monotone decreasing._
2. _Suppose that_ \(f\) _is bounded from below, that_ \(g\) _is lower semicontinuous and_ \(\gamma_{1}\)_-convex, and that_ \(h\) _is_ \(\gamma_{2}\)_-convex with_ \(\gamma_{1}+\gamma_{2}>0\)_. If_ \(\{x_{k}\}\) _is bounded, then the limit of any convergent subsequence of_ \(\{x_{k}\}\) _is a stationary point of_ \(f\)_._
In many practical applications of Algorithm 1, for a given DC decomposition of \(f\) it is possible to find subgradient vectors from \(\partial h(x_{k})\) based on available formulas and calculus rules of convex analysis. However, it may not be possible to explicitly calculate an element of \(\partial g^{*}(y_{k})\). Such a situation requires either constructing a more suitable DC decomposition of \(f\), or finding \(x_{k+1}\in\partial g^{*}(y_{k})\) approximately by using the minimization criterion of proposition 1. This leads us to the following modified version of the DCA.
```
procedure DCA-2\((x_{1}\in\operatorname{dom}g,\ N\in\mathbb{N})\)
  for \(p=1,\ldots,N\) do
    Find \(y_{p}\in\partial h(x_{p})\)
    Find \(x_{p+1}\) approximately by solving the problem \(\underset{x\in\mathbb{R}^{d}}{\text{minimize}}\ \varphi_{p}(x):=g(x)-\langle y_{p},x\rangle\)
  output \(x_{N+1}\)
```
**Algorithm 2** N Iteration for **DCA-2**
### The BDCA
The Boosted DC Algorithm (BDCA) has been recently proposed to accelerate the performance of the DCA. The BDCA has an extra line search step at the point found by the DCA at each iteration. This allows the BDCA to take larger steps leading to a larger reduction of the objective value each iteration. The BDCA has also been found to escape bad local optima more easily than
the DCA, leading to better objective values as well as increased speed. This acceleration has been proved in the case where both \(g\) and \(h\) are differentiable [4], and when \(g\) is differentiable but \(h\) is not [5]; the problems in this paper fall under the latter case. In this section, the problem eq. \((\mathcal{P})\) and the assumptions made when applying the BDCA are presented. There are two such assumptions:
**Assumption 1:**_Both \(g\) and \(h\) are strongly convex with modulus \(\rho>0\)._
**Assumption 2:**_The function \(h\) is subdifferentiable at every point in \(\operatorname{dom}h\). So \(\partial h(x)\neq\emptyset\) for all \(x\in\operatorname{dom}h\). The function \(g\) is continuously differentiable on an open set containing \(\operatorname{dom}\left(h\right)\) and \(\inf\limits_{x\,\in\,\mathbb{R}^{d}}\,f(x)>-\,\infty\,\)._
Under these two assumptions, the following optimality condition holds.
**Theorem 2** (First-Order Necessary Optimality Condition).: _If \(x^{*}\in\operatorname{dom}\left(f\right)\) is an optimal solution of eq. \((\mathcal{P})\), then_
\[\partial h(x^{*})=\{\nabla g(x^{*})\}. \tag{3.1}\]
The proof of this theorem can be found as Theorem 3 in [18]. Any point satisfying eq. (3.1) is a stationary point of eq. \((\mathcal{P})\). We say that \(\overline{x}\) is a critical point of eq. \((\mathcal{P})\) if
\[\nabla g(\overline{x})\in\partial h(\overline{x}).\]
Every stationary point \(x^{*}\) is a critical point, but in general the converse is not true.
A general form of the BDCA is presented below.
```
procedure BDCA\((x_{1},\ \alpha>0,\ \beta\in(0,1),\ N\in\mathbb{N})\)
  for \(k=1,\ldots,N\) do
    Step 1: Select \(u_{k}\in\partial h(x_{k})\) and solve the strongly convex optimization problem \(\min\limits_{x\,\in\,\mathbb{R}^{d}}\ \varphi_{k}(x):=g(x)-\langle u_{k},x\rangle\) to obtain its unique solution \(y_{k}\).
    Step 2: Set \(d_{k}:=y_{k}-x_{k}\). If \(d_{k}=0\) then return \(x_{k}\), else go to Step 3.
    Step 3: Choose any \(\overline{\lambda}_{k}\geq 0\), set \(\lambda_{k}=\overline{\lambda}_{k}\).
            while \(f(y_{k}+\lambda_{k}d_{k})>f(y_{k})-\alpha\lambda_{k}^{2}\|d_{k}\|^{2}\) do \(\lambda_{k}=\beta\lambda_{k}\)
    Step 4: Set \(x_{k+1}=y_{k}+\lambda_{k}d_{k}\), and \(k=k+1\)
  output \(x_{N+1}\)
```
**Algorithm 3** BDCA for solving eq. \((\mathcal{P})\)
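The following Python sketch (ours, not a transcription of any implementation in this paper) mirrors algorithm 3; `f`, `subgrad_h`, and `solve_subproblem` are problem-specific callables, and the default \(\alpha\), \(\beta\), and trial step size match the values used in the experiments of section 7.

```python
import numpy as np

def bdca(x, f, subgrad_h, solve_subproblem,
         alpha=0.05, beta=0.1, lam_bar=2.0, n_iter=100, tol=1e-6):
    """Sketch of algorithm 3 for f = g - h.

    solve_subproblem(u) : returns the unique minimizer of g(.) - <u, .>  (Step 1)
    """
    for _ in range(n_iter):
        u = subgrad_h(x)                      # Step 1: u in the subdifferential of h
        y = solve_subproblem(u)
        d = y - x                             # Step 2: DCA direction
        if np.linalg.norm(d) < tol:           # practical stand-in for the d_k = 0 test
            return x
        lam = lam_bar                         # Step 3: backtracking line search
        while f(y + lam * d) > f(y) - alpha * lam**2 * np.vdot(d, d):
            lam *= beta
        x = y + lam * d                       # Step 4
    return x
```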
We can see that if \(\lambda_{k}=0\), then the iterations of the BDCA and the DCA are the same. Therefore, convergence results for the BDCA apply in particular to the DCA. The following proposition shows that \(d_{k}\) is indeed a descent direction for \(f\) at \(y_{k}\).
**Proposition 2**.: _For all \(k\in\mathbb{N}\), the following hold:_
1. \(f(y_{k})\leq f(x_{k})-\rho\|d_{k}\|^{2}\)__
2. \(f^{\prime}(y_{k}\,;\,d_{k})\leq-\rho\|d_{k}\|^{2}\)__
3. _there is some_ \(\delta_{k}>0\) _such that_ \[f(y_{k}+\lambda\,d_{k})\leq f(y_{k})-\alpha\,\lambda^{2}\,\|d_{k}\|^{2},\quad \text{for all }\lambda\in[0,\delta_{k}],\] _so that the backtracking step 3 of algorithm_ 3 _terminates finitely._
The proof of proposition 2 can be found as Proposition 3.1 in [5]. With proposition 2 it can be shown that the BDCA results in a larger decrease of the objective function than the DCA at each iteration. The work presented in [5] provides the full mathematical background to the BDCA and the improvement of the performance of DCA in relevant applications.
The first convergence result of the iterative sequence generated by BDCA is presented in the following theorem.
**Theorem 3**.: _For any \(x_{0}\in\mathbb{R}^{m}\), either BDCA returns a critical point of eq. \((\mathcal{P})\) or it generates an infinite sequence such that the following holds._
1. \(f(x_{k})\) _is monotonically decreasing and convergent to some_ \(f(x^{*})\)_._
2. _Any limit point of_ \(\{x_{k}\}\) _is a critical point of eq._ \((\mathcal{P})\)_. If, in addition,_ \(f\) _is coercive, then there exists a subsequence of_ \(\{x_{k}\}\) _which converges to a critical point of eq._ \((\mathcal{P})\)_._
3. \(\sum_{k=0}^{+\infty}\|d_{k}\|^{2}<+\infty.\) _Further, if there is some_ \(\overline{\lambda}\) _such that_ \(\lambda_{k}\leq\overline{\lambda}\) _for all_ \(k\)_, then_ \(\sum_{k=0}^{+\infty}\|x_{k+1}-x_{k}\|^{2}<+\infty\)_._
More details of the proof and the convergence under the Kurdyka-Lojasiewicz property can be found in [5, Section 4].
## 4 A Penalty Method via Distance Functions
In this section, we review a penalty method that was introduced in [7] using distance functions for solving constrained optimization problems and then apply them to DC programming. The _quadratic penalty method_ was utilized for this technique; see [19; 20]. Detailed proofs for theorems and propositions below can be found in [7].
We first restate the problem of interest here. Let \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) be a function and let \(\Omega_{i}\) for \(i=1,\ldots,q\) be nonempty closed subsets of \(\mathbb{R}^{d}\) with \(\bigcap_{i=1}^{q}\Omega_{i}\neq\emptyset\). Consider the optimization problem:
\[\begin{array}{ll}\min&f(x)\\ \text{subject to}&x\in\bigcap_{i=1}^{q}\Omega_{i}.\end{array} \tag{4.1}\]
The above problem can be rewritten as an unconstrained version given by
\[\min\,f_{\lambda}(x)=f(x)+\frac{\lambda}{2}\sum_{i=1}^{q}[d(x;\Omega_{i})]^{2}, \;x\in\mathbb{R}^{d}. \tag{4.2}\]
The following theorem provides a relation between optimal solutions of the problem eq. (4.1) and problem eq. (4.2) which is obtained by a penalty method based on distance functions. Here \(\lambda\) is the penalty term. The proof follows [20, Theorem 17.1].
**Theorem 4**.: _Consider eq. (4.1) in which \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) is a l.s.c. function. Suppose that eq. (4.1) has an optimal solution. If \(\lim_{n\to\infty}\lambda_{n}=\infty\) and \(x_{n}\in\mathbb{R}^{d}\) is an absolute minimizer of the function \(f_{\lambda_{n}}\) defined in eq. (4.2) for all \(n\in\mathbb{N}\), then every subsequential limit of \(\{x_{n}\}\) is a solution of eq. (4.1)._
Now, let \(F\colon\mathbb{R}^{k\times d}\to\mathbb{R}\) be a function and let \(\Omega_{i}^{\ell}\) for \(\ell=1,\ldots,k\) and \(i=1,\ldots,q\) be nonempty closed subsets of \(\mathbb{R}^{d}\). We consider the extended version of eq. (4.1)
\[\begin{array}{ll}\min&F(x^{1},\ldots,x^{k})\\ \mbox{subject to}&x^{\ell}\in\bigcap_{i=1}^{q}\Omega_{i}^{\ell}\;,x^{\ell}\in \mathbb{R}^{d}\;\mbox{for}\;\ell=1,\ldots,k.\end{array} \tag{4.3}\]
The unconstrained version of eq. (4.3) is then given by
\[\begin{array}{ll}\min&F_{\lambda}(x^{1},\ldots,x^{k})=F(x^{1},\ldots,x^{k}) +\frac{\lambda}{2}\sum\limits_{\ell=1}^{k}\sum\limits_{i=1}^{q}[d(x^{\ell}; \Omega_{i}^{\ell})]^{2}\\ &x^{\ell}\in\mathbb{R}^{d}\;\mbox{for}\;\ell=1,\ldots,k.\end{array} \tag{4.4}\]
We denote \(\mathbf{X}=(x^{1},\ldots,x^{k})\in\mathbb{R}^{k\times d}\) and its \(\ell^{\text{th}}\) row as \(x^{\ell}\) for \(\ell=1,\ldots,k\).
**Corollary 1**.: _Consider eq. (4.3) in which \(F\colon\mathbb{R}^{k\times d}\to\mathbb{R}\) is a l.s.c. function. Suppose that eq. (4.3) has an optimal solution. If \(\lim_{n\to\infty}\lambda_{n}=\infty\) and \(X_{n}=(x_{n}^{1},\ldots,x_{n}^{k})\in\mathbb{R}^{k\times d}\) is an absolute minimizer of the function \(F_{\lambda_{n}}\), then every subsequential limit of \(\{X_{n}\}\) is a solution of eq. (4.3)._
Next, we recall a known result on DC decompositions of squared distance functions. The proof can be found in [21, Proposition 5.1].
**Proposition 3**.: _Let \(\Omega\) be a nonempty closed set in \(\mathbb{R}^{d}\) (not necessarily convex). Define the function_
\[\varphi_{\Omega}(x)=\sup\big{\{}\langle 2x,w\rangle-\|w\|^{2}\;\big{|}\;w\in \Omega\big{\}}=2\sup\big{\{}\langle x,w\rangle-\frac{1}{2}\|w\|^{2}\;\big{|}\; w\in\Omega\big{\}}.\]
_Then we have the following conclusions:_
**(i)** _The function \(\varphi_{\Omega}\) is always convex. If we assume in addition that \(\Omega\) is convex, then \(\varphi_{\Omega}\) is differentiable with \(\nabla\varphi_{\Omega}(x)=2P(x;\Omega)\)._
**(ii)** _The function \(f(x)=[d(x;\Omega)]^{2}\) is a DC function with \(f(x)=\|x\|^{2}-\varphi_{\Omega}(x)\) for all \(x\in\mathbb{R}^{d}\)._
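As a quick numerical illustration of proposition 3 (our own sketch, using the box constraint of example 1 below and an arbitrary test point), the identity \([d(x;\Omega)]^{2}=\|x\|^{2}-\varphi_{\Omega}(x)\) can be checked directly, since the supremum defining \(\varphi_{\Omega}\) is attained at \(w=P(x;\Omega)\):

```python
import numpy as np

lo, hi = np.array([20.0, 40.0]), np.array([40.0, 60.0])   # the box of example 1
x = np.array([50.0, 70.0])

p = np.clip(x, lo, hi)            # P(x; Omega): projection onto a box is componentwise clipping
phi = 2.0 * x @ p - p @ p         # the sup defining phi_Omega(x) is attained at w = P(x; Omega)
dist_sq = np.sum((x - p) ** 2)    # [d(x; Omega)]^2

assert np.isclose(x @ x - phi, dist_sq)   # f(x) = ||x||^2 - phi_Omega(x)
```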
Furthermore, the proof of the following proposition can be found in [7, Section 3].
**Proposition 4**.: _Consider eq. (4.1) where additionally \(f\) is a DC function, and all constraint sets are convex sets that satisfy \(\bigcap_{i=1}^{q}\operatorname{ri}(\Omega_{i})\neq\emptyset\). Suppose that \(\lim_{n\to\infty}\lambda_{n}=\infty\) and \(x_{n}\) is a critical point of the DC function \(f_{\lambda_{n}}\) defined in eq. (4.2). Then every subsequential limit of the sequence \(\{x_{n}\}\) is a critical point of eq. (4.1)._
A similar development for relating the problems eqs. (4.3) and (4.4) is shown in [7, Section 3].
## 5 Clustering with Constraints
In this section, we review the problem of _clustering with constraints_ in [7]. The squared Euclidean norm is used for measuring distance. The task is to find \(k\) centers \(x^{1},\ldots,x^{k}\in\mathbb{R}^{d}\) for \(m\) data points \(a^{1},\ldots,a^{m}\in\mathbb{R}^{d}\), with the constraint that each \(x^{\ell}\in\bigcap_{i=1}^{q}\Omega_{i}^{\ell}\) for some nonempty closed convex sets \(\Omega_{i}^{\ell}\subset\mathbb{R}^{d}\) with \(\ell=1,\ldots,k\). Without loss of generality, we can assume that the number of constraints is the same for each center. The problem of interest is given by
\[\begin{array}{ll}\min&\psi(x^{1},\ldots,x^{k})=\sum_{i=1}^{m}\min_{\ell=1, \ldots,k}\|x^{\ell}-a^{i}\|^{2}\\ \mbox{subject to}&x^{\ell}\in\bigcap_{j=1}^{q}\Omega_{j}^{\ell}\mbox{ for }\ell=1,\ldots,k.\end{array} \tag{5.1}\]
As discussed in the previous section, the unconstrained minimization problem is given by
\[\begin{array}{ll}\min&f(x^{1},\ldots,x^{k})=\ \frac{1}{2}\sum_{i=1}^{m} \min_{\ell=1,\ldots,k}\|x^{\ell}-a^{i}\|^{2}+\frac{\tau}{2}\sum_{\ell=1}^{k} \sum_{i=1}^{q}[d(x^{\ell};\Omega_{i}^{\ell})]^{2},\\ &x^{1},\ldots,x^{k}\in\mathbb{R}^{d},\end{array} \tag{5.2}\]
where \(\tau>0\) is a penalty parameter.
Applying proposition 3 for any nonempty closed convex set \(\Omega\) in \(\mathbb{R}^{d}\), and using the _minimum-sum principle_ we obtain a DC decomposition of \(f\) as below
\[f(x^{1},\ldots,x^{k}) =\Big{(}\frac{1}{2}\sum_{i=1}^{m}\sum_{\ell=1}^{k}\|x^{\ell}-a^{ i}\|^{2}+\frac{\tau q}{2}\sum_{\ell=1}^{k}\|x^{\ell}\|^{2}\Big{)}\] \[-\Big{(}\frac{1}{2}\sum_{i=1}^{m}\max_{r=1,\ldots,k}\sum_{\ell=1, \ell\neq r}^{k}(\|x^{\ell}-a^{i}\|)^{2}+\frac{\tau}{2}\sum_{\ell=1}^{k}\sum_{ i=1}^{q}\varphi_{\Omega_{i}^{\ell}}(x^{\ell})\Big{)}.\]
Now, let \(f=g-h\) by denoting
\[g_{1}(x^{1},\ldots,x^{k}) =\frac{1}{2}\sum_{i=1}^{m}\sum_{\ell=1}^{k}\|x^{\ell}-a^{i}\|^{2}, g_{2}(x^{1},\ldots,x^{k}) =\frac{\tau q}{2}\sum_{\ell=1}^{k}\|x^{\ell}\|^{2}\,\] \[h_{1}(x^{1},\ldots,x^{k}) =\frac{1}{2}\sum_{i=1}^{m}\max_{r=1,\ldots,k}\sum_{\ell=1,\ell \neq r}^{k}\|x^{\ell}-a^{i}\|^{2}, h_{2}(x^{1},\ldots,x^{k}) =\frac{\tau}{2}\sum_{\ell=1}^{k}\sum_{i=1}^{q}\varphi_{\Omega_{i} ^{\ell}}(x^{\ell}),\]
and let \(g=g_{1}+g_{2}\) and \(h=h_{1}+h_{2}\). As discussed earlier, we shall collect \(x^{j}\) into the variable matrix \(\mathbf{X}\), \(a^{i}\) into the data matrix \(\mathbf{A}\), and let \(\mathbf{\Omega}_{i}=\Omega_{i}^{1}\times\Omega_{i}^{2}\times\ldots\times\Omega _{i}^{k}\in\mathbb{R}^{k\times d}\) for \(i=1,\ldots,q\).
It is clear that \(g\) is differentiable and its gradient is given by
\[\nabla g(\mathbf{X})=\nabla g_{1}(\mathbf{X})+\nabla g_{2}(\mathbf{X})=(m+ \tau q)\mathbf{X}-\mathbf{E}\mathbf{A}.\]
Here, \(\mathbf{E}\in\mathbb{R}^{k\times m}\) is the matrix of ones. More details can be found in [7]. Using eq. (2.5), we find
\[\mathbf{X}=\frac{\mathbf{Y}+\mathbf{E}\mathbf{A}}{m+\tau q}\ \in\partial g^{*}( \mathbf{Y}).\]
Next, we compute \(\mathbf{Y}_{p}\in\partial h(\mathbf{X}_{p})\) and obtain \(\mathbf{X}_{p+1}\). In [7], \(\mathbf{W}\in\partial h_{1}(\mathbf{X})\) is given by
\[\mathbf{W}=\sum_{i=1}^{m}\Big{(}\mathbf{X}-\mathbf{A}_{i}-e_{r(i )}(x^{r(i)}-a^{i})\Big{)}=m\mathbf{X}-\mathbf{E}\mathbf{A}-\sum_{i=1}^{m}e_{r( i)}(x^{r(i)}-a^{i}), \tag{5.3}\]
where \(\mathbf{A}_{i}\in\mathbb{R}^{k\times d}\) is the matrix whose rows are all equal to \(a^{i}\), \(r(i)\) is an index where the max happens for each \(i\), and \(e_{r}\) is the \(k\times 1\) column vector with a one in the \(r^{th}\) position and zeros otherwise. For \(h_{2}\), the matrix \(\mathbf{U}=\frac{1}{\tau}\nabla h_{2}(\mathbf{X})\) is the \(k\times d\) matrix whose rows are \(u^{j}=\sum_{i=1}^{q}P(x^{j};\Omega_{i}^{j})\). Setting \(\mathbf{Y}_{p}=\mathbf{W}+\tau\mathbf{U}\), we obtain \(\mathbf{Y}_{p}\in\partial h(\mathbf{X}_{p})\) at the \(p^{th}\) iteration. Hence, an explicit formula for the update of \(\mathbf{X}\) is
\[\mathbf{X}_{p+1}=\frac{1}{m+\tau q}\Big{(}m\mathbf{X}_{p}+\tau \mathbf{U}-\sum_{i=1}^{m}e_{r(i)}\big{(}x_{p}^{r(i)}-a^{i}\big{)}\Big{)},\]
where \(x_{p}^{\ell}\) denotes the \(\ell^{th}\) row of \(\mathbf{X}_{p}\).
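A compact Python sketch of this update is given below (our illustration; the interface `projections[i][l]`, a callable returning \(P(\,\cdot\,;\Omega_{i}^{\ell})\), is an assumption). Note that the index \(r(i)\) attaining the maximum in \(h_{1}\) is simply the center closest to \(a^{i}\).

```python
import numpy as np

def dca_step_clustering(X, A, projections, tau):
    """One DCA update for the penalized clustering model (5.2) -- our sketch.

    X : (k, d) current centers,  A : (m, d) data points
    projections[i][l] : callable returning P(. ; Omega_i^l), i = 0..q-1, l = 0..k-1
    """
    k = X.shape[0]
    m = A.shape[0]
    q = len(projections)

    # U: rows u^l = sum_i P(x^l; Omega_i^l)
    U = np.array([sum(projections[i][l](X[l]) for i in range(q)) for l in range(k)])

    # r(i): index attaining the max in h1, i.e. the center closest to a^i
    r = np.argmin(((X[None, :, :] - A[:, None, :]) ** 2).sum(axis=2), axis=1)

    # accumulate sum_i e_{r(i)} (x^{r(i)} - a^i) row-wise (repeated indices handled by add.at)
    S = np.zeros_like(X)
    np.add.at(S, r, X[r] - A)

    return (m * X + tau * U - S) / (m + tau * q)
```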
A summary of the DCA-based procedure can be found in [7, Algorithm 2]. The sensitivity with respect to \(\tau\) was also discussed there, together with the multiplying factor \(\sigma\) and the maximum value \(\tau_{f}\). In [7, Algorithm 3], an adaptive version of the DCA combined with this penalty parameter update is demonstrated.
In [7], \(g\) was shown to be differentiable, and a subgradient of \(h\) can be calculated explicitly, hence fulfilling the assumptions of the BDCA. Therefore, a BDCA-based procedure for solving eq. (5.2) is given in algorithm 4.
## 6 Set Clustering with Constraints
In this section, we revisit a model of _set clustering_ with constraints in [7]. Given \(m\) subsets \(\Lambda_{1},\ldots,\Lambda_{m}\subset\mathbb{R}^{d}\), we look for \(k\) cluster centers \(x^{\ell}\in\bigcap_{j=1}^{q}\Omega_{j}^{\ell}\) for \(\ell=1,\ldots,k\), where each \(\Omega_{j}^{\ell}\) is a subset of \(\mathbb{R}^{d}\). We also use the same squared distance functions to measure distances to the sets involved. The problem of interest is given by
\[\begin{array}{ll}\min&\psi(x^{1},\ldots,x^{k})=\sum_{i=1}^{m}\min_{\ell=1, \ldots,k}[d(x^{\ell};\Lambda_{i})]^{2}\\ \mbox{subject to}&x^{\ell}\in\bigcap_{j=1}^{q}\Omega_{j}^{\ell}\mbox{ for }\ell=1, \ldots,k.\end{array} \tag{6.1}\]
Here we assume that \(\Lambda_{i}\) for \(i=1,\ldots,m\) and \(\Omega_{j}^{\ell}\) for \(j=1,\ldots,q\) and \(\ell=1,\ldots,k\) are nonempty, closed and convex.
Applying the penalty method with a parameter \(\tau>0\), we obtain the unconstrained set clustering problem
\[\min f(x^{1},\ldots,x^{k})=\tfrac{1}{2}\sum_{i=1}^{m}\min_{\ell=1, \ldots,k}[d(x^{\ell};\Lambda_{i})]^{2}+\tfrac{\tau}{2}\sum_{\ell=1}^{k}\sum_{j =1}^{q}[d(x^{\ell};\Omega_{j}^{\ell})]^{2}, \tag{6.2}\] \[x^{1},\ldots,x^{k}\in\mathbb{R}^{d}.\]
Similar to the previous section, a DC decomposition of \(f=g-h\) is achieved using the _minimum-sum principle_ and proposition 3 as follows
\[g_{1}(\mathbf{X}) =\frac{m}{2}\|\mathbf{X}\|_{F}^{2}, g_{2}(\mathbf{X}) =\frac{\tau q}{2}\left\|\mathbf{X}\right\|_{F}^{2},\] \[h_{1}(\mathbf{X}) =\sum_{i=1}^{m}\Big{(}\frac{1}{2}\sum_{\ell=1}^{k}\varphi_{ \Lambda_{i}}(x^{\ell})+\frac{1}{2}\max_{r=1,\ldots,k}\sum_{\ell=1,\ell\neq r}^ {k}[d(x^{\ell};\Lambda_{i})]^{2}\Big{)}, h_{2}(\mathbf{X}) =\frac{\tau}{2}\sum_{\ell=1}^{k}\sum_{j=1}^{q}\varphi_{\Omega_{j} ^{\ell}}(x^{\ell}),\]
where \(g=g_{1}+g_{2}\) and \(h=h_{1}+h_{2}\) are convex.
More details about the derivation of \(f\) and the computations introduced below can be found in [7]. First, using eq. (2.5), we can easily compute
\(\mathbf{X}=\frac{1}{m+\tau q}\mathbf{Y}\in\partial g^{*}(\mathbf{Y})\). Then, we find \(\mathbf{Y}\in\partial h(\mathbf{X})\) as \(\mathbf{Y}=\mathbf{V}+\tau\mathbf{U}\), where \(\mathbf{V}\in\partial h_{1}(\mathbf{X})\) and \(\tau\mathbf{U}=\nabla h_{2}(\mathbf{X})\); here \(h_{2}\) is differentiable and \(\mathbf{U}\) is the \(k\times d\) matrix whose \(\ell^{th}\) row is \(\sum_{j=1}^{q}P(x^{\ell};\Omega_{j}^{\ell})\) for \(\ell=1,\ldots,k\). Notice that \(h_{1}\) is not differentiable and a subgradient \(\mathbf{V}\in\partial h_{1}(\mathbf{X})\) is given by
\[\mathbf{V}=m\mathbf{X}-\sum_{i=1}^{m}e_{r(i)}\Big{(}x^{r(i)}-P(x^{r(i)};\Lambda _{i})\Big{)}.\]
As discussed in [7], for each \(i=1,\ldots,m\), we choose an index \(r(i)\) such that the max happens, and \(e_{r}\) is the \(k\times 1\) column vector with a one in the \(r^{th}\) position and zeros otherwise. Hence, \(\mathbf{X}_{p+1}\) is represented by
\[\mathbf{X}_{p+1}=\frac{1}{\tau q+m}\Big{(}m\mathbf{X}_{p}+\tau\mathbf{U}_{p}- \sum_{i=1}^{m}e_{r(i)}\big{(}x_{p}^{r(i)}-P(x_{p}^{r(i)};\Lambda_{i})\big{)} \Big{)}.\]
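Analogously to the previous section, one DCA update for the set clustering model can be sketched as follows (our illustration; `set_projections` and `constraint_projections` are assumed callables returning \(P(\,\cdot\,;\Lambda_{i})\) and \(P(\,\cdot\,;\Omega_{j}^{\ell})\)). Here the index \(r(i)\) attaining the maximum in \(h_{1}\) is the center closest to the set \(\Lambda_{i}\).

```python
import numpy as np

def dca_step_set_clustering(X, set_projections, constraint_projections, tau):
    """One DCA update for the penalized set clustering model (6.2) -- our sketch.

    set_projections[i](x)           : P(x; Lambda_i),   i = 0..m-1
    constraint_projections[j][l](x) : P(x; Omega_j^l),  j = 0..q-1, l = 0..k-1
    """
    k = X.shape[0]
    m = len(set_projections)
    q = len(constraint_projections)

    # U: rows sum_j P(x^l; Omega_j^l), so that grad h2(X) = tau * U
    U = np.array([sum(constraint_projections[j][l](X[l]) for j in range(q))
                  for l in range(k)])

    # V = m X - sum_i e_{r(i)} (x^{r(i)} - P(x^{r(i)}; Lambda_i))
    S = np.zeros_like(X)
    for i in range(m):
        P_rows = np.array([set_projections[i](X[l]) for l in range(k)])
        r = int(np.argmin(np.linalg.norm(X - P_rows, axis=1)))  # closest center to Lambda_i
        S[r] += X[r] - P_rows[r]

    return (m * X + tau * U - S) / (m + tau * q)
```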
In [7], the DCA-based algorithm and the adaptive \(\tau\) version for solving eq. (6.1) were given as Algorithms 4 and 5. Now, we introduce the BDCA version (algorithm 5 of this paper).
## 7 Numerical Experiments
We now implement and test the proposed algorithms on a number of examples. All the tests are implemented in MATLAB R2023b, and we made use of profiling to create efficient, more realistic implementations. The code for the examples, along with the data used in generating the figures, can be found in the public GitHub repository github.com/TuyentdTran/BDCACAlustering.git. We run our tests on an iMac with a 3.8 GHz 8-core i7 processor and 32 GB DDR4 memory. Throughout the examples, algorithms 4 and 5 are tested with \(\alpha=0.05\), \(\beta=0.1\), \(\tau=1\), \(\sigma=10\), \(\tau_{f}=10^{8}\), and the tolerance for the DCA step is \(10^{-6}\), which means we terminate that step whenever \(\|\mathbf{X}_{p+1}-\mathbf{X}_{p}\|_{F}<10^{-6}\).
From example 2 onward, due to the size and complexity of the problems, it is beneficial to use the following strategy for choosing the trial step size in Step 3 of the BDCA, which utilizes the previous step sizes; more details can be found in [5]. Recall that in [4], \(\overline{\lambda}_{k}\) was chosen constant, equal to some fixed parameter \(\overline{\lambda}>0\). We use the same constant strategy for the non-adaptive BDCA, choosing \(\overline{\lambda}_{p}=2\) in algorithms 4 and 5 for all examples except example 1.
**Self-adaptive trial step size**
Fix \(\gamma>1\). Set \(\overline{\lambda}_{0}=0\). Choose some \(\overline{\lambda}_{1}>0\) and obtain \(\lambda_{1}\) by Step 3 of BDCA.
For any \(k\geq 2\):
1. IF \(\lambda_{k-2}=\overline{\lambda}_{k-2}\) AND \(\lambda_{k-1}=\overline{\lambda}_{k-1}\) THEN set \(\overline{\lambda}_{k}:=\gamma\lambda_{k-1}\); ELSE set \(\overline{\lambda}_{k}:=\lambda_{k-1}\).
2. Obtain \(\lambda_{k}\) from \(\overline{\lambda}_{k}\) by Step 3 of BDCA.
The _self-adaptive strategy_ here uses the step size accepted in the previous iteration as the new trial step size for the next iteration, except when two consecutive trial step sizes were successful. In that case, the trial step size is increased by multiplying the previously accepted step size by \(\gamma>1\). In all our experiments we took \(\gamma:=2\). Furthermore, we chose the initial step size \(\overline{\lambda}_{1}=2\) for all examples, the same as for the non-adaptive BDCA.
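The rule above can be written as a small helper (our sketch; the inputs are the accepted and trial step sizes of the two previous iterations):

```python
def next_trial_step(lam_prev2, lam_bar_prev2, lam_prev1, lam_bar_prev1, gamma=2.0):
    """Self-adaptive trial step size for iteration k (sketch of the rule above).

    lam_*     : accepted step sizes of iterations k-2 and k-1
    lam_bar_* : the corresponding trial step sizes
    """
    if lam_prev2 == lam_bar_prev2 and lam_prev1 == lam_bar_prev1:
        # two consecutive trial steps were accepted without backtracking
        return gamma * lam_prev1
    return lam_prev1
```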
### Constrained Clustering
**Example 1.** We first consider the same example as [7, Example 7.1] to compare the BDCA against one of the original examples of [7]. We use the dataset EIL76 taken from the Traveling Salesman Problem Library [22] and impose the following constraints on the solution:
1. The first center lies in the intersection of the box with vertices \((40,40)\), \((40,60)\), \((20,60)\), \((20,40)\) and the ball of radius \(r=7\) centered at \((20,60)\).
2. The second center is in the intersection of two balls of the same radius \(r=7\), centered at \((35,20)\) and \((45,22)\), respectively.
The initial centers are chosen as follows:
* The first center is drawn randomly from the box.
* The second center is randomly chosen from the ball centered at \((35,20)\).
For this problem we take the trial step size \(\overline{\lambda}_{p}=1\) for the BDCA. We run the test \(100\) times to obtain the following approximate average solutions and cost values for the DCA and the BDCA, respectively.
**DCA:** \[\mathbf{X}=\begin{pmatrix}26.69959&57.97127\\ 41.06910&23.48800\end{pmatrix},\quad\text{ Cost: }\psi(\mathbf{X})=33576.25344,\]
**BDCA:** \[\mathbf{X}=\begin{pmatrix}26.69959&57.97125\\ 41.06910&23.48789\end{pmatrix},\quad\text{ Cost: }\psi(\mathbf{X})=33576.25387,\]
with the cost fluctuating within the range of \(10^{-11}\) for both BDCA and DCA between runs.
In fig. 1, we compare the ratio of the time to complete DCA over BDCA and the ratio of the total number of iterations of DCA over BDCA. We can see that the BDCA runs about \(1.5\) times faster and needs only about a quarter of the iterations. In spite of this fourfold reduction in iterations, we only see a \(1.5\)-fold speed-up due to the non-trivial cost of the line search. The average run times for the DCA and the BDCA are \(0.0038\) s and \(0.0024\) s, respectively.
A visualization of the problem is demonstrated in fig. 2.
**Example 2**.: We now perform a scaling study to test the performance of BDCA vs DCA for a variety of numbers of points and dimensions. The intent is to give a rough idea of what sort of behavior can be expected for clustering with constraints problems in different problem regimes. In this numerical experiment, we generated \(n\) random points from a continuous uniform distribution on \([0,10]^{m}\). Here, \(m\in\{2,3,5,10,20\}\) and \(n\in\{50,100,500,1000,5000,10000,50000\}\). We impose \(3\) ball constraints of radius \(1\) with centers as follows:
Figure 1: Iteration and Time Ratio Comparison for the dataset EIL76 with \(2\) centers, example \(1\).
1. Repeating in the pattern \([1,5,1,5\ldots]\)
2. Repeating in the pattern \([6,4,6,4,\ldots]\)
3. \([8,\ldots,8]\)
For each combination of \(n\) and \(m\), we run with 100 random starting points drawn from the constraints and then test the performance of BDCA vs DCA.
Figures 3 and 4 show that, on average, both BDCA and adaptive BDCA are better than DCA in terms of iterations and time. We can also observe that, for most cases, as the dimension and number of points increase, adaptive BDCA is better than DCA for both time and iterations. BDCA is likewise better than DCA when we increase the number of points. For a small number of points, adaptive BDCA is still always better than DCA, but the BDCA with a constant value of \(\lambda\) is not. Even though the BDCA needs significantly fewer iterations, as mentioned in example 1, this does not come for free: one still has to account for the time needed to perform the line
Figure 3: Iteration and Time Ratio Comparison between DCA and BDCA for example 2.
Figure 2: A 2-center constrained clustering problem for dataset EIL76, example 1.
search, and for a small number of points, with the MATLAB implementation, the iteration reduction from the BDCA is not enough to compensate for the line-search cost. The self-adaptive BDCA is necessary in this low-point regime to obtain a time improvement. In all scenarios, the iteration counts are always at least 2 times better than for the DCA, and once the number of points reaches 500 or more, we also see the improvement in time for the basic BDCA.
In [22], the average run time and standard deviation are reported for each situation. From those results and those of figs. 3 and 4, a suggested approach is to always use BDCA with self-adaptivity, except perhaps for a low number of points, where depending on your implementation and problem it may be faster to simply use DCA.
### Set Clustering with Constraints
**Example 3**.: We now use algorithm 5 to solve a set clustering problem with constraints which was previously discussed in [7, Example 7.2 ]. We consider the latitude and longitude of the 50 most populous US cities taken from the 2014 United States Census Bureau data 1, and approximate each city by a ball with radius \(0.1\sqrt{\frac{A}{\pi}}\) where \(A\) is the city's reported area in square miles.
Footnote 1: [https://en.wikipedia.org/wiki/List_of_United_States_cities_by_population](https://en.wikipedia.org/wiki/List_of_United_States_cities_by_population)
We set up 3 centers as before with the requirement that each center must belong to the intersection of two balls. The centers of these constrained balls are the columns of the matrix below
\[\begin{pmatrix}-80&-80&-92&-90&-115&-110\\ 34&38&37&40&45&40\end{pmatrix}\]
with corresponding radii given by \(\begin{pmatrix}2&3&4&3&4&4\end{pmatrix}\). A visualization of this problem using a plate carrée projection, as in [7, Example 7.2], is plotted in fig. 6 below 2. In this experiment, we run the test 100 times for DCA, BDCA and
Figure 4: Iteration and Time Ratio Comparison between DCA and adaptive BDCA for example 2.
adaptive BDCA. The initial centers are drawn randomly from points belonging to the first 3 constrained balls. This yields the following approximate average solutions and cost values for each run
**DCA:**: \[\mathbf{X}=\begin{pmatrix}-79.32232&35.88171\\ -91.93111&37.70414\\ -113.82298&41.17699\end{pmatrix},\quad\text{Cost: }\psi(\mathbf{X})=2271.07289,\]
**BDCA:**: \[\mathbf{X}=\begin{pmatrix}-79.32301&35.88195\\ -91.93127&37.70427\\ -113.82299&41.17698\end{pmatrix},\quad\text{Cost: }\psi(\mathbf{X})=2271.07019,\]
**Adaptive BDCA:**: \[\mathbf{X}=\begin{pmatrix}-79.32302&35.88196\\ -91.93094&37.70400\\ -113.82299&41.17697\end{pmatrix},\quad\text{Cost: }\psi(\mathbf{X})=2271.06986.\]
Note that they are equivalent up to the relative tolerance of \(10^{-6}\).
In fig. 7 we compare the ratio of the run time of DCA over BDCA and the ratio of the total number of iterations of DCA over BDCA. Similarly, in fig. 8, we compare the ratios of DCA over adaptive BDCA for time and number of iterations. The dashed lines in both figures show the overall average ratio for both time and iterations. We can see that adaptive BDCA outperforms BDCA, and both of them are better than DCA. On average, DCA is slower than BDCA by a factor of 1.48, and slower than adaptive BDCA by a factor of 2.95. In terms of iterations, DCA requires 3.96 times more than BDCA and 8.6 times more than adaptive BDCA. We see here that the _self-adaptive_ BDCA represents a significant improvement over the regular BDCA. Since the starting points are chosen randomly within the constraints, the ratios for time and iterations are scattered for both comparisons.
Figure 5: A visualization of a 3-center clustering problem with 10000 points drawn from \([0,10]^{2}\), example 2. Red stars are centers.
Figure 8: Iteration and Time Ratio Comparison between DCA and adaptive BDCA for 50 most populous US cities with 3 constraints, example 3.
Figure 6: A 3-center set clustering problem with the 50 most populous US cities. Each city is approximated by a ball proportional to its area, example 3.
Figure 7: Iteration and Time Ratio Comparison between DCA and BDCA for 50 most populous US cities with 3 constraints, example 3.
**Example 4**.: We next consider the latitude and longitude of the 1500 most populous US cities derived from 2023 United States Census Bureau data 3, and approximate each city by a ball with radius \(10^{-3}\sqrt{\frac{A}{\pi}}\) where \(A\) is the city's reported area in square miles. We impose the following constraints on the solution:
Footnote 3: [https://simplemaps.com/data/us-cities](https://simplemaps.com/data/us-cities)
1. One center is to lie within \(4^{\circ}\) latitude/longitude of Caldwell, Idaho and inside the rectangular box with coordinates \([-115,42;-115,49;-125,49;-125,42]\).
2. One center is to lie within the state of Colorado and within \(2.5^{\circ}\) latitude/longitude of Cheyenne, WY.
3. One center is to lie within \(3^{\circ}\) latitude/longitude of Chicago, Illinois and within \(4^{\circ}\) latitude/longitude of St. Louis, MO.
4. One center is to lie east of \(-75^{\circ}\) longitude and within \(4^{\circ}\) latitude/longitude of Washington, DC.
This example demonstrates the ability to handle more complicated constraints than in example 3, as well as how the algorithms scale as you consider more points when compared to example 3.
In fig. 10 we compare the ratio of the run time of DCA over BDCA and the ratio of the total number of iterations of DCA over BDCA. Similarly, in fig. 11, we compare the ratios of DCA over adaptive BDCA for time and number of iterations. The dashed lines in both figures show the overall average ratio for both time and iterations. From figs. 10 and 11 we can see that BDCA improves iterations by about 5.4 times and run time by about 1.9 times, while adaptive BDCA gives a much better improvement of 12.6 times for iterations and nearly 4.2 times for run time. We see that, compared with example 3,
Figure 9: A 4-center set clustering with constraints for 1500 most populous US cities, example 4.
adaptive BDCA offers even greater improvements in run time and iterations as the problem size increases, a trend that we will see again in the set clustering scaling study of example 5.
**Example 5**.: We again perform a scaling study to test the performance of BDCA vs DCA for a variety of numbers of points and dimensions. The intent is to give a rough idea of what sort of behavior can be expected for set clustering with constraints problems in different problem regimes. In this numerical experiment, we generated \(n\) random points from a continuous uniform distribution on \([0,10]^{m}\). Here, \(m\in\{2,3,5,10\}\) and \(n\in\{50,100,250,500,1000,5000,10000,50000\}\). We impose 4 constraints, each formed by the intersection of two balls of radius 1 with centers composed of the first \(m\) entries of the following vectors:
1. \([1,5,5,\ldots,5]\) and \([2,6,5,\ldots,5]\)
2. \([5,4,1,2,3,1,2,3,1,2]\) and \([4,4,1,2,3,1,2,3,1,2]\)
Figure 11: Iteration and Time Ratio Comparison between DCA and adaptive BDCA for 1500 most populous US cities with 4 constraints, example 4.
Figure 10: Iteration and Time Ratio Comparison between DCA and BDCA for 1500 most populous US cities with 4 constraints, example 4.
3. \([8,5,9,8,7,9,8,7,9,8]\) and \([8,4,9,8,7,9,8,7,9,8]\)
4. \([9,8,1,6,9,1,6,9,1,6]\) and \([8,8,1,6,9,1,6,9,1,6]\)
For each combination of \(n\) and \(m\), we run with 100 random starting points drawn from the constraints and then test the performance of BDCA vs DCA. Figures 12 and 13 show the results of the runs.
As in example 2, both BDCA and adaptive BDCA are better than DCA in terms of iterations and run time. Similarly, we observe the general trend of BDCA becoming increasingly faster compared to DCA as we increase the problem size and the evaluation of \(y_{p}\) in algorithm 5 becomes more expensive. The set clustering problem is a more difficult problem than the basic clustering problem, and results in the BDCA being even more effective, particularly the adaptive BDCA. Notice that the average iteration and time ratios for both BDCA and adaptive BDCA in figs. 12 and 13 are nearly double those in figs. 3 and 4. A table of the average runtimes and standard deviations can be found in [22].
A visualization for example 5 when the number of points is \(5000\) is in fig. 14.
## 8 Conclusions
The aim of this project was to investigate the application of the BDCA to clustering and set clustering problems with constraints. This is the first paper to test the application of the BDCA to a problem with nonlinear constraints. For each problem, we presented the DCA and the penalty method used previously and suggested a BDCA-based method for solving it. We performed numerical experiments to test all of the methods described and presented our results in section 7, with the code and data from these examples available in the supplemental material. These experiments tested a variety of nonlinear constraints, and in all experiments, the BDCA method required fewer iterations to converge than the DCA method. It also outperforms the DCA in terms of CPU running time.
Overall, the work of this project has shown the potential effectiveness of BDCA-based methods for solving clustering and set clustering problems with constraints. The performance of these algorithms is promising for application to practical clustering problems. Further experiments with higher dimensions and changing the number of constraints could be another important direction
Figure 14: A 4-center set clustering problem with \(1500\) points drawn from \([0,10]^{2}\). Each set is a ball with radius \(0.1\), example 5. Blue stars are centers.
for future work on this topic. Furthermore, investigating ways of accelerating the BDCA method via problem-dependent tuning of its parameters is an area we will consider.
|
2306.00803 | Electron-phonon coupling and superconductivity in $α$-MoB$_2$ as a
function of pressure | We have studied the lattice dynamics, electron-phonon coupling, and
superconducting properties of $\alpha$-MoB$_2$, as a function of applied
pressure, within the framework of density functional perturbation theory using
a mixed-basis pseudopotential method. We found that phonon modes located along
the A$-$H, H$-$L, and L$-$A high-symmetry paths exhibit large phonon linewidths
and contribute significantly to the electron-phonon coupling constant. Although
linewidths are particularly large for the highest-frequency optical phonon
modes (dominated by B vibrations), their contribution to the electron-phonon
coupling constant is marginal. The latter is largely controlled by the acoustic
low-frequency modes of predominantly Mo character. It was observed that at a
pressure of $90$~GPa, where $\alpha$-MoB$_2$ forms, the phonon-mediated pairing
falls into the strong-coupling regime, and the estimate for the superconducting
critical temperature $T_c$ agrees well with experimental observations. When
further increasing the applied pressure, a reduction of $T_c$ is predicted,
which correlates with a hardening of the acoustic low-frequency phonon modes
and a decrease of the electron-phonon coupling parameter. | Marco-Antonio Carmona-Galván, Rolf Heid, Omar De la Peña-Seaman | 2023-06-01T15:31:33Z | http://arxiv.org/abs/2306.00803v2 | Electron phonon coupling and superconductivity in \(\alpha\)-MoB\({}_{2}\) as a function of pressure
###### Abstract
We have studied the lattice dynamics, electron-phonon coupling, and superconducting properties of \(\alpha\)-MoB\({}_{2}\), as a function of applied pressure, within the framework of density functional perturbation theory using a mixed-basis pseudopotential method. We found that phonon modes located along the A\(-\)H, H\(-\)L, and L\(-\)A high-symmetry paths exhibit large phonon linewidths and contribute significantly to the electron-phonon coupling constant. Although linewidths are particularly large for the highest-frequency optical phonon modes (dominated by B vibrations), their contribution to the electron-phonon coupling constant is marginal. The latter is largely controlled by the acoustic low-frequency modes of predominantly Mo character. It was observed that at a pressure of \(90\) GPa, where \(\alpha\)-MoB\({}_{2}\) forms, the phonon-mediated pairing falls into the strong-coupling regime, and the estimate for the superconducting critical temperature \(T_{c}\) agrees well with experimental observations. When further increasing the applied pressure, a reduction of \(T_{c}\) is predicted, which correlates with a hardening of the acoustic low-frequency phonon modes and a decrease of the electron-phonon coupling parameter.
_Keywords_: first-principles calculations, phonons, electron-phonon coupling, superconductivity
## 1 Introduction
The discovery of superconductivity in MgB\({}_{2}\) more than twenty years ago [1], with a critical temperature of \(T_{c}\approx 39\) K, energized the search for new superconducting materials within the family of diborides. Such a quest was pursued almost immediately after its discovery, both experimentally and computationally [2]. After several years of research, the conclusion was reached that MgB\({}_{2}\) is already optimized by nature, in the sense that attempts to improve its superconducting properties by doping [3, 4, 5, 6] or pressure [7, 8] always resulted
in a reduction of \(T_{c}\) in comparison with MgB\({}_{2}\), or even in a non-superconducting material, like the sibling system AlB\({}_{2}\)[9].
Transition-metal diborides constitute an important sub-class in this context. A typical example studied was NbB\({}_{2}\) with a wide range of measured \(T_{c}\) from \(0.62\) K to \(9\) K [10, 11, 12]. MoB\({}_{2}\) attracted attention as well. While it is not a superconductor in its pristine form, superconductivity can be induced by substitution of \(4\)% Zr, with a \(T_{c}\approx 6\) K [13]. It was not until 2022 that the discovery of superconductivity in MoB\({}_{2}\) under applied pressure was reported [14]. At an applied pressure of approximately \(20\) GPa, MoB\({}_{2}\) becomes superconducting with a very low \(T_{c}\) of less than \(2\) K. At these pressures, MoB\({}_{2}\) takes a rhombohedral crystal structure (space group \(R\bar{3}m\)), known also as \(\beta\)-MoB\({}_{2}\). \(T_{c}\) rapidly increases as a function of pressure, reaching \(T_{c}\approx 27\) K at a pressure of \(p_{c}\approx 70\) GPa, where it gradually transforms into the hexagonal \(\alpha\)-MoB\({}_{2}\) structure (space group \(P6/mmm\)). With further increase of pressure, \(\alpha\)-MoB\({}_{2}\) experiences a less dramatic \(T_{c}\) increase, which culminates at \(110\) GPa in a maximum \(T_{c}\) of \(32.4\) K [14].
Theoretical calculations have suggested that the mechanism for such a high \(T_{c}\) value in \(\alpha\)-MoB\({}_{2}\) is quite different from the one in MgB\({}_{2}\). In particular, while for MgB\({}_{2}\) the pairing comes from the strong coupling between the \(\sigma\)-bands and the B-related \(E_{2g}\) phonon modes [15, 16, 17, 18], in MoB\({}_{2}\) the pairing involves electronic states of Mo-\(d\) character and a combination of Mo-related low-frequency phonon modes with B-dominated ones [14, 19]. In fact, Quan _et al_[19] concluded that the source of the MoB\({}_{2}\)\(T_{c}\) is the so-called electron-displaced atom scattering factor \(I^{2}\), which is closely related to the electron-phonon (e-ph) matrix elements of the Eliashberg theory [20] (see equation 3). However, a detailed analysis of how this factor and the other ingredients of conventional superconductivity (such as phonon frequencies, linewidths, or the electron-phonon coupling parameter) evolve as a function of pressure is lacking.
In this paper we present a thorough study of the lattice dynamics, electron-phonon coupling, and superconducting \(T_{c}\) of \(\alpha\)-MoB\({}_{2}\) as a function of applied pressure, from \(70\) GPa to \(300\) GPa, within the framework of density functional theory (DFT) [21] and density functional perturbation theory (DFPT) [22, 23, 24, 25] using a mixed-basis pseudopotential method [26]. Superconducting properties are analyzed in the framework of the Eliashberg theory [20]. We give a detailed description of the phonon linewidths and electron-phonon coupling as a function of applied pressure. In particular, we analyze the contributions of the different phonon modes to these quantities and determine their specific role in inducing the high \(T_{c}\) value of \(\alpha\)-MoB\({}_{2}\). For comparison, we also present a similar analysis for the sibling system NbB\({}_{2}\), which is a low-\(T_{c}\) superconductor with intermediate coupling. The paper is organized as follows. In section 2 we describe the computational details of our calculations. The results for the evolution of lattice dynamics, e-ph coupling, and \(T_{c}\) as a function of pressure are presented in section 3. Finally, in section 4 the main findings are summarized.
## 2 Computational details
The present density-functional calculations [21] were performed with the mixed-basis pseudopotential method (MBPP) [26]. Norm-conserving pseudopotentials for Mo, Nb, and B were generated according to the Vanderbilt description [27] and include partial-core correction. For Mo and Nb, semicore \(4s\) and \(4p\) states were taken into the valence space. The current method applies a mixed-basis scheme, which uses a combination of local functions and plane waves for the representation of the valence states. We used \(s\), \(p\), and \(d\)-type functions for Mo and Nb, while for B only \(s\) and \(p\)-type, supplemented by plane waves up to a kinetic energy of \(32\) Ry. Present calculations were performed with the PBE [28] form of the GGA exchange-correlation functional. The Monkhorst-Pack special \(k\)-point sets technique, with a Gaussian smearing of \(0.25\) eV and a grid of \(18\times 18\times 18\), was used for the Brillouin-zone integration. Phonon properties are calculated via density functional perturbation theory (DFPT) [22, 23] as implemented in the MBPP code [24, 25]. The phonon dispersions are obtained by a Fourier interpolation of dynamical matrices calculated on a \(6\times 6\times 6\)\(q\)-point mesh. For the calculation of e-ph coupling matrix elements, a denser \(36\times 36\times 36\)\(k\)-point mesh was necessary.
From the knowledge of the phonon dispersion and the e-ph matrix elements, the Eliashberg function is accessible,
\[\alpha^{2}F(\omega)=\frac{1}{2\pi\hbar N(E_{F})}\sum_{\mathbf{q}\eta}\frac{ \gamma_{\mathbf{q}\eta}}{\omega_{\mathbf{q}\eta}}\delta(\omega-\omega_{ \mathbf{q}\eta}), \tag{1}\]
with \(N(E_{F})\) as the electronic density of states at the Fermi level, per atom and spin; \(\omega_{\mathbf{q}\eta}\) as the frequency of the phonon mode at the \(\mathbf{q}\)-vector and branch \(\eta\), and the phonon linewidths \(\gamma_{\mathbf{q}\eta}\) given by
\[\gamma_{\mathbf{q}\eta}=2\pi\omega_{\mathbf{q}\eta}\sum_{\mathbf{k}\nu\nu^{ \prime}}\left|g^{\mathbf{q}\eta}_{\mathbf{k}+\mathbf{q}\nu^{\prime},\mathbf{k }\nu}\right|^{2}\delta(\epsilon_{\mathbf{k}\nu}-E_{F})\delta(\epsilon_{ \mathbf{k}+\mathbf{q}\nu^{\prime}}-E_{F}), \tag{2}\]
where \(\epsilon_{\mathbf{k}\nu}\) is the one-electron band energy with momentum \(\mathbf{k}\) and band index \(\nu\). In the last equation, \(g^{\mathbf{q}\eta}_{\mathbf{k}+\mathbf{q}\nu^{\prime},\mathbf{k}\nu}\) represents the coupling matrix element for scattering of an electron from a \(\mathbf{k}\nu\) electronic state to another \(\mathbf{k}+\mathbf{q}\nu^{\prime}\) state, by a phonon \(\mathbf{q}\eta\), and is given by
\[g^{\mathbf{q}\eta}_{\mathbf{k}+\mathbf{q}\nu^{\prime},\mathbf{k}\nu}=\sqrt{ \frac{\hbar}{2\omega_{\mathbf{q}\eta}}}\sum_{\kappa\alpha}\frac{1}{\sqrt{M_{ \kappa}}}\eta^{\mathbf{q}\eta}_{\kappa a}\left\langle\mathbf{k}+\mathbf{q}\nu ^{\prime}\left|\delta^{\mathbf{q}}_{\kappa a}V\right|\mathbf{k}\nu\right\rangle, \tag{3}\]
with \(M_{\kappa}\) as the mass of the \(\kappa\)-th atom in the unit cell, and \(\eta^{\mathbf{q}\eta}_{\kappa a}\) as the normalized eigenvector of the corresponding phonon mode \(\mathbf{q}\eta\). The quantity \(\delta^{\mathbf{q}}_{\kappa a}V\) represents the first-order change of the total crystal potential, with respect to the displacement of the \(\kappa\)-th atom in the \(a\) direction.
From \(\alpha^{2}F(\omega)\) we can obtain some useful integrated quantities, like the average Allen-Dynes characteristic phonon frequency \(\omega_{log}\)
\[\omega_{\mathrm{log}}=\exp\left(\frac{2}{\lambda}\int_{0}^{\infty}d\omega \frac{\ln(\omega)}{\omega}\alpha^{2}F(\omega)\right)\,, \tag{4}\]
the square-average phonon frequency \(\bar{\omega}_{2}\)
\[\bar{\omega}_{2}=\left\langle\omega^{2}\right\rangle^{1/2}=\left(\frac{2}{\lambda }\int_{0}^{\infty}d\omega\alpha^{2}F(\omega)\omega\right)^{1/2}, \tag{5}\]
the average e-ph coupling constant \(\lambda\)
\[\lambda=2\int_{0}^{\infty}\frac{d\omega}{\omega}\alpha^{2}F(\omega)=\frac{1}{ \pi\hbar N(E_{F})}\sum_{\mathbf{q}\eta}\frac{\gamma_{\mathbf{q}\eta}}{\omega_ {\mathbf{q}\eta}^{2}}\,, \tag{6}\]
as well as the frequency-dependent \(\lambda\), given by:
\[\lambda(\omega)=2\int_{0}^{\omega}\frac{d\omega^{\prime}}{\omega^{\prime}} \alpha^{2}F(\omega^{\prime})\,. \tag{7}\]
Finally, \(\alpha^{2}F(\omega)\) is used to determine the superconducting critical temperature, \(T_{c}\), by solving the Eliashberg gap equations [20, 29] numerically.
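For illustration, once \(\alpha^{2}F(\omega)\) is tabulated on a frequency grid, the integrated quantities of equations (4)-(6) reduce to simple quadratures. The following Python sketch (our own illustration, not part of the MBPP workflow) shows a straightforward trapezoidal evaluation.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def coupling_moments(omega, a2F):
    """lambda, omega_log and omega_2-bar from a tabulated alpha^2F(omega), eqs. (4)-(6).

    omega : 1D array of frequencies (all > 0), a2F : alpha^2F sampled on that grid.
    """
    lam = 2.0 * trapz(a2F / omega, omega)                                      # eq. (6)
    omega_log = np.exp(2.0 / lam * trapz(np.log(omega) / omega * a2F, omega))  # eq. (4)
    omega_2 = np.sqrt(2.0 / lam * trapz(a2F * omega, omega))                   # eq. (5)
    return lam, omega_log, omega_2
```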
## 3 Results and discussion
The \(\alpha\)-MoB\({}_{2}\) structure was fully optimized by energy minimization, that is, for each fixed \(V\) the \(c/a\) parameter was optimized in order to obtain the \(E(V)\) and \(p(V)\) equations of state (see figure 1). The current results are compared with available experimental data [14, 30], as well as previously reported calculated values [31, 32, 33, 34, 35]. Our results are in remarkable agreement with the data of Pei _et al_[14] at \(90\) GPa for both the volume (a difference of around \(0.3\)%) and the \(c/a\) ratio (a difference of \(2.1\)%). In addition, structure-optimization calculations were also performed with the full-potential Elk code [36], showing an excellent agreement with the MBPP-code calculations, which demonstrates the high accuracy of the constructed pseudopotentials.
In figure 2 the phonon dispersions along high-symmetry directions are presented for selected applied pressures. The chosen pressures span the stability region of the \(\alpha\)-MoB\({}_{2}\) structure [14]. The main characteristics of the phonon spectrum, as previously observed [14, 19], are found for the whole pressure range. On the one hand, there is the low-frequency region dominated by Mo vibrations, the high-frequency one ruled by B modes, and the frequency gap that separates them. On the other hand, there are the acoustic low-frequency modes along the L-A path, which exhibit a phonon anomaly close to the L-point, as well as the soft acoustic branches along the A-H and H-L paths. Interestingly, the acoustic mode with the lowest frequency (labeled as A3) is the one with the largest contribution to the e-ph coupling constant, given by the red vertical lines in figure 2. In general, the main effect of the applied pressure on the phonon spectra is a generalized hardening of the phonon frequencies, which directly weakens the observed phonon anomaly along the L-A path and reduces at the same time its strong e-ph contribution.
A closer inspection of the individual mode couplings revealed that large contributions to the overall e-ph coupling come from the acoustic phonon branches (A1, A2, and A3) and the highest optic one (Op), in particular along the A\(-\)H\(-\)L\(-\)A path (figure 2). In
Figure 1: Calculated \(p(V)\) equation of state and optimized \(c/a\) parameter, as a function of applied pressure for \(\alpha\)-MoB\({}_{2}\) obtained by two different band-structure methods (MBPP [26] and Elk [36]), compared with experimental data [14, 30] (red triangles), and calculated results reported previously [31, 32, 33, 34, 35] (blue squares).
Figure 2: Phonon dispersions for \(\alpha\)-MoB\({}_{2}\), calculated at selected pressures: 70 GPa, 90 GPa, 110 GPa, and 120 GPa. Vertical red lines correspond to the e-ph coupling constant \(\lambda_{\mathbf{q}\mathbf{q}}\). The labels correspond to the acoustic phonon branches (A1, A2, and A3), as well as the highest-optic one (Op) at the A–H–L–A paths.
figure 3, the linewidths and e-ph coupling constants of these modes are shown along these high-symmetry directions for two pressures. The largest linewidths (figure 3a) are found for the A3 branch, with a particularly strong peak located at the H-point, followed closely by the Op branch. These results indicate an important participation of phonon modes dominated by Mo (A3) and also by B (Op) in the e-ph coupling (equation 2), reflected by the phonon linewidths (equation 3). However, for the e-ph coupling constants shown in figure 3b, the influence of the B phonon modes fades away due to the factor \(1/\omega_{\mathbf{q}\eta}^{2}\) entering their definition (equation 6). In contrast, the large e-ph coupling constant of the acoustic branch A3 is boosted by the low frequencies of these Mo phonon modes, especially around the phonon anomaly close to the L-point. With increasing pressure, the linewidths increase slightly while \(\lambda_{\mathbf{q}\eta}\) is strongly reduced, correlating with the observed hardening of this acoustic branch.
In order to analyze the evolution of the superconducting properties as a function of pressure, the Eliashberg function \(\alpha^{2}F(\omega)\), the e-ph coupling constant \(\lambda\), the Allen-Dynes characteristic phonon frequency \(\omega_{log}\), and the square-average phonon frequency \(\bar{\omega}_{2}\) were calculated for each case.
The Eliashberg functions for selected pressures are presented in figure 4, together with \(\lambda(\omega)\). In all cases, the largest contribution to \(\alpha^{2}F(\omega)\) and \(\lambda(\omega)\) comes from the acoustic low-frequency region, dominated almost completely by Mo phonon modes along the A-H, H-L, and especially the L-A paths, where the phonon anomaly is located. As expected, the
Figure 3: (a) Linewidths and (b) e-ph coupling constant for \(\alpha\)-MoB\({}_{2}\), at \(70\) GPa (solid lines) and \(120\) GPa (dashed lines) along the A\(-\)H\(-\)L\(-\)A paths, for the three acoustic branches (A1, A2, and A3) and the highest-frequency optical branch (Op).
largest coupling corresponds to the pressure closest to the phase transition: \(70\) GPa with \(\lambda\approx 2.3\). As pressure increases, the coupling decreases while the observed phonon anomaly attenuates, a direct consequence of the general hardening of the phonon spectrum, as previously discussed.
The evolution of the coupling-related quantities, namely the density of states at the Fermi level (\(N(E_{F})\)), \(\omega_{log}\), \(\bar{\omega}_{2}\), and \(\lambda\), as a function of pressure is presented in figure 5. These quantities agree well with the values reported in the literature at \(90\) GPa [14, 19], although our calculated \(\lambda=1.84\) is slightly larger (by between \(10\)% and \(15\)%). This can be due to the slight difference in the structural parameters (see figure 1) or in the pseudopotential construction. From the evolution of \(\lambda\), it can be seen that the strong pressure dependence of the coupling comes mainly from the low-frequency phonons (traced by \(\omega_{log}\) and \(\bar{\omega}_{2}\)), while \(N(E_{F})\) does not exhibit dramatic changes as a function of pressure. \(\alpha\)-MoB\({}_{2}\) remains in the strong-coupling regime up to \(300\) GPa, where \(\lambda=0.95\), \(\omega_{log}=37.58\) meV, and \(\bar{\omega}_{2}=51.39\) meV.
For comparison, we also calculated the same e-ph parameters, as a function of applied pressure, for the sibling compound NbB\({}_{2}\) (with the same crystal structure) at its own optimized structural parameters (see figure 5). NbB\({}_{2}\) was studied at around the same
Figure 4: \(\alpha\)-MoB\({}_{2}\) Eliashberg function \(\alpha^{2}F(\omega)\) (black solid line) and frequency-dependent e-ph coupling constant \(\lambda(\omega)\) (red dashed line) for specific applied pressure values.
time that superconductivity in MgB\({}_{2}\) was discovered, with the aim of finding related materials with improved superconducting properties. It turned out, however, that NbB\({}_{2}\) has an intermediate coupling (\(\lambda=0.67\)) and a low \(T_{c}\) value (approx. \(8.4\) K) [37]. Although NbB\({}_{2}\) has lower \(\lambda\) values than MoB\({}_{2}\) (the highest calculated \(\lambda\) for NbB\({}_{2}\) is \(0.71\) at \(p=0\) GPa), the trends of the coupling-related quantities as a function of pressure are basically the same: a reduction of \(N(E_{F})\), a phonon hardening, and a \(\lambda\) decrease.
In order to analyze the evolution of \(T_{c}\) as a function of pressure, we applied three different schemes: \((1)\) the standard Allen-Dynes equation [38],
\[T_{c}^{AD}=\frac{\omega_{log}}{1.20}\mbox{exp}\left(-\frac{1.04(1+\lambda)}{ \lambda-\mu^{*}(1+0.62\lambda)}\right), \tag{8}\]
\((2)\) the corrected Allen-Dynes equation for strong-coupling systems (the uncorrected form is normally adequate for \(\lambda\leq 1.3\)) [38],
\[T_{c}^{cAD}=\frac{f_{1}f_{2}\omega_{log}}{1.20}\mbox{exp}\left(-\frac{1.04(1+ \lambda)}{\lambda-\mu^{*}(1+0.62\lambda)}\right), \tag{9}\]
where the correction factors to describe the strong-coupling regime are
Figure 5: Density of states at the Fermi level (\(N(E_{F})\)), the Allen-Dynes characteristic phonon frequency (\(\omega_{log}\)), the square-average phonon frequency (\(\bar{\omega}_{2}\)), and the e-ph coupling constant (\(\lambda\)), as a function of applied pressure, for \(\alpha\)-MoB\({}_{2}\) (left) and NbB\({}_{2}\) (right).
\[f_{1} =\left[1+(\lambda/\Delta_{1})^{3/2}\right]^{1/3}, \tag{10}\] \[f_{2} =1+\frac{(\bar{\omega}_{2}/\omega_{log}-1)\lambda^{2}}{\lambda^{2} +\Delta_{2}^{2}}, \tag{11}\]
and the parameters \(\Delta_{1}\) and \(\Delta_{2}\) given by
\[\Delta_{1} =2.46(1+3.8\mu^{*}), \tag{12}\] \[\Delta_{2} =1.82(1+6.3\mu^{*})\left(\bar{\omega}_{2}/\omega_{log}\right), \tag{13}\]
and finally \((3)\) the numerical solution of the isotropic Eliashberg gap equations [20, 29], \(T_{c}^{EL}\), using the calculated \(\alpha^{2}F(\omega)\) for each considered pressure.
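For reference, the closed-form estimates of Eqs. (8)-(13) can be evaluated directly from \(\lambda\), \(\omega_{log}\), \(\bar{\omega}_{2}\), and \(\mu^{*}\). The short sketch below does exactly that; frequencies must be supplied in temperature units (1 meV \(\simeq\) 11.6 K), and it is an illustration rather than the script used in this work.

```python
import math

def tc_allen_dynes(lam, w_log, mu_star=0.13):
    """Standard Allen-Dynes estimate, Eq. (8); w_log given in kelvin."""
    return (w_log / 1.20) * math.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )

def tc_allen_dynes_corrected(lam, w_log, w2, mu_star=0.13):
    """Corrected Allen-Dynes estimate with factors f1, f2, Eqs. (9)-(13)."""
    delta1 = 2.46 * (1.0 + 3.8 * mu_star)
    delta2 = 1.82 * (1.0 + 6.3 * mu_star) * (w2 / w_log)
    f1 = (1.0 + (lam / delta1) ** 1.5) ** (1.0 / 3.0)
    f2 = 1.0 + ((w2 / w_log - 1.0) * lam ** 2) / (lam ** 2 + delta2 ** 2)
    return f1 * f2 * tc_allen_dynes(lam, w_log, mu_star)
```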
Results obtained for the three schemes, for both \(\alpha\)-MoB\({}_{2}\) and NbB\({}_{2}\), are presented in figure 6, using in all cases the same Coulomb pseudopotential parameter \(\mu^{*}=0.13\), in order to be as close as possible to the previously reported \(T_{c}\) values for \(p=90\) GPa [14, 19]. As expected, there are quantitative differences between the \(T_{c}\) estimates, in particular for the low-pressure region, where \(\alpha\)-MoB\({}_{2}\) is in the strong-coupling regime. While both strong-coupling schemes (\(T_{c}^{cAD}\) and \(T_{c}^{EL}\)) predict a monotone superconducting temperature reduction as a function of pressure, \(T_{c}^{AD}\) first increases slightly from \(70\) GPa to approximately \(150\) GPa, followed by a decrease. For \(p>250\) GPa, \(T_{c}^{AD}\) and \(T_{c}^{cAD}\) are getting closer, a clear indication of the transition to a more moderate coupling region. For NbB\({}_{2}\), all three \(T_{c}\) estimates reveal the same pressure dependence, while \(T_{c}^{AD}\) and \(T_{c}^{cAD}\) agree almost quantitatively. This behavior is expected, since NbB\({}_{2}\) has an e-ph coupling that goes from intermediate to low coupling strength, as applied pressure increases. From these results it is clear that the
Figure 6: Evolution of \(T_{c}\) as a function of pressure for \(\alpha\)-MoB\({}_{2}\) (left) and NbB\({}_{2}\) (right) using the standard Allen-Dynes [38] equation (\(T_{c}^{AD}\)), the corrected Allen-Dynes equation (\(T_{c}^{cAD}\)) [38], and the numerical solution of the isotropic Eliashberg gap equations (\(T_{c}^{EL}\)) [20, 29]. Comparison with experimental results available in literature for \(\alpha\)-MoB\({}_{2}\)[14] (red dots).
use of \(T_{c}^{AD}\) is not adequate for a strong-coupling system like \(\alpha\)-MoB\({}_{2}\), showing misleading values and even wrong tendencies, as noted previously [19]. The comparison of calculated \(T_{c}\) with experimental data [14] indicates that, very likely, the measured MoB\({}_{2}\) samples below \(p=90\) GPa correspond to a different crystal structure, or a mix of different phases. However, for pressures at (or above) \(90\) GPa, our calculated \(T_{c}^{EL}\) (by solving the Eliashberg gap equations) are around \(\pm\) 1 K from the reported measurements and, interestingly, \(T_{c}^{EL}\) shows the best agreement with the reported experimental data at \(90\) GPa. We note that, within the framework of the Eliashberg theory, solving the (isotropic) gap equations with \(\alpha^{2}F(\omega)\) as input is the most direct way to calculate the superconducting temperature, and is superior to the other two approaches, which only provide approximations to its solution. Such a \(T_{c}\) reduction as a function of applied pressure, as obtained from our calculations for \(\alpha\)-MoB\({}_{2}\) and NbB\({}_{2}\), is also observed experimentally for Nb-substituted MoB\({}_{2}\) (Nb\({}_{0.25}\)Mo\({}_{0.75}\)B\({}_{2}\)) [39]. There, a steady \(T_{c}\) reduction is reported from \(8\) K at \(0\) GPa to \(4\) K at \(50\) GPa, followed by a gradual rise to \(5.5\) K at \(170\) GPa that is accompanied by a significant broadening of the superconducting transition width [39].
## 4 Conclusions
To summarize, we have performed a first-principles linear-response study of the lattice dynamical properties, electron-phonon coupling, and superconductivity of \(\alpha\)-MoB\({}_{2}\) as a function of applied pressure (from \(70\) GPa to \(300\) GPa). We found that the electron-phonon interaction induces large phonon linewidths for modes located specifically along the A\(-\)H, H\(-\)L, and L\(-\)A high-symmetry paths, where a phonon anomaly is present. The largest linewidths are displayed by the highest-frequency optical phonon mode (ruled by B vibrations) and the acoustic low-frequency phonon modes (involving mainly Mo atoms). However, the contribution of the optical phonon mode to the electron-phonon coupling constant is diminished because of its high frequency, while the dominant contribution comes from the lowest-frequency acoustic phonon mode. As pressure increases, the phonon spectrum hardens, in particular the acoustic low-frequency phonon modes, and the electron-phonon coupling constant decreases, while the density of states at the Fermi level barely changes. Estimates for \(T_{c}\), obtained either with the corrected Allen-Dynes equation or by solving the Eliashberg gap equations, show a decrease as a function of applied pressure, which correlates with the phonon hardening and the reduction of \(\lambda\). We found good agreement between the experimental \(T_{c}\) values and the calculated ones for \(90\) GPa and \(110\) GPa. However, data for larger applied pressure values are needed to allow a more complete assessment of the predicted tendencies of \(T_{c}\) for \(\alpha\)-MoB\({}_{2}\).
This research was partially supported by the Consejo Nacional de Humanidades, Ciencias y Tecnologias (CONAHCYT, Mexico) under Grant No. FOP16-2021-01-320399; Vicerrectoria de Investigacion (VIEP), Benemerita Universidad Autonoma de Puebla (BUAP) under Grant
No. 100517450-VIEP2023; and the Karlsruher Institut fur Technologie (KIT), Germany.
|
2310.13627 | Deep-Learning-based Change Detection with Spaceborne Hyperspectral
PRISMA data | Change detection (CD) methods have been applied to optical data for decades,
while the use of hyperspectral data with a fine spectral resolution has been
rarely explored. CD is applied in several sectors, such as environmental
monitoring and disaster management. Thanks to the PRecursore IperSpettrale
della Missione operativA (PRISMA), hyperspectral-from-space CD is now possible.
In this work, we apply standard and deep-learning (DL) CD methods to different
targets, from natural to urban areas. We propose a pipeline starting from
coregistration, followed by CD with a full-spectrum algorithm and by a DL
network developed for optical data. We find that changes in vegetation and
built environments are well captured. The spectral information is valuable to
identify subtle changes and the DL methods are less affected by noise compared
to the statistical method, but atmospheric effects and the lack of reliable
ground truth represent a major challenge to hyperspectral CD. | J. F. Amieva, A. Austoni, M. A. Brovelli, L. Ansalone, P. Naylor, F. Serva, B. Le Saux | 2023-10-20T16:22:53Z | http://arxiv.org/abs/2310.13627v1 | # Deep-learning-based Change Detection
###### Abstract
Change detection (CD) methods have been applied to optical data for decades, while the use of hyperspectral data with a fine spectral resolution has been rarely explored. CD is applied in several sectors, such as environmental monitoring and disaster management. Thanks to the PRecursore IperSpettrale della Missione operativA (PRISMA), hyperspectral-from-space CD is now possible. In this work, we apply standard and deep-learning (DL) CD methods to different targets, from natural to urban areas. We propose a pipeline starting from coregistration, followed by CD with a full-spectrum algorithm and by a DL network developed for optical data. We find that changes in vegetation and built environments are well captured. The spectral information is valuable to identify subtle changes and the DL methods are less affected by noise compared to the statistical method, but atmospheric effects and the lack of reliable ground truth represent a major challenge to hyperspectral CD.
J.F. Amieva\({}^{1*}\), A. Austoni\({}^{1}\), M.A. Brovelli\({}^{1}\), L. Ansalone\({}^{2}\), P. Naylor\({}^{3}\), F. Serva\({}^{2,3}\)+, B. Le Saux\({}^{3}\)
\({}^{1}\)Dipartimento di Ingegneria Civile e Ambientale,
Politecnico di Milano, Milano I-20133, Italy
\({}^{2}\)Agenzia Spaziale Italiana, Via del Politecnico snc, Roma I-00133, Italy
\({}^{3}\Phi\)-lab, ESRIN, European Space Agency, Frascati I-00044, Italy
Keywords: Change detection, hyperspectral satellite, Earth observation
Footnote †: dagger}\)Now at the National Research Council, Rome, Italy
## 1 Introduction
Change detection (CD) is the set of procedures used to identify changes between multiple images, generally acquired at different times, and it has been applied successfully to remote sensing data for several decades [11]. It is also known that different CD methods can produce different change maps [16] and that expert assessment is often required to interpret and post-process results in a supervised fashion.
Deep learning (DL) methods have been gaining much attention as a tool for automating time-consuming CD tasks [7]. Since CD involves identifying spatial features and their changes between two different dates, convolutional neural networks (NNs) have proven to be highly successful for CD in optical and radar data [4, 8].
Unlike multispectral imagery, hyperspectral data provides very detailed information on the spectral characteristics of the sensed objects, allowing, for example, discrimination between different materials or accurate retrieval of biogeophysical parameters [13]. Historically, this kind of data has been collected from airborne platforms, but now multiple hyperspectral satellite missions are ongoing or planned, opening a new era for their applications.
PRISMA (PRecursore IperSpettrale della Missione operativA) is a mission of the Italian Space Agency (ASI) acquiring hyperspectral data globally since 2019. Further details on the mission are provided in Sec. 2. To our knowledge, CD studies with PRISMA data have been limited so far, as recent works [1, 15] considered only individual pairs of images. In general, hyperspectral CD studies are challenged by the lack of suitable data, often restricted to a few regions of the world [10, 6, 17] or without temporality [5], with ground truth often unavailable for validation. The contributions of this work comprise 1) information on a list of PRISMA image pairs enabling change detection; 2) an assessment of unsupervised statistical and DL methods for CD with PRISMA data to illustrate their limits and potentialities.
## 2 Data and Methods
### PRISMA satellite data
The PRISMA mission was launched in June 2019 and is one of the latest imaging spectroscopy missions for Earth observation [9]. The PRISMA satellite has a hyperspectral imaging spectrometer (30 m GSD) and a panchromatic camera (5 m GSD). The hyperspectral sensor covers the visible and near-infrared (VNIR: 400 - 1010 nm, 66 bands) and the short-wave infrared (SWIR: 920 - 2505 nm, 174 bands) with a high spectral resolution, having a total of 240 bands. The acquisitions have a swath of 30 km with a revisit time below 29 days, and since they are primarily on-demand, based on requests by registered users, multitemporal and cloud-free images are not always available. However, the satellite has near-global acquisition capabilities, which provides potential coverage even
in remote areas. For this study, we use atmospherically corrected and geocoded surface reflectance (L2D) data provided by ASI. An additional coregistration step is used to reduce shifts between images down to the pixel level.
Eleven pairs of PRISMA acquisitions with low cloud coverage are selected for our analysis. They are chosen to ensure global representativeness, sampling of different land cover states (e.g., rural, urban, or mixed), and consistent timing to reduce seasonal variations. Further information, such as acquisition times and coordinates, is provided in Table 1.
### Preprocessing and CD methods
PRISMA image pairs are **co-registered** with the _GeFolki_ software [3], using the red band of PRISMA images, with similar results confirmed for other band selections. The 'before' (\(b\)) image is used as the target, and the 'after' (\(a\)) is the moving image. The optical flow derived with this method for the overlapping area is then used to correct all the bands of the moving image to better match the target, generally resulting in shifts below 5 pixels for complex terrains. Finally, square patches with size \(512\times 512\) for the area of interest (AOI) are extracted and used to derive binary CD maps (Fig. 1).
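A minimal sketch of the band-wise correction step is given below: it assumes the per-pixel flow components (du, dv) estimated by GeFolki on the red band are available as arrays and simply resamples every band of the moving image accordingly. The actual coregistration in this work is performed with the GeFolki software itself; this snippet only illustrates how a flow field can be applied to a full hyperspectral cube.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_cube(moving_cube, du, dv):
    """Apply a per-pixel displacement field to every band of a (bands, H, W) cube."""
    bands, h, w = moving_cube.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + dv, cols + du])   # sampling positions in the moving image
    warped = np.empty_like(moving_cube, dtype=np.float32)
    for b in range(bands):
        # bilinear resampling of each spectral band onto the target grid
        warped[b] = map_coordinates(moving_cube[b], coords, order=1, mode="nearest")
    return warped
```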
Compressed **change vector analysis** (C2VA) is an unsupervised method [2] that fully exploits multispectral image information. Similar to the standard CVA [11], for each pixel a change magnitude (\(\rho\)) and phase angle (\(\theta\)) are calculated for the before and after images for \(B\) spectral bands, as \(\rho=\sqrt{\sum_{k=1}^{B}(X_{k,a}-X_{k,b})^{2}}\). The phase angle is estimated from an arbitrary reference vector, e.g. \(X_{ref}=(\sqrt{B}/B,...,\sqrt{B}/B)\), and can be used to identify coherent changes. Examples of magnitude and phase angle from C2VA for one scene are reported in Fig. 2. Notably changes in agricultural fields (upper left areas) or the mining area on the right have a similar phase, suggesting their common nature. For this work we focus on the magnitude information, making binary change maps based on a 90\({}^{th}\) percentile threshold calculated for each pair.
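A compact sketch of the magnitude/phase computation and the percentile-based binarization is shown below; `before` and `after` are assumed to be coregistered reflectance cubes of shape (bands, H, W). It illustrates the C2VA step as described above and is not the exact implementation used for the experiments.

```python
import numpy as np

def c2va(before, after):
    """Per-pixel change magnitude rho and phase angle w.r.t. X_ref = (sqrt(B)/B, ...)."""
    diff = after - before                            # shape (B, H, W)
    b = diff.shape[0]
    rho = np.sqrt(np.sum(diff ** 2, axis=0))
    proj = np.sum(diff * (np.sqrt(b) / b), axis=0)   # dot product with the unit reference vector
    theta = np.arccos(np.clip(proj / np.maximum(rho, 1e-12), -1.0, 1.0))
    return rho, theta

def binary_change_map(before, after, percentile=90):
    rho, _ = c2va(before, after)
    return rho > np.percentile(rho, percentile)      # True where a change is flagged
```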
**DeepCVA** (DCVA) [14] is a method for generating a pixel-wise hypervector \(G\) from the difference of a chosen subset of layers \(L\) of a pre-trained deep NN and performing unsupervised CD. The chosen subset is given a priori and is denoted by \(L\subset[1:N]\), where \(N\) is the number of layers. We denote by \(F\) the NN composed of layers \(F^{l}\) with \(l\in[1:N]\). To simplify the notation, we denote by \(F^{l}_{i}\) the feature layer of \(X_{i}\) at layer \(F^{l}\) with \(i\in\{a,b\}\). \(G\) is generated from the difference of extracted deep representations. In particular, we compute the difference for each given feature layer \(l\in L\): \(\delta_{l}=F^{l}_{b}-F^{l}_{a}\) and concatenate each \(\delta_{l}\) to obtain \(G\). To limit the size of \(G\), we refine the number of selected features at each layer of \(L\) by only retaining a certain percentile of features from \(\delta_{l}\). For each layer \(l\), we perform a pixel-wise spatial clustering and keep for each cluster the features with the highest variance within the top percentile. CD is then computed at a pixel-wise level, where a change is reported if \(||G||>\mathcal{T}\), where \(\mathcal{T}\) is a given threshold. For the computation of \(\mathcal{T}\), we use a local adaptive thresholding method (DCVA Ada) or Otsu's method (DCVA Otsu). We use a network pre-trained [14] on visible/infrared bands (RGBIR).
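The sketch below illustrates the DCVA idea under simplifying assumptions: activations from a chosen subset of layers of a pre-trained CNN are upsampled, differenced, and the norm of the resulting hypervector is thresholded with Otsu's method. The torchvision ResNet-18 backbone is only a stand-in for the RGBIR network of [14], and the cluster/variance-based feature selection is omitted.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights
from skimage.filters import threshold_otsu

def deep_features(model, layers, x):
    feats, out = [], x
    for name, module in model.named_children():
        if name in ("avgpool", "fc"):
            break
        out = module(out)
        if name in layers:
            # bring each selected activation back to the input resolution
            feats.append(F.interpolate(out, size=x.shape[-2:],
                                       mode="bilinear", align_corners=False))
    return torch.cat(feats, dim=1)

@torch.no_grad()
def dcva_change_map(before, after, layers=("layer1", "layer2")):
    """before/after: (1, 3, H, W) tensors normalized like ImageNet inputs."""
    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    g = deep_features(model, layers, before) - deep_features(model, layers, after)
    magnitude = torch.linalg.vector_norm(g, dim=1).squeeze(0).cpu().numpy()
    return magnitude > threshold_otsu(magnitude)
```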
## 3 Results and Discussion
An overview of CD results for all the pairs of PRISMA acquisitions is provided in Table 2, comparing different scenes and methods. By construction, 10% of the image is marked
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Location** & **Dates** & **Lat/Lon (deg)** \\ \hline \hline Athens & 2020-07-22; 2022-07-05 & 37.94 / 23.95 \\ Beirut & 2020-08-23; 2022-06-26 & 33.86 / 35.55 \\ Hanging Rock & 2019-12-27; 2021-02-11 & -31.49 / 151.29 \\ Java & 2021-04-21; 2021-07-17 & -7.54 / 110.44 \\ Lagos & 2020-11-13; 2022-01-22 & 6.44 / 3.39 \\ London & 2020-06-24; 2022-07-18 & 51.48 / -0.46 \\ Los Angeles & 2020-07-21; 2022-07-16 & 34.01 / -118.22 \\ Nalasopara & 2019-12-31; 2022-02-21 & 19.47 / 72.84 \\ Newark & 2020-04-15; 2022-04-22 & 40.72 / -74.2 \\ Rome & 2020-08-06; 2022-06-15 & 41.86 / 12.26 \\ Shanghai & 2021-04-09; 2022-04-09 & 31.35 / 121.6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details on the PRISMA acquisitions used.
Figure 1: Processing workflow example: RGB images and AOI (blue) (rows 1-3), CD map (row 4 - white: change, black: no change). Data processed under license (@ASI).
as change for C2VA. DCVA with Otsu thresholding method is prone to overestimating the amount of changes. This is likely due to the fact that the difference histograms produced are not bimodal and therefore binarization is affected by noise or equalization issues (e.g., due to clouds). The estimates of DCVA Ada methods are much more conservative, with little or no change detected for urban scenes, such as Athens or Los Angeles. It should be noted that the network was pre-trained on high-resolution optical imagery, and the coarse spatial resolution of PRISMA may not be fully suitable for the task. The amount of changes identified by the network is not trivially related to the number of layers selected. For example in the Beirut scene, increasing the number of layers induces more detected changes, but fewer changes for the Shanghai pair. Moreover, in the case of Lagos, the presence of haze or smog in one image (not shown) leads to the smallest amount of changes detected overall. To visualize the contrast between the different methods and sensitivity to hyperparameters, we report a comparison of CD masks and corresponding RGB images in Fig. 3 and 4. These images illustrate how DCVA Otsu tends to mark water surfaces as changed, likely due to their different optical characteristics. Detected changes by the adaptive methods are a subset of Otsu's method, and more changes are picked up when more layers are considered. This may be explained by noting that a small number of layers is more suitable for homogeneous scenes. For the Beirut scene, in the most conservative setting (layer 2,5), changes are detected in the city centre, near the August 2020 explosion site. An interesting feature in Fig. 3 is that C2VA can identify ships in the upper right portion of the image, while they are missing when the change is identified in the latent space. For specific applications, such as ship detection, it would be important to tune the algorithms with real data as fine-grained details can be missing in the latent space. Large-scale land-use changes are however identified by both methods. In general, changes identified by C2VA appear noisy in urban scenes but are fairly consistent when their extent is larger, without the sensitivity of the DCVA Otsu for water bodies. In the absence of reliable ground-truth data, it is however hard to judge which method is performing better.
## 4 Conclusions and perspectives
In this work, we present a comparison of statistical and DL methods for performing change detection on novel hyperspectral satellite data (from PRISMA). This is the first study to address hyperspectral-from-space CD with deep learning on a varied portfolio of changes, as operational hyperspectral satellite missions have only recently been launched. For this task, we selected eleven pairs of acquisitions. The areas were selected to represent the heterogeneity commonly found in EO data, including areas with clouds or ship traffic.
We find that larger scale changes in natural and urban scenes are successfully identified by the proposed methods. Sensitivity to terrain conditions and atmospheric effects is also noted. The moderate spatial resolution of PRISMA complicates the application of a pre-trained network using only four bands. In this case, results are very sensitive to the number of layers adopted. Training from scratch with a larger multispectral dataset would likely improve results.
Our work shows the potential of hyperspectral data for CD tasks. The lack of reliable ground truth data complicates the assessment of the different methods, but subjective evaluation indicates that the use of threshold-based methods is not always successful. The availability of datasets with high-quality ground truth labels would be useful for many applications, including the development of DL models and for semantic CD. We aim to release a public dataset for this purpose, as this is now possible thanks to current and new missions such as CHIME [12].
|
2308.09435 | A Methodology for Generative Spelling Correction via Natural Spelling
Errors Emulation across Multiple Domains and Languages | Modern large language models demonstrate impressive capabilities in text
generation and generalization. However, they often struggle with solving text
editing tasks, particularly when it comes to correcting spelling errors and
mistypings. In this paper, we present a methodology for generative spelling
correction (SC), which was tested on English and Russian languages and
potentially can be extended to any language with minor changes. Our research
mainly focuses on exploring natural spelling errors and mistypings in texts and
studying the ways those errors can be emulated in correct sentences to
effectively enrich generative models' pre-train procedure. We investigate the
impact of such emulations and the models' abilities across different text
domains. In this work, we investigate two spelling corruption techniques: 1)
first one mimics human behavior when making a mistake through leveraging
statistics of errors from particular dataset and 2) second adds the most common
spelling errors, keyboard miss clicks, and some heuristics within the texts. We
conducted experiments employing various corruption strategies, models'
architectures and sizes on the pre-training and fine-tuning stages and
evaluated the models using single-domain and multi-domain test sets. As a
practical outcome of our work, we introduce SAGE(Spell checking via
Augmentation and Generative distribution Emulation). It is a library for
automatic generative SC that includes a family of pre-trained generative models
and built-in augmentation algorithms. | Nikita Martynov, Mark Baushenko, Anastasia Kozlova, Katerina Kolomeytseva, Aleksandr Abramov, Alena Fenogenova | 2023-08-18T10:07:28Z | http://arxiv.org/abs/2308.09435v2 | A Methodology for Generative Spelling Correction via Natural Spelling Errors Emulation across Multiple Domains and Languages
###### Abstract
Modern large language models demonstrate impressive capabilities in text generation and generalization. However, they often struggle with solving text editing tasks, particularly when it comes to correcting spelling errors and mistypings. In this paper, we present a methodology for generative spelling correction (SC), which was tested on English and Russian languages and potentially can be extended to any language with minor changes. Our research mainly focuses on exploring natural spelling errors and mistypings in texts and studying the ways those errors can be emulated in correct sentences to effectively enrich generative models' pre-train procedure. We investigate the impact of such emulations and the models' abilities across different text domains. In this work, we investigate two spelling corruption techniques: 1) first one mimics human behavior when making a mistake through leveraging statistics of errors from particular dataset and 2) second adds the most common spelling errors, keyboard miss clicks, and some heuristics within the texts. We conducted experiments employing various corruption strategies, models' architectures and sizes on the pre-training and fine-tuning stages and evaluated the models using single-domain and multi-domain test sets. As a practical outcome of our work, we introduce SAGE 1 (Spell checking via Augmentation and Generative distribution Emulation). It is a library for automatic generative SC that includes a family of pre-trained generative models and built-in augmentation algorithms.
Footnote 1: [https://github.com/ai-forever/sage/](https://github.com/ai-forever/sage/)
## 1 Introduction
Recent advancements in large language models have shown remarkable capabilities in text generation and language understanding that can be seen on various benchmarks such as SuperGLUE (Wang et al., 2019), GEM (Gehrmann et al., 2021), BigBench (bench authors, 2023) etc. However, these models often encounter challenges when it comes to effectively addressing text editing tasks, particularly automatic correction of misspelling and mistyping. The task is well known, and many traditional approaches rely on explicit rules, dictionaries, or statistical models to detect and correct spelling errors. However, the emergence of large language models and generative techniques has introduced new possibilities and improved the effectiveness of automatic spelling correction (SC).
Thus, in this paper, we address the task of automatic generative SC across various domains. Our research primarily studies natural orthographic errors, text misspellings, and their emulation during model pre-training. We explore the impact of these emulations on the model's abilities across different domains and models.
As part of our methodology, we leverage two different spelling corruption techniques. The first technique applies the statistical analysis of common errors, aiming to mimic natural human behavior when making mistakes. The second technique introduces the most frequent spelling errors, keyboard miss clicks, and a set of heuristics within the texts. We conduct experiments for the Russian and English languages with various corruption strategies and model sizes during the pre-training and fine-tuning stages. As our work's practical result, we introduce SAGE (Spellchecking via Augmentation and Generative distribution Emulation) -- a comprehensive library for automatic generative SC that incorporates a range of generative models, trained using our proposed methodology, and offers built-in augmentation techniques. Additionally, SAGE contains the data hub, a valuable resource for the Russian language, consisting of novel spelling datasets.
The remainder is structured as follows. We overview multiple prior works on SC and augmentation strategies for data corruption in Section 2. Section 3 presents our methodology, including task
formulation, methodology overview, the precise approaches of the corruption techniques, and the data we used. Section 4 lists the experiments and the generative models we used and demonstrates the effectiveness of our proposed techniques and the impact of different model configurations. We report the achieved results in Section 5 and analyze the obtained scores. Section 6 concludes with a discussion of the future work directions.
## 2 Related work
Spell checking is a fundamental task in natural language processing (NLP) that aims to correct misspelled words in text automatically. Multiple approaches have been proposed to tackle this task, namely rule-based, statistical, and generative SC methods, which will be examined in this section.
Rule-based spell checking is one of the most common approaches that relies on predefined rules and dictionaries for detecting and rectifying misspelled words. These resources can incorporate algorithmic error models such as Longest Common Subsequence (Taghva and Stofsky, 2001), Levenshtein Distance (Van Delden et al., 2004), or Phonetic Algorithms (Kondrak and Sherif, 2006).
Statistical spell checking approaches employ machine learning algorithms to learn from extensive text corpora. These algorithms can identify common spelling errors and their corresponding corrections. Some examples of statistical approaches include n-gram models (Ahmed et al., 2009), Hidden Markov Models (Stuker et al., 2011), part-of-speech tagging (Vilares et al., 2016) and Noisy Channel Model (Kernighan et al., 1990).
Generative SC is a novel spell checking approach that has shown promising results in recent years. Such systems take into account the context, due to the architecture nature of language models such as seq2seq Long Short-Term Memory (LSTM) (Evershed and Fitch, 2014), seq2seq Bidirectional LSTM (Zhou et al., 2017), and state-of-the-art transformer models like BERT (Sun and Jiang, 2019), BSpell (Rahman et al., 2022), etc.
The paper (Guo et al., 2019) presents multilingual translation models for the paraphrase generation task. The M2M100 models (Fan et al., 2020) (Many-to-Many multilingual models) effectively translate source-language text into a target language, which for spell checking can coincide with the source language. Given the M2M100 models' comprehensive understanding of multiple languages, their utilization in spell checking tasks is promising. In our research, among other investigations, we explore the suitability of the M2M approach for spell checking.
_Datasets_. English spell checking research has received significant attention due to English widespread use, which results in the creation of spell checking datasets. Evaluation datasets such as BEA-2019 shared task (Bryant et al., 2019), comprising corpora like FCE (Yannakoudakis et al., 2011), W&I+LOCNESS, Lang-8 (Tajiri et al., 2012), and NUCLE (Dahlmeier et al., 2013), provide valuable resources for assessing spell checking and error correction tasks. NeuSpell (Jayanthi et al., 2020) introduced the BEA60K natural test set and the well-established JFLEG dataset (Napoles et al., 2017), containing only spelling mistakes. Other clean corpora, including the Leipzig Corpora Collection (Biemann et al., 2007) and the Gutenberg corpus (Gerlach and Font-Clos, 2020), offer diverse sources such as news, web content, and books for further exploration in spell checking research.
Among the standard open source datasets for the Russian language is RUSpellRU 2, which emerged after the competition on automatic SC for Russian social media texts (Sorokin et al., 2016). Other open sources include the GitHub Typo Corpus (Hagiwara and Mita, 2019), which contains the Russian section, and the recent work (Martynov et al., 2023), which introduces a multi-domain dataset.
Footnote 2: [https://www.dialog-21.ru/evaluation/2016/spelling_correction/](https://www.dialog-21.ru/evaluation/2016/spelling_correction/)
_Text corruption methods_. For training generative SC models, building a parallel corpus is essential. There are several ways to emulate spelling errors or augment existing datasets. Examples include the GEM benchmark with its associated augmentation library NL-Augmenter (Dhole et al., 2023) and the work (Kuznetsov and Urdiales, 2021), which proposes a method for creating artificial typos. For the Russian language, the RuTransform framework (Taktasheva et al., 2022) introduces noise into data through spelling corruption. Augmentation methods are also proposed by (Martynov et al., 2023).
## 3 Methodology
In this work, we want our models to meet the demands of their end users. The areas in which SC tools are potentially applied abound with texts of varying orthographies and styles, which imposes additional requirements on text editing systems. We therefore decided to complement and, in some sense, complicate the straightforward paradigm of treating standard language as the only correct spelling option. In this section, we define the notion of the SC task and describe our methodology in depth.
### Task Formalization
Before defining the SC task, we must establish the _correct spelling_ notion we employ in this work. Instead of rigorously normalizing all supposedly erroneous lexemes to the standard language, we propose distinguishing unintentional spelling violations from intentional ones. Plain language, colloquialisms, dialectisms, and abbreviations can express emotions and endow a text with distinct stylistic features. Since the act of intentional violation of spelling can hardly be expressed in terms of strict rules, it seems nearly impossible to distinguish intentional errors automatically. Instead, we use manual annotation as described in Martynov et al. (2023). Following Martynov et al. (2023), we consider a sentence annotated and emended by native experts as correct. Given a correct sentence, any sentence obtained from the correct one by (probably) multiple insertions, deletions, substitutions, or transpositions of characters is considered erroneous. This leads to the following definition of SC task that we use in this paper:
Let \(X=[x_{1},...,x_{N}]=X_{corr.}\cup X_{incorr.}\), where \(x_{1},...,x_{N}\) is an ordered sequence of lexemes, \(X_{corr.}=\{x_{i}\}_{i=1}^{k}\) is a set of correct lexemes, \(X_{incorr.}=\{x_{j}\}_{j=1}^{p}\) is a set of incorrect lexemes, \(p+k=N,p\geq 0,k>0\), be the sentence that may contain spelling errors. The system \(M\) then should produce corresponding sequence (ordered) \(Y=[y_{1},...,y_{M}]=Y_{corr.}\cup Y_{incorr.},Y_{incorr.}=\emptyset\) so that
1. Correct lexemes are not modified: \(\exists f:\{x_{i}\}_{i=1}^{k}\to Y\) such that \(f\) is injective, order-preserving, and \(f(x_{i})=x_{i}\);
2. Original style of a sentence \(X\) is preserved;
3. All the information is fully transferred from \(X\) to \(Y\) and no new information appears in \(Y\);
In other words, the system \(M\) corrects only unintentional errors and carries the stylistic and factual content of \(X\) over to \(Y\) unchanged.
### Overview
In this paper, we propose a methodology for generative SC, exploring the natural spelling errors across multiple domains and assessing their influence on spell-checking quality during pre-training and fine-tuning stages. The method can be summarized as follows:
**Corruption step**: the paper explores text corruption using two augmentation methods. The first, _statistic-based_, approach emulates the natural distribution of orthographic errors. The second, _heuristic-based_, approach adds frequent error types and related noise to the data in a chosen proportion, without relying on the error distribution of a parallel dataset from a particular domain.
**Generation step**: we pre-train generative models of different sizes on an extensive synthetic dataset covering diverse domains. The error distribution of the synthetic pre-training data is created by emulating the natural distribution of errors via the statistic-based approach.
**Fine-tune step**: during the fine-tuning, we investigate the influence of corruption and domains on the final results. The models are evaluated on fixed single-domain and multiple-domain test sets. The experiments involve training the pre-trained models on various training data from single and multiple domains, as well as using the same data corrupted with the two aforementioned augmentation techniques.
The methodology is explored and tested in the Russian and English languages but can be potentially transferred to any language.
### Augmentations Strategies
We operate two strategies to introduce errors in sentences. This section provides a brief overview of those strategies.
#### 3.3.1 Heuristic-based spelling corruption
The first strategy represents spelling corruption through exploiting various heuristics, common error statistics, and understanding of implicit mechanics of a language. Nlpaug Ma (2019) and NeuSpell Jayanthi et al. (2020) libraries for English and Augmentex Martynov et al. (2023) for Russian are notable examples of such strategy. In this work, we choose Augmentex Martynov et al. (2023)
2023) for experiments with Russian language models. This library has proven effectiveness for the Russian language (Martynov et al., 2023) and provides a flexible interface to its interior methods. Each method is responsible for modeling a specific type of error, including inserting random characters, replacing correctly spelled words with their incorrect counterparts, inserting nearby keyboard characters, and replacing a character with another based on the probability of its erroneous use. Augmentex also allows researchers to control the distribution of error noise at the word and sentence levels. In our experiments, we investigate Augmentex in depth by augmenting fine-tuning datasets and studying its impact on the models' performance. Details of the configurations used at the augmentation stage are given in A.3.
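For illustration only, the toy function below shows the kind of heuristic corruption such libraries perform (random deletions, nearby-key substitutions, and duplications). It is not the Augmentex API, and the keyboard map is a small hypothetical fragment.

```python
import random

KEY_NEIGHBOURS = {"a": "qwsz", "s": "awedxz", "d": "serfcx", "o": "iklp"}  # toy layout fragment

def corrupt_word(word, p_edit=0.15):
    out = []
    for ch in word:
        r = random.random()
        if r < p_edit / 3:                                        # deletion
            continue
        if r < 2 * p_edit / 3 and ch.lower() in KEY_NEIGHBOURS:   # nearby-key substitution
            out.append(random.choice(KEY_NEIGHBOURS[ch.lower()]))
        elif r < p_edit:                                          # duplication (insertion)
            out.extend([ch, ch])
        else:
            out.append(ch)
    return "".join(out) or word

def corrupt_sentence(sentence, word_rate=0.3):
    # corrupt roughly `word_rate` of the words, leaving the rest untouched
    return " ".join(corrupt_word(w) if random.random() < word_rate else w
                    for w in sentence.split())
```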
#### 3.3.2 Statistic-based spelling corruption
We choose statistic-based spelling corruption (SBSC) from (Martynov et al., 2023) as an attempt to reproduce errors from a particular piece of text. The method mimics human behavior when committing an error by scanning distributions of errors in a given text and then reapplying them to correct sentences. The algorithm requires a parallel corpus of sentence pairs (corrupted_sentence, correct_sentence): it builds a Levenshtein matrix between prefixes of sentences in each pair, then it traverses this matrix back along the main diagonal starting from the bottom right entry. At each step, the algorithm detects the position of an error in a sentence and its corresponding type based on surrounding entries. A detailed description of statistic-based spelling corruption is provided in (Martynov et al., 2023). Our work employs statistic-based spelling corruption to prepare pre-training datasets for both English and Russian generative models. We believe our research reveals SBSC's ability to be transferred to languages other than Russian, for which it was initially proposed in (Martynov et al., 2023). We also investigate the capacity of this noising strategy by experimenting with augmentation through spelling corruption while fine-tuning.
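The following is a simplified sketch of the SBSC idea: error-type and errors-per-sentence distributions are collected from a parallel corpus and then resampled onto clean text. The reference implementation traverses a Levenshtein matrix as described above; here difflib stands in for the alignment, and error positions and characters are drawn uniformly, so this should be read as an approximation of the method rather than its actual code.

```python
import difflib
import random
from collections import Counter

def scan_statistics(pairs):
    """pairs: iterable of (corrupted_sentence, correct_sentence)."""
    op_types, errors_per_sentence = Counter(), Counter()
    for noisy, clean in pairs:
        ops = [op for op in difflib.SequenceMatcher(None, clean, noisy).get_opcodes()
               if op[0] != "equal"]
        errors_per_sentence[len(ops)] += 1
        op_types.update(op[0] for op in ops)          # 'replace', 'insert', 'delete'
    return op_types, errors_per_sentence

def corrupt(sentence, op_types, errors_per_sentence, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # sample how many errors to inject, then apply operations one by one
    n_errors = random.choices(list(errors_per_sentence),
                              weights=list(errors_per_sentence.values()))[0]
    chars = list(sentence)
    for _ in range(n_errors):
        if not chars:
            break
        op = random.choices(list(op_types), weights=list(op_types.values()))[0]
        pos = random.randrange(len(chars))
        if op == "delete":
            del chars[pos]
        elif op == "insert":
            chars.insert(pos, random.choice(alphabet))
        else:                                          # 'replace'
            chars[pos] = random.choice(alphabet)
    return "".join(chars)
```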
### Datasets
For our multi-domain spell checking experiments, we developed three distinct data suites.
**Golden Test Sets**: Fixed datasets, including both single-domain and multiple-domain texts, used for evaluation purposes.
**Pre-trained Data**: Synthetic data generated to emulate natural and random noise misspellings, employed during the pre-training stage to assess their impact on model performance.
**Training Data for fine-tuning**: Collected using the same method as the test sets, also corrupted with the proposed augmentation strategies to introduce diverse errors. Used during the fine-tuning stage to explore the impact of the different noise on the model performance across domains.
Below we describe the sets in detail.
#### 3.4.1 Golden Test Sets
The datasets for the golden test set are chosen in accordance with the following criteria. First, _domain variation_: half of the datasets are chosen from different domains to ensure diversity, while the remaining half are from a single domain. This is done separately for the English and Russian languages. Another criterion is _spelling and orthographic mistakes_: the datasets comprise only misspellings and mistypings, omitting grammatical or more complex errors made by non-native speakers. This focus on spelling errors aligns with the formalization of the task described in section 3.1.
For the Russian language, we choose four different sets:
**RUSpellRU** - the single-domain open source dataset for social media texts presented in the Shared Task (Sorokin et al., 2016).
**MultidomainGold** - the dataset first presented in the paper (Martynov et al., 2023). It's a multi-domain corpus comprising the domains: internet domain presented by the Aranea web-corpus, literature, news, social media, and strategic documents. We followed the methodological criteria of the paper and reproduced the two-stage annotation project via a crowd-sourcing platform Toloka 3: at the first stage, annotators are asked to correct the mistakes, on the second - to validate the results from the previous step. The statistics and details of the instructions and annotation schema are presented in the Appendix A.1 and A.2. Following the annotation methodology, we extend the author's dataset with two more domains: reviews (the part of the Omnia set (Pisarevskaya and Shavrina, 2022)) and subtitles (the part of the Russian part of the OpenSubtitles set 4).
Footnote 3: [https://toloka.ai/tolokers](https://toloka.ai/tolokers)
Footnote 4: [https://opus.nlpl.eu/OpenSubtitles-v2016.php](https://opus.nlpl.eu/OpenSubtitles-v2016.php)
**GitHubTypoCorpusRu** - we take the Russian part of the corpora introduced in work (Hagiwara
and Mita, 2019). Additionally, we validate the parallel data of this corpus with the same Toloka project, using only the second (validation) step of the methodology.
**MedSpellChecker**5 is a single-domain set with the specific lexicon of the medical domain, which the multi-domain set above does not cover. The set contains medical anamnesis texts. The data was verified via a two-stage annotation pipeline as well.
Footnote 5: [https://github.com/OmitryPogrebnoy/MedSpellChecker/tree/main](https://github.com/OmitryPogrebnoy/MedSpellChecker/tree/main)
For the English language, we use two sets. **BEA60K** is a multi-domain corpus of spelling mistakes in English.
**JHU FLuency-Extended GUG Corpus (JFLEG) dataset** is a single-domain set, of which we use the spelling part. The dataset contains 2K spelling mistakes (6.1% of all tokens) in 1601 sentences.
The test dataset statistics are presented in Table 3 of the Appendix, and the annotation details in Appendix A.2.
#### 3.4.2 Pre-training Data
To prepare pre-training datasets, we take correct samples and then corrupt them employing the augmentation strategies described in 3.3. As correct samples for the experiments in Russian, we use twelve gigabytes (12GB) of raw Russian Wikipedia dumps and an open source dataset of transcribed videos in Russian 6 of three and a half million (3.5M) texts. We remove all sentences that contain characters outside the Russian and English alphabets, digits, and punctuation, as well as sentences shorter than forty characters (a minimal sketch of this filter is given at the end of this subsection). We balance both datasets to roughly 3.3 million sentences, resulting in a pre-training corpus of 6,611,990 texts. Then statistic-based spelling corruption is applied. We scan statistics from the train split of RUSpellRU Sorokin et al. (2016), multiply the distribution of the number of errors per sentence by ten to ensure a much denser noise in the pre-training corpus than in the fine-tuning datasets, and apply it to the pre-training corpus to obtain corrupted sentences. As a result, the pre-training dataset is a collection of 6,611,990 text pairs, each consisting of a corrupted sentence and the corresponding correct sentence.
Footnote 6: [https://huggingface.co/datasets/UrukHan/t5-russian-spell_I](https://huggingface.co/datasets/UrukHan/t5-russian-spell_I)
For pre-training in the English language, we combine the clean Leipzig Corpora Collection 7 (News domain) and English Wikipedia dumps, clean them in the same way as for Russian, and create a parallel corpus using the statistic-based augmentation technique based on a 5k subset of BEA60K. This results in six gigabytes (6 GB) of data for pre-training.
Footnote 7: [https://corpora.uni-leipzig.de](https://corpora.uni-leipzig.de)
Footnote 8: [https://github.com/OmitryPogrebnoy/MedSpellChecker/tree/main](https://github.com/OmitryPogrebnoy/MedSpellChecker/tree/main)
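A minimal sketch of the sentence filter mentioned above (keep sentences of at least forty characters whose characters are limited to the Russian and English alphabets, digits, punctuation, and whitespace) could look as follows; the exact cleaning scripts used for the corpus are not reproduced here.

```python
import re
import string

# character class: Latin and Cyrillic letters, digits, whitespace, punctuation
ALLOWED = re.compile(r"[A-Za-zА-Яа-яЁё0-9\s" + re.escape(string.punctuation) + r"]+")

def keep_sentence(sentence: str) -> bool:
    return len(sentence) >= 40 and ALLOWED.fullmatch(sentence) is not None
```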
#### 3.4.3 Training Data for fine-tuning
As for the datasets for fine-tuning, we use the train splits of RUSpellRU Sorokin et al. (2016) and MultidomainGold Martynov et al. (2023) and a combination of both; details are given in Table 4. We also employ the spelling corruption methods from 3.3 for augmentation purposes in two separate ways. First, we introduce misspellings in the erroneous parts of the train splits of the fine-tuning datasets, inducing more errors without expanding the dataset itself. In the second strategy, we expand the train splits of the fine-tuning datasets: we take correct sentences from a particular dataset, corrupt their spelling, and append the pairs of corrupted and corresponding correct sentences to the same dataset. In Tables 5 and 8, the first strategy is marked as _Add_ and the second as _Concat_.
We do not prepare fine-tuned datasets for the English language since we do not conduct fine-tuning in our experiments.
## 4 Experiments
We conducted a comprehensive series of experiments involving diverse spelling corruption strategies over the encoder-decoder generative models of different sizes throughout the pre-training and fine-tuning phases as well as zero-shot evaluation of the pre-trained models. The models' statistics are presented in Table 7. We compared performance based on single-domain and multi-domain test sets. Furthermore, we conducted a comparative evaluation of the OpenAI models utilizing different prompts and standard open source models.
### Models
The generative models of different sizes used as pre-trained models in the experiments are the following for the Russian language:
**M2M100-1.2B**8 (Fan et al., 2020) M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks proposed by the Meta team. The model contains 1.2B parameters.
Footnote 8: [https://huggingface.co/facebook/m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B)
**M2M100-418M**9 is a 418M-parameter model of the M2M100 model family.
Footnote 9: [https://huggingface.co/facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M)
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**RUSpellRU**} & \multicolumn{3}{c|}{**MultidomainGold**} & \multicolumn{3}{c|}{**MedSpellChecker**} & \multicolumn{3}{c}{**GitHubTypoCorpusRu**} \\ & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 \\ \hline
**M2M100-1.2B** & & & & & & & & & & & & & \\ Pre-train (PT.) & 59.4 & 43.3 & 50.1 & 56.4 & 44.8 & 49.9 & 63.7 & 57.8 & 60.6 & 45.7 & 41.4 & 43.5 \\ \hline RUSpell IRU (+PT.) & 82.9 & **72.5** & 77.3 & 53.3 & 57.8 & 55.5 & 55.9 & 57.8 & 56.9 & 39.3 & 41.5 & 40.4 \\ RUSpell IRU & 68.8 & 42.6 & 52.6 & 17.9 & 25.2 & 21.0 & 16.3 & 17.7 & 17.0 & 15.1 & 14.9 & 15.0 \\ MultidomainGold (+PT.) & 84.9 & 65.0 & 73.7 & 62.5 & 60.9 & 61.7 & 76.3 & **73.9** & **75.1** & **47.9** & **43.3** & **45.5** \\ MultidomainGold & 75.4 & 35.7 & 48.5 & 46.5 & 39.9 & 43.0 & 69.1 & 31.0 & 42.8 & 27.4 & 18.6 & 22.1 \\ RUSpell IRU+MDG (+PT.) & **88.8** & 71.5 & **79.2** & **63.8** & **61.1** & **62.4** & **78.8** & 71.4 & 74.9 & 47.1 & 42.9 & 44.9 \\ RUSpell IRU+MDG & 81.2 & 47.4 & 59.9 & 45.8 & 37.0 & 40.9 & 71.8 & 39.1 & 50.7 & 26.1 & 17.4 & 20.9 \\ \hline
**M2M100-418M** & & & & & & & & & & & & \\ Pre-train (PT.) & 57.7 & 61.2 & 59.4 & 32.8 & 56.3 & 41.5 & 23.2 & 64.5 & 34.1 & 27.5 & **42.6** & 33.4 \\ \hline RUSpell IRU (+PT.) & 81.8 & 63.4 & 71.4 & 45.3 & 55.9 & 50.0 & 40.8 & 52.2 & 45.8 & 29.5 & 36.6 & 32.7 \\ RUSpell IRU & 66.5 & 38.5 & 48.8 & 20.9 & 26.0 & 23.2 & 22.3 & 14.8 & 17.8 & 11.4 & 13.2 & 12.2 \\ MultidomainGold (+PT.) & 81.3 & 55.4 & 65.9 & 57.9 & 56.5 & 57.2 & **73.5** & **66.0** & **69.5** & 40.3 & 39.2 & 39.8 \\ MultidomainGold & 63.5 & 31.6 & 42.2 & 39.5 & 34.9 & 37.0 & 55.2 & 32.5 & 40.9 & 23.1 & 15.5 & 18.5 \\ RUSpell IRU+MDG (+PT.) & **87.6** & **64.4** & **74.2** & **60.3** & **56.6** & **58.4** & 73.1 & 62.4 & 67.3 & **42.8** & 37.8 & **40.2** \\ RUSpell IRU+MDG & 74.0 & 45.2 & 56.1 & 39.8 & 34.4 & 36.9 & 59.5 & 38.4 & 46.7 & 24.7 & 18.0 & 20.8 \\ \hline
**FredT5-large** & & & & & & & & & & & & \\ Pre-train (PT.) & 58.5 & 42.4 & 49.2 & 42.5 & 42.0 & 42.2 & 37.2 & 51.7 & 43.3 & 52.7 & 41.7 & 46.6 \\ \hline RUSpell IRU (+PT.) & 55.1 & 73.2 & 62.9 & 26.7 & 55.1 & 36.0 & 12.9 & 49.6 & 20.4 & 26.2 & 40.5 & 31.8 \\ RUSpell IRU & 40.7 & 50.4 & 45.0 & 20.5 & 42.4 & 27.6 & 6.9 & 26.0 & 11.0 & 15.2 & 23.8 & 18.6 \\ MultidomainGold (+PT.) & 67.7 & 60.2 & 63.8 & **61.7** & 60.5 & **61.1** & 39.5 & **60.4** & **47.7** & **69.3** & 44.6 & **54.3** \\ MultidomainGold & 49.6 & 39.9 & 44.2 & 48.1 & 43.4 & 45.6 & **43.2** & 41.2 & 42.2 & 50.8 & 25.7 & 34.1 \\ RUSpell IRU+MDG (+PT.) & **74.5** & **73.4** & **73.9** & 58.3 & **63.1** & 60.6 & 37.5 & 59.3 & 45.9 & 61.2 & **45.4** & 52.1 \\ RUSpell IRU+MDG & 56.3 & 56.2 & 56.3 & 48.2 & 48.5 & 48.3 & 42.5 & 42.7 & 42.6 & 49.4 & 26.9 & 34.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The models’ performance across experimental configurations for the Russian language. For each model, we report the pre-trained model evaluated zero-shot, the raw model fine-tuned on the specified train set, and the pre-trained model (\(+PT.\)) fine-tuned on the specified train set. Metrics are reported in the **Prec**ision / **Rec**all / **F1**-measure format from (Sorokin et al., 2016).
**Fred-T5**10 (Full-scale Russian Enhanced Denoisers T5) is a Russian 820M-parameter generative model. The model is trained on a mixture of 7 denoisers, as in UL2, on an extensive Russian language corpus (300GB). The model is inspired by the ideas of (Tay et al., 2022) and is one of the top generative models 11 according to the RussianSuperGLUE benchmark (Shavrina et al., 2020).
Footnote 10: [https://huggingface.co/ai-forever/FRED-T5-large](https://huggingface.co/ai-forever/FRED-T5-large)
In the case of the English language, we decided to use only one pre-trained model due to the considerable environmental impact caused by the training process (see section 6 _Energy Efficiency and Usage_ for details).
**T5 large**12 is an English encoder-decoder model with 770M parameters introduced by Google's AI research team (Raffel et al., 2020).
Footnote 11: [https://russiansuperglue.com/leaderboard/2](https://russiansuperglue.com/leaderboard/2)
Footnote 12: [https://huggingface.co/t5-large](https://huggingface.co/t5-large)
### Russian experiments
For each of the three models \(M2M100-418M\), \(M2M100-1.2B\), \(FredT5-large\), the performance on the SC task was compared with and without pre-training, and using different training data for fine-tuning.
_Pre-training._ We use the same data and pre-training scheme for each model. We train our models in a sequence-to-sequence manner, with the corrupted sentence as input and the correct sentence as the target, using a standard cross-entropy loss.
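As a rough illustration, the sketch below shows this sequence-to-sequence objective with the HuggingFace `transformers` library; the checkpoint name and the example sentence pair are placeholders rather than the actual training setup used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint; the paper pre-trains FRED-T5-large and M2M100 models instead.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

corrupted = "Hte weathr is nise today"   # artificially noised input sentence
correct = "The weather is nice today"    # clean target sentence

inputs = tokenizer(corrupted, return_tensors="pt")
labels = tokenizer(correct, return_tensors="pt").input_ids

# Passing `labels` makes the model compute the standard cross-entropy loss
# over the target tokens internally.
loss = model(**inputs, labels=labels).loss
loss.backward()
```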
We pre-train the \(FredT5-large\) model with a total _batch size_ of 64, the _AdamW optimizer_ (Loshchilov and Hutter, 2017) with an initial _learning rate_ of 3e-04 and _linear decay_ with no warm-up steps, a _weight decay_ of 0.001 applied to all parameters except those in LayerNorm (Ba et al., 2016) and biases, and two gradient accumulation steps, for 5 _epochs_. The pre-training procedure took 180 hours on eight Nvidia A100 GPUs.
Both \(M2M100-418M\) and \(M2M100-1.2B\) were pre-trained with a total _batch size_ of 64, the _AdamW optimizer_ (Loshchilov and Hutter, 2017) with an initial _learning rate_ of 5e-05, a _weight decay_ of 0.001 applied to all parameters except those in LayerNorm (Ba et al., 2016) and biases, and _linear decay_ of the learning rate without warm-up steps. We used 8 and 2 _gradient accumulation steps_ for \(M2M100-418M\) and \(M2M100-1.2B\), respectively. The \(M2M100-418M\) pre-training procedure took five _epochs_ and 332 hours on two Nvidia A100 GPUs, and the corresponding procedure for \(M2M100-1.2B\) lasted seven _epochs_ and 504 hours on eight Nvidia A100 GPUs.
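The weight-decay exclusion for LayerNorm parameters and biases, together with gradient accumulation, could be wired up roughly as in the sketch below; matching parameters by name substrings and the helper names are assumptions for illustration, not the authors' code.

```python
import torch

def build_optimizer(model, lr=5e-5, weight_decay=0.001):
    """AdamW with weight decay on everything except LayerNorm weights and biases."""
    no_decay_markers = ("bias", "LayerNorm", "layer_norm", "layernorm")
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        (no_decay if any(m in name for m in no_decay_markers) else decay).append(param)
    groups = [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]
    return torch.optim.AdamW(groups, lr=lr)

def accumulate_and_step(model, optimizer, micro_batches, accumulation_steps=8):
    """Accumulate gradients over several micro-batches before one optimizer update."""
    optimizer.zero_grad()
    for i, batch in enumerate(micro_batches, start=1):
        loss = model(**batch).loss / accumulation_steps  # scale so the sum matches one large batch
        loss.backward()
        if i % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```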
_Fine-tuning._ We fine-tune pre-trained and non-pre-trained models using one of three sets: \(RUSpellRU\), \(MultidomainGold(MDG)\) and \(RUSpellRU+MDG\). We also use the augmentation strategies for the training data presented in section 3.3 and obtain additional training data to fine-tune the pre-trained models (see section 3.4 Training Data for fine-tuning for details).
We fine-tune the models and take the best-performing checkpoint according to the metrics on the corresponding development set. The models' metrics on the development set are presented in Appendix A.4. We also used the development set to select the optimal hyperparameter values. We use the AdamW optimizer (Loshchilov and Hutter, 2017) with \(\beta_{1}=0.9\), \(\beta_{2}=0.99\) and \(\epsilon=1\mathrm{e}{-8}\) and a linear learning rate scheduler to fine-tune the models. All hyperparameters for fine-tuning are listed in Appendix A.7.
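A minimal sketch of this fine-tuning setup is given below, assuming a generic seq2seq model and data loader; the epoch count and the dev-set evaluation hook are placeholders rather than the paper's actual values.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def finetune(model, train_loader, evaluate_on_dev, num_epochs, lr=1e-4):
    """Fine-tune a seq2seq model and keep the checkpoint with the best dev-set F1."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr,
                                  betas=(0.9, 0.99), eps=1e-8)
    total_steps = num_epochs * len(train_loader)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=0, num_training_steps=total_steps)

    best_f1, best_state = float("-inf"), None
    for _ in range(num_epochs):
        for batch in train_loader:        # each batch holds input_ids, attention_mask, labels
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
        f1 = evaluate_on_dev(model)       # e.g. F1 from the Sorokin et al. (2016) script
        if f1 > best_f1:                  # keep the best-performing checkpoint
            best_f1 = f1
            best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
    return best_state, best_f1
```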
_Model comparison._ We compare the performance of the fine-tuned models with the pre-trained models in a zero-shot setting, Yandex.Speller 13, JamSpell 14, Hunspell 15, and OpenAI 16 models via API (namely, _gpt-3.5-turbo-0301, gpt4-0314, text-davinci-003_) with different prompts (see Appendix A.6 for the prompt details), using single-domain and multi-domain test sets (see section 3.4 Golden Test Sets for the details).
Footnote 13: [https://yandex.ru/dev/speller/](https://yandex.ru/dev/speller/)
Footnote 14: [https://github.com/bakwc/JamSpell](https://github.com/bakwc/JamSpell)
Footnote 15: [https://github.com/hunspell/hunspell](https://github.com/hunspell/hunspell)
Footnote 16: [https://chat.openai.com/](https://chat.openai.com/)
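For the zero-shot comparison with the OpenAI chat models, requests could be issued roughly as in the sketch below; the prompt text here is purely illustrative and is not one of the Full/Short/Cut prompts from Appendix A.6.

```python
import os
import requests

def correct_spelling(text: str, model: str = "gpt-3.5-turbo-0301") -> str:
    """Ask an OpenAI chat model to return `text` with spelling errors fixed."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": model,
            "messages": [{
                "role": "user",
                # Illustrative prompt only; the actual prompts are in Appendix A.6.
                "content": "Correct the spelling errors in the following text "
                           "and return only the corrected text:\n" + text,
            }],
            "temperature": 0,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```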
### English experiments
We pre-train the _T5 large_ model as described in 3.4.2 with the following hyperparameters: _batch size_ 64, _learning rate_ 3e-04 with linear decay and no warm-up steps, _weight decay_ 0.001 applied as in the Russian-language experiments, 2 _gradient accumulation steps_, and 5 _epochs_. Pre-training is done in mixed precision with the bfloat16 data type 17. The procedure took 360 hours on eight Nvidia A100 GPUs.
Footnote 17: [https://pytorch.org/docs/stable/generated/torch.Tensor.bfloat16.html](https://pytorch.org/docs/stable/generated/torch.Tensor.bfloat16.html)
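A minimal sketch of a bfloat16 mixed-precision training step in PyTorch is shown below; the autocast-based setup is an assumption about how mixed precision could be applied and is not taken from the authors' code.

```python
import torch

def bf16_training_step(model, batch, optimizer):
    """One training step with the forward pass run under bfloat16 autocast."""
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(**batch).loss       # forward pass in bfloat16
    loss.backward()                      # no gradient scaler is needed for bfloat16
    optimizer.step()
    return loss.item()
```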
We compare the performance of several models on two datasets: BEA60k and JFLEG. The models are as follows: eight NeuSpell (Jayanthi et al., 2020) models: BERT, CNN-LSTM, SC-LSTM, Nested-LSTM, SC-LSTM + BERT at input/output, and SC-LSTM + ELMO at input/output. Additionally, we evaluate OpenAI models via API (namely, _gpt-3.5-turbo-0301_, _gpt4-0314_, _text-davinci-003_) with different prompts: Full, Short, and Cut (see Appendix 10 for the details). Finally, we compare the results obtained with the Full prompt against the NeuSpell models (Jayanthi et al., 2020) and the T5-large model.
## 5 Evaluation
### Metrics
For the evaluation, we use the script from the Dialogue Shared Task Sorokin et al. (2016).
The script calculates the _F1-measure_ as the harmonic mean of _Precision_ and _Recall_ and reports all three metrics.
We also evaluated the models for the English language with _accuracy_ (the fraction of correct words among all words) and _correction rate_ (the fraction of misspelled tokens that are corrected), as proposed by Jayanthi et al. (2020).
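For reference, the reported quantities reduce to simple ratios; the sketch below only illustrates these definitions and is not the evaluation script itself.

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall, and their harmonic mean (F1)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def accuracy(correct_words: int, total_words: int) -> float:
    """Fraction of words that are correct after the model's edits."""
    return correct_words / total_words if total_words else 0.0

def correction_rate(corrected_misspellings: int, total_misspellings: int) -> float:
    """Fraction of originally misspelled tokens that the model corrected."""
    return corrected_misspellings / total_misspellings if total_misspellings else 0.0
```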
### Results
Table 1 presents the results of the experiments conducted for the Russian language. The findings indicate a clear dominance of pre-trained (\(+PT.\)) models over bare fine-tuning. Moreover, larger models generally perform better, though this trend is only observed within the M2M100 family. The Fred-T5 model, despite its larger size compared to the M2M100-418M model, demonstrates poorer quality on the \(RUSpellRU\) and \(MedSpellChecker\) datasets. This difference in performance may be attributed to the multilingual architecture of the M2M100 model. In our experimental setup, the errors used for pre-training were emulated from the \(RUSpellRU\) dataset, which may cause the models' scores on this specific domain to be substantially higher than those obtained on other datasets.
Including corruption strategies (Table 5) during the fine-tuning stage improves scores, and this trend persists consistently across different domains. In the case of the heuristic-based approach, the _Add_ strategy accounts for most of the performance improvement. In contrast, for the statistic-based approach both strategies contribute equally.
Table 2 demonstrates that, for the Russian language, non-generative models perform comparably to the generative OpenAI models while being lightweight and more efficient. However, our best M2M100 model configuration significantly outperforms these solutions.
According to Table 9, the pre-trained T5 model shows results comparable to the OpenAI models. We emulated the error distribution based on the BEA60K set during pre-training; nevertheless, the final scores on the JFLEG set are slightly better than on BEA60K.
Tables 10 and 11 in Appendix A.4 demonstrate a notable gap in performance between the OpenAI models for English and for Russian. For English, the results indicate higher performance when punctuation is not considered. Furthermore, the three models demonstrate comparable performance, and employing more specific prompts yields better results. For Russian, however, the _text-davinci-003_ model with punctuation performs better. While analyzing the results, we observed that the generated outputs are sensitive to the prompts: they contain cliché phrases, forcing additional filtering to obtain accurate results. The observed discrepancy can be attributed to the OpenAI models being primarily pre-trained on English-language data.
## 6 Conclusion
In this paper, we have presented a novel methodology for generative SC. Our approach, which involves emulating natural spelling errors during the pre-training of large generative models, has shown state-of-the-art results on text editing tasks. We use two augmentation techniques for text corruption to improve the results. Conducting experiments in two languages, we have demonstrated the effectiveness of these techniques and the impact of different corruption strategies across different domains. As the practical impact of our research, we propose the library SAGE 18 (which includes a data hub resource for the Russian language) for automatic SC with the proposed methods and the family of generative models. We believe our work contributes significantly to the SC field and opens routes for further exploration.
Footnote 18: [https://github.com/ai-forever/sage/](https://github.com/ai-forever/sage/)
### Limitations
The proposed generative methodology of spell checking and the created models have certain limitations that should be considered:
Decoding strategies. The choice of decoding strategy affects the quality of generated texts (Ippolito et al., 2019). However, our current methodology does not cover the entire spectrum of decoding strategies, limiting the extent of our evaluation. We leave this aspect for future work.
Parameters. During the pre-training and fine-tuning stages, the choice of each model's parameters is limited due to the significant computational costs associated with training and processing. Consequently, results could potentially be improved by exploring and optimizing new parameter configurations.
Text Corruptions. The heuristic approach only covers some of the possible augmentation methods. To address this, we plan to expand the range of substitution methods and their hyperparameters in future research. Furthermore, different percentages of additive noise in the data may significantly change the results; this is another promising direction for future research.
Data collection. A limitation of our study is the limited availability of data for both the training and fine-tuning stages, as well as of annotated data. The data used in our research may be limited to specific domains, preventing comprehensive coverage of all possible text variations. Despite these limitations, we tried to address the issue of data diversity by incorporating single-domain and multi-domain datasets in the proposed research. This approach allowed us to shed light on the diversity and variance within the data, providing valuable insights despite the inherent constraints.
Context. The spell checking model's understanding and processing of word context may be limited due to two main factors. First, the model's context length is constrained (for example, T5 is limited to a sequence length of 512). Second, fine-tuning is limited by the length of the texts in the dataset, which can lead to poor performance on longer texts if the models have only seen short ones. We added domains with various text lengths to the MultidomainGold set to address this problem. Additionally, handling longer texts remains problematic, as it requires substantial GPU resources.
Languages. The methodology employed in our study focuses primarily on investigating the applicability of our spell checking approach to the Russian language, with an examination of its transferability to English. However, the generalizability of the method across diverse language families remains unclear. Thus, further research is needed to expand the datasets and evaluate the methodology's effectiveness for a wider range of languages.
## Ethics Statement
In conducting our research on automatic generative SC, we recognize the importance of addressing potential ethical implications and ensuring responsible use of the developed technology. We have taken the following steps to maintain ethical standards throughout the study.
Crowdsourcing annotation. Responses of human annotators are collected and stored anonymously, eliminating personally identifiable information. The annotators are warned about potentially sensitive topics in the data (e.g., politics, culture, and religion). The average annotation pay rate exceeds twice the hourly minimum wage in Russia.
Datasets. We clearly state our work's aims and implications, making it open source and transparent. The data will be available under a public license. As our research involved anonymized textual data, informed consent from human participants was not required. However, we obtained permission to access publicly available datasets and ensured compliance with any applicable terms of service or usage policies.
Energy Efficiency and Usage. Training large-scale language models consumes significant amounts of computational resources and energy, resulting in substantial carbon emissions. To minimize the ecological footprint of the research, we decided to limit the number of pre-trained models employed for the English language. The CO2 emissions of pre-training the M2M100 (Fan et al., 2021) and T5 (Raffel et al., 2020) models in our experiments are computed using Equation 1 (Strubell et al., 2019):
\[CO2=\frac{PUE*kWh*I^{CO2}}{1000} \tag{1}\]
The resulting CO2 emissions are listed below:
1. _M2M100-1.2B_ = 87.09 kg;
2. _M2M100-418M_ = 57.37 kg;
3. _T5-large_ = 62.21 kg;
4. _FredT5-large_ = 31.11 kg.
The power usage effectiveness (\(PUE\)) of our data centers is not more than \(1.3\). Despite these costs, spelling models can be efficiently adapted to user needs, bringing down potential budget costs in the scope of modern applications. Model compression techniques, e.g., pruning and distillation, can further reduce the models' inference cost and footprint.
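As a rough illustration of Equation 1, the helper below computes emissions in kilograms; the kWh and carbon-intensity values in the example are placeholders, not the figures behind the numbers listed above.

```python
def co2_emissions_kg(pue: float, kwh: float, carbon_intensity_g_per_kwh: float) -> float:
    """CO2 emissions in kilograms following Equation 1 (Strubell et al., 2019)."""
    return pue * kwh * carbon_intensity_g_per_kwh / 1000.0

# Placeholder example: PUE of 1.3 (the stated upper bound), 1000 kWh of energy,
# and a grid carbon intensity of 400 gCO2/kWh.
print(co2_emissions_kg(1.3, 1000.0, 400.0))  # -> 520.0 kg
```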
Biases. The datasets we collected include large segments representing the Internet domain, and therefore they may contain various stereotypes and biases, as may the pre-trained models. The scope of risks associated with the misuse of generative language models is widely discussed in the community (Weidinger et al., 2021; Bommasani et al., 2021). We acknowledge the potential for biases to emerge in both the training data and the model's predictions. Proper evaluation is still needed to explore possible model vulnerabilities in terms of generalization to new and domain-specific data.
Possible Misuse. We understand that the results of our work can be used maliciously, e.g., to write inappropriate and toxic texts. We believe that our research should not be involved in creating content that affects individual or communal well-being in any way, including legislative application or censorship; mis- and disinformation; or infringement of the right of access to information.
We propose a novel methodology that is potentially applicable to any language and a valuable resource for the Russian language in particular. We anticipate that our work may contribute to improved written communication, but we also recognize the need for ongoing ethical evaluation to address emerging challenges.
## Acknowledgements
The authors sincerely thank Alexey Sorokin for providing us with the evaluation script from the Dialogue Shared Task. The authors would also like to extend their appreciation to the teams behind the datasets we used for the training and testing parts. We thank Dmitry Pogrebnoy, the author of the anamnesis medical data validated and included in our MedSpellChecker set. The authors are grateful to Ibragim Badertdinov for his ideas on the heuristic-based text corruption method. The authors would like to thank Denis Kulagin and his "kartaslov" 19 git-project for the data and statistics on typos. The authors are deeply grateful for the valuable contributions of everyone mentioned above. Their efforts played a crucial role in completing this research.
Footnote 19: [https://kartaslov.ru/](https://kartaslov.ru/)
|
2302.13436 | Navigating Multi-Stakeholder Incentives and Preferences: Co-Designing
Alternatives for the Future of Gig Worker Well-Being | Gig workers, and the products and services they provide, play an increasingly
ubiquitous role in our daily lives. But despite growing evidence suggesting
that worker well-being in gig economy platforms have become significant
societal problems, few studies have investigated possible solutions. We take a
stride in this direction by engaging workers, platform employees, and local
regulators in a series of speed dating workshops using storyboards based on
real-life situations to rapidly elicit stakeholder preferences for addressing
financial, physical, and social issues related to worker well-being. Our
results reveal that existing public and platformic infrastructures fall short
in providing workers with resources needed to perform gigs, surfacing a need
for multi-platform collaborations, technological innovations, as well as
changes in regulations, labor laws, and the public's perception of gig workers,
among others. Drawing from multi-stakeholder findings, we discuss these
implications for technology, policy, and service as well as avenues for
collaboration. | Jane Hsieh, Miranda Karger, Lucas Zagal, Haiyi Zhu | 2023-02-26T23:21:12Z | http://arxiv.org/abs/2302.13436v2 | Navigating Multi-Stakeholder Incentives and Preferences: Co-Designing Alternatives for the Future of Gig Worker Well-Being
###### Abstract.
Gig workers, and the products and services they create, play an increasingly ubiquitous role in our daily lives. But despite growing evidence suggesting that worker well-being in gig economy platforms have become significant societal problems, few studies have investigated possible solutions. We take a stride in this direction by engaging workers, platform employees, and local regulators in a series of speed dating workshops using storyboards based on real-life situations to rapidly elicit stakeholder preferences for addressing financial, physical, and social issues related to worker well-being. Our results reveal that existing public and platform infrastructures fail to provide workers with resources needed to perform gigs, surfacing a need for multi-platform collaborations, technological interventions/advancements, as well as changes in regulations, labor laws, and the public's perception of gig workers, among others. Drawing from multi-stakeholder findings, we discuss these implications for technology, policy, and service as well as avenues for collaboration.
Design Methods, Workplaces
## 1. Introduction
The rapid growth of the gig economy has motivated individuals around the globe to engage in more flexible and autonomous forms of work. Gig and platform-based work are presently characterized by short-term, on-demand work completed by independent contractors who get paid in return for the "gigs" they perform. Upon first glance, digital labor platforms seem to benefit everyone involved, offering workers novel job opportunities, enabling small businesses to scale quickly, and providing individual consumers services like ridesharing and food delivery (Zagal and Zagal, 2019; Hauy et al., 2019). But under the surface, the amalgamation of low compensation, high competition, and the just-in-time nature of gig work leaves individual contractors toiling at odd hours for prolonged periods of time, and with insufficient compensation for making a living. Unlike employees of traditional firms, gig workers are not entitled to employee benefits such as healthcare or retirement contributions (Hauy et al., 2019; Hauy et al., 2019). With the proliferation of online gig platforms that facilitate short-term work, individual contractors increasingly experience competition, lowered wages, a commodification of labor, job precarity and in general adverse working conditions (Hauy et al., 2019; Zagal and Zagal, 2019; Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019).
Previous studies on gig work conditions found workers to lack many forms of social, financial, technological and regulatory support necessary for making a living safely and consistently from their contractual work (Zagal and Zagal, 2019; Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019). For instance, in their seminal work examining job quality in gig work, Wood et al. described how platformic control causes workers to have weak structural power compared to clients, which results in burnout (Hauy et al., 2019). Yao et al. found that while social media groups enabled workers to share experiential knowledge amongst one another, they fell short
in building a collective identity among workers since strategic information-sharing could harm an individual worker's comparative advantage [116]. Howard investigated how labor laws apply in non-standard gig work arrangements, underscoring the health and safety risks involved for workers in such environments [54].
Recent bodies of work within HCI increasingly urge and pursue the design of systems from a worker-centered perspective [89; 115; 119]. As a first step in this direction, Zhang et al. codesigned alternative platform futures with workers to minimize the impact of algorithmic management on well-being [119]. In their research agenda, Ashford et al. drew from organizational behavior theory to delineate potential behaviors that individual workers can capitalize on to thrive in the new world of work [14]. While these studies focus on worker-driven solutions, improving gig worker conditions requires the active involvement of and collaboration with multiple stakeholder groups [43; 47]. Expertise of regulators and lawmakers is required to craft and enforce mandates and labor regulations that govern the gig economy [17; 36], support from platforms is crucial to implement programs and engage in co-regulation [24; 52], and worker input is indispensable to designing legal and platformic changes that engender practical and productive impact [66; 82].
Our work involved a diverse set of stakeholders, and by leveraging the speed dating method, we collaborated with our participants to brainstorm, develop and assess a wide range of service, policy and technological interventions for addressing the various social, financial and physical challenges of gig work [31]. The hidden costs and challenges emerging from such past bodies of work, combined with themes uncovered from local workshops and news articles, informed the design of our workshops. During the codesign sessions, speed dating allowed us to incorporate reported worker issues into scenarios accompanied by provocative questions and solutions, empowering us to 1.) learn latent social needs and boundaries of stakeholders and 2.) imagine and evaluate solutions without the high efforts of implementation. To help participants kick off the process of idea generation, we seeded the solution space with interventions that are implementable by platforms, lawmakers, or workers themselves. To evaluate the feasibility of solutions in practice, we recruited participants from the same three stakeholder groups. In conducting the codesign workshops, we sought to answer the following research questions:
**Research questions:**
1. What incentives, preferences and deterrents do stakeholders have in supporting and implementing solutions for improving gig worker well-being?
2. What are the most desirable and feasible changes for improving challenges present in gig work?
We conducted a series of codesign workshops with 20 participants (8 workers, 7 local regulators and 5 platform-side employees) within the United States. Our workshops include an exercise ranking potential future solutions and follow-up questions regarding the rationales behind such rankings, allowing us to share key quantitative and qualitative insights from regulators, platform practitioners as well as the gig workforce at large. Our findings reveal details about shared worker struggles, desired benefits and steps that each stakeholder group can take to turn such imagined futures into reality. Thus, we make two unique research contributions: first, we present essential improvements to the gig work condition that are acceptable to multiple relevant stakeholders, and then we offer a discussion of how these stakeholder groups can contribute to these solutions and interventions. Through this endeavor, we hope to offer design implications that contribute toward a future gig workforce that tracks and improves the physical, financial and social well-being of workers, so as to approach more equitable and inclusive gig platforms and communities.
## 2. Related Works
Gig work, at large, can be characterized as "electronically mediated employment arrangements in which individuals find short-term tasks or projects via websites or mobile apps that connect them to clients and process payment" (Toh et al., 2017). However, further segmentation can divide gig work into app work (e.g. Uber, DoorDash, TaskRabbit), crowdwork (e.g. Amazon Mechanical Turk), and capital platform work (e.g. Airbnb, Etsy) (Zhu et al., 2019). A similar categorization sections gig work into local and remote parts, with the former consisting of manual labor (e.g. transport, food delivery, furniture assembly, couriering) and the latter comprising digital services such as logo design (Zhu et al., 2019). At the start, we focused primarily on app workers performing physical tasks, but after reviewing the literature and relevant articles we found capital platform workers to share many of the same risks and challenges. Thus, our workshops aim to address the various social, financial and power struggles as well as health and physical risks endemic to these two forms of gig work.
In appearance, the gig economy model offers workers flexibility and low-entry barriers while affording consumers time and cost savings. However, such conveniences are possible only because it "necessitates cutting every cost possible, usually by externalizing them through misclassifying workers so they do not qualify for expensive benefits like a minimum wage or health insurance" (Zhu et al., 2019). Prior works in the domain have extensively studied how such reduced working conditions negatively impact the well-being of gig and contractual workers (Han et al., 2017; Li et al., 2018; Li et al., 2019). In the following, we summarize five major shortcomings of gig work explored in past studies, which also informed our workshop design.
### Risks and Challenges of Gig Work
#### 2.1.1. Missing Employment Benefits
Although gig work offers more flexible work hours, limited employment benefits force workers to complete additional hours of unpaid labor (Li et al., 2019). While many workers prefer to keep their legal classification as independent contractors and the associated flexibilities (e.g. no particular employer attachments), the lack of formal employment costs them many benefits and protections, including wage guarantees, workers' compensation, unemployment insurance, healthy and safe work spaces, and the right to unionization (Zhu et al., 2019). The deprivation of workers' rights and protections that contractors experience (which especially harms the mental health of working mothers (Zhu et al., 2019)) has been longstanding, with accounts dating back to at least 2002 (Zhu et al., 2019).
In an effort to avoid employment regulations, many gig platforms leverage workers' desire to remain contractors as an argument in court to avoid the responsibility of providing employee benefits. Platforms have used this argument in trials since as early as 2017, after which more than 100 such US lawsuits have been filed against Uber regarding driver misclassification, with many more appearing across other platforms and nations (Li et al., 2019; Li et al., 2019). To continue exploiting the legal loophole in employment classifications, gig platforms have spent hundreds of millions to lobby for the ballot measure Prop 22 in the summer of 2021 (Zhu et al., 2019). Unfortunately, how workers should be classified is an ongoing debate - the control and economic realities tests that serve to distinguish between employees and independent contractors both lead to indeterminate results when applied to rideshare drivers (Li et al., 2019).
#### 2.1.2. Income Instability
Gig workers also suffer from a lack of financial stability induced by job precarity and the temporary nature of contractual work (Li et al., 2019). In their work evaluating the job quality of gigs, Wood et al. identified how algorithmic management of workers causes financial instability, social isolation, as well as overwork and exhaustion (Han et al., 2017). The combination of low pay, high job insecurity, and long working hours induces a high sense of precarity among gig workers (Zhu et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). One major contributor to the income instability of gig workers is seasonality, which endangers the financial security of part-time gig workers. For instance, work in sports has always been characterized as precarious and seasonal, and the suspension of several major sports during the pandemic has intensified such
impacts (Ravenelle et al., 2017; Ravenelle et al., 2018). Ravenelle et al. also identified increased vulnerabilities of gig workers during the pandemic, finding knowledge, sociological, and temporal/financial hurdles that prevent them from accessing unemployment assistance (Ravenelle et al., 2017).
#### 2.1.3. Minimal Access to Working Necessities
The growing prevalence of gig work probes at previously unexplored social barriers, highlighting inadequacies in our public infrastructure. In New York City, exploitative labor practices induced by platforms and public infrastructure subject food couriers to dangerous working conditions, leading to a local labor union of cyclists in 2019 - _Los Deliveristas Unidos_(Ravenelle et al., 2019). Based on the lived experiences of its constituent deliveristas, the grassroots collective formed a list of five demands surrounding working conditions, including a right to 1.) free public bathroom access 2.) physical public space for eating, resting and protection from harsh weather conditions 3.) hazard pay for work performed that involve physical hardships (e.g. the COVID-19 pandemic) and 4.) protections from e-bike robberies, wage theft and health and safety hazards. While the city council passed a bill last year to ensure bathroom access for workers (Ravenelle et al., 2018), enforcement is difficult and deliveristas still report instances of restaurants who restrict bathroom access (Ravenelle et al., 2018).
#### 2.1.4. Safety Concerns
Without proper employment classification, gig workers do not enjoy the regulated safety assets provided to traditional workers (e.g. worker's compensation, health insurance, and unemployment insurance, among other laws and regulations) (Ravenelle et al., 2017; Ravenelle et al., 2018; Ravenelle et al., 2018). Unfortunately, the non-standard nature of many gig work arrangements raises occupational health and safety risks, increasing scholarly, legal, and societal concern (Ferrie et al., 2017; Ravenelle et al., 2018; Ravenelle et al., 2018). For instance, Ferrie et al. found that poor mental health outcomes can result from sudden unemployment (Ravenelle et al., 2018), and by 2006, Virtanen et al. revealed a solid association between temporary employment and morbidity after reviewing 27 case studies (Virtanen et al., 2018). Over the past five years, the Markup has tracked a total of 361 ride-hail and delivery drivers as victims of carjackings or attempted carjackings (Ravenelle et al., 2018).
Potentially in response, Almoqbel and Wohn uncovered that platforms' rating systems prevent drivers from engaging in protective behaviors (e.g. using dash cams) due to passengers' discomfort around monitoring (which leads to poor reviews); they further found drivers to share safety resources, vent about passengers, and coordinate informal union activities in online forums (Ferrie et al., 2017). Beyond physical attacks, Bajwa et al. discussed how precarity, occupational and platform-based vulnerabilities can cause psychological distress, increased risk of traffic accidents and musculoskeletal injuries, as well as work-induced stress, respectively (Ravenelle et al., 2018). From the perspective of international law, Howard discussed how legal misclassification causes a loss of protections and benefits for workers across the globe (Ravenelle et al., 2018).
#### 2.1.5. Missing Collective Action Power
The design and structure of online labor platforms create unique challenges such as informational asymmetries and power imbalances between workers and clients, giving rise to platform control and algorithmic management (Ravenelle et al., 2017; Ravenelle et al., 2018; Ravenelle et al., 2018; Ravenelle et al., 2018; Ravenelle et al., 2018; Ravenelle et al., 2018). Such dynamics disincentivize workers from engaging in collectivism due to fears of losing competitive advantages (Virtanen et al., 2018). The lack of physical workspaces makes it even less likely for workers to collectively form identities and protest inequities (Ravenelle et al., 2018; Ravenelle et al., 2018). Furthermore, antitrust and employment laws directly prevent worker collectives from forming unions (Virtanen et al., 2018; Ravenelle et al., 2018). Additionally, migrant workers comprise a growing portion of the platform labor market, and legal restrictions make it difficult for them to engage in union activities or benefit from national welfare systems (Virtanen et al., 2018).
To acquire more workplace gains and protections, workers can engage in collective labor activities. But as (Virtanen et al., 2018) and (Ravenelle et al., 2018) find, many barriers (e.g. geographic dispersal, the individualistic nature of gig work, and platforms' opposition to worker organization) prevent the building of a collective, group agency. Furthermore, "antiquated notions of collective
bargaining...surrounding the gig economy" may not prove useful in the modern digital workforce (Rosen et al., 2017). To closely examine such outdated notions, Khovanskaya et al. leveraged historical insights from mid-20th century labor unions toward management to inform how contemporary data-driven worker advocacy can bring workers together over shared concerns and raise public awareness of working conditions, instead of engaging in bureaucratic negotiations with platforms, a strategy that industrial unions historically relied upon (Krouss et al., 2019). But as Graham et al. point out, there is a dearth of counterhegemonic research efforts particular to the gig economy that support the "building of alternatives, outrage, conflict, and worker organization", a gap that we hope to help fill (Krouss et al., 2019; Krouss et al., 2019).
### Design Efforts for Studying Worker Well-being
Early efforts to combat algorithmic management arose in contexts of crowdwork (Amazon Mechanical Turk), rideshare driving, and food couriering. The pioneering piece along this line of work was Turkopticon, a widely-adopted browser plug-in that overlays its requester/employer-reviewing features on top of the AMT site to resist minimal wages, low quality work, and unfair job rejections (a.k.a. wage theft). In the author's own words, the system aimed to "make questions of work conditions visible among technologists, policy makers, and the media" (Krouss et al., 2019). A companion tool, Dynamo, was developed subsequently to facilitate collective organization and action among AMT workers (Krouss et al., 2019). A "social sensing" probe developed by You et al. collected and shared personal health data of rideshare drivers with their significant others to promote well-being awareness (especially related to long working hours) and motivate behavioral changes (Zhang et al., 2019). Zhang et al. leveraged algorithmic imaginaries to expand participants' current understandings of algorithms so as to generate alternative futures that actually support workers' needs (Zhang et al., 2019). In (Bates et al., 2019), Bates et al. hosted two rounds of co-design workshops with gig cycle couriers in the U.K. to identify challenges in their working conditions and ideate alternative solutions. Codesign has also been used to unearth the accounts of essential workers such as airport janitorial staff (Zhang et al., 2019). Finally, Alvarez de la Vega et al. used design fiction (informed by prior literature) in focus groups to discover potential design opportunities for improving the well-being of online freelancers (Bates et al., 2019).
While these studies all have a worker-centered focus and aim to empower and highlight the voices of underserved workers, we expand beyond just the workers and aspire to capture the opinions of three distinct but relevant stakeholder groups, so that these involved parties may also take part in constructing a brighter and improved gig work future. In particular, we hope that our findings help policymakers make well-informed decisions when establishing new regulations to protect worker rights, and help the media and public at large exert pressure on platforms to implement worker-centered changes, benefits and programs.
### Multi-Stakeholder & Solution-Centered Approach
The aforementioned studies identified unique challenges that gig workers face, but few have taken a holistic view of how other stakeholders such as platform-side designers or policymakers can play a role in alleviating such constraints. By asking our participants to generate and rank solutions to these issues, we aimed to identify the most desired and practical improvements for addressing challenges present in gig work (RQ2). As Howard identified in their commentary, the key question of who should be held responsible for providing various job protections has yet to be answered (Krouss et al., 2019), so we directly asked stakeholders who should bring forth change (3.1) and probed their solution rankings with follow-up questions surrounding underlying incentives and constraints (RQ1). By eliciting such preferences and limitations, our workshops go beyond uncovering worker perspectives to also explore unmet needs of platforms and policymakers, so as to find ways of maximizing their ability to support gig workers. Engaging with participants from these stakeholder groups (who sociologists identified as key stakeholders of the gig economy (Krouss et al., 2019)) ensures that the
solutions arising from our workshops are acceptable to and welcomed by all three involved parties. In particular, we encouraged participants to generate their own solutions as a means of negotiating for potential futures that they find the most suitable. After all, many of the harms to worker well-being (e.g. legal misclassification, algorithmic management) can only be mitigated with solutions at the systemic level, and such changes require the active collaboration and involvement of lawmakers, platform designers, gig workers, and the public at large.
## 3. Methods
### Study Design
#### 3.1.1. Speed Dating
As the nature of gig work probes at previously unexplored social boundaries (e.g. traditional workers typically do not bear responsibility for consumers' physical or food safety), we require alternative methods for examining workers' social, financial, and physical needs, as well as to discover the social and cultural barriers that gig work pushes at, which are not yet well understood (Sen et al., 2012; Sen et al., 2012). Toward this end, we leveraged speed dating, a method that involves presenting pressing issues (design opportunities) and provocative alternative futures (design concepts) to multiple stakeholders in rapid sequence, enabling us to uncover their latent needs, desires, fears and dreams. Unlike romantic speed dating, where the goal is to pair potential couples, the technique strives to match gig work issues to potential solutions. Speed dating has been utilized in a variety of domains (e.g. attention management (Sen et al., 2012), AI ethics checklists (Sen et al., 2013), smart homes (Sen et al., 2013)) to rapidly explore concepts/solutions to issues without needing to implement the proposed technologies (Sen et al., 2013).
Most similar to our context, Dillahunt et al. found speed dating effective in identifying concepts for addressing the needs of underserved job seekers (Dillahunt et al., 2013). Following their study design, we presented participants a series of issues that (gig) workers face, but did not pair each issue with a tool/design concept in the same way. Instead, we offered participants a list of alternative futures (and encouraged their own generated solutions) to broaden the horizon of imagined solutions. By gauging the reactions of participants toward proposed concepts, we can match relevant issues and needs to potential future solutions. While parts of our study design drew inspiration from (Dillahunt et al., 2013), we center our work around gig workers instead of underserved job seekers, and expand the pool of imagined solutions by incorporating the voices of diverse stakeholder groups.
#### 3.1.2. Scenario Construction
Initially, we generated ten scenario stories and subsequently solicited the critique of other researchers working in the space of supporting gig workers. This process helped us finalize a problem space comprising five scenarios, which covered various physical, financial and social struggles that gig workers face in their working environments (see Table 1). To avoid promoting "blue-sky" thinking, which (as Harrington et al. pointed out (Harrington et al., 2016)) may lead to frustration for the very population we intend to serve, the authors collectively generated ideas ahead of time to prepopulate the solution space, so as to help participants brainstorm (see a full list of these in Table 10 of the Appendix). The pre-generated solution space consisted of ideas implementable by each of the three stakeholder groups we involved, so as to avoid imposing the research team's opinions about who should hold responsibility.
Though all scenario characters were fictitious, the first three scenarios (see Table 1) were inspired by concerns expressed during a local workshop organized by the National Council of Jewish Women, which explored the hidden costs of gig work. The fourth (Bradner et al., 2016) and fifth (Bradner et al., 2016) scenarios were based on accounts of stories of worker situations covered in the respective articles. All five scenarios represent prevalent issues gig workers face today: scenario 1 focuses on missing employee benefits, scenario 2 depicts financial instability, scenario 3 represents a lack of essential working necessities, scenario 4 tackles safety issues and in scenario 5 we present gig workers' minimized ability to take collective
action. With the exception of the persona in Scenario 3, who reflects the common characteristics of food deliverers (i.e. male, young, and of an immigrant background (Steintein and Steintein, 2007)), the demographics of characters in our stories are intentionally non-representative of the general gig work population to encourage the consideration of more marginalized populations of laborers (women, elders, etc.), who often face issues such as bias, harassment, and pay gaps, all of which intersect with algorithmic control (Bahdan et al., 2012; Steintein and Steintein, 2007; Steintein and Steintein, 2007; Steintein and Steintein, 2007).
#### 3.1.3. Storyboards
To present these scenarios, we constructed five pictorial storyboards depicting stories based on news articles, local workshops, and prior work. Storyboarding, defined as "a short graphical depiction of a narrative", is an effective tool for demonstrating 1.) impacts of technologies on human activity and 2.) effects of proposed (technological) interventions and solutions before they are implemented. Since we cover a wide range of gig worker types in this study (e.g. food couriers, rideshare drivers, movers and sellers), storyboards allow participants to quickly engage with specific situations, connecting their own lived experiences when applicable, helping to gauge the desirability of imagined futures, and affording us the unique opportunity to "rapidly investigate many possible futures" by supporting "a broad investigation of contexts, triggers, and interactions" (Truong et al., 2013; Truong et al., 2014). Following Truong et al.'s guidelines (Truong et al., 2014) on best practices for storyboarding (concise background, intentional text, characters, graphics, passing of time, etc.), we drew empathy from our participants using personas of gig workers, included text to orient participants in the character's world, and only constructed three frames per scenario to succinctly convey each character's activities without bogging participants down with overt details.
#### 3.1.4. Procedures
Each scenario was presented via three storyboard cards, and we guided conversation using a probing question that focused discussion on broader underlying issues. After introducing the scenario and probing question, we requested that participants read the prepopulated solutions and treat them as seed solutions for generating their own ideas, and subsequently **rank all the solutions for the scenario**. Tables 10 and 16 in the Appendix show the generated solutions and an example instance of the ranking process. During the ranking process, we solicited the rationales of participants' ranking decisions to probe at and uncover latent social boundaries and desiderata. Due to time constraints, we did not engage our participants in a formal consensus-building process (e.g. the Delphi method) during rankings. After solution ranking, we asked a set of follow-up questions to wrap up each scenario. The scenarios were presented in the same order across all workshop sessions, as shown in Table 1.
After completing the above, participants were asked to **rank the five scenarios** in terms of what they thought were most important to address, effectively performing needs-validation over the issues we presented. In summary, we asked participants of each workshop to complete the following set of tasks, in order:
1. For each of the five scenarios:
    1. Examine the scenario's storyboard and accompanying descriptive text (including the probing question)
    2. Read through and discuss the list of prepared solutions, then add newly generated ideas
    3. Rank the solutions (including the ideas generated live) to express preferences, using sticky notes
    4. Explain reasoning for ranking preferences
    5. List the most and least preferred solutions
    6. Express who should be responsible for implementing the mentioned solutions (using provided check-boxes)
2. Rank the five scenarios in terms of which issues are most important to address
Participants were encouraged to add solutions at any point in these steps. Additional materials used for workshops are included in supplementary materials, and solutions generated by participants are available in the Appendix.
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|p{142.3pt}|} \hline
**Scenario \#** & **Scenario Summary \& Probing Question** & **Persona \& Addressed Issues** \\ \hline
**1** & Renee is a new driver for the popular ridesharing company Lyber as well as a single parent, she struggles to balance driving full-time and caring for her two-year-old. When her child is sick, she does not have time to drive, meaning she won’t be able to afford basic costs for food, rent and child care. & **Leading question**: Unlike traditional employees, gig workers often do not have **employment benefits**. What do you think the solution should be? & Lack of employment benefits (e.g. childcare, PTO) \\ \hline
**2** & Dave started helping residents move in on TaskBunny last May and had a fruitful first 6 months due to new students and employees moving in for the fall. But now that it’s the middle of winter, no clients are hiring for his services in January. Dave has no savings nor jobs lined up and he is struggling to pay rent. & **Leading question**: What changes can help Dave overcome challenges induced by **unstable income**? & Income instability \\ \hline
**3** & Susan is a delivery driver for GrubDash, and many restaurants that she delivers for recently started banning public access to bathrooms. Now Susan has to detour to spaces like gas stations, libraries, and sometimes even ER’s just to catch a bathroom break. & **Leading question**: What changes should be made to help Susan with bathroom breaks? & **Bathroom access** \\ \hline
**4** & George traveled to a dangerous part of town to deliver for LyberEats last night and was attacked by an unknown individual after the drop-off. He arrives at the ER to check on his injuries but is lost on how to provide health insurance information. He was offline from LyberEats at the time of attack. & **Leading question**: How should drivers like George be protected from such **attacks and overcharges**? & Safety \& healthcare \\ \hline
**5** & Marianne makes a living knitting and selling gloves on Ebsy. Two years ago, Ebsy increased transaction fees by 42\%, promising to bring in more buyers. Instead, Ebsy attracted more sellers with the funds, raising competition. To protest the fee increase, sellers are closing their shops for a week to strike and Marianne now has to decide between losing income versus losing negotiating power with Ebsy. & **Leading question**: What changes could be made to help Marianne and future sellers deal with similar dilemmas? & Intransparency \& collective actions \\ \hline \end{tabular}
\end{table}
Table 1: Problem Space: Scenario Summaries
### Recruitment and Participants
Our participant population consisted of three stakeholder groups: gig workers, local regulators and members of various public service organizations, as well as employees from popular gig work platforms. We chose these specific groups because they represent the three types of people who can solve gig worker well-being problems, independently or collaboratively. Gig workers can develop their own solutions, policy makers can make laws to restrict how platforms affect workers, platform employees can voluntarily or obligatorily add/modify features to improve gig worker well-being, and together they can drive forth systemic changes that bring us closer to healthy and productive gig communities.
We recruited a total of 20 unique participants across 8 workshops. We reached the seven participants from the first stakeholder group through contacts from a local organization; this group consisted of individuals who self-identified as regulators or worker advocates from local organizations such as the Department of Human Services, United Way, United Steelworkers, and the Jewish Family and Community Services. While not all of our regulator participants are actively involved in policy-making (some study public policy while others work for related local government agencies), we did recruit one councilperson as well as the director of our city's department of Mobility and Infrastructure. The second group (of eight gig workers) responded to our recruitment posts on Reddit and included individuals who made earnings on popular ridesharing or food delivery apps. The last group is composed of five platform employees (e.g. product designers, managers, and engineers) whom we contacted through a combination of Reddit posts and LinkedIn direct messages. Participant selection was based on responses to a pre-screen survey, which asked for affiliated organizations and engagements with gig workers. Table 2 summarizes these workshop participants and their relevant expertise, in chronological order of when workshops were conducted.
### Workshop Setup
We conducted a total of 8 co-design workshops with 20 participants, one of which was in person while the rest were conducted virtually via Zoom. All participants were located in the United States and compensated at a rate of $60/hour for their time, and each workshop lasted 90-120 minutes. To encourage discussion and collaboration among participants of the same stakeholder group, we included 2-3 participants in most workshops instead of conducting individual sessions. Combining the gig workers with the policy makers or platform employees could have discouraged workers from speaking up in workshops, and thus we only included one stakeholder group in each workshop (Table 2 indicates the relevant stakeholder group for each workshop). This separation was intended to avoid further disempowerment of already marginalized voices, and to minimize the emergence of power differentials that could have resulted from potential employment relationships - workers in one group may have been demotivated to express their honest opinions if the workshop also hosted their employer. Because we studied our stakeholder groups separately, participants were able to connect and collaborate easily with peers from similar backgrounds. This setup of groups with similar experiences and values made each co-design workshop productive rather than confrontational. We also helped different participant groups collaborate asynchronously with each other by updating them on relevant solutions and rankings from previous workshop sessions.
Prior to each workshop, we set up whiteboards on Miro or physical easel pads to present the scenarios and potential solutions to participants, which served as a space for participants to rank or add solutions via sticky notes, and to document their finalized preferences. We took video recordings and field notes across workshops and collected participants' solution rankings, votes on who should take responsibility, and newly generated solutions.
### Positionality
As Irani states, reflexivity in HCI allows us as researchers to produce better knowledge by "recognizing designers' positions, values, limitations, and standpoints". In the following, we reflect on our own positions as designers and researchers as well as how they impact our work outputs (Krishnan et al., 2017). We are all researchers residing in the US who work or receive training in the fields of Computer Science, Human Computer Interaction, and Music. Two of us live in the city where the study was conducted and have prior experience conducting research surrounding gig work. However, we recognize the relative privileges we hold in society as compared to worker participants. For instance, none of us have completed gig work ourselves and therefore we lack first-hand experience of the issues that gig workers face. Additionally, we have all been consumers on gig platforms; three authors often speak to rideshare workers about their job during rides and one author has family members who engage in gig work. The funding for this research comes solely from the National Science Foundation, and the work is not sponsored by any external companies or platforms.
We as researchers all hold the view that the current state of the gig economy, as discussed in Section 2, is incapable of supporting the well-being of gig workers and that these challenges should be addressed soon, since it seems that gig work is here to stay. To address the need for change, we employ a combination of transformative, postmodern and pragmatic frameworks to interpret and understand the present-day conditions of gig work, as well as to find practical approaches toward addressing some of these real-world issues (Krishnan et al., 2017). We presented day-to-day scenarios of individuals, which inform design decisions for addressing issues of gig work, and allow participants to generate solutions. In addition, we pre-populated the solution space with provocative ideas so as to give participants space for imagining more systematic solutions, which can contribute toward long-term reform of the gig economy.
Following best practices suggested by prior literature (Bahdan et al., 2016; Krishnan et al., 2017; Krishnan et al., 2017), we shared our affiliations and intentions with participants prior to workshops, reflected on our own biases as researchers, and pondered "how can participants benefit from the study beyond the monetary compensation?", "are we bringing positive impacts to the worker community?" and "how can we place workers' ideas in a larger field of power?". We also include in 5.4 participant reflections on our study and considerations for future lines of research.
### Analysis
To begin analysis, we first computed average rankings for each solution and extracted the three highest and lowest ranked solutions for each scenario based on these averages. We then engaged in a thematic analysis approach to analyze 14 hours of Zoom recordings (transcribed by the online service _Rev.com_) and 18 pages of field notes. In the first stage of the analysis, we followed an open coding approach, where one to two researchers independently conducted qualitative coding for each workshop's data (at least one of these coders was present at the corresponding workshop) [29; 86; 90; 91; 103]. During this process, coders remained receptive and looked for as many codes as possible, while keeping in mind our research questions on worker well-being, the issues that each scenario targets, and potential future changes. The coders met to refine and resolve any disagreements about the initial codes, resulting in a total of 567 unique codes. In the next stage of analysis, we iteratively combined these codes into emergent themes and subthemes, wrote descriptive memos, and built an affinity diagram to map the relationships between categories [19; 53]. This analysis produced 8 themes and 63 subthemes, and we describe these findings below. The first set of findings gives an overview of participants' rationales for rankings across scenarios, the second set reports on scenario-based themes from participants' reactions and perspectives on our proposed solutions, and the last set describes themes from participant-generated solutions.
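As an illustration only (not the authors' analysis code), average rankings of this kind could be computed as in the sketch below, assuming each participant's rankings are stored in a table with columns `scenario`, `solution`, and `rank` (lower means more preferred).

```python
import pandas as pd

def summarize_rankings(df: pd.DataFrame, k: int = 3) -> pd.DataFrame:
    """Average each solution's rank per scenario and flag the top/bottom k."""
    avg = (df.groupby(["scenario", "solution"], as_index=False)["rank"]
             .mean()
             .rename(columns={"rank": "avg_rank"}))
    avg["top_k"] = avg.groupby("scenario")["avg_rank"].rank(method="first") <= k
    avg["bottom_k"] = (avg.groupby("scenario")["avg_rank"]
                          .rank(method="first", ascending=False) <= k)
    return avg.sort_values(["scenario", "avg_rank"])
```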
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Workshop ID** & **Stakeholder Group** & **\# Participants** & **Relevant experience** \\ \hline R1 & Regulators/Advocates & 3 & Manager at DHS; Director of community management at National Council of Jewish Women; intern analyst to director; \\ \hline P1 & Platform employees & 2 & Executive recruiter at a major rideshare organization; Product designer and an ex-employee of multiple e-commerce platforms \\ \hline W1 & Gig workers & 3 & 1 deliverer and 1 driver for a popular food delivery platform; nurse at a healthcare company; \\ \hline R2 & Regulators/Advocates & 2 & Director of Mobility Dept for local city; Professor in organizational behavior and public policy \\ \hline W2 & Gig workers & 5 & Full time food courier of 1.5 years; freelancer at a platform for matching local labor to demand; IT freelancer \\ \hline R3 & Regulators & 2 & Local councilperson; Professor of Cyber Law, Policy, and Security \\ \hline P2 & Platform employees & 2 & Product manager at a platform for matching local labor to demand; Program lead at a rideshare platform \\ \hline P3 & Platform employee & 1 & Employee at a popular food delivery platform \\ \hline \end{tabular}
\end{table}
Table 2. Workshop IDs & Participant Summaries
## 4. Results
Each stakeholder group offered unique reactions to our scenarios and proposed solutions. Thus, we start by presenting overarching incentives and preferences that motivate each stakeholder group to initiate change, as well as factors that prevent them from implementing suggested solutions. Next we delve into individual scenarios to unfold participants' quantitative rankings of solutions and provide a debrief of their rationales using qualitative results. We end by describing participants' imagined solutions that spanned across workshops and scenarios.
### Multi-Stakeholders' Incentives, Preferences and Deterrents for Improving Gig Worker Well-being
In this section, we present themes that emerged across various scenarios, reporting on stakeholders' overall incentives and preferences that motivate them to promote change for improving gig worker well-being, as well as factors that deter them from implementing suggested solutions. These patterns were revealed through discussions during solution ranking/generation; Table 3 summarizes these findings.
#### 4.1.1. Platform Motivations & Preferences
_Minimize Worker Decommission._ Platforms are inherently incentivized to support participating workers, since their operations depend critically upon labor supply. For example, when workers are decommissioned, platforms are motivated to bring them back on a job because "if the worker's not making money, if the worker's not available to work or just isn't working, the platform is not making money" (P1). Worker decommission can result from a variety of factors, including fluctuating seasonal demands, a lack of opportunities or unmet childcare needs: "If somebody doesn't have childcare, that does make them less likely to be available for work on the platform, which is problematic for the platform" (P1).
_Government Mandates and Regulations._ Regulatory pressure can incentivize platforms to make changes, but an excess of mandates can cause them to "think that a lot of this regulation stifles innovation" (P1). Mandates are also undesirable to platforms because they "means that we're more restricted, that we're gonna have to pay more" (P1). In addition to restricting platforms from implementing novel features, the cost of (unfunded) mandates can also "significantly restrict our bottom line and our ability to continue to function as a platform" (P1).
_Preserving Public Image._ To avoid and circumvent additional regulations, platforms are willing to implement services for preserving public image, or in other words "appease the general public or regulators or media...by offering something like a childcare program" (P1).
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Stakeholders** & **Motivating factors and preferences** & **Deterrents** \\ \hline \multirow{3}{*}{Platform} & \(\bullet\) Minimize worker decommission & \(\bullet\) Increased operation costs \\ & \(\bullet\) Required compliance to mandates and regulations & \(\bullet\) Thin profit margins \& market competition \\ & \(\bullet\) Preserve public image & \(\bullet\) Legal liabilities \\ \hline \multirow{2}{*}{Workers} & \(\bullet\) Leverage multiple platforms & \(\bullet\) Disruptors to earning opportunities or client relations \\ & \(\bullet\) Personalized solutions & \(\bullet\) Short-term or unreliable solutions \\ \hline \multirow{2}{*}{Regulators} & \(\bullet\) Worker-initiated collective action & \(\bullet\) Providing special accommodations to specific \\ & \(\bullet\) Hold platforms responsible for initiating and implementing solutions that benefit their workers & \(\bullet\) Invasive monitoring of workers \\ \hline \end{tabular}
\end{table}
Table 3. Summary of Stakeholder’s Motivations and Deterrents
Platforms' aversion to regulation is strong enough to dedicate "large government relation teams that...strongly lobby against" any mandates "except where they think that it benefits them to show the public for PR reasons" (P1).
#### 4.1.2. Deterrents for Platforms
_High Operation Costs._ Many of the solutions we presented called for the development of services or programs benefiting workers. While platforms may want to contribute toward improved working conditions, they are fundamentally restricted by a lack of funding: "if we're adding incremental benefits, we have to reduce something else." (P1). The implementation of certain features can cost "easily six months of three engineers time, plus maybe a month of design effort, plus...you're probably talking about an initiative it's gonna cost $650,000" (P1). Such efforts may be so "prohibitively expensive, to the degree [that] the platform might not continue to be sustainable" (P1).
_Thin Profit Margins._ One might suggest that platforms use resources gleaned from profit margins to develop features that promote worker well-being. However, platform-side participants related how "margins are getting tougher and tougher on a lot of these products and services" (P1). In order to provide for increased pay or benefits, "the platform effectively needs to take less", but "the company's not really gonna take less cut because [then] they couldn't pay their employees and they just have to cut heads" (P1). Alternatively, platforms can "increase price [of its service]", but that instigates a negative cycle by putting the platform at risk of user abandonment because if "you raise it too high, you lose customers automatically, they don't wanna pay 50 bucks to go five miles", so it "reduces the number of users that will use the platform, which will cause Lyber to make less" (P1).
_Competition Between Platforms._ Exacerbating monetary constraints, customers were deemed "very price sensitive, they're fickle, they may open both [apps]" (P1). If they are not satisfied with prices, clients might just abandon the service altogether: "There is a maximum amount of money that Lyber passengers are willing to pay for a single trip where [they] start to see just declines in usage" (P1). In fact, platforms assign "an entire revenue optimization team that figures out how much can be charged and how much people are willing to pay." (P1).
_Legal Liabilities._ In addition to prohibitive costs, some service offerings can entail legal ramifications. Platform participants fear such potential complications and "hope that there wouldn't be reputational risk to Lyber by Renee's[/workers] kid[s], potentially getting injured by being taken care of by another parent" (P1). Regulator participants also recognized the risks, noting that "one of the reasons why childcare programs aren't on sites in corporations is [because] the liability is huge" (R2). The ambiguous legal classification of gig workers also disincentivizes the provision of additional benefits since "the more that you...treat somebody as if they're an employee, the more they can argue in court that they are an employee" (P1).
#### 4.1.3. Worker Practices, Motivations & Preferences
_Leverage Multiple Platforms._ To address instability, workers related experiences of engaging with multiple platforms at once: "if things slow down on one platform, then you can go to another" (W2). Distributing worker profiles across multiple platforms increases opportunities to procure gigs, and workers view the task of finding work as their own responsibility: "you can't just sit there and say that TaskBunny should be responsible...when it's off season, it's upon you now to maybe seek other alternatives of earning" (W1).
_Personalized Solutions._ The instability of gigs often forces workers to fit needs around work schedules, but ironically the promised flexibility is oftentimes what drove them toward gigs in the first place [78]. Thus it's on platforms to adjust around worker schedules, "to understand the kind of situation that you're in and then they'll try to adjust to fit your availability...this is the best way...[when] they're trying to adjust to your schedule...[and] to your situation" (W1). Adjusting to workers' schedules can provide a peace of mind through both regularity on standard days and accommodations during emergencies. Platforms don't currently account for situations where "[there is an] employee who is on maternity leave...[or] away for stuff like funerals", but workers desire solutions that consider "the various kinds of condition[s] that needs them to be away from work" (W1).
#### 4.1.4. Deterrents for Workers.
_Impediments to Earning or Damages to Client Relations._ Worker participants held a strong aversion against changes that conflict with their own priorities of making earnings, or maintaining good reputation with clients. For example, when presented with Susan's predicament of being blocked from restaurant bathrooms, one worker explained how "you need to work to get money", challenging the hypothetical idea that if "all the restaurants fail to offer bathroom services, do you stop working?" (W1). Another worker opposed "the restriction of platforms, [since] that means you wouldn't have work" (W2). They were also mindful of client relationships, stating concerns that "avoid[ing] orders from those locations, meaning that the clients would suffer" (W2). Beyond clients, workers also "wouldn't want to get on a restaurant's bad side" (W2).
_Short-term or Unreliable Solutions._ Temporary solutions were also undesirable to workers, as they offer only short-lived relief to long-lasting problems. While some help is better than nothing, "they are just short term, they may be a day or two solutions in a month, in the whole season" (W2). For childcare needs, "[days of paid time off], is not a solution because...she has to stay with the kid" (W1). Worker participants also resisted solutions out of their control, since they may prove to be breakable - "security equipment could fail, maybe the cameras have failed to work, or failed to capture a clear image of the attack" (W1) - or unreliable - "off-season events that are planned by TaskBunny maybe would not be very reliable" (W2).
|
2301.12278 | Pragmatic Fairness: Developing Policies with Outcome Disparity Control | We introduce a causal framework for designing optimal policies that satisfy
fairness constraints. We take a pragmatic approach asking what we can do with
an action space available to us and only with access to historical data. We
propose two different fairness constraints: a moderation breaking constraint
which aims at blocking moderation paths from the action and sensitive attribute
to the outcome, and by that at reducing disparity in outcome levels as much as
the provided action space permits; and an equal benefit constraint which aims
at distributing gain from the new and maximized policy equally across sensitive
attribute levels, and thus at keeping pre-existing preferential treatment in
place or avoiding the introduction of new disparity. We introduce practical
methods for implementing the constraints and illustrate their uses on
experiments with semi-synthetic models. | Limor Gultchin, Siyuan Guo, Alan Malek, Silvia Chiappa, Ricardo Silva | 2023-01-28T19:25:56Z | http://arxiv.org/abs/2301.12278v1 | # Pragmatic Fairness: Developing Policies with Outcome Disparity Control
###### Abstract
We introduce a causal framework for designing optimal policies that satisfy fairness constraints. We take a pragmatic approach asking what we can do with an action space available to us and only with access to historical data. We propose two different fairness constraints: a moderation breaking constraint which aims at blocking moderation paths from the action and sensitive attribute to the outcome, and by that at reducing disparity in outcome levels as much as the provided action space permits; and an equal benefit constraint which aims at distributing gain from the new and maximized policy equally across sensitive attribute levels, and thus at keeping pre-existing preferential treatment in place or avoiding the introduction of new disparity. We introduce practical methods for implementing the constraints and illustrate their uses on experiments with semi-synthetic models.
## 1 Introduction
The fairness of decisions made by machine learning models involving underprivileged groups has seen increasing attention and scrutiny by the academic community and beyond. A growing body of literature has been looking at the unfavourable treatment that might arise from historical biases present in the data, data collection practices, or the limits of modelling choices and techniques. Within this field of study, the vast majority of works has considered the problem of designing _fair prediction systems_, i.e. systems whose outcomes satisfy certain properties with respect to membership in a sensitive group (Verma and Rubin, 2018; Barocas et al., 2019; Mehrabi et al., 2019; Wachter et al., 2020; Mitchell et al., 2021; Pessach and Shmueli, 2022). In contrast, relatively little attention has been given to the problem of designing _fair optimal policies_(Joseph et al., 2016, 2018; Gillen et al., 2019; Kusner et al., 2019; Nabi et al., 2019; Chohlas-Wood et al., 2021). In this case, the goal is to design a decision making system that specifies how to select _actions_ that maximize a _downstream outcome_ of interest subject to fairness constraints.
The consideration of an outcome downstream of the action allows a more flexible and powerful approach to fair decision making, as it enables to optimize future outcomes rather than matching historical ones, and also to enforce fairness constraints on the effects of the actions rather than on the actions themselves. For example, when deciding on offering college admissions, a fair prediction system would output admissions that match historical ones and satisfy certain properties with respect to membership in a sensitive group. In contrast, a fair optimal policy system would prescribe admissions such that downstream outcomes, e.g.
academic or economic successes, are maximised and satisfy certain properties with respect to membership in a sensitive group. Enforcing fairness constraints on the effects of the actions in decision problems that correspond to allocations of resources or goods may reduce the risk of unfair delayed impact (Dwork et al., 2012; Liu et al., 2018; D'Amour et al., 2020).
In this work, we propose a causal framework for designing fair optimal policies that is inspired by the public health literature (Jackson and VanderWeele, 2018; Jackson, 2018; Jackson, 2020). We assume the causal graph in figure 1(a), where action \(A\) depends on sensitive attribute \(S\) and covariates \(X\); and where \(S\) and \(X\), which can be associated, potentially directly influence the outcome \(Y\). We define the policy with the conditional distribution \(p(A\,|\,S,X;\sigma_{A})\) parameterized by \(\sigma_{A}\), represented in the graph as a _regime indicator_ (Dawid, 2007; Correa and Bareinboim, 2020). The aim is to learn a parametrization that maximizes \(Y\) in expectation while controlling its dependence on \(S\). This graph describes common real-world settings in which the action can only indirectly control for the dependence of \(Y\) on \(S\), and therefore to a level that depends on the available action space. In addition, it enables learning the optimal policy from available historical data collected using a baseline policy \(p(A\,|\,S,X;\sigma_{A}=\emptyset)\), overcoming issues such as, e.g., cost or ethical constraints in taking actions, which are common in real-world applications where fairness is of relevance. To gain more insights into the difference between our fair optimal policy framework and the fair prediction system framework, we offer a schema of the fair prediction setting with the causal graph in figure 1(b), in which the action coincides with the outcome; and describe the goal as learning a parametrization such that \(p(Y\,|\,S,X;\sigma_{Y})\) matches the distribution of past outcomes \(p(Y\,|\,S,X;\sigma_{Y}=\emptyset)\) while controlling for dependence of the outcome distribution on \(S\).
We consider two constraints for controlling disparity in the downstream outcomes with respect to \(S\), which may be applicable for different use cases, depending on context and available actions: (i) a moderation breaking constraint, which aims at actively reducing disparity to the extent permitted by the action space; and (ii) an equal benefit constraint, which aims at equalizing the disparity of a new policy with that of the baseline policy in order to maintain the preferential treatment present in the baseline policy or to conservatively avoid introducing new disparity. We demonstrate the performance of our framework on two settings based on real-world data.
## 2 Outcome Disparity Controlled Policy
We assume the setting represented by the causal graph in figure 1(a), where the sensitive attribute \(S\) corresponds to characteristics of an individual, such as race, gender, disabilities, sexual or political orientation, which we wish to protect against some measure of unfairness. We focus on discrete \(S\), but the proposed methods can be adapted to settings with continuous \(S\).
This causal graph encodes the following statistical independencies assumptions: (a) \(\sigma_{A}\,\mbox{\Large$\perp$}\,Y\,|\,\{A,S,X\}\); (b) \(\sigma_{A}\,\mbox{\Large$\perp$}\,\{S,X\}\). These assumptions are not restrictive, as they are satisfied in any setting in which the partial ordering among nodes is given by \(\{S\cup X,\sigma_{A},A,Y\}\). Assumption (a) enables us to learn the optimal policy based on historical data collected using a _baseline policy_\(p(A\,|\,S,X;\sigma_{A}=\emptyset)\) (observational-data regime), rather than by taking actions (interventional-data regime). Such a baseline policy represents the action
Figure 1: (a): Causal graph with associated distribution \(p(A,S,X,Y;\sigma_{A})=p(Y|A,S,X)p(A|S,X;\sigma_{A})p(S,X)\) describing our fair optimal policy setting. (b): A schema of the fair prediction system setting.
allocation that was in place during the collection of the data pre-optimization. Assumption (b) allows us to compute the constraints with the proposed estimation methods below.
We denote with \(Y_{\sigma_{A}}\) the _potential outcome_ random variable, which represents the outcome resulting from taking actions according to \(p(A\,|\,S,X;\sigma_{A})\) and has distribution equal to \(p(Y;\sigma_{A})=\int_{a,s,x}p(a,s,x,Y;\sigma_{A})\). In the remainder of the paper, we indicate the baseline policy and potential outcome with \(p(A\,|\,S,X;\emptyset)\) and \(Y_{\emptyset}\) respectively.
The goal of the decision maker is to learn a parametrization \(\sigma_{A}\) that maximizes the expectation of the potential outcome, \(\mathbb{E}[Y_{\sigma_{A}}]\), while also controlling the disparity in \(Y_{\sigma_{A}}\) across \(S\) in the following ways: (i) through a _moderation breaking_ (ModBrk) constraint that aims at removing dependence of \(\mathbb{E}[Y_{\sigma_{A}}]\) on \(S\) as much as possible via what can be controlled, namely the allocation of \(A\) as determined by \(p(A|S,X;\sigma_{A})\); (ii) through an _equal benefit_ (EqB) constraint that requires the distribution of \(Y_{\sigma_{A}}-Y_{\emptyset}\) to be approximately equal across different values of \(S\), ensuring that gain from the new policy is distributed equally.
### Disparity Control via ModBrk Constraint
Without loss of generality1, \(\mu^{Y}(a,s,x):=\mathbb{E}[Y\,|\,a,s,x]\) can be written as
Footnote 1: This can be seen by setting \(f(s,x)=h(a,x)=0\).
\[\mu^{Y}(a,s,x)=f(s,x)+g(a,s,x)+h(a,x), \tag{1}\]
which leads to the following decomposition of \(\mu^{Y}_{\sigma_{A}}(s,x):=\mathbb{E}[Y_{\sigma_{A}}\,|\,s,x]=\int_{a}\mu^{Y} (a,s,x)p(a\,|\,s,x;\sigma_{A})\)
\[\mu^{Y}_{\sigma_{A}}(s,x)=f(s,x)+g_{\sigma_{A}}(s,x)+h_{\sigma_{A}}(x),\]
where \(g_{\sigma_{A}}(s,x):=\mathbb{E}[g(A,s,x)\ |\ s,x;\sigma_{A}]\), and \(h_{\sigma_{A}}(x):=\mathbb{E}[h(A,x)\ |\ s,x;\sigma_{A}]\). This decomposition contains (i) a component \(f(s,x)\) that cannot be affected by \(\sigma_{A}\), but which can contribute to disparity; (ii) a component \(h_{\sigma_{A}}(x)\) that can be adjusted by \(\sigma_{A}\) to increase expected outcomes, but which is not affected by \(S\); and (iii) a component \(g_{\sigma_{A}}(s,x)\) by which the choice of \(\sigma_{A}\) can influence differences that are _moderated_ by \(S\). Considering how \(\mu^{Y}_{\sigma_{A}}(s,x)\) varies with \(s\) as a measure of disparity suggests the constrained objective
\[\arg\max_{\sigma_{A}}\mathbb{E}[Y_{\sigma_{A}}]\quad\text{s.t.}\quad(\mathbb{ E}[g_{\sigma_{A}}(s,X)\ |\ S=s]-\mathbb{E}[g_{\sigma_{A}}(\bar{s},X)\ |\ S=\bar{s}])^{2}\leq \epsilon,\forall s,\bar{s}. \tag{2}\]
The slack \(\epsilon\) is chosen based on domain requirements and the feasibility of the problem: central to the setup is that we work with a given space of policies, which is constrained by real-world phenomena and, in general, can only do so much to reduce disparity.
**Motivation & Suitability.** ModBrk aims at removing \(S\)'s influence on the outcome. As such, it is appropriate when disparity is illegitimate. This is analogous to the appropriateness of the demographic parity constraint in fair prediction systems.
However, ModBrk controls disparity via the separate actions and therefore to the extent permitted by the action space. Some insights into ModBrk can be gained by noticing that we are operating in the (estimated) true process that follows a particular \(\sigma_{A}\). We marginalize the intermediate process between \((A,S,X)\) and the outcome \(Y\), but implicitly assume that actions change mediating events, such as \(M\) in figure 2(a). The extent by which we can mitigate unfairness is a property of the real-world space of policies. For instance, we cannot interfere in the direct dependence between \(S\) and \(Y\) except for the moderating effect that \(M\) might have under a new policy. This is in contrast to previous work, such as Nabi et al. (2019), which solves a planning problem in a "projection" of the real-world process where unfair information has been removed by some criteria.
Figure 2: (a): Causal graph providing further intuition for the ModBrk constraint. (b): Example use cases for the suggested constraints. See full description in the text.
An example in which the ModBrk constraint would be suitable is that of a company looking to change its outreach campaign \(A\) to maximize job applications \(Y\) while mitigating current imbalances in their demographics \(S\) and level of experience \(X\) (figure 2b, green labels). The company cannot control factors such as cultural preference among applicants for industry sectors, but can induce modifications by focusing recruiting efforts in events organized by minority groups in relevant conferences and by choosing recruiting strategies that do not interact with group membership.
### Disparity Control via EqB Constraint
We formalize the requirement that the distribution of \(Y_{\sigma_{A}}-Y_{\emptyset}\) must be approximately equal across different values of \(S\) using the cumulative distribution function (cdf) \(\mathcal{F}\), leading to the constrained objective
\[\arg\max_{\sigma_{A}}\mathbb{E}[Y_{\sigma_{A}}]\quad\text{s.t.}\quad\mathcal{ F}(Y_{\sigma_{A}}-Y_{\emptyset}\,|\,S=s)=\mathcal{F}(Y_{\sigma_{A}}-Y_{\emptyset}\,| \,S=\bar{s}),\forall s,\bar{s}. \tag{3}\]
In general, the distribution of \(Y_{\sigma_{A}}-Y_{\emptyset}\) is not identifiable, due to the fact that the decision maker has access only to realizations of \(Y\) under the baseline policy. This problem is similar to the "fundamental problem of causal inference" in the context of hard interventions, for which bounding approaches have been suggested (Pearl, 2000; Wu et al., 2019; Miratrix et al., 2018). Similarly, we propose matching upper bounds and lower bounds of the cdf.
**Motivation & Suitability.** EqB aims at not increasing disparity compared to the baseline policy. The EqB constraint would be appropriate in two scenarios. The first scenario is when we would like to keep the disparity present in the baseline policy when moving to the new one, since it includes legitimate disparity or desirable preferential treatment. This is a similar motivation to the equalized odds constraint in fair prediction systems. This scenario would arise, for example, when wishing to design an income taxation system to optimize disposable income or savings levels (figure 2b, purple labels). Disparities in \(Y\) across gender or racial groups \(S\) may exist, as well as possibly related individual characteristics \(X\), like socio-economic status, employment type and housing situation. The decision maker should take into account and maintain desirable preferential treatments present in the baseline policy, for example, for working moms.
The second scenario is when we would like to not introduce greater disparity in a new policy, acknowledging the limitations of the action space. This scenario would arise, e.g., when wishing to perform a software update of a medical or well-being app that includes different designs, with users belonging to different age groups. The decision maker might want to ensure that the new policy does not change existing levels of difference in health or wellbeing across groups, which relates to the usage of the app. The app use, as well as the downstream outcome \(Y\) itself, has pre-existing relations to age groups \(S\) as well as to other associated user characteristics \(X\), but the decision maker wants the update rollout to not exacerbate the already existing gap in outcome under the baseline policy. The decision maker may not want to forcefully reduce disparities in outcome in such a case - it is likely that younger age groups would be more responsive and happier with a software update, and have higher levels of the downstream outcome \(Y\) overall; the choice of policy can only do so much to change that.
## 3 Relation to Prior Work
While more attention has been given to the problem of designing fair prediction systems, some works have considered the problem of designing fair optimal policies. Like us, Nabi et al. (2019) use observational data, but in order to learn a policy as if particular path-specific effects between \(S\) and \(A\) and \(S\) and \(Y\) were completely deactivated. They do not place any constraints on the distribution of \(Y_{\sigma_{A}}\) given \(S\) and \(X\), require complex counterfactual computations, and aim to achieve a notion of fair policy which targets specific causal subpaths. Critically, they rely on a manipulation of \(S\), and do not have an equivalent to our pragmatic view, asking what can be done with a set of available actions. Chohlas-Wood et al. (2021) consider a more general utility function than ours as the optimization objective but focus on enforcing a notion similar to demographic parity on the choice of the policies, rather than considering a fairness notion on the outcomes. The most similar work to ours is the one from Kusner et al. (2019), still in the observational data regime. Crucially, this work is different in its motivation: it aims to extend the work in
Kusner et al. (2017) to the policy setting, and thus relies on manipulation of \(S\). Further, it consists mostly of budget treatment allocations and interference problems. It does not have our pragmatic view, and does not consider parameterized policy spaces that take into account a covariate vector \(X\). Due to the difference in their disparity definitions, these methods would not be directly comparable to ours. There is also a growing literature on fair policy optimization in online settings, but those deal with sequential decisions and interventional data, which are fundamentally different from ours (see Appendix A).
## 4 Method
In this section, we describe how the ModBrk and EqB constrained objectives (2) and (3) can be estimated from an observational dataset \(\mathcal{D}=\{a^{i},s^{i},x^{i},y^{i}\}_{i=1}^{N}\), \((a^{i},s^{i},x^{i},y^{i})\sim p(A,S,X,Y;\emptyset)\), and introduce a method for optimizing \(\sigma_{A}\) using neural networks for each constraint. In both cases, we enforce the constraints via an augmented Lagrangian approach, casting them as inequality constraints controlled by a slack value \(\epsilon\) (see Appendix B). Different choices of \(\epsilon\) lead to different trade-offs between utility and constraints, similarly to varying Lagrange multiplier values.
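As an illustration of this step only (the exact update schedule is deferred to Appendix B, which we do not reproduce here), one common augmented-Lagrangian treatment of an inequality constraint \(c(\sigma_{A})\leq\epsilon\) looks roughly as follows; all function and variable names below are ours:

```python
import torch

def augmented_lagrangian_penalty(constraint_value, eps, lam, mu):
    """One common penalty term for the inequality constraint c(sigma_A) <= eps.

    constraint_value: differentiable scalar c(sigma_A) computed from the policy
    eps:              allowed slack on the fairness violation
    lam:              current Lagrange-multiplier estimate (kept >= 0)
    mu:               quadratic penalty coefficient
    """
    violation = torch.clamp(constraint_value - eps, min=0.0)
    return lam * violation + 0.5 * mu * violation ** 2

# Inside the training loop one would minimise
#   loss = -utility_estimate + augmented_lagrangian_penalty(c, eps, lam, mu)
# and periodically update lam <- max(0, lam + mu * (c - eps)), possibly increasing mu.
```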
### ModBrk Constraint
For the ModBrk constrained objective (2), we learn a deterministic policy using an MLP neural network \(\text{MLP}^{A}_{\sigma_{A}}\), i.e. \(p(A\,|\,s,x;\sigma_{A})=\delta_{A=\text{MLP}^{A}_{\sigma_{A}}(s,x)}\), where \(\delta\) denotes the delta function and \(\text{MLP}^{A}_{\sigma_{A}}(s,x)\) the output of \(\text{MLP}^{A}_{\sigma_{A}}\) given input \(s,x\). This gives \(\mu^{Y}_{\sigma_{A}}(s,x)=\mathbb{E}[Y_{\sigma_{A}}|s,x]=\int_{a}\mu^{Y}(a,s,x )p(a\,|\,s,x;\sigma_{A})=\mu^{Y}(\text{MLP}^{A}_{\sigma_{A}}(s,x),s,x)\).
We model \(\mu^{Y}(a,s,x)=f(s,x)+g(a,s,x)+h(a,x)\) using a structured neural network \(\text{NN}^{Y}\) that separates into the three components \(f,g,h\) reflecting the decomposition of \(\mu^{Y}(a,s,x)\). We learn the parameters of \(\text{MLP}^{A}_{\sigma_{A}}\) and \(\text{NN}^{Y}\) in two phases, as outlined in figure 3a. In Phase I, we estimate the parameters of \(\text{NN}^{Y}\) from \(\mathcal{D}\). In Phase II, we learn the parameters of \(\text{MLP}^{A}_{\sigma_{A}}\) by optimizing objective (2) with \(\mathbb{E}[Y_{\sigma_{A}}]\approx\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}[Y_{\sigma _{A}}|\,s^{i},x^{i}]=\frac{1}{N}\sum_{i=1}^{N}\mu^{Y}(\text{MLP}^{A}_{\sigma _{A}}(s^{i},x^{i}),s^{i},x^{i})\), using the \(\text{NN}^{Y}\) trained in Phase I. If all variables are discrete, after estimating \(\mu^{Y}(a,s,x)\) and \(p(s,x)\), computing (2) can be cast as an LP (see Appendix C). Consistency is then a standard result which follows immediately. Notice that this formulation can also be extended to continuous \(S\) by defining the constraint via partial derivative, \(\mathbb{E}\left[\left|\frac{\partial g_{\sigma_{A}}(s,X)}{\partial s}\right| \Bigm{|}S=s\right]\leq\epsilon\).
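A minimal PyTorch-style sketch of how the decomposition \(\mu^{Y}(a,s,x)=f(s,x)+g(a,s,x)+h(a,x)\) could be mirrored in the architecture of \(\text{NN}^{Y}\); layer widths and class names here are our own illustration, not the released code:

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, 1))

class StructuredOutcomeNet(nn.Module):
    """mu^Y(a, s, x) = f(s, x) + g(a, s, x) + h(a, x)."""

    def __init__(self, dim_x, dim_s=1, dim_a=1):
        super().__init__()
        self.f = mlp(dim_s + dim_x)          # component the policy cannot move
        self.g = mlp(dim_a + dim_s + dim_x)  # moderation term involving S and A
        self.h = mlp(dim_a + dim_x)          # policy-adjustable, S-free term

    def forward(self, a, s, x):
        f = self.f(torch.cat([s, x], dim=-1))
        g = self.g(torch.cat([a, s, x], dim=-1))
        h = self.h(torch.cat([a, x], dim=-1))
        return f + g + h, g                  # g is reused by the ModBrk constraint

# Phase I: fit StructuredOutcomeNet to (a, s, x, y) tuples with an MSE loss.
# Phase II: freeze it and train MLP^A_sigma(s, x) to maximise the predicted
# outcome while penalising between-group differences of the averaged g component.
```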
Figure 3: (a) Training phases for ModBrk. (b) Training phases for EqB. Gray circles: inputs. Teal blocks: parameter layers. Light green blocks: fixed parameters. Purple diamonds: additive gates. Dashed edge: alternating inputs.

**Action clipping.** We suggest ensuring overlap of \(p(a|s,x;\sigma_{A})\) with \(p(a|s,x;\emptyset)\) by adaptively constraining the output of \(\text{MLP}^{A}_{\sigma_{A}}\) to be within an interval that resembles the one observed under the baseline policy. We consider two options: (a) matching the minimal and maximal value of \(a\) seen for each \(s,x\) combination (binning continuous elements), and (b) extending that interval according to the difference between the minimal and maximal \(a\) value seen with each \(s,x\). In the experiments, we opted for constraining the output of \(\text{MLP}^{A}_{\sigma_{A}}\) to be in the interval \([\min_{A_{S,X}}-\eta\text{gap}_{A_{S,X}},\max_{A_{S,X}}+\eta\text{gap}_{A_{S,X}}]\), where \(\text{gap}_{A_{S,X}}=\max_{A_{S,X}}-\min_{A_{S,X}}\). We enforced this interval with a shifted Sigmoid function in the last layer of \(\text{MLP}^{A}_{\sigma_{A}}\). We set \(\eta=1\) to allow some extrapolation and increase in utility.
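A minimal sketch of this output layer, assuming the per-\((s,x)\) bin statistics \(\min_{A_{S,X}}\) and \(\max_{A_{S,X}}\) have already been extracted from the baseline data (function and argument names are ours):

```python
import torch

def clipped_action(raw_output, a_min, a_max, eta=1.0):
    """Squash the raw policy output into [a_min - eta*gap, a_max + eta*gap].

    a_min, a_max: per-sample tensors with the smallest/largest baseline action
                  observed for the (s, x) bin of each sample; gap = a_max - a_min.
    """
    gap = a_max - a_min
    lo = a_min - eta * gap
    hi = a_max + eta * gap
    return lo + (hi - lo) * torch.sigmoid(raw_output)
```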
### EqB Constraint
For the EqB constrained objective (3), we require knowledge of the baseline policy as well as upper and lower bounds on the cdf \(\mathcal{F}(Y_{\sigma_{A}}-Y_{\emptyset}\,|\,S=s),\forall s\). \(Y_{\sigma_{A}}-Y_{\emptyset}\) is a counterfactual quantity, which is not generally identifiable. In fact, we do not have access to the joint distribution of \(Y_{\sigma_{A}}\) and \(Y_{\emptyset}\) as we never observe both variables simultaneously for the same individual. However, we show next that if one is willing to make a parametric assumption, then \(Y_{\sigma_{A}}-Y_{\emptyset}\) is partially identifiable and amenable to bounding. We assume that the baseline policy is Gaussian and only consider Gaussian policies, with equal homogeneous variance, i.e. \(p(A|s,x;\emptyset)=\mathcal{N}(\mu^{A}_{\emptyset}(s,x),V^{A})\), \(p(A|s,x;\sigma_{A})=\mathcal{N}(\mu^{A}_{\sigma_{A}}(s,x),V^{A})\). In addition, we assume that \(Y_{\sigma_{A}}\) and \(Y_{\emptyset}\) are jointly Gaussian conditioned on \(S,X\), with marginal means \(\mu^{Y}_{\sigma_{A}}(s,x)\), \(\mu^{Y}_{\emptyset}(s,x)\) and equal homogeneous marginal variance \(V^{Y}\).
The assumption on \(Y_{\sigma_{A}}\) and \(Y_{\emptyset}\) enables us to bound the population cdf by maximizing/minimizing it with respect to the unknown correlation coefficient \(\rho(s,x)\in[-1,1]\), where \(\rho(s,x):=\frac{\mathrm{Cov}(Y_{\sigma_{A}},Y_{\emptyset}\,\,|\,s,x)}{V^{Y}}\). Let \(\Phi\) denote the cdf of the standard Gaussian. We can write
\[\mathbb{P}(Y_{\sigma_{A}}-Y_{\emptyset}\leq z\,|\,x,s)=\Phi\left(\frac{z-\mu^ {Y}_{\sigma_{A}}(s,x)+\mu^{Y}_{\emptyset}(s,x)}{\sqrt{2V^{Y}\left(1-\rho(s,x) \right)}}\right).\]
Defining \(\mu(s,x):=\mu^{Y}_{\sigma_{A}}(s,x)-\mu^{Y}_{\emptyset}(s,x)\) and \(f^{z}_{s,x}(\rho):=\Phi\left(\frac{z-\mu(s,x)}{\sqrt{2V^{Y}\left(1-\rho\right) }}\right)\), we can show that, for any \(s,x\), the following bounds are the tightest: (i) \(f^{z}_{s,x}(-1)\leq\mathbb{P}(Y_{\sigma_{A}}-Y_{\emptyset}\leq z\,|\,x,s)\leq f ^{z}_{s,x}(1),\text{for }z-\mu(s,x)>0\); (ii) \(f^{z}_{s,x}(1)\leq\mathbb{P}(Y_{\sigma_{A}}-Y_{\emptyset}\leq z\,|\,x,s)\leq f ^{z}_{s,x}(-1),\text{for }z-\mu(s,x)<0\).
Using \(N_{s}\) to indicate the number of elements in \(\mathcal{D}\) with \(s^{i}=s\), we obtain global lower and upper bounds estimates for \(\mathbb{P}(Y_{\sigma_{A}}-Y_{\emptyset}\leq z\,|\,s)\) as \(F^{L}_{s}(z)=\frac{1}{N_{s}}\sum_{i:s^{i}=s}f^{z}_{s,x^{i}}(-\text{sign}(z-\mu( s,x^{i})))\) and \(F^{U}_{s}(z)=\frac{1}{N_{s}}\sum_{i:s^{i}=s}f^{z}_{s,x^{i}}(\text{sign}(z-\mu( s,x^{i})))\).
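Under these Gaussian assumptions the per-group bound curves \(F^{L}_{s}(z)\) and \(F^{U}_{s}(z)\) reduce to averages of standard normal cdfs evaluated at the extreme correlations \(\rho=\pm 1\). A NumPy/SciPy sketch with our own names (`mu_new`, `mu_base` and `var_y` are assumed to come from the Phase I fits; the limit \(\rho\to 1\) is approximated with a small epsilon):

```python
import numpy as np
from scipy.stats import norm

def cdf_bounds(z, mu_new, mu_base, var_y):
    """Lower/upper bounds on P(Y_sigma - Y_0 <= z | S = s) for one group.

    mu_new, mu_base: arrays of mu^Y_sigma(s, x_i) and mu^Y_0(s, x_i) over the
                     group's samples; var_y: shared outcome variance V^Y.
    """
    diff = z - (mu_new - mu_base)
    wide = np.sqrt(2.0 * var_y * 2.0)      # rho = -1: variance 2*V^Y*(1 - (-1))
    narrow = np.sqrt(2.0 * var_y * 1e-8)   # rho -> +1: the cdf tends to a step at diff = 0
    f_rho_minus1 = norm.cdf(diff / wide)
    f_rho_plus1 = norm.cdf(diff / narrow)
    lower = np.where(diff > 0, f_rho_minus1, f_rho_plus1).mean()
    upper = np.where(diff > 0, f_rho_plus1, f_rho_minus1).mean()
    return lower, upper
```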
We operationalize the constraint by minimizing the mean squared error (MSE) of the bounds differences, i.e. our final objective is
\[\arg\max_{\sigma_{A}}\mathbb{E}[Y_{\sigma_{A}}]\quad\text{s.t.}\quad\sum_{z \in S_{z}}\left(||F^{L}_{s}(z)-F^{L}_{\bar{s}}(z)||^{2}_{2}+||F^{U}_{s}(z)-F^{U }_{\bar{s}}(z)||^{2}_{2}\right)\leq\epsilon, \tag{4}\]
\(\forall s,\bar{s}\), where \(S_{z}\) is a grid of values.
We propose to estimate \(\mathbb{E}[Y_{\sigma_{A}}]\) with the inverse probability weighting (IPW) estimator. This is an alternative estimation approach to the one used for ModBrk, to demonstrate the modularity of our approach and constraints.
\[\mathbb{E}[Y_{\sigma_{A}}]=\int_{a,s,x,y}y\frac{p(a\,|\,s,x;\sigma_{A})}{p(a\,| \,s,x;\emptyset)}p(a,s,x,y;\emptyset)\approx\frac{1}{N}\sum_{i=1}^{N}y^{i}\frac {p(a^{i}\,|\,s^{i},x^{i};\sigma_{A})}{p(a^{i}\,|\,s^{i},x^{i};\emptyset)}.\]
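Given the Gaussian policy assumption with shared variance \(V^{A}\), the importance weights are a ratio of two normal densities whose normalising constants cancel; a sketch with our own variable names (`mu_new` and `mu_base` denote the outputs of \(\text{MLP}^{A}_{\sigma_{A}}\) and \(\text{MLP}^{A}_{\emptyset}\)):

```python
import torch

def ipw_utility(y, a, mu_new, mu_base, var_a):
    """Inverse-probability-weighted estimate of E[Y_{sigma_A}].

    y, a:    observed outcomes and baseline actions from the dataset D
    mu_new:  new policy mean MLP^A_sigma(s, x) for each sample
    mu_base: estimated baseline policy mean MLP^A_0(s, x) for each sample
    var_a:   shared action variance V^A
    """
    # log of N(a; mu_new, V^A) / N(a; mu_base, V^A)
    log_ratio = (-(a - mu_new) ** 2 + (a - mu_base) ** 2) / (2.0 * var_a)
    weights = torch.exp(log_ratio)
    return (weights * y).mean()
```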
We model \(\mu^{A}_{\emptyset}(s,x)\), \(\mu^{A}_{\sigma_{A}}(s,x)\) and \(\mu^{Y}(a,s,x)=\mathbb{E}[Y\,|\,a,s,x]\) using MLP neural networks \(\text{MLP}^{A}_{\emptyset}\), \(\text{MLP}^{A}_{\sigma_{A}}\), and \(\text{MLP}^{Y}\) respectively. We learn the parameters of these networks in two phases as outlined in figure 3(b). In Phase I, we learn the parameters of \(\text{MLP}^{A}_{\emptyset}\) and \(\text{MLP}^{Y}\) from \(\mathcal{D}\) with MSE losses between predicted and observed actions and outcome. We obtain a MAP estimate of \(\mu^{Y}_{\emptyset}(s,x)=\int_{a}\mu^{Y}(a,s,x)p(a|s,x,\emptyset)\) as \(\hat{\mu}^{Y}_{\emptyset}(s,x)=\int_{a}\mu^{Y}(a,s,x)\delta_{A=\mu^{A}_{ \emptyset}(s,x)}\). We estimate \(V^{A}\) and \(V^{Y}\) through averaging the MSE of target and predicted mean output from \(\text{MLP}^{A}\) and \(\text{MLP}^{Y}\) respectively 2. In Phase II, we learn the parameters of \(\text{MLP}^{A}_{\sigma_{A}}\) that maximize objective (4) using the \(\text{MLP}^{A}_{\emptyset}\) and \(\text{MLP}^{Y}\) trained in Phase I.
Footnote 2: We assume \(V^{Y}\) to be homoskedastic.
**Misspecification and parametric restrictiveness.** Notice that the Gaussianity assumption is placed on the conditional joint distribution of \((Y_{\sigma_{A}},Y_{\emptyset})\) given \(S\) and \(X\), rather than on the marginal distribution. It is thus reasonable to invoke the central limit theorem and assume that, given enough samples, the residuals of \(Y_{\emptyset}|s,x\) and \(Y_{\sigma_{A}}|s,x\) are normally distributed, since one can view the residuals as aggregating various minor factors that explain the remaining variability after taking into account a strong signal. Also notice that our proposed approach can be easily extended to heteroskedastic Gaussian regression models which are very flexible and can accommodate many real-world datasets. Finally, a nonparametric approach for the constraint computation would be possible e.g. via Frechet bounds (see Appendix D), but that may come at a computational cost. To illustrate the effects of misspecification on our proposed assumption, we present below a formal result showing that our constraint estimation error is bounded from above by how the true joint density deviates from the Gaussian density (see Appendix D for a proof and discussion).
**Proposition 1**: _For fixed \(s,x\), let \(Q=\mathbb{P}(Y_{\sigma_{A}}-Y_{\emptyset}\leq z\,|\,s,x)\) and our estimator \(\hat{Q}=\Phi\left(\frac{z-\mu_{\sigma_{A}}^{Y}(s,x)+\mu_{\emptyset}^{Y}(s,x)}{\sqrt{2V^{Y}\left(1-\rho(s,x)\right)}}\right)\). Let \(f_{s,x}\) denote the joint density function of \(Y_{\sigma_{A}},Y_{\emptyset}\,|\,s,x\) and \(\phi_{s,x}\) the joint Gaussian density function with mean \(\left[\mu_{\sigma_{A}}^{Y}(s,x),\mu_{\emptyset}^{Y}(s,x)\right]\) and shared variance \(V\) with correlation \(\rho(s,x)\). Then \(\left|Q-\hat{Q}\right|\leq\left\|f_{s,x}-\phi_{s,x}\right\|_{1}\), where \(\left\|\cdot\right\|_{1}\) denotes the \(L_{1}\) norm._
## 5 Experiments
We evaluated the ModBrk and EqB constraint methods3 on the New York City Public School District (NYCSchools) dataset compiled in Kusner et al. (2019), which we augmented to include actions; and further tested ModBrk on the Infant Health and Development Program (IHDP) dataset, specifically the real-data example examining dosage effects described in Section 6.2 of Hill (2011) (we could not use this dataset for EqB due to its reliance on parametric assumptions). Below, we briefly discuss the datasets and defer further details to Appendix E.
Footnote 3: The code reproducing our results is available at [https://github.com/limorigu/PragmaticFairness](https://github.com/limorigu/PragmaticFairness)
**Action-augmented NYCSchools dataset.** We adopted the same sensitive attribute and covariates as Kusner et al. (2019), and augmented the dataset with generated actions and outcomes. We created continuous actions corresponding to funding level decisions as \(A=(w_{SX}^{T}SX)^{2}+\max(0,w_{X}^{T}X)+\mathcal{N}(0.5,0.4)\), where \(E\) is the original percent of students taking the SAT/ACT exams (pre-college entry) in the dataset. The proposed form of \(Y\) is discussed in Appendix E.
**IHDP dataset.** The IHDP dataset describes a program that targeted low-birth-weight, premature infants, providing them with intensive high-quality child care and home visits from a trained provider. As continuous action \(A\) we used the self-selected number of participation days in the program. The outcome \(Y\) corresponds to child score attainment in cognitive tests at age three. We considered mother's race (white vs. non-white) as sensitive attribute \(S\). We reinterpreted the original setting as a resource allocation problem as follows: rather than as a self-decision, we view the number of days in treatment as external, e.g., by assigning different individuals to different lengths of participation. In this case, the ModBrk constraint goal is to break the moderation of the allocation of days in program by the group membership, such that the resource allocation policy is not responsible for the difference in scores attained by different groups.
### Results
We evaluated ModBrk and EqB by comparing these methods with: (1) optimizing the policy with no disparity constraint (Unconstrained), (2) optimizing the policy without using \(S\) (Drop \(S\)), and (3) using the baseline policy (\(\sigma_{A}=\emptyset\)). As discussed in Section 3, previous work on fair policy or action allocation differs from our setting and objective, and therefore cannot be meaningfully compared. For ModBrk, we also compare against different levels of constant actions (\(\text{Const}_{A}\)). We do not include it for EqB since the IPW estimator is not suitable for \(\delta\)-distributions. The range of \(\epsilon\) values in the plots was determined such that the minimal \(\epsilon\) slack values correspond to the smallest constraint value achievable with our optimization strategy (this is the lowest possible constraint value achieved when setting \(\epsilon=0\)); the maximal \(\epsilon\) slack value is the one that
would closely match the constraint violation under unconstrained optimization. In practice, we propose to explore the frontier resulting from choices of slack values and select the most acceptable trade-off for the user.
**ModBrk Constraint Method.** The results for ModBrk on the NYCSchools and IHDP datasets are presented in Figs. 4(a), 4(d) and in Figs. 4(c), 4(f) respectively4. For both datasets, as we increase the tolerance on fairness violations, in the form of higher slack value \(\epsilon\) in (2), we see a major increase in constraint value, while allowing some increase in utility \(\mathbb{E}[Y_{\sigma_{A}}]\). Observing the utility values broken down by group membership (\(\mathbb{E}[Y_{\sigma_{A}}\,|\,S=1]\) vs. \(\mathbb{E}[Y_{\sigma_{A}}\,|\,S=0]\)), we see most of this increase of utility is driven by the privileged group, \(S=1\) (a majority-white student body in the NYCSchools dataset, and white mothers for the IHDP dataset). This is to be expected given that we are trying to minimize interactions involving \(S=1\) and \(A\), i.e. where the membership in the privileged group inflates utility values, via higher \(g\) values. These higher \(g\) values for the \(S=1\) group also translate into higher utility values for \(S=1\) according to the baseline policy, as can be seen from the orange line, indicating baseline policy actions in the corresponding figures. Notice that the choice of an appropriate slack \(\epsilon\) here is a matter of trade-off based on user preferences of utility vs. constraint value, and will depend on the specific dataset. We can gain deeper insight into the working of our approach by inspecting the histogram of recommended actions at the end of the policy model training. For both datasets, we see that applying the constraint with a small \(\epsilon\) value results in mapping more \(S=1\) members to lower action values compared to the \(S=0\) group, indicating that to enforce the constraint more tightly means allocating lower value actions to the privileged group, while giving out as many high actions
Figure 4: (Top) influence of the slack value on obtained values of the objective function and fairness constraint violations. (Bottom) recommended actions by group for unconstrained optimization and for constrained optimization, where the slack on fairness violation is set to 0 or the highest level shown in the top plot, respectively.
to \(S=0\) as possible under the action clipping setting described previously. Notice that in both settings we succeed in increasing the utility compared to the baseline policy. Recall that we employ action clipping to ensure some overlap with the baseline actions for estimation purposes. This means that we are bound to some extent to the interval of actions seen for each \(s,x\) in the baseline policy (this trade-off can be explored via choice of \(\eta\) value, see Section 4.1). One could also increase \(\eta\) to achieve higher utility at the cost of increased total action allocation (i.e. higher "budget") and sacrifice in coverage. This is also why we cannot achieve utility values that are as high as the highest \(\text{Const}_{A}\) in the figures; we ask how best to distribute actions within the effective "budget".
**EqB Constraint Method.** We present results for EqB on the NYCSchools dataset in Figs. 4(b) and 4(e). As we increase the tolerance on fairness violations - in the form of higher slack value \(\epsilon\) in (4) - we observe an increase in constraint value with almost no change in overall utility \(\mathbb{E}[Y_{\sigma_{A}}]\). Observing the utility values broken down by group membership (\(\mathbb{E}[Y_{\sigma_{A}}\,|\,S=1]\) vs. \(\mathbb{E}[Y_{\sigma_{A}}\,|\,S=0]\)), we also see no significant change in utility coming from either group. This result indicates that our method learns a policy that assigns actions ensuring equal benefit without sacrificing the utility of either group. Note that we also see an unusually high estimate of utility for \(S=1\). This could be explained by the IPW estimator not extrapolating well to unseen data, as the \(S=1\) group constitutes only 7% of an already small dataset. On the right hand side of Fig. 4(b) we observe similar recommended action histograms for the unconstrained (top right) and constrained (bottom right) cases. This shows that our method decreases disparity with a similar "budget". This is possible because we are not attempting to reduce disparity present under the baseline policy. The similarity in action histograms is partially due to the IPW objective which aligns recommended actions with observed ones to achieve higher weighting. To validate our approach, we compute the ground truth constraint value through counterfactual realization of the policy model's predicted action mean in the data generating process and compute the distributional difference of \(Y_{\sigma_{A}}-Y_{\emptyset}\,|\,S\) for the different sensitive attribute groups. Notice also that our method succeeds in increasing the utility compared to the baseline policy. Although the baseline policy has the lowest constraint, this is expected as we are comparing distributions of \(Y_{\sigma_{A}}-Y_{\emptyset}\,|\,S\), where \(\sigma_{A}=\emptyset\). Dropping \(S\) leads to a slight increase in constraint and a decrease in utility compared to the unconstrained setting, as our policy model performs better through taking into account \(S\) to break the indirect association between \(S\) and \(Y\).
## 6. Conclusion
We introduced a causal framework to learn fair policies given access to observational data and an action space. Taking a pragmatic view, we asked what is the best utility that can be achieved with the provided action space, while controlling two notions of disparity: one focusing on mitigating a possible moderation effect involving group membership and the policy, and the other focusing on ensuring equal benefit with respect to a baseline policy. We see this work as a first conceptual contribution in defining pragmatic fair impact policies, and envisage various possible future directions, including extending the proposed methods beyond binary sensitive attributes, to a multi-stage policy setting, to handle unmeasured confounding and to online optimization.
**Limitations discussion.**_Conceptual._ Our pragmatic approach relies on data that we have about mechanisms _in this world_, including possible inequities our actions in question cannot affect. We also do so within a stationary framework that currently does not account for feedback, although we see it as an important next step. This comes with caveats. First, whether or not it is desirable to control for disparities of outcomes across levels of the sensitive attribute is problem-dependent. For instance, if the action space consists of solely two options, to give a medical treatment or not, it may be unclear why we should take into consideration group differences in recovery: other things being equal, we just want to maximize the number of lives saved. In contrast, if \(Y\) is a relative measure, e.g., of wealth, pure wealth maximization may be judged to be harmful if disparities among groups in \(S\) are exacerbated. In this case, we may settle for a scenario with less aggregated wealth if disparities are controlled. Such value judgements are _not_ to be decided algorithmically. Our goal is to provide a formalization of disparity control _if_ it is judged to be desirable, and to provide an estimate of _whether_ different levels of control can be achieved under an acceptable loss of total expected outcome, _given_ an action space that is, again, a property of the real-world. We simply provide a framework to examine what is possible. However, if our data reflects biased mechanisms existing in the world that is
outside of reach for our actions, we will not be able to change those, and they will be reflected in estimates of what is possible. This is in contrast to methods that aim to model alternative fair worlds.
_Technical._ The parametric assumption we make for the operationalization of the EqB constraint is one that could be avoided and extended for greater generality (e.g., via Frechet bounds, see Appendix D.1). However, in this work we opted for a parametric assumption to focus on a general conceptual introduction. For the ModBrk constraint we propose a simple MLP estimation of the decomposition in Eq. 1. One could envision a more elaborate formulation that would avoid a possible failure mode, where \(g\) simply subsumes \(f\) and \(h\). We did not observe empirically that this happens in practice. However, one could avoid such a possible issue by including additional regularization or residual connections between the components of the MLP.
#### Acknowledgments
The authors would like to thank Anian Ruoss and Jessica Schrouff for their helpful feedback on the manuscript, as well as the Alan Turing Institute for providing access to computational resources.
|
2302.05254 | Gamma-ray energies and intensities observed in decay chain
$^{83}Rb$/$^{83m}Kr$/$^{83}Kr$ | Radioactive sources of the monoenergetic low-energy conversion electrons from
the decay of isomeric $^{83m}Kr$ are frequently used in the systematic
measurements, particularly in the neutrino mass and dark matter experiments.
For this purpose, the isomer is obtained by the decay of its parent
radionuclide $^{83}Rb$. In order to get more precise data on the gamma-rays
occuring in the $^{83}Rb$/$^{83m}Kr$ chain, we re-measured the relevant
gamma-ray spectra, because the previous measurement took place in 1976. The
obtained intensities are in fair agreement with this previous measurement. We
have, however, improved the uncertainties by a factor of 4.3, identified a new
gamma transition and determined more precisely energies of weaker gamma
transitions. | M. Šefčík, D. Vénos, O. Lebeda, C. Noll, J. Ráliš | 2023-02-10T14:08:27Z | http://arxiv.org/abs/2302.05254v1 | Gamma-ray energies and intensities observed in decay chain \({}^{83}\)Rb/\({}^{83m}\)Kr/\({}^{83}\)Kr
###### Abstract
Radioactive sources of the monoenergetic low-energy conversion electrons from the decay of isomeric \({}^{83m}\)Kr are frequently used in the systematic measurements, particularly in the neutrino mass and dark matter experiments. For this purpose, the isomer is obtained by the decay of its parent radionuclide \({}^{83}\)Rb. In order to get more precise data on the gamma-rays occurring in the \({}^{83}\)Rb/\({}^{83m}\)Kr chain, we re-measured the relevant gamma-ray spectra, because the previous measurement took place in 1976. The obtained intensities are in fair agreement with this previous measurement. We have, however, improved the uncertainties by a factor of 4.3, identified a new gamma transition and determined more precisely the energies of weaker gamma transitions.
PACS: 23.20.Lv \(\gamma\) transitions and level energies · 21.10.-k Properties of nuclei; nuclear energy levels · 23.40.-s \(\beta\) decay; double \(\beta\) decay; electron and muon capture
## 1 Introduction
\({}^{83m}\)Kr is formed by the decay of \({}^{83}\)Rb (half-life \(86.2\pm 0.1\) d) via electron capture (EC). Approximately three quarters of \({}^{83}\)Rb decays result in the isomer \({}^{83m}\)Kr (\(T_{1/2}=1.83\) h). It further decays by the cascade of the 9.4 and 32.2 keV nuclear transitions to the \({}^{83}\)Kr ground state. Due to the low energy and high multipolarity (E3 for the 32.2 keV transition) of the transitions, intense conversion electrons are emitted. These monoenergetic electrons are extensively used for the calibration and systematic measurements in the neutrino mass experiments (KATRIN, Project 8) [1; 2], dark matter experiments [3; 4] and also in the ALICE and COHERENT projects [5; 6]. In all these experiments the \({}^{83}\)Rb is first deposited into a suitable substrate, from which the daughter \({}^{83m}\)Kr emanates. The last primary data on the gamma-ray intensities in \({}^{83}\)Rb decay were published several decades ago, see [7; 8]. The recent compilation and evaluation of the relevant data are available in the Nuclear Data Sheets (NDS) [9]. In the framework of our development of the \({}^{83m}\)Kr sources for the neutrino project KATRIN, see [10; 11], we also re-measured the gamma-ray spectra present in the \({}^{83}\)Rb decay.
## 2 Measurement
Rubidium isotopes were produced at the NPI CAS Rez cyclotron TR-24 in the reactions \({}^{\rm nat}\)Kr(p,xn)\({}^{83,84,86}\)Rb using a pressurized gas target. The activity was extracted from the irradiated target by its thorough washout by water. The resulting aqueous solution was concentrated by evaporation and used for the activity deposition into the tungsten furnaces. The furnaces were then delivered to the HISKP in Bonn, where the gamma sources were prepared by the implantation of the separated \({}^{83}\)Rb ions with energy of 8 keV into the 0.5 mm thick Highly Oriented Pyrolytic Graphite (HOPG) substrate. The \({}^{83}\)Rb activity in the sources amounted to about 3 MBq. Another type of the source was prepared in the NPI by evaporation of the rubidium isotopes solution on the 2.5 \(\mu\)m thick mylar foil. For the spectra acquisition, two gamma-ray detectors were used: the Ortec HPGe detector with relative efficiency of 24.1 % and energy resolution of 1.9 keV at the energy of 1.33 MeV, and a low-energy Canberra SiLi detector with the diameter and thickness of 10.1 and 5 mm, respectively, and the energy resolution of 180 eV at the gamma-ray energy of 5.9 keV. Both detectors were equipped with a beryllium window. The Canberra spectrometric chains were used for the signal processing: amplifier 2025 and multichannel analyzer Multiport II controlled with the computer software Genie 2000. The ADC gain conversion was set at 8192 and 4096 channels for the HPGe and SiLi detector, respectively. The distance between the detector Be window and the measured source was set to 240 and 45.7 mm for the HPGe and SiLi detector, respectively. In order to reduce the sum peaks of the intense \({}^{83}\)Rb gamma-rays with the energies of 520.4, 529.6 and 552.5 keV with the accompanying strong krypton K X-rays, a nickel foil of 20 \(\mu\)m thickness was
applied on the HPGe beryllium window. The measured spectra were analysed with DEIMOS32 software [12].
The energy and detection efficiency calibrations were performed using the standards of \({}^{55}\)Fe (type EFX), and \({}^{133}\)Ba, \({}^{152}\)Eu and \({}^{241}\)Am (all three type EG3) provided by the Czech Metrology Institute (CMI). Since the calibration sources are encapsulated in polymethylmethacrylate (PMMA) and polyethylene, the attenuation of the gamma-rays in these materials was also measured to take it into account in the efficiency calibration. The efficiency of the HPGe detector was calibrated in the low (26-244 keV) and high (244-778 keV) energy regions with the uncertainties of 2.5 and 0.9 %, respectively. In case of the SiLi detector, the efficiency was determined with the uncertainty of 2 % in the energy region of 5.9-33 keV.
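As a generic illustration only (the calibration data and the exact fitting procedure used by the authors are not reproduced here), a full-energy-peak efficiency curve is commonly interpolated between calibration points by fitting a low-order polynomial in log-log space; the efficiency values below are placeholders:

```python
import numpy as np

# Placeholder calibration points: gamma-ray energies (keV) of lines from the
# 241Am, 152Eu and 133Ba standards, and illustrative (not measured) efficiencies
# already corrected for attenuation in the source encapsulation.
energy_kev = np.array([59.5, 121.8, 244.7, 344.3, 356.0, 778.9])
efficiency = np.array([2.1e-3, 2.4e-3, 1.6e-3, 1.2e-3, 1.1e-3, 5.6e-4])

# Common parametrisation: ln(efficiency) as a polynomial in ln(E).
coeffs = np.polyfit(np.log(energy_kev), np.log(efficiency), deg=2)

def full_energy_peak_efficiency(e_kev):
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

# Interpolated efficiencies at two of the strong 83Rb line energies.
print(full_energy_peak_efficiency(520.4), full_energy_peak_efficiency(552.5))
```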
In Figs. 1 and 2, examples of the \({}^{83}\)Rb gamma-ray spectra measured with the HPGe and SiLi detector, respectively are displayed. The measured gamma-ray energies and intensities are summarized in Tab. 1. The gamma-ray energies were determined with the HPGe detector in the special measurements of \({}^{83}\)Rb source together with the \({}^{152}\)Eu or \({}^{133}\)Ba standards, the gamma-rays of which were used for the calibration of the energy scale. The weak gamma transitions with the energies of 128.3 and 562.03 keV, respectively, were less distinct in the spectra acquired with the standards due to the additional Compton background. That is why the spectra with the sole \({}^{83}\)Rb source were used in their evaluation. For the energy calibration, suitable gamma lines from the background and stronger \({}^{83}\)Rb lines, the energy of which was determined previously by us, were employed. Our energies of the three strongest gamma lines in the \({}^{83}\)Rb decay agree well with the very precise values in [9] which were adopted from [14]. The energies of the remaining lines are slightly lower (by 0.1 to 1.0 keV) and were obtained with better precision in comparison with those in [8; 9]. We observed all gamma-rays presented in [9] except the 237.19 keV one for which the upper limit on its relative intensity of only 0.0011 was estimated. We are not able to observe it due to the presence of the intense 238.632(2) keV gamma line of the \({}^{212}\)Pb [15] from the \({}^{232}\)Th decay chain. In contrast, we observed the gamma line with the energy of 227.35(5) keV that is missing in [9]. The previous NDS review for A = 83 [16] listed this transition with the relative intensity of 0.03. The line was clearly visible in the spectra taken with our two different implanted sources. The half-life of this weak line was also determined to be of 90(+21,-12) days, which agrees well with the \({}^{83}\)Rb half-live of 86.2(1) days. This transition also fairly fits into the decay scheme between the nuclear levels 798.5 and 571.1 keV, see Fig. 3.
After implantation of the \({}^{83}\)Rb, the amount of the daughter \({}^{83m}\)Kr nuclei in HOPG substrate increases and within several \({}^{83m}\)Kr half-lives, the equilibrium is achieved. The amount of \({}^{83m}\)Kr starts then to decrease practically with the half-life of the parent \({}^{83}\)Rb. The possible emanation of the \({}^{83m}\)Kr out of the substrate may reduce the measured intensities of the 9.4 and 32.2 keV \({}^{83m}\)Kr gamma transitions (the decaying \({}^{83m}\)Kr nuclei find themselves partially outside of the space "visible" by the detector). Therefore we accomplished the measurement of the \({}^{83m}\)Kr retention in the implanted source. For this purpose, the HOPG substrate with the implanted source was placed on the top of the closeable cylindrical chamber. The bottom part of the chamber was equipped with a thin PMMA window enabling the detection of the gamma-rays with the SiLi detector. The chamber design is further described in [17]. Using the 32.2 gamma-ray rates measured with the chamber closed and open at the fixed distance of the HOPG substrate from SiLi detector, the retention of \({}^{83m}\)Kr in the substrate was determined to be 0.974(19), i.e. some emanation occurs.
The relative intensities of the 9.4 and 32.2 keV gamma transitions were corrected for the measured retention value. Our uncertainties of the gamma-ray intensities are on average by a factor of 4.3 smaller in comparison with previously published values. In our \({}^{83}\)Rb decay scheme (Fig. 3), the feedings of the \({}^{83}\)Kr levels by the electron capture and the \(\log ft\) values are listed. An assumption on the feeding of the Kr ground state at a level of 2.5 \(\pm\) 2.5 % according
Figure 1: Spectrum of \({}^{83}\)Rb acquired with the HPGe detector. The \(\gamma\)-lines which belong to the \({}^{83}\)Rb decay are marked by their energies in keV. The multiple lines denoted as Pb-XX are due to fluorescence effect in the detector Pb shielding.
Figure 2: The low energy spectrum of \({}^{83}\)Rb acquired with the SiLi detector. Besides the two gamma-rays resulting from the decay of its daughter \({}^{83m}\)Kr, the krypton K X-rays are present.
to the [9] was taken into account. The total intensity of the 32.2 keV transition, representing the number of \({}^{83\rm{m}}\)Kr nuclei produced per 100 \({}^{83}\)Rb decays, amounts to 76(3) %. In contrast to [9], our analysis demonstrated non-zero feeding of 4(3) % of the krypton isomeric state from the EC decay.
## 3 Conclusion
We have re-measured the gamma-ray spectra observed in the \({}^{83}\)Rb/\({}^{83}\)Kr decay chain by means of the HPGe and SiLi detectors. The values of the gamma-ray intensities are close to those in the previous paper. Nevertheless, their uncertainties were improved on average by a factor of 4.3. The feeding of the \({}^{83}\)Kr levels from the EC decay with the relevant \(\log ft\) values were also determined. We have observed the non-zero feeding of the isomeric state at the level of \(4\pm 3\) % for the first time. Moreover, the 227.35 keV gamma transition was measured and recommended to be introduced into the \({}^{83}\)Rb decay scheme. The gaseous \({}^{83\rm{m}}\)Kr, whose monoenergetic electrons are widely used for the systematic physical measurement, is formed in the 76(3) % of the \({}^{83}\)Rb decays.
This work was supported by the Ministry of Education, Youth and Sport of the Czech Republic (projects LTT19005 and LM2015056) and the Czech Academy of Sciences.
|
2305.01095 | LSTM-based Preceding Vehicle Behaviour Prediction during Aggressive Lane
Change for ACC Application | The development of Adaptive Cruise Control (ACC) systems aims to enhance the
safety and comfort of vehicles by automatically regulating the speed of the
vehicle to ensure a safe gap from the preceding vehicle. However, conventional
ACC systems are unable to adapt themselves to changing driving conditions and
drivers' behavior. To address this limitation, we propose a Long Short-Term
Memory (LSTM) based ACC system that can learn from past driving experiences and
adapt and predict new situations in real time. The model is constructed based
on the real-world highD dataset, acquired from German highways with the
assistance of camera-equipped drones. We evaluated the ACC system under
aggressive lane changes when the side lane preceding vehicle cut off, forcing
the targeted driver to reduce speed. To this end, the proposed system was
assessed on a simulated driving environment and compared with a feedforward
Artificial Neural Network (ANN) model and Model Predictive Control (MPC) model.
The results show that the LSTM-based system is 19.25% more accurate than the
ANN model and 5.9% more accurate than the MPC model in terms of predicting
future values of subject vehicle acceleration. The simulation is done in
Matlab/Simulink environment. | Rajmeet Singh, Saeed Mozaffari, Mahdi Rezaei, Shahpour Alirezaee | 2023-05-01T21:33:40Z | http://arxiv.org/abs/2305.01095v2 | # LSTM-based Preceding Vehicle Behaviour Prediction during Aggressive Lane Change for ACC Application
###### Abstract
The development of Adaptive Cruise Control (ACC) systems aims to enhance the safety and comfort of vehicles by automatically regulating the speed of the vehicle to ensure a safe gap from the preceding vehicle.However, conventional ACC systems are unable to adapt themselves to changing driving conditions and drivers' behavior. To address this limitation, we propose a Long Short-Term Memory (LSTM)based ACC system that can learn from past driving experiences and adapt and predict new situations in realtime.The model is constructed based on the real-world _highD_ dataset, acquired from German highways with the assistance of camera-equipped drones. We evaluated the ACC system under aggressive lane changes when the side lane preceding vehicle cut off, forcing the targeted driver to reduce speed. To this end, the proposed system was assessed on a simulated driving environment and compared with a feedforward Artificial Neural Network (ANN) model and Model Predictive Control (MPC) model. The results show that the LSTM-based system is 19.25 % more accurate than the ANN model and 5.9 % more accurate than the MPC model in terms of predicting future values of subject vehicle acceleration. The simulation is done in Matlab/Simulink environment.
## I Introduction
Despite the growing number of vehicles on the roads, the matter of ensuring road safety is frequently disregarded.The predominant form of road traffic accident is a rear-end collision, which arises when a vehicle collides with the one ahead of it. This type of collision constitutes 29% of all crashes and is responsible for 7.2% of fatalities [1], with the majority of these incidents attributed to human error.Advanced driver-assistance systems (ADAS) have been created to improve safety and driving comfort by providing alerts, warnings, assistance, or taking control of the vehicle when necessary [2]. ACC is a crucial element of ADAS, as it enables the longitudinal control system to automatically regulate the velocity of the vehicle and sustain a secure gap between the host and the preceding vehicle [3]. Several control approaches have been investigated in the extensive research on ACC, including the PID-based control [4], fuzzy logic control [5], model predictive control (MPC) [6], and neural networks (NN) [7]. The utilization of MPC in ACC systems offers several advantages, including its ability to achieve precise and optimal control, real-time multi-objective optimal control, and even high responsiveness during traffic congestion [8]. Deep learning has become a prominent focus of research in numerous systems and applications in recent years and has been used in various transportation and autonomous driving applications such as ACC systems [9, 10], cooperative adaptive cruise control (CACC) [11], traffic sign recognition [12], and map merging [13]. As per the author's knowledge, there was no use of the real-world data set in the above literature for developing the models.
This paper aims to extend the state-of-the-art by providing a data-driven approach for predicting the behavior of preceding vehicles and incorporating it into the ACC. The main contribution of this paper is to predict the ACC parameter (acceleration m/s\({}^{2}\)) of the subject vehicle (SV) during aggressive lane change by the side lane preceding vehicle (PV), and adapt the ACC parameters for a safer driving experience. Toward this aim, highly disaggregated naturalistic driving data from the _highD_ dataset are utilized [14].The dataset comprises over 45,000 km of naturalistic driving behavior, which was derived from 16.5 hours of video footage captured by camera-equipped drones on German highways. A Matlab-based program was initially employed to identify patterns of aggressive lane changes by the preceding vehicle in relation to the host vehicle in the data. The resulting data was then utilized as input for a real-time long short-term memory (LSTM) deep neural network, which predicted the desired acceleration of the host vehicle to adhere to the ACC scenario.
The paper is organized into several sections. Firstly, the problem statement is introduced, which is then followed by the methodology employed in the current study. A description of the data and its pre-processing is provided subsequently. The LSTM approach is then employed to train and test the data. Finally, the results of the proposed model are compared with the ANN model for predicting future acceleration values during aggressive lane change by the side lane preceding vehicle and conclusions drawn.
## II Problem Statement
We consider the problem of adapting the ACC parameter (acceleration m/s\({}^{2}\)) of the subject vehicle in advance to avoid aggressive lane changes by the preceding vehicle driving on a highway, using previously observed data. Aggressive lane change refers to a cut-off situation where a PV driver changes lanes so closely in front of SV that the driver must reduce speed suddenly to avoid a collision. Fig. 1 shows the scenarios when the preceding vehicle aggressively changes the lane from 3 to 2. Formally, we consider a set of observable features \(\Psi\) and a set of target outputs \(\Theta\) to be predicted.
In this article, our approach is to train a predictor for the future acceleration of the SV and incorporating it during ACC implementation. Therefore, we limit the amount of available information to the vehicles immediately in front of the SV which can be captured in the ACC application. LSTM had been used to train and test the processed data from the _highD_ dataset [14].
## III Data and Features
### _Dataset_
The _highD_ dataset [14] consists of German highways information with the assistance of camera-equipped drones.Traffic was recorded at six different locations and includes more than 110500 vehicles.Although the dataset was mainly created for the safety validation of highly automated vehicles, it is also suitable for many other tasks such as the analysis of traffic patterns or the parameterization of driver models.The dataset includes vehicles'ID, position, speed, acceleration, etc. shown in Table I.
### _Data Screening_
In this paper, the _highD_ data during the aggressive lane change by the preceding vehicle is counted. Based on the analysis of the factors affecting the ACC, combined with the decision-making behavior of the subject vehicle acceleration in the actual driving situation, a total of 5 data of two vehicles (subject and preceding vehicles) in a driving unit are selected as the ACC decision parameters. Based on this information, we aim to predict the acceleration of SV asthe output of the ACC model.
1. \(\mathbf{X}_{\text{sv}}=\text{positionof the SV}\)
2. \(\mathbf{X}_{\text{pv}}=\text{position of the PV}\)
3. \(\mathbf{V}_{\text{sv}}=\text{velocityof the SV}\)
4. \(\mathbf{V}_{\text{pv}}=\text{velocity of the PV}\)
5. \(\mathbf{d}=\text{distance between SV and point of lane}\)
6. change by PV
7. \(\mathbf{ACC}_{\text{sv}}=\text{acceleration of the SV}\)
When the SV encounters a situation where the side lane PV changes lanes and cuts in front of it, the SV driver will decrease speed and apply the brakes. Once the PV has passed, the SV will resume following the adaptive cruise control (ACC) system and maintain a safe distance from the PV.
The decision parameter (acceleration) is extracted by pre-processing and filtering the highD dataset, and some of the sample data are shown in Table II.
## IV Long Short-Term Memory Network Model
Recurrent Neural Networks (RNNs) are a unique type of neural network composed of multiple neural networks linked together in a chain-like structure, allowing them to model temporal dependencies in a sequence [15]. However, in practice, RNNs can struggle to model long dependencies effectively [16]. To overcome this challenge, a specialized form of RNN called Long Short-Term Memory (LSTM) was developed [17]. LSTMs retain the chain-like structure of RNNs but with modified individual units that enable them to learn long-term dependencies more effectively. LSTM networks employ input, forget, and output gate layers in addition to a memory cell, allowing them to regulate the flow of information. These gate layers are responsible for discarding non-essential information and retaining only the essential information required for a given task. LSTMs have been utilized in predicting highway trajectories, determining driver intentions at intersections, and lane change maneuvers [18].To develop a decision model for a vehicle's ACC system, this paper utilizes LSTM networks. The LSTM model is constructed using MATLAB software.
### _Training and test data_
The dataset covers both four-lane (two per direction) and six-lane (three per direction) highways with central dividing medians and hard shoulders on the outer edge. The recordings were made on highways. The dataset contains data from 110,000 vehicles (81% cars and 19% trucks), covering a total distance of 45,000 km, with 5,600 lane changes observed. We considered 40 frames before and after aggressive lane change. The current study uses only the longitudinal velocity and longitudinal acceleration time series from the dataset. Only the trajectory data from recording numbers9 to 60 are used to train and prediction models. This selected data includes 2,449 vehicles (2,034 cars and 415trucks) recorded, of which 259 vehicles executed aggressiveness change. Including both cars and trucks in the dataset allows for the capture of driver behavior in mixed traffic scenarios.The data is divided into training and validation set using an 80/20 ratio, shown in Table III. The test data set comprises 15,432 trajectories.
### _Determination of LSTM predictor_
To determine the optimal configuration of the LSTM predictor, its performance on the validation dataset and training time is assessed. The flow chart of the proposed network layers is shown in Fig.2. It consists of eight layers each has 200 neurons. A sequence input layer inputs sequence data to a network. A fully connected layer multiplies the input by a weight matrix and then adds a bias vector.The ReLU layer performs a threshold operation on each element of the input, where any value less than zero is set to zero.An LSTM layer learns long-term dependencies between time steps in time series and sequence data.The layer performs additive interactions, which can help improve gradient flow over long sequences during training and the regression layer computes the half-mean-squared-error loss for regression tasks. The Adam optimizer is utilized to adjust the learning rate [19]. The learning rate is set to 0.0001. The training process is stopped when the validation accuracy does not show improvement over five consecutive iterations/epochs to prevent overfitting.
## V Results
The performance of the vehicle acceleration prediction is evaluated in this section based on root mean square error (RMSE) as an indicator of the model prediction accuracy which is formulated as the following equation (1) and (2) [18]:
\[\overline{\mathcal{Y}}=\frac{1}{N}\sum_{t=1}^{N}\mathcal{Y}_{t} \tag{1}\] \[\text{RMSE}=\sqrt{\frac{\sum_{t=1}^{N}\left(y_{t}\overset{\wedge} {-}\mathcal{Y}_{t}\right)^{2}}{N}} \tag{2}\]
where \(\overline{\mathcal{Y}}\) is the mean of the measured acceleration, \(\overset{\wedge}{y_{t}}\) is the predicted acceleration, and \(N\) is the number of elements in output. Fig. 3 shows the RMSE values during the LSTM training process. As depicted in Fig. 3 the value of RMSE converges very fast and reaches below 0.025 values in 2000 iterations.
The proposed model is compared to the feedforward Artificial Neural Network (ANN) with five hidden layers each has 200 neurons and Model predictive control (MPC) [6] methods for prediction accuracy. The results are presented in Fig. 4, which showed that the future predicted values of the proposed model are more accurate than those of the ANN and MPC model. The overall prediction accuracy of the proposed model was found to be 98.5%, which is significantly higher than the accuracy of the ANN model (79.25 %), and MPC model (92.6 %). Hence proposed model performs better for the stated scenario.
## VI Acknowledgment
We acknowledge the financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) Catalyst Grant. Also, the assistance provided by Mrs. Mankeen Kaur (University of Windsor) in utilizing a Matlab program to refine the raw data is acknowledged by the authors.
## VII Conclusions
In this study, an LSTM model was proposed to predict the ACC future values of a subject vehicle's acceleration during aggressive lane change caused by a side lane preceding vehicle. The proposed system utilizes a comprehensive dataset acquired from German highways and can learn from past driving experiences to adapt and predict new situations in real-time.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\boldsymbol{X_{\text{sv}}(m)}\) & \(\boldsymbol{X_{\text{pv}}(m)}\) & \(\boldsymbol{V_{\text{sv}}(m\,s)}\) & \(\boldsymbol{V_{\text{pv}}(m\,/\,s)}\) & \(\boldsymbol{d(m)}\) & \(\boldsymbol{ACC_{\text{sv}}(m\,/\,s^{2})}\) \\ \hline
154 & 189.16 & 24.93 & 30.78 & 18.08 & 0.03 \\ \hline
78.71 & 115 & 24.98 & 32.89 & 23.79 & 0.035 \\ \hline
122.58 & 162.9 & 23.41 & 26.37 & 20.51 & 0.05 \\ \hline
209.92 & 252.89 & 30.61 & 34.24 & 35.17 & 0.06 \\ \hline
213.58 & 256.64 & 30.62 & 34.24 & 35.78 & 0.07 \\ \hline
222.12 & 266.22 & 30.64 & 34.23 & 36.82 & 0.08 \\ \hline
224.56 & 268.96 & 30.65 & 34.24 & 37.12 & 0.09 \\ \hline \end{tabular}
\end{table} TABLE II: ACC Model Decision Factor Data
Figure 2: Proposed network layers
The proposed LSTM-based system was compared with other methods, and the results show that our system outperforms state-of-the-art methods in predicting the future values of the subject vehicle acceleration. Therefore, it can be concluded that the proposed model is more effective and accurate in predicting the subject vehicle's acceleration during an aggressive lane change by side lane preceding vehicle. For future scope, authors will validate the proposed model for different scenarios such as lane merging, roundabouts, etc.
|
2303.02612 | On triharmonic hypersurfaces in space forms | In this paper we study triharmonic hypersurfaces immersed in a space form
$N^{n+1}(c)$. We prove that any proper CMC triharmonic hypersurface in the
sphere $\mathbb S^{n+1}$ has constant scalar curvature; any CMC triharmonic
hypersurface in the hyperbolic space $\mathbb H^{n+1}$ is minimal. Moreover, we
show that any CMC triharmonic hypersurface in the Euclidean space $\mathbb
R^{n+1}$ is minimal provided that the multiplicity of the principal curvature
zero is at most one. In particular, we are able to prove that every CMC
triharmonic hypersurface in the Euclidean space $\mathbb R^{6}$ is
minimal.These results extend some recent works due to Montaldo-Oniciuc-Ratto
and Chen-Guan, and give affirmative answer to the generalized Chen's
conjecture. | Yu Fu, Dan Yang | 2023-03-05T08:51:17Z | http://arxiv.org/abs/2303.02612v1 | # On triharmonic hypersurfaces in space forms
###### Abstract.
In this paper we study triharmonic hypersurfaces immersed in a space form \(N^{n+1}(c)\). We prove that any proper CMC triharmonic hypersurface in the sphere \(\mathbb{S}^{n+1}\) has constant scalar curvature; any CMC triharmonic hypersurface in the hyperbolic space \(\mathbb{H}^{n+1}\) is minimal. Moreover, we show that any CMC triharmonic hypersurface in the Euclidean space \(\mathbb{R}^{n+1}\) is minimal provided that the multiplicity of the principal curvature zero is at most one. In particular, we are able to prove that every CMC triharmonic hypersurface in the Euclidean space \(\mathbb{R}^{6}\) is minimal. These results extend some recent works due to Montaldo-Oniciuc-Ratto and Chen-Guan, and give affirmative answer to the generalized Chen's conjecture.
Key words and phrases:\(k\)-harmonic maps, Triharmonic hypersurfaces, constant mean curvature, constant scalar curvature 2020 Mathematics Subject Classification: Primary 53C40, 58E20; Secondary 53C42
## 1. Introduction
Let \(\phi:(M,g)\rightarrow(N,\bar{g})\) be a smooth map between Riemannian manifolds \(M\) and \(N\). A \(k\)-harmonic map \(\phi\), proposed by Eells and Lemaire [8], is a critical point of the \(k\)-energy functional
\[E_{k}(\phi)=\frac{1}{2}\int_{M}\big{|}(d+d^{*})^{k}\phi\big{|}^{2}v_{g}.\]
The Euler-Lagrange equation is given by \(\tau_{k}(\phi)\equiv 0\), where \(\tau_{k}(\phi)\) is the \(k\)-tension field. The concept of \(k\)-harmonic map is a natural generalization of harmonic map, and in particular for \(k=2\) and \(k=3\), the critical points of \(E_{2}\) or \(E_{3}\) are biharmonic or triharmonic maps, respectively (c.f. [24], [25], [11], [12]).
In recent years, biharmonic maps and biharmonic submanifolds have been widely studied, see [3, 10, 19] and the references therein. There are also a lot of results on triharmonic maps and triharmonic submanifolds (c.f. [1, 2, 11, 12, 13, 14, 16, 17]). Concerning triharmonic hypersurfaces in a space form \(N^{n+1}(c)\), Maeta [11] proved that any compact CMC triharmonic hypersurface in \(N^{n+1}(c)\) for \(c\leq 0\) is minimal. Recently, Montaldo-Oniciuc-Ratto [15] gave a systematic study of CMC triharmonic hypersurface in space forms. The authors proved that
**Theorem 1.1**.: ([15]) _Let \(M^{n}\) be a CMC triharmonic hypersurface in \(N^{n+1}(c)\) with \(c\leq 0\) and assume that the squared norm of the second fundamental form \(S\) is constant. Then \(M^{n}\) is minimal._
In particular, the assumption \(S\) being constant was removed for \(n=2\) in [15].
**Theorem 1.2**.: ([15]) _Let \(M^{2}\) be a CMC triharmonic surface in \(N^{3}(c)\). Then \(M^{2}\) is minimal if \(c\leq 0\) and \(M^{2}\) is an open part of the small hypersphere if \(c>0\)._
Very recently, Chen-Guan investigated triharmonic CMC hypersurfaces in a space form \(N^{n+1}(c)\) under some assumptions on the number of distinct principal curvatures in [6, 7].
**Theorem 1.3**.: ([6]) _Let \(M^{n}\)\((n\geq 3)\) be a CMC proper triharmonic hypersurface with at most three distinct principal curvatures in \(N^{n+1}(c)\). Then \(M^{n}\) has constant scalar curvature._
**Theorem 1.4**.: ([7]) _Let \(M^{n}\)\((n\geq 4)\) be a CMC proper triharmonic hypersurface with four distinct principal curvatures in \(N^{n+1}(c)\). If zero is a principal curvature with multiplicity at most one, then \(M^{n}\) has constant scalar curvature._
We recall _Chen's conjecture_ in the literature of biharmonic submanifolds: _any biharmonic submanifold in the Euclidean space \(\mathbb{R}^{n+1}\) is minimal_. There are some important progress in recent years to support the conjecture under some geometric restrictions (see, for instances [9, 19]), however the general case remains open. Taking into account Chen's conjecture, Maeta [11] further proposed the generalized Chen's conjecture on \(k\)-harmonic submanifolds:
**Conjecture** : Any \(k\)-harmonic submanifold in the Euclidean space \(\mathbb{R}^{n+1}\) is minimal.
In this paper, we are able to determine the geometry of CMC triharmonic hypersurfaces in a space form \(N^{n+1}(c)\) without the restrictions on the number of principal curvatures. We will prove the following statements:
**Theorem 1.5**.: _Let \(M^{n}\) be a CMC triharmonic hypersurface in the hyperbolic space \(\mathbb{H}^{n+1}\). Then \(M^{n}\) is minimal._
Moreover, restricting ourselves on the case of \(c>0\), we get
**Theorem 1.6**.: _Let \(M^{n}\) be a CMC proper triharmonic hypersurface in the sphere \(\mathbb{S}^{n+1}\). Then \(M^{n}\) has constant scalar curvature._
Combining our Theorem 1.6 with the results of Montaldo-Oniciuc-Ratto (Theorem 1.9, [15]) and Chen-Guan (Corollary 1.7, [7]), we provide a more general result for CMC triharmonic hypersurfaces in \(\mathbb{S}^{n+1}\).
**Corollary 1.7**.: _Let \(M^{n}\) be a CMC proper triharmonic hypersurface in the sphere \(\mathbb{S}^{n+1}\). Then either_
\((1)\)_\(H^{2}=2\) and \(M^{n}\) is an open part of \(S^{n}(1/\sqrt{3})\), or_
\((2)\)_\(H^{2}\in(0,t_{0}]\) and \(H^{2}=t_{0}\) if and only \(M^{n}\) is an open part of \(S^{n-1}(a)\times S^{1}(\sqrt{1-a^{2}})\), where \(a\) is given by_
\[a^{2}=\frac{2(n-1)^{2}}{n^{2}H^{2}+2n(n-1)+nH\sqrt{n^{2}H^{2}+4(n-1)}}\]
_and \(t_{0}\) is the unique real root belonging to \((0,2)\) of the polynomial_
\[f_{n}=n^{4}t^{3}-2n^{2}(n^{2}-5n+5)t^{2}-(n-1)(2n-5)(3n-5)t-(n-1)(n-2)^{2}.\]
Let us recall _the generalized Chern conjecture_ (c.f. [4, 5]), which says that: _any closed hypersurface in the unit sphere \(\mathbb{S}^{n+1}\) with constant mean curvature and constant scalar curvature is isoparametric_. Since the class of CMC proper triharmonic hypersurfaces in a sphere have constant scalar curvature, the next important problem is to study whether these hypersurfaces are isoparametric. The problem remains open in its full generality. The readers may refer to the recent important progress on the generalized Chern conjecture due to Tang and Yan et al. [22, 23].
Considering the case \(c=0\), we obtain a characterization under an assumption on the multiplicity of zero principal curvature.
**Theorem 1.8**.: _Let \(M^{n}\) be a CMC triharmonic hypersurface in the Euclidean space \(\mathbb{R}^{n+1}\). If zero is a principal curvature with multiplicity at most one, then \(M^{n}\) is minimal._
In particular, we can prove
**Theorem 1.9**.: _Any CMC triharmonic hypersurface in the Euclidean space \(\mathbb{R}^{6}\) is minimal._
**Remark 1.10**.: _Note that Theorems 1.8 and 1.9 give partial affirmative answers to the generalized Chen's Conjecture._
**Remark 1.11**.: _The assumption that the multiplicity of the principal curvature zero is at most one was necessary in [7] for treating triharmonic hypersurfaces with four distinct principal curvatures in space forms. In our results, we only need this for \(c=0\) and \(n>5\)._
At last, we point out that for a CMC proper triharmonic hypersurface in \(N^{n+1}(c)\), the two equations in (2.7) are quite similar to the equations of a proper biharmonic hypersurface in a space form, see for instance [9]. This is reasonable because the geometry property of triharmonicity is much weaker than biharmonicity. Hence, it is expected that more geometric features of triharmonic hypersurfaces could be found. Interestingly, we can achieve a complete classification of CMC triharmonic hypersurfaces in \(N^{n+1}(c)\) with \(c\neq 0\). This will benefit us in studying biharmonic hypersurfaces in \(N^{n+1}(c)\).
The paper is organized as follows. In Section 2, we recall some background on the theory of triharmonic hypersurfaces in space forms and derive some useful lemmas, which are very important for us to study the geometric properties of triharmonic hypersurfaces. In Section 3, we give the proofs of Theorems 1.5 and 1.6. In Section 4, we finish the proofs of Theorems 1.8 and 1.9.
## 2. Preliminaries
Let \(N^{n+1}(c)\) be an \((n+1)\)-dimensional Riemannian space form with constant sectional curvature \(c\). For an isometric immersion \(\phi:M^{n}\to N^{n+1}(c)\), we denote by \(\nabla\) the Levi-Civita connection of \(M^{n}\) and \(\widetilde{\nabla}\) the Levi-Civita connection of \(N^{n+1}(c)\). The Riemannian curvature tensors of \(M^{n}\) are respectively given by
\[R(X,Y)Z=(\nabla_{X}\nabla_{Y}-\nabla_{Y}\nabla_{X}-\nabla_{[X,Y] })Z,\] \[R(X,Y,Z,W)=\langle R(X,Y)W,Z\rangle.\]
The Gauss and Weingarten formulae are stated, respectively, as
\[\widetilde{\nabla}_{X}Y =\nabla_{X}Y+h(X,Y)\xi,\] \[\widetilde{\nabla}_{X}\xi =-AX.\]
Here \(X,Y,Z,W\) are tangent vector fields on \(M\), \(\xi\) is the unit normal vector field on \(M\), \(h\) is the second fundamental form of \(M\), and \(A\) is the shape operator.
Let us choose an orthonormal frame \(\{e_{i}\}_{i=1}^{n}\) of \(M\). With this frame, define \(\nabla_{e_{i}}e_{j}=\sum_{k}\Gamma_{ij}^{k}e_{k}\), where \(\Gamma_{ij}^{k}\) are the connection coefficients.
Denote by
\[R_{ijkl}= R(e_{i},e_{j},e_{k},e_{l}),\quad h_{ij}=h(e_{i},e_{j}),\] \[h_{ijk}= e_{k}(h_{ij})-h(\nabla_{e_{k}}e_{i},e_{j})-h(e_{i},\nabla_{e_{k}} e_{j}),\] \[= e_{k}(h_{ij})-\sum_{l}(\Gamma_{ki}^{l}h_{lj}+\Gamma_{kj}^{l}h_{ il}).\]
From the definition of the Gauss curvature tensor we obtain
\[R_{ijkl}=e_{i}(\Gamma_{jl}^{k})-e_{j}(\Gamma_{il}^{k})+\sum_{m}\Big{(}\Gamma_{ jl}^{m}\Gamma_{im}^{k}-\Gamma_{il}^{m}\Gamma_{jm}^{k}-(\Gamma_{ij}^{m}- \Gamma_{ji}^{m})\Gamma_{ml}^{k}\Big{)}. \tag{2.1}\]
Moreover, the Gauss and Codazzi equations are given, respectively, by
\[R_{ijkl} =c(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk})+(h_{ik}h_{jl}- h_{il}h_{jk}), \tag{2.2}\] \[h_{ijk} =h_{ikj}. \tag{2.3}\]
The mean curvature function \(H\) and the squared norm of the second fundamental form \(S\) are written respectively as
\[H=\frac{1}{n}\sum_{i=1}^{n}h_{ii}\quad\text{and}\quad S=\sum_{i,j=1}^{n}h_{ij} ^{2}. \tag{2.4}\]
From the Gauss equation, the scalar curvature \(R\) is given by
\[R=n(n-1)c+n^{2}H^{2}-S. \tag{2.5}\]
We recall a fundamental characterization result on CMC triharmonic hypersurfaces in \(N^{n+1}(c)\).
**Proposition 2.1**.: (c.f. [15]) A CMC hypersurface \(\phi:M^{n}\to N^{n+1}(c)\) is triharmonic if the mean curvature \(H\) and the squared norm of the second fundamental
form \(S\) on \(M^{n}\) satisfy
\[\begin{cases}H(\Delta\,S+S^{2}-ncS-n^{2}cH^{2})=0,\\ HA\nabla S=0.\end{cases} \tag{2.6}\]
According to (2.6), it is clear that minimal hypersurfaces are automatically triharmonic in \(N^{n+1}(c)\). A triharmonic hypersurfaces in \(N^{n+1}(c)\) is called _proper_ if it is not minimal.
In the following, we will consider a CMC proper hypersurface \(M^{n}\) in a space form \(N^{n+1}(c)\). Then (2.6) becomes
\[\begin{cases}\Delta\,S+S^{2}-ncS-n^{2}cH^{2}=0,\\ A\nabla S=0.\end{cases} \tag{2.7}\]
For a hypersurface \(M^{n}\) in \(N^{n+1}(c)\), we denote by \(\lambda_{i}\) for \(1\leq i\leq n\) its principal curvatures. The number of distinct principal curvatures is locally constant and the set of all points here is an open and dense subset of \(M^{n}\). Denote by \(M_{A}\) this set. On a non-empty connected component of \(M_{A}\), which is open, the number of distinct principal curvatures is constant. On that connected component, the multiplicities of the distinct principal curvatures are constant and hence \(\lambda_{i}\) are always smooth and the shape operator \(A\) is locally diagonalizable (see [18, 20, 21]).
Denote by \(\mathcal{N}:=\{p\in M:\nabla S(p)\neq 0\}\) and \(\mathcal{N}\subset M_{A}\). If \(S\) is constant, then \(\mathcal{N}\) is an empty set. From now on, we assume that \(S\) is not constant, that is \(\mathcal{N}\neq\emptyset\). We will work in \(\mathcal{N}\).
Observing from the second equation of (2.7), it is known that \(\nabla S\) is a principal direction with the corresponding principal curvature \(0\). Hence, we may choose an orthonormal frame \(\{e_{i}\}_{i=1}^{n}\) such that \(e_{1}\) is parallel to \(\nabla S\) and the shape operator \(A\) is diagonalizable with respect to \(\{e_{i}\}\), i.e., \(h_{ij}=\lambda_{i}\delta_{ij}\), where \(\lambda_{i}\) is the principal curvature and \(\lambda_{1}=0\).
Suppose that \(M^{n}\) has \(d\) distinct principal curvatures \(\mu_{1}=0,\mu_{2},\cdots,\mu_{d}\) with \(d\geq 4\), that is
\[\lambda_{i}=\mu_{\alpha}\quad\text{when}\quad i\in I_{\alpha},\]
where
\[I_{\alpha}=\Big{\{}\sum_{0\leq\beta\leq\alpha-1}n_{\beta}+1,\cdots,\sum_{0 \leq\beta\leq\alpha}n_{\beta}\Big{\}}\]
with \(n_{0}=0\) and \(n_{\alpha}\in\mathbb{Z}_{+}\) satisfying \(\sum_{1\leq\alpha\leq d}n_{\alpha}=n\), namely, \(n_{\alpha}\) is the multiplicity of \(\mu_{\alpha}\). For convenience, we will use the range of indices \(1\leq\alpha,\beta,\gamma,\cdots\leq d\) except special declaration.
We collect a lemma for later use.
**Lemma 2.2**.: (c.f. [7]) _The connection coefficients \(\Gamma_{ij}^{k}\) satisfy:_
(1)_\(\Gamma_{ij}^{k}=-\Gamma_{ik}^{j}\)._
(2)_\(\Gamma_{ii}^{k}=\frac{e_{k}(\lambda_{i})}{\lambda_{i}-\lambda_{k}}\) for \(i\in I_{\alpha}\) and \(k\notin I_{\alpha}\)._
(3)_\(\Gamma_{ij}^{k}=\Gamma_{ji}^{k}\) if the indices satisfy one of the following conditions:_
(3a)_\(i,j\in I_{\alpha}\) but \(k\notin I_{\alpha}\);_
(3b)_\(i,j\geq 2\) and \(k=1\)._
(4) \(\Gamma^{k}_{ij}=0\) _if the indices satisfy one of the following conditions:_
(4a)_\(j=k\);_
(4b)_\(i=j\in I_{1}\) _and_\(k\notin I_{1}\)_;_
(4c)_\(i,k\in I_{\alpha},i\neq k\) and \(j\notin I_{\alpha}\);_
(4d)_\(i,j\geq 2,i\in I_{\alpha},j\in I_{\beta}\) with \(\alpha\neq\beta\) and \(k=1\)._
(5)_\(\Gamma^{k}_{ji}=\frac{\lambda_{j}-\lambda_{k}}{\lambda_{i}-\lambda_{k}}\Gamma^{ k}_{ij}\), \(\Gamma^{i}_{ki}=\frac{\lambda_{k}-\lambda_{i}}{\lambda_{i}-\lambda_{j}}\Gamma^{ j}_{ik}\) for \(\lambda_{i}\), \(\lambda_{j}\) and \(\lambda_{k}\) are mutually different._
(6)_\(\Gamma^{k}_{ij}\Gamma^{k}_{ji}+\Gamma^{j}_{ik}\Gamma^{j}_{ki}+\Gamma^{j}_{jk} \Gamma^{i}_{kj}=0\) for \(\lambda_{i}\), \(\lambda_{j}\) and \(\lambda_{k}\) are mutually different._
We first derive some crucial lemmas for studying CMC proper triharmonic hypersurfaces in a space form.
**Lemma 2.3**.: _Denoting by \(P_{\alpha}=\frac{e_{1}(\mu_{\alpha})}{\mu_{\alpha}}\) for \(2\leq\alpha\leq d\), we have_
\[e_{1}(P_{\alpha}) =P_{\alpha}^{2}+c+\sum_{m\in I_{1}}\Gamma^{m}_{\alpha\alpha} \Gamma^{m}_{11}, \tag{2.8}\] \[e_{j}(P_{\alpha}) =\Gamma^{j}_{\alpha\alpha}P_{\alpha}-\sum_{m\in I_{1}}\Gamma^{m} _{\alpha\alpha}\Gamma^{1}_{jm}\;\;\text{for $j\in I_{1}$ and $j\neq 1$}. \tag{2.9}\]
Proof.: For \(i\in I_{\alpha}\) and \(\alpha\neq 1\), it follows from (2.1) and the terms (1), (2) and (4) in Lemma 2.2 that
\[R_{1i1i}= e_{1}(\Gamma^{1}_{ii})-e_{i}(\Gamma^{1}_{1i})+\sum_{m}\left( \Gamma^{m}_{ii}\Gamma^{1}_{1m}-\Gamma^{m}_{1i}\Gamma^{1}_{im}-(\Gamma^{m}_{1i }-\Gamma^{m}_{i1})\Gamma^{1}_{mi}\right)\] \[= e_{1}(\Gamma^{1}_{ii})+e_{i}(\Gamma^{i}_{11})-\sum_{m}\left( \Gamma^{m}_{ii}\Gamma^{m}_{11}+\Gamma^{m}_{1i}\Gamma^{1}_{im}+(\Gamma^{m}_{1i }+\Gamma^{1}_{im})\Gamma^{1}_{mi}\right)\] \[= e_{1}(\Gamma^{1}_{ii})-\sum_{m\in I_{1}}\left(\Gamma^{m}_{ii} \Gamma^{m}_{11}+\Gamma^{m}_{1i}\Gamma^{1}_{im}+(\Gamma^{m}_{1i}+\Gamma^{1}_{ im})\Gamma^{1}_{mi}\right)\] \[= e_{1}(\Gamma^{1}_{ii})-(\Gamma^{1}_{ii})^{2}-\sum_{m\in I_{1}} \Gamma^{m}_{ii}\Gamma^{m}_{11}.\]
On the other hand, from the Gauss equation (2.2) we get \(R_{1i1i}=c\). Therefore, we obtain (2.8).
For \(i\in I_{\alpha}\) and \(\alpha\neq 1\), \(j\in I_{1}\) and \(j\neq 1\), it follows from (2.1) and Lemma 2.2 that
\[R_{iji1} =e_{i}(\Gamma^{i}_{j1})-e_{j}(\Gamma^{i}_{i1})+\sum_{m}\left( \Gamma^{m}_{j1}\Gamma^{i}_{im}-\Gamma^{m}_{i1}\Gamma^{i}_{jm}-(\Gamma^{m}_{ij }-\Gamma^{m}_{ji})\Gamma^{i}_{m1}\right)\] \[=-e_{i}(\Gamma^{1}_{ji})+e_{j}(\Gamma^{1}_{ii})+\sum_{m}\left( \Gamma^{1}_{jm}\Gamma^{m}_{ii}+\Gamma^{1}_{im}\Gamma^{i}_{jm}+(\Gamma^{m}_{ij }-\Gamma^{m}_{ji})\Gamma^{1}_{mi}\right)\] \[=e_{j}(\Gamma^{1}_{ii})+\sum_{m\in I_{1}}\Gamma^{1}_{jm}\Gamma^{m }_{ii}-\Gamma^{j}_{ii}\Gamma^{1}_{ii}, \tag{2.10}\]
which together with \(R_{iji1}=0\) gives (2.9). Note that \(R_{iji1}=0\) follows from the Gauss equation (2.2) directly. We thus complete the proof.
**Lemma 2.4**.: _For any \(q\in\mathbb{Z}_{+}\), we have_
\[f(q):=\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}P_{\alpha}^{q}=\left\{ \begin{array}{cl}0,&\text{when $q$ is odd};\\ \frac{(q-1)!!}{q!!}(-c)^{\frac{q}{2}}nH,&\text{when $q$ is even}.\end{array}\right. \tag{2.11}\]
Proof.: Since the case \(n_{1}=1\) has been obtained in [7], we only need to prove it for \(n_{1}>1\).
Taking into account the definition of \(H\) from the first expression of (2.4), we have
\[\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}=nH. \tag{2.12}\]
Since \(H\) is constant, differentiating (2.12) with respect to \(e_{1}\), we obtain
\[\sum_{2\leq\alpha\leq d}n_{\alpha}e_{1}(\mu_{\alpha})=0,\]
which is equivalent to
\[\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}P_{\alpha}=0. \tag{2.13}\]
Differentiating (2.12) with respect to \(e_{m}\) for \(m\in I_{1}\) and \(m\neq 1\), we obtain
\[\sum_{2\leq\alpha\leq d}n_{\alpha}e_{m}(\mu_{\alpha})=0. \tag{2.14}\]
Differentiating (2.13) with respect to \(e_{1}\), from (2.8), (2.12), \(\mu_{\alpha}\Gamma_{\alpha\alpha}^{m}=e_{m}(\mu_{\alpha})\) and (2.14) we have
\[0 =\sum_{2\leq\alpha\leq d}n_{\alpha}\Big{(}e_{1}(\mu_{\alpha})P_{ \alpha}+\mu_{\alpha}e_{1}(P_{\alpha})\Big{)}\] \[=\sum_{2\leq\alpha\leq d}n_{\alpha}\Big{(}\mu_{\alpha}P_{\alpha}^ {2}+\mu_{\alpha}(P_{\alpha}^{2}+c+\sum_{m\in I_{1}}\Gamma_{\alpha\alpha}^{m} \Gamma_{11}^{m})\Big{)}\] \[=2\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}P_{\alpha}^{2}+ cnH+\sum_{2\leq\alpha\leq d}\sum_{m\in I_{1}}n_{\alpha}\mu_{\alpha}\Gamma_{ \alpha\alpha}^{m}\Gamma_{11}^{m}\] \[=2\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}P_{\alpha}^{2}+ cnH+\sum_{m\in I_{1}}\Big{(}\sum_{2\leq\alpha\leq d}n_{\alpha}e_{m}(\mu_{ \alpha})\Big{)}\Gamma_{11}^{m}\] \[=2\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}P_{\alpha}^{2}+ cnH,\]
which is equivalent to
\[\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}P_{\alpha}^{2}=-\frac{1}{2}cnH. \tag{2.15}\]
Equations (2.13) and (2.15) imply that (2.11) holds for \(q=1,2\). Next we will prove that it holds for general \(q\) by induction.
Differentiating (2.11) with respect to \(e_{1}\) yields
\[0 =\sum_{2\leq\alpha\leq d}n_{\alpha}\Big{(}e_{1}(\mu_{\alpha})P_{ \alpha}^{q}+q\mu_{\alpha}P_{\alpha}^{q-1}e_{1}(P_{\alpha})\Big{)}\] \[=\sum_{2\leq\alpha\leq d}n_{\alpha}\Big{(}\mu_{\alpha}P_{\alpha}^ {q+1}+q\mu_{\alpha}P_{\alpha}^{q-1}(P_{\alpha}^{2}+c+\sum_{m\in I_{1}}\Gamma_{ \alpha\alpha}^{m}\Gamma_{11}^{m})\Big{)}\] \[=\sum_{2\leq\alpha\leq d}\Big{(}(1+q)n_{\alpha}\mu_{\alpha}P_{ \alpha}^{q+1}+cqn_{\alpha}\mu_{\alpha}P_{\alpha}^{q-1}\Big{)}+q\sum_{2\leq \alpha\leq d}n_{\alpha}P_{\alpha}^{q-1}\sum_{m\in I_{1}}e_{m}(\mu_{\alpha}) \Gamma_{11}^{m}\]
\[=(1+q)f(q+1)+cqf(q-1)+q\sum_{m\in I_{1}}\Big{\{}\sum_{2\leq\alpha\leq d}n_{ \alpha}e_{m}(\mu_{\alpha})P_{\alpha}^{q-1}\Big{\}}\Gamma_{11}^{m}. \tag{2.16}\]
On the other hand, since (2.11) holds for \(f(q-1)\), we differentiate \(f(q-1)=\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}P_{\alpha}^{q-1}\) with respect to \(e_{j}\) for \(j\in I_{1}\) and \(j\neq 1\). It follows from (2.9) that
\[0 =\sum_{2\leq\alpha\leq d}\Big{(}n_{\alpha}e_{j}(\mu_{\alpha})P_{ \alpha}^{q-1}+(q-1)n_{\alpha}\mu_{\alpha}P_{\alpha}^{q-2}e_{j}(P_{\alpha}) \Big{)}\] \[=\sum_{2\leq\alpha\leq d}\Big{(}n_{\alpha}e_{j}(\mu_{\alpha})P_{ \alpha}^{q-1}+(q-1)n_{\alpha}\mu_{\alpha}P_{\alpha}^{q-2}\big{(}\Gamma_{\alpha \alpha}^{j}P_{\alpha}-\sum_{m\in I_{1}}\Gamma_{\alpha\alpha}^{m}\Gamma_{jm}^{1 })\Big{)}\] \[=\sum_{2\leq\alpha\leq d}\Big{(}n_{\alpha}e_{j}(\mu_{\alpha})P_{ \alpha}^{q-1}+(q-1)n_{\alpha}e_{j}(\mu_{\alpha})P_{\alpha}^{q-1}-(q-1)\sum_{m \in I_{1}}n_{\alpha}P_{\alpha}^{q-2}e_{m}(\mu_{\alpha})\Gamma_{jm}^{1}\Big{)}\] \[=\sum_{2\leq\alpha\leq d}\Big{(}qn_{\alpha}e_{j}(\mu_{\alpha})P_{ \alpha}^{q-1}-(q-1)\sum_{m\in I_{1}}n_{\alpha}P_{\alpha}^{q-2}e_{m}(\mu_{ \alpha})\Gamma_{jm}^{1}\Big{)}.\]
Hence the following relation holds for any \(j\in I_{1}\) and \(j\neq 1\)
\[q\sum_{2\leq\alpha\leq d}n_{\alpha}e_{j}(\mu_{\alpha})P_{\alpha}^{q-1}=\sum_{ m\in I_{1}}\Big{\{}(q-1)\sum_{2\leq\alpha\leq d}n_{\alpha}e_{m}(\mu_{\alpha})P_{ \alpha}^{q-2}\Big{\}}\Gamma_{jm}^{1}. \tag{2.17}\]
Since (2.17) holds for any \(q\), letting \(q=2\), (2.17) reduces to
\[2\sum_{2\leq\alpha\leq d}n_{\alpha}e_{j}(\mu_{\alpha})P_{\alpha}=\sum_{m\in I _{1}}\Big{\{}\sum_{2\leq\alpha\leq d}n_{\alpha}e_{m}(\mu_{\alpha})\Big{\}} \Gamma_{jm}^{1},\]
which together with (2.14) yields
\[2\sum_{2\leq\alpha\leq d}n_{\alpha}e_{j}(\mu_{\alpha})P_{\alpha}=0. \tag{2.18}\]
Letting \(q=3\) and using (2.18), (2.17) reduces to
\[3\sum_{2\leq\alpha\leq d}n_{\alpha}e_{j}(\mu_{\alpha})P_{\alpha}^{2}=\sum_{m \in I_{1}}\Big{\{}2\sum_{2\leq\alpha\leq d}n_{\alpha}e_{m}(\mu_{\alpha})P_{ \alpha}\Big{\}}\Gamma_{jm}^{1}=0.\]
Similarly, we can gradually show
\[q\sum_{2\leq\alpha\leq d}n_{\alpha}e_{j}(\mu_{\alpha})P_{\alpha}^{q-1}=0. \tag{2.19}\]
Hence, combing (2.16) with (2.19) gives
\[(q+1)f(q+1)+cqf(q-1)=0. \tag{2.20}\]
When \(q\) is even, both of \(q-1\) and \(q+1\) are odd. From (2.20), \(f(q-1)=0\) can yield \(f(q+1)=0\) as well.
When \(q\) is odd, both of \(q-1\) and \(q+1\) are even. We conclude from (2.20) that
\[f(q+1) =-\frac{cq}{q+1}f(q-1)\] \[=-\frac{cq}{q+1}\times\frac{(q-2)!!}{(q-1)!!}(-c)^{(q-1)/2}nH\] \[=\frac{q!!}{(q+1)!!}(-c)^{(q+1)/2}nH, \tag{2.21}\]
which completes the proof of Lemma 2.4.
**Remark 2.5**.: _While assuming an additional condition that the multiplicity of zero principal curvature is at most one, special cases of Lemmas 2.3 and 2.4 were derived in [7]._
**Lemma 2.6**.: _The equation \(\Delta\,S+S^{2}-ncS-n^{2}cH^{2}=0\) is equivalent to_
\[-6\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}^{2}+2 \Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}\Big{)}\Big{(} \sum_{\alpha=2}^{d}n_{\alpha}P_{\alpha}+\sum_{m\in I_{1}}\Gamma_{mm}^{1}\Big{)} +\Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\Big{)}^{2}\] \[-(n+2)c\Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\Big{)} -c\Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}\Big{)}^{2}=0.\]
Proof.: Since \(e_{i}(S)=0\) for \(2\leq i\leq n\), it follows from Lemma 2.2 that
\[\Delta S =-\sum_{i=1}^{n}(\nabla_{e_{i}}\nabla_{e_{i}}S-\nabla_{\nabla_{e_ {i}e_{i}}}S)\] \[=-e_{1}e_{1}S+e_{1}(S)\sum_{i=2}^{n}\Gamma_{ii}^{1}\] \[=-e_{1}e_{1}S+e_{1}(S)\Big{(}\sum_{\alpha=2}^{d}n_{\alpha}P_{ \alpha}+\sum_{m\in I_{1}}\Gamma_{mm}^{1}\Big{)}. \tag{2.22}\]
Noting \(S=\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\), it follows from (2.8) that
\[e_{1}(S) =2\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}e_{1}(\mu_{\alpha})=2 \sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}, \tag{2.23}\] \[e_{1}e_{1}(S) =4\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}^{2}+2 \sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}^{2}+2c\sum_{\alpha=2}^ {d}n_{\alpha}\mu_{\alpha}^{2}+2\sum_{\alpha=2}^{d}\sum_{m\in I_{1}}n_{\alpha} \mu_{\alpha}^{2}\Gamma_{\alpha\alpha}^{m}\Gamma_{11}^{m}\] \[=6\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}^{2}+2c \sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}+2\sum_{m\in I_{1}}\Big{\{}\sum_ {\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\Gamma_{\alpha\alpha}^{m}\Big{\}} \Gamma_{11}^{m}. \tag{2.24}\]
Differentiating \(S=\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\) with respect to \(e_{m}\) for \(m\in I_{1}\) and \(m\neq 1\), we get
\[0=e_{m}(S)=2\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}e_{m}(\mu_{\alpha})=2 \sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\Gamma_{\alpha\alpha}^{m}. \tag{2.25}\]
Combining (2.24) with (2.25) gives
\[e_{1}e_{1}(S)=6\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}^{2}+2c \sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}. \tag{2.26}\]
Substituting (2.23) and (2.26) into (2.22), we have
\[\Delta S= -6\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}^{2}+2 \Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}\Big{)}\Big{(} \sum_{\alpha=2}^{d}n_{\alpha}P_{\alpha}+\sum_{m\in I_{1}}\Gamma_{mm}^{1}\Big{)}\]
\[-2c\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}.\]
Hence, the proof has been completed.
## 3. Proof of Theorems 1.5 and 1.6
**The proof of Theorems 1.5 and 1.6**: We will prove Theorems 1.5 and 1.6 by deriving a contradiction from the assumption that \(\mathcal{N}=\{p\in M^{n}:\nabla S(p)\neq 0\}\neq\emptyset\).
Taking \(q=1,3,5,\cdots,2d-3\) in Lemma 2.4, we have
\[\left\{\begin{array}{l}n_{2}\mu_{2}P_{2}+n_{3}\mu_{3}P_{3}+\cdots+n_{d}\mu_{ d}P_{d}=0,\\ n_{2}\mu_{2}P_{2}^{3}+n_{3}\mu_{3}P_{3}^{3}+\cdots+n_{d}\mu_{d}P_{d}^{3}=0,\\ \vdots\\ n_{2}\mu_{2}P_{2}^{2d-3}+n_{3}\mu_{3}P_{3}^{2d-3}+\cdots+n_{d}\mu_{d}P_{d}^{2 d-3}=0,\end{array}\right. \tag{3.1}\]
which is a \((d-1)\)-th order equation system with a non-zero solution. Hence on \(\mathcal{N}\) we have
\[\left|\begin{array}{cccc}P_{2}&P_{3}&\cdots&P_{d}\\ P_{2}^{3}&P_{3}^{3}&\cdots&P_{d}^{3}\\ \vdots&\vdots&\cdots&\vdots\\ P_{2}^{2d-3}&P_{3}^{2d-3}&\cdots&P_{d}^{2d-3}\end{array}\right|=P_{2}\cdots P _{d}\prod_{2\leq\alpha\leq\beta\leq d}(P_{\alpha}^{2}-P_{\beta}^{2})=0. \tag{3.2}\]
Taking \(q=2,4,6,\cdots,2d-2\) in Lemma 2.4, we obtain
\[\left\{\begin{array}{l}n_{2}\mu_{2}P_{2}^{2}+n_{3}\mu_{3}P_{3}^{2}+\cdots+n _{d}\mu_{d}P_{d}^{2}=-\frac{1}{2}ncH,\\ n_{2}\mu_{2}P_{2}^{4}+n_{3}\mu_{3}P_{3}^{4}+\cdots+n_{d}\mu_{d}P_{d}^{4}=\frac{ 3}{8}nc^{2}H,\\ \vdots\\ n_{2}\mu_{2}P_{2}^{2d-2}+n_{3}\mu_{3}P_{3}^{2d-2}+\cdots+n_{d}\mu_{d}P_{d}^{2 d-2}=\frac{(2d-3)!!}{(2d-2)!!}(-c)^{d-1}nH.\end{array}\right. \tag{3.3}\]
Next we consider all possible cases.
**Case 1.**\(P_{2}P_{3}\cdots P_{d}\neq 0\) at some \(p\in\mathcal{N}\). Then from (3.2) we have that \(\prod_{2\leq\alpha\leq\beta\leq d}(P_{\alpha}^{2}-P_{\beta}^{2})=0\) at \(p\). Without loss of generality, we assume \(P_{2}^{2}-P_{3}^{2}=0\), i.e., \(P_{2}=\pm P_{3}\). Now the former \(d-2\) equations in the system of equations (3.1) determine a new system of equations as follows:
\[\left\{\begin{array}{l}(n_{2}\mu_{2}\pm n_{3}\mu_{3})P_{3}+\cdots+n_{d}\mu_ {d}P_{d}=0,\\ (n_{2}\mu_{2}\pm n_{3}\mu_{3})P_{3}^{3}+\cdots+n_{d}\mu_{d}P_{d}^{3}=0,\\ \vdots\\ (n_{2}\mu_{2}\pm n_{3}\mu_{3})P_{3}^{2d-5}+\cdots+n_{d}\mu_{d}P_{d}^{2d-5}=0, \end{array}\right. \tag{3.4}\]
which has a non-zero solution. Then we have
\[\left|\begin{array}{cccc}P_{3}&P_{4}&\cdots&P_{d}\\ P_{3}^{3}&P_{4}^{3}&\cdots&P_{d}^{3}\\ \vdots&\vdots&\cdots&\vdots\\ P_{3}^{2d-5}&P_{4}^{2d-5}&\cdots&P_{d}^{2d-5}\end{array}\right|=P_{3}\cdots P _{d}\prod_{3\leq\alpha\leq\beta\leq d}(P_{\alpha}^{2}-P_{\beta}^{2})=0.\]
Since \(P_{3}P_{4}\cdots P_{d}\neq 0\), we have that \(\prod_{3\leq\alpha\leq\beta\leq d}(P_{\alpha}^{2}-P_{\beta}^{2})=0\). Without loss of generality, we assume that \(P_{3}^{2}=P_{4}^{2}\). Proceeding in this way, we obtain that \(P_{2}^{2}=P_{3}^{2}=\cdots=P_{d}^{2}:=P^{2}\) at \(p\). Now (3.3) becomes
\[\left\{\begin{array}{l}-\frac{1}{2}ncH=n_{2}\mu_{2}P_{2}^{2}+n_{3}\mu_{3}P_{ 3}^{2}+\cdots+n_{d}\mu_{d}P_{d}^{2}=nHP^{2},\\ \frac{3}{8}nc^{2}H=n_{2}\mu_{2}P_{2}^{4}+n_{3}\mu_{3}P_{3}^{4}+\cdots+n_{d}\mu _{d}P_{d}^{4}=nHP^{4},\\ \quad\vdots\\ \frac{(2d-3)!!}{(2d-2)!!}(-c)^{d-1}nH=n_{2}\mu_{2}P_{2}^{2d-2}+n_{3}\mu_{3}P_ {3}^{2d-2}+\cdots+n_{d}\mu_{d}P_{d}^{2d-2}=nHP^{2d-2}.\end{array}\right.\]
Since \(nH\neq 0\), the first two equations of the above system imply that
\[P^{2}=-\frac{1}{2}c\quad\text{and}\quad P^{4}=\frac{3}{8}c^{2},\]
and hence \(c=P=0\). It is a contradiction, so this case is ruled out.
**Case 2.**\(P_{\alpha}=0\) for all \(\alpha=2,\cdots,d\) at some \(p\in\mathcal{N}\). In this case, we have
\[e_{1}S=e_{1}\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}^{2}=2\sum_{2\leq \alpha\leq d}n_{\alpha}\mu_{\alpha}e_{1}(\mu_{\alpha})=2\sum_{2\leq\alpha\leq d }n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}=0.\]
This contradicts \(\nabla S\neq 0\) at \(p\) and this case is also ruled out.
**Case 3.** For any given point \(p\in\mathcal{N}\), some terms of \(P_{\alpha}\) are zero and the others are not zero. In this case, without loss of generality, assume \(P_{\alpha}=0\) for \(\alpha=2,\cdots,r\) and \(P_{\alpha}\neq 0\) for \(\alpha=r+1,\cdots,d\). Then the first \(d-r\) equations in (3.1) form a new system of equations
\[\left\{\begin{array}{l}n_{r+1}\mu_{r+1}P_{r+1}+n_{r+2}\mu_{r+2}P_{r+2}+ \cdots+n_{d}\mu_{d}P_{d}=0,\\ n_{r+1}\mu_{r+1}P_{r+1}^{3}+n_{r+2}\mu_{r+2}P_{r+2}^{3}+\cdots+n_{d}\mu_{d}P_ {d}^{3}=0,\\ \quad\vdots\\ n_{r+1}\mu_{r+1}P_{r+1}^{2(d-r)-1}+n_{r+2}\mu_{r+2}P_{r+2}^{2(d-r)-1}+\cdots+n _{d}\mu_{d}P_{d}^{2(d-r)-1}=0,\end{array}\right. \tag{3.5}\]
which is a \((d-r)\)-th order equation system with non-zero solutions. So the coefficient determinant is zero, that is \(\prod_{r+1\leq\alpha\leq\beta\leq d}(P_{\alpha}^{2}-P_{\beta}^{2})=0\). Without loss of generality, we assume that \(P_{r+1}^{2}=P_{r+2}^{2}\). Proceeding in this way, we can show that \(P_{r+1}^{2}=\cdots=P_{d}^{2}\neq 0\). Denote by \(P^{2}:=P_{r+1}^{2}=\cdots=P_{d}^{2}\). Then (3.3) becomes
\[\left\{\begin{array}{l}(n_{r+1}\mu_{r+1}+\cdots+n_{d}\mu_{d})P^{2}=-\frac{1} {2}ncH,\\ (n_{r+1}\mu_{r+1}+\cdots+n_{d}\mu_{d})P^{4}=\frac{3}{8}nc^{2}H,\\ (n_{r+1}\mu_{r+1}+\cdots+n_{d}\mu_{d})P^{6}=-\frac{5}{16}nc^{3}H,\\ \quad\vdots\\ (n_{r+1}\mu_{r+1}+\cdots+n_{d}\mu_{d})P^{2d-2}=\frac{(2d-3)!!}{(2d-2)!!}(-c)^ {d-1}nH.\end{array}\right. \tag{3.6}\]
The above system of equations means that \(n_{r+1}\mu_{r+1}+\cdots+n_{d}\mu_{d}\neq 0\) since \(c\neq 0\) and \(H\neq 0\). The first two equations of (3.6) force that \(P^{2}=-\frac{3}{4}c\), and the second and the third equation of (3.6) force that \(P^{2}=-\frac{5}{6}c\). Hence we have \(P=c=0\), a contradiction.
In conclusion, we have that \(\mathcal{N}=\{p\in M:\nabla S(p)\neq 0\}\) is empty and hence \(S\) has to be a constant. From (2.5), we conclude that the scalar curvature \(R\) of \(M^{n}\) is constant as well. But for \(c<0\), the first equation of (2.7) means that
\(S^{2}-ncS-n^{2}cH^{2}=0\), a contradiction. This completes the proof of Theorems 1.5 and 1.6.
## 4. Proofs of Theorems 1.8 and 1.9
In this section, we mainly concern CMC triharmonic hypersurfaces in the Euclidean space \(\mathbb{R}^{n+1}\). For any dimension \(n\), we need another assumption that the multiplicity of the zero principal curvature is at most one as discussed in [7] for four distinct principal curvatures.
**The proof of Theorem 1.8**: Assume that \(\mathcal{N}=\{p\in M^{n}:\nabla S(p)\neq 0\}\neq\emptyset\). We will prove Theorems 1.8 by deriving a contradiction.
Taking \(q=1,2,3,\cdots,d-1\) in Lemma 2.4, we have
\[\left\{\begin{array}{l}n_{2}\mu_{2}P_{2}+n_{3}\mu_{3}P_{3}+\cdots+n_{d}\mu_ {d}P_{d}=0,\\ n_{2}\mu_{2}P_{2}^{2}+n_{3}\mu_{3}P_{3}^{2}+\cdots+n_{d}\mu_{d}P_{d}^{2}=0,\\ \vdots\\ n_{2}\mu_{2}P_{2}^{d-1}+n_{3}\mu_{3}P_{3}^{d-1}+\cdots+n_{d}\mu_{d}P_{d}^{d-1 }=0,\end{array}\right. \tag{4.1}\]
which is a \((d-1)\)-th order equation system with a non-zero solution. Hence on \(\mathcal{N}\) we have
\[\left|\begin{array}{cccc}P_{2}&P_{3}&\cdots&P_{d}\\ P_{2}^{2}&P_{3}^{2}&\cdots&P_{d}^{2}\\ \vdots&\vdots&\cdots&\vdots\\ P_{2}^{d-1}&P_{3}^{d-1}&\cdots&P_{d}^{d-1}\end{array}\right|=P_{2}\cdots P_{d} \prod_{2\leq\alpha\leq\beta\leq d}(P_{\alpha}-P_{\beta})=0. \tag{4.2}\]
Next we consider three possible cases.
**Case 1. \(P_{2}P_{3}\cdots P_{d}\neq 0\)** at some \(p\in\mathcal{N}\). Then from (4.2) we have that \(\prod_{2\leq\alpha\leq\beta\leq d}(P_{\alpha}-P_{\beta})=0\) at \(p\). Without loss of generality, we assume \(P_{2}-P_{3}=0\). Now the former \(d-2\) equations in the system of equations (4.1) determine a new system of equations as follows:
\[\left\{\begin{array}{l}(n_{2}\mu_{2}+n_{3}\mu_{3})P_{3}+\cdots+n_{d}\mu_{d} P_{d}=0,\\ (n_{2}\mu_{2}+n_{3}\mu_{3})P_{3}^{2}+\cdots+n_{d}\mu_{d}P_{d}^{2}=0,\\ \vdots\\ (n_{2}\mu_{2}+n_{3}\mu_{3})P_{3}^{d-2}+\cdots+n_{d}\mu_{d}P_{d}^{d-2}=0,\end{array}\right. \tag{4.3}\]
which has a non-zero solution. Then we have
\[\left|\begin{array}{cccc}P_{3}&P_{4}&\cdots&P_{d}\\ P_{3}^{2}&P_{4}^{2}&\cdots&P_{d}^{2}\\ \vdots&\vdots&\cdots&\vdots\\ P_{3}^{d-2}&P_{4}^{d-2}&\cdots&P_{d}^{d-2}\end{array}\right|=P_{3}\cdots P_{d} \prod_{3\leq\alpha\leq\beta\leq d}(P_{\alpha}-P_{\beta})=0.\]
Since \(P_{3}P_{4}\cdots P_{d}\neq 0\), we have that \(\prod_{3\leq\alpha\leq\beta\leq d}(P_{\alpha}-P_{\beta})=0\). Without loss of generality, we assume that \(P_{3}=P_{4}\). Similarly, we obtain that \(P_{3}=P_{4}=\cdots=P_{d}:=P\) at \(p\). Since \(nH\neq 0\), the first equation of the above system (4.3) implies
that
\[\Big{(}\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}\Big{)}P=nHP=0\]
and hence \(P=0\). It is a contradiction.
**Case 2.**\(P_{\alpha}=0\) for all \(\alpha=2,\cdots,d\) at some \(p\in\mathcal{N}\). Then
\[e_{1}S=e_{1}\sum_{2\leq\alpha\leq d}n_{\alpha}\mu_{\alpha}^{2}=2\sum_{2\leq \alpha\leq d}n_{\alpha}\mu_{\alpha}e_{1}(\mu_{\alpha})=2\sum_{2\leq\alpha\leq d }n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}=0,\]
which contradicts \(\nabla S\neq 0\) at \(p\).
**Case 3.** For any given point \(p\in\mathcal{N}\), some terms of \(P_{\alpha}\) are zero and the others are not zero. In this case, without loss of generality, assume \(P_{\alpha}=0\) for \(\alpha=2,\cdots,r\) and \(P_{\alpha}\neq 0\) for \(\alpha=r+1,\cdots,d\). Then the first \(d-r\) equations in (4.1) form a new system of equations
\[\left\{\begin{array}{l}n_{r+1}\mu_{r+1}P_{r+1}+n_{r+2}\mu_{r+2}P_{r+2}+ \cdots+n_{d}\mu_{d}P_{d}=0,\\ n_{r+1}\mu_{r+1}P_{r+1}^{2}+n_{r+2}\mu_{r+2}P_{r+2}^{2}+\cdots+n_{d}\mu_{d}P_ {d}^{2}=0,\\ \vdots\\ n_{r+1}\mu_{r+1}P_{r+1}^{d-r}+n_{r+2}\mu_{r+2}P_{r+2}^{d-r}+\cdots+n_{d}\mu_{ d}P_{d}^{d-r}=0,\end{array}\right. \tag{4.4}\]
which is a \((d-r)\)-th order equation system with non-zero solutions. Thus the coefficient determinant is zero, that is \(\prod_{r+1\leq\alpha\leq\beta\leq d}(P_{\alpha}-P_{\beta})=0\). Without loss of generality, we assume that \(P_{r+1}=P_{r+2}\). Similar discussion as the above yields \(P_{r+1}=\cdots=P_{d}\neq 0\). Denote by \(P:=P_{r+1}=\cdots=P_{d}\).
It follows from (4.4) that \(n_{r+1}\mu_{r+1}+\cdots+n_{d}\mu_{d}=0\), because of \(P\neq 0\). In this case, we have \(n_{2}\mu_{2}+\cdots n_{r}\mu_{r}=nH\). Therefore we deduce from Lemma 2.6 that
\[-6\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}^{2}+2\Big{(}\sum_{ \alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}P_{\alpha}\Big{)}\Big{(}\sum_{\alpha=2 }^{d}n_{\alpha}P_{\alpha}+\sum_{m\in I_{1}}\Gamma_{mm}^{1}\Big{)}+\Big{(}\sum _{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\Big{)}^{2}=0,\]
that is
\[2\Big{(}\sum_{\alpha=r+1}^{d}n_{\alpha}-3\Big{)}P^{2}\sum_{\alpha =r+1}^{d}n_{\alpha}\mu_{\alpha}^{2}+2\Big{(}\sum_{\alpha=r+1}^{d}n_{\alpha} \mu_{\alpha}^{2}\Big{)}P\sum_{m\in I_{1}}\Gamma_{mm}^{1}\] \[+\Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\Big{)}^{2}=0, \tag{4.5}\]
where we have used \(P_{\alpha}=0\) for \(\alpha=2,\ldots,r\).
Because we have assumed that the multiplicity of the zero principal curvature is one, i.e. \(I_{1}=\{1\}\), (4.5) becomes
\[2\Big{(}\sum_{\alpha=r+1}^{d}n_{\alpha}-3\Big{)}P^{2}\sum_{\alpha=r+1}^{d}n_{ \alpha}\mu_{\alpha}^{2}+\Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2} \Big{)}^{2}=0. \tag{4.6}\]
Differentiating with respect to \(e_{1}\) on both sides of equation (4.6), it follows from (2.8) that
\[8\Big{(}\sum_{\alpha=r+1}^{d}n_{\alpha}-3\Big{)}P^{3}\sum_{\alpha=r+1}^{d}n_{ \alpha}\mu_{\alpha}^{2}+4\Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2} \Big{)}\Big{(}\sum_{\alpha=r+1}^{d}n_{\alpha}\mu_{\alpha}^{2}\Big{)}P=0. \tag{4.7}\]
Dividing (4.7) by \(4P\), and then subtracting (4.6), we have
\[\Big{(}\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\Big{)}\Big{(} \sum_{\alpha=2}^{r}n_{\alpha}\mu_{\alpha}^{2}\Big{)}=0. \tag{4.8}\]
Since \(S=\sum_{\alpha=2}^{d}n_{\alpha}\mu_{\alpha}^{2}\neq 0\), it follows from (4.8) that \(\sum_{\alpha=2}^{r}n_{\alpha}\mu_{\alpha}^{2}=0\), and hence \(\mu_{\alpha}=0\) for \(\alpha=2,\cdots,r\). This is a contradiction since \(\sum_{\alpha=2}^{r}n_{\alpha}\mu_{\alpha}=nH\neq 0\).
**The proof of Theorem 1.9**: Let us restrict to the case of a CMC triharmonic hypersurface \(M^{5}\) in \(\mathbb{R}^{6}\). We will still work on \(\mathcal{N}=\{p\in M^{5}:\nabla S(p)\neq 0\}\neq\emptyset\) and prove Theorem 1.9 by contradiction.
When the multiplicity of zero principal curvature is one, we can derive a contradiction from Theorem 1.8.
By Theorem 1.3, we only need to deal with the case that the multiplicity of zero principal curvature is two, i.e. \(I_{1}=\{1,2\}\). In this case, the principal curvatures on \(M^{5}\) are respectively \(0,0,\lambda_{3},\lambda_{4},\lambda_{5}\) with \(\lambda_{3}+\lambda_{4}+\lambda_{5}=5H\).
According to the proof of Theorem 1.8, we can deduce a contradiction when the \(P_{\alpha}\) are all non-zero or all zero. Let us consider the remaining case that \(P_{3}=0\), \(P_{4}=P_{5}\neq 0\). It follows from (4.4) that \(\lambda_{4}+\lambda_{5}=0\) and \(\lambda_{3}=5H\). For simplicity, we denote \(\lambda_{4}=-\lambda_{5}=:\mu\) and \(P_{4}=P_{5}=:P\). Then the squared norm of the second fundamental form \(S\) is given by \(S=2\mu^{2}+25H^{2}\). Since \(e_{2}(S)=0\), we have \(e_{2}(\mu)=0\) and hence \(\Gamma_{44}^{2}=\Gamma_{55}^{2}=0\). In this case, we deduce from (2.9) that \(e_{2}(P)=\Gamma_{44}^{2}P-\Gamma_{44}^{2}\Gamma_{22}^{1}=0\). Moreover, \(e_{1}(S)=4\mu e_{1}(\mu)=4\mu^{2}P\) and hence \(e_{2}e_{1}(S)=0\). Since \(e_{1}(S)\neq 0\), \(e_{i}(S)=0\) for \(i\geq 2\) and \(\Gamma_{21}^{1}=0\), we find
\[0=e_{1}e_{2}(S)-e_{2}e_{1}(S)=[e_{1},e_{2}](S)=(\nabla_{e_{1}}e_ {2}-\nabla_{e_{2}}e_{1})S=\Gamma_{12}^{1}e_{1}(S),\]
which means that \(\Gamma_{12}^{1}=-\Gamma_{11}^{2}=0\). Then (2.8) turns into
\[e_{1}(P)=P^{2}. \tag{4.9}\]
On the other hand, taking into account the Gauss equation and (2.2), from Lemma 2.2 we obtain
\[0=R_{1212}= e_{1}(\Gamma_{22}^{1})-e_{2}(\Gamma_{12}^{1})+\sum_{m}\Big{(} \Gamma_{22}^{m}\Gamma_{1m}^{1}-\Gamma_{12}^{m}\Gamma_{2m}^{1}-(\Gamma_{12}^{m }-\Gamma_{21}^{m})\Gamma_{m2}^{1}\Big{)}\] \[= e_{1}(\Gamma_{22}^{1})-(\Gamma_{22}^{1})^{2}. \tag{4.10}\]
Based on the above discussion, (4.5) becomes
\[-4\mu^{2}P^{2}+4\mu^{2}P\Gamma_{22}^{1}+(2\mu^{2}+25H^{2})^{2}=0, \tag{4.11}\]
and hence
\[\Gamma_{22}^{1}=\frac{4\mu^{2}P^{2}-(2\mu^{2}+25H^{2})^{2}}{4\mu^ {2}P}. \tag{4.12}\]
Noting \(e_{1}(\mu)=\mu P\) and differentiating (4.11) with respect to \(e_{1}\), from (4.9) and (4.10) we have
\[-4P^{2}+3P\Gamma_{22}^{1}+(\Gamma_{22}^{1})^{2}+2(2\mu^{2}+25H^{2})=0. \tag{4.13}\]
Substituting (4.12) into (4.13) to eliminate \(\Gamma^{1}_{22}\), one gets
\[16\mu^{4}P^{2}\Big{(}2(2\mu^{2}+25H^{2})-4P^{2}\Big{)}+12\mu^{2}P^{ 2}\Big{(}4\mu^{2}P^{2}-(2\mu^{2}+25H^{2})^{2}\Big{)}\] \[+\Big{(}4\mu^{2}P^{2}-(2\mu^{2}+25H^{2})^{2}\Big{)}^{2}=0,\]
that is
\[32\mu^{4}P^{2}(2\mu^{2}+25H^{2})-20\mu^{2}P^{2}(2\mu^{2}+25H^{2})^{ 2}+(2\mu^{2}+25H^{2})^{4}=0. \tag{4.14}\]
Because \(2\mu^{2}+25H^{2}\neq 0\), (4.14) becomes
\[32\mu^{4}P^{2}-20\mu^{2}P^{2}(2\mu^{2}+25H^{2})+(2\mu^{2}+25H^{2} )^{3}=0. \tag{4.15}\]
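The algebra leading from (4.12) to (4.15) can be verified symbolically: multiplying (4.13) by \((4\mu^{2}P)^{2}\) after substituting (4.12) yields exactly \((2\mu^{2}+25H^{2})\) times the left-hand side of (4.15). A minimal SymPy sketch (the symbol names are illustrative):

```python
import sympy as sp

H, mu, P = sp.symbols('H mu P', positive=True)
S = 2*mu**2 + 25*H**2                               # S = 2*mu^2 + 25*H^2
Gamma = (4*mu**2*P**2 - S**2) / (4*mu**2*P)         # Gamma^1_22 from (4.12)

lhs_4_13 = -4*P**2 + 3*P*Gamma + Gamma**2 + 2*S     # left-hand side of (4.13)
lhs_4_15 = 32*mu**4*P**2 - 20*mu**2*P**2*S + S**3   # left-hand side of (4.15)

# clearing the denominator: (4*mu^2*P)^2 * (4.13) equals S * (4.15)
assert sp.simplify(16*mu**4*P**2*lhs_4_13 - S*lhs_4_15) == 0
```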
Differentiating (4.15) with respect to \(e_{1}\), from (4.9) and (4.10) we have
\[-12\mu^{2}P^{2}-500H^{2}P^{2}+3(2\mu^{2}+25H^{2})^{2}=0. \tag{4.16}\]
Differentiating (4.16) with respect to \(e_{1}\) yields
\[-12\mu^{2}P^{2}-250H^{2}P^{2}+6\mu^{2}(2\mu^{2}+25H^{2})=0. \tag{4.17}\]
Eliminating \(P^{2}\) between (4.16) and (4.17) gives
\[2\mu^{2}(12\mu^{2}+500H^{2})=(2\mu^{2}+25H^{2})(12\mu^{2}+250H^{2}),\]
that is
\[4\mu^{2}H^{2}-125H^{4}=0, \tag{4.18}\]
which implies that \(\mu\) is constant and \(S=2\mu^{2}+25H^{2}\) is constant as well. This contradicts \(\nabla S(p)\neq 0\).
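The elimination step between (4.16), (4.17) and (4.18) can be checked in the same way: solving (4.17) for \(P^{2}\), substituting into (4.16) and factoring leaves a non-zero multiple of \(4\mu^{2}H^{2}-125H^{4}\). A minimal SymPy sketch:

```python
import sympy as sp

H, mu, P = sp.symbols('H mu P', positive=True)
S = 2*mu**2 + 25*H**2

eq_4_16 = -12*mu**2*P**2 - 500*H**2*P**2 + 3*S**2       # equation (4.16)
eq_4_17 = -12*mu**2*P**2 - 250*H**2*P**2 + 6*mu**2*S    # equation (4.17)

# eliminate P^2: solve (4.17) for P^2 and substitute into (4.16)
P2 = sp.solve(eq_4_17, P**2)[0]
residue = sp.together(eq_4_16.subs(P**2, P2))
print(sp.factor(sp.numer(residue)))                     # a multiple of 4*mu^2*H^2 - 125*H^4

# the residue vanishes exactly when (4.18) holds, i.e. when mu^2 = 125*H^2/4
assert sp.simplify(residue.subs(mu, sp.sqrt(sp.Rational(125, 4))*H)) == 0
```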
In summary, we conclude that \(\mathcal{N}=\{p\in M:\nabla S(p)\neq 0\}=\emptyset\) and according to (2.6) we know that \(M^{n}\) is minimal. This completes the proof of Theorem 1.9.
**Acknowledgement:** The authors are supported by the NSFC (No. 11801246) and the Liaoning Provincial Education Department Project (No. LJKMZ20221561).
|
2310.15256 | SimBIG: Field-level Simulation-Based Inference of Galaxy Clustering | We present the first simulation-based inference (SBI) of cosmological
parameters from field-level analysis of galaxy clustering. Standard galaxy
clustering analyses rely on analyzing summary statistics, such as the power
spectrum, $P_\ell$, with analytic models based on perturbation theory.
Consequently, they do not fully exploit the non-linear and non-Gaussian
features of the galaxy distribution. To address these limitations, we use the
{\sc SimBIG} forward modelling framework to perform SBI using normalizing
flows. We apply SimBIG to a subset of the BOSS CMASS galaxy sample using a
convolutional neural network with stochastic weight averaging to perform
massive data compression of the galaxy field. We infer constraints on $\Omega_m
= 0.267^{+0.033}_{-0.029}$ and $\sigma_8=0.762^{+0.036}_{-0.035}$. While our
constraints on $\Omega_m$ are in-line with standard $P_\ell$ analyses, those on
$\sigma_8$ are $2.65\times$ tighter. Our analysis also provides constraints on
the Hubble constant $H_0=64.5 \pm 3.8 \ {\rm km / s / Mpc}$ from galaxy
clustering alone. This higher constraining power comes from additional
non-Gaussian cosmological information, inaccessible with $P_\ell$. We
demonstrate the robustness of our analysis by showcasing our ability to infer
unbiased cosmological constraints from a series of test simulations that are
constructed using different forward models than the one used in our training
dataset. This work not only presents competitive cosmological constraints but
also introduces novel methods for leveraging additional cosmological
information in upcoming galaxy surveys like DESI, PFS, and Euclid. | Pablo Lemos, Liam Parker, ChangHoon Hahn, Shirley Ho, Michael Eickenberg, Jiamin Hou, Elena Massara, Chirag Modi, Azadeh Moradinezhad Dizgah, Bruno Regaldo-Saint Blancard, David Spergel | 2023-10-23T18:05:32Z | http://arxiv.org/abs/2310.15256v1 | # SimBIG: Field-level Simulation-Based Inference of Galaxy Clustering
###### Abstract
We present the first simulation-based inference (SBI) of cosmological parameters from field-level analysis of galaxy clustering. Standard galaxy clustering analyses rely on analyzing summary statistics, such as the power spectrum, \(P_{\ell}\), with analytic models based on perturbation theory. Consequently, they do not fully exploit the non-linear and non-Gaussian features of the galaxy distribution. To address these limitations, we use the SimBIG forward modelling framework to perform SBI using normalizing flows. We apply SimBIG to a subset of the BOSS CMASS galaxy sample using a convolutional neural network with stochastic weight averaging to perform massive data compression of the galaxy field. We infer constraints on \(\Omega_{m}=0.267^{+0.033}_{-0.029}\) and \(\sigma_{8}=0.762^{+0.036}_{-0.035}\). While our constraints on \(\Omega_{m}\) are in-line with standard \(P_{\ell}\) analyses, those on \(\sigma_{8}\) are \(2.65\times\) tighter. Our analysis also provides constraints on the Hubble constant \(H_{0}=64.5\pm 3.8\) km/s/Mpc from galaxy clustering alone. This higher constraining power comes from additional non-Gaussian cosmological information, inaccessible with \(P_{\ell}\). We demonstrate the robustness of our analysis by showcasing our ability to infer unbiased cosmological constraints from a series of test simulations that are constructed using different forward models than the one used in our training dataset. This work not only presents competitive cosmological constraints but also introduces novel methods for leveraging additional cosmological information in upcoming galaxy surveys like DESI, PFS, and _Euclid_.
## I Introduction:
Precision measurements of cosmological parameters, such as the matter density and the expansion rate of the Universe, play a crucial role in shaping our understanding of the evolution and structure of the cosmos. These parameters can be inferred from a variety of observational data, including measurements of the statistical properties of the large-scale structure (LSS) of the universe traced by the distribution of galaxies.
Traditionally, cosmological parameter inference has relied on analyzing the distribution of galaxies using summary statistics -- most often the power spectrum, \(P_{\ell}(k)\)[_e.g._ 1, 2, 3, 4, 5, 6, 7, 8, 9]. In addition, these analyses incorporate analytical modeling of galaxy clustering through perturbation theory [PT; see 10, 11, for recent reviews]. Consequently, these analyses have been limited to large, weakly non-linear scales where the deviation from PT is small. By only considering the power spectrum, these analyses can not exploit the rich non-Gaussian information in the galaxy distribution, which is only weakly imprinted on the power spectrum.
Recent analyses of BOSS data have now established that there is in fact significant non-Gaussian cosmological information on non-linear scales in galaxy clustering. Furthermore, previous galaxy clustering analyses using higher-order clustering statistics have produced significantly tighter constraints than with \(P_{\ell}\) alone [_e.g._ 12, 13, 14, 15]. Furthermore, forecasts that employ various summary statistics beyond \(P_{\ell}\)[_e.g._ 16, 17, 18, 19, 20, 21] have been shown to produce even tighter constraints by including non-linear scales. Nonetheless, these applications remain limited by the inability of PT to model galaxy clustering at scales beyond the quasi-linear, especially for higher-order statistics.
Another major challenge of galaxy clustering analyses is their inability to fully account for observational systematics. For example, fiber collisions have been
shown to significantly bias \(P_{\ell}\) on scales smaller than \(k\sim 0.1\,h/\)Mpc [22; 23]. Observational effects in targeting, imaging, and completeness also significantly impact clustering measurements [24; 25]. Finally, these analyses assume a Gaussian functional form of the likelihood function used in their Bayesian framework. This assumption does not necessarily hold in general [26; 27; 28].
To overcome these limitations, we instead use Simulation-Based Inference2 (SBI). SBI uses forward models of the observables, instead of analytic models, and then infers a posterior distribution over the parameters (or a likelihood, that can then be converted into the posterior with Bayes' theorem). This method enables us to leverage high-fidelity simulations that accurately model complex physical processes, leading to more robust inferences than methods based on analytical models.
Footnote 2: The terms ‘likelihood-free inference’ and ‘implicit likelihood inference’ have also been used to refer to the same method
There have already been multiple applications of SBI in astronomy and cosmology [_e.g._ 29-60]. In this work, we use the SimBIG forward modelling framework to perform field-level SBI of galaxy clustering, and we validate the resulting constraints on a suite of test simulations built with different forward models.

## II Data

The SimBIG forward-modeled galaxy catalogs
are determined by five \(\Lambda\)CDM cosmological parameters, \(\Omega_{m},\Omega_{b},h,n_{s},\sigma_{8}\), and nine HOD parameters. We refer readers to Hahn _et al._[45], Hahn _et al._[48] for further details.
To construct our training set, we use 2,518 high-resolution QUIJOTE \(N\)-body simulations5 arranged in a Latin hypercube configuration (LHC), which imposes priors on the cosmological parameters that conservatively encompass the _Planck_ cosmological constraints. For each simulation, we forward-model 10 CMASS-like galaxy catalogs using unique HOD parameters randomly sampled from a conservative prior. While this is suboptimal, as it leads to samples that are not independent and identically distributed (i.i.d.), this factor-of-10 increase in the number of available simulations greatly improves our results, and we expect regularization to deal with any potential issues arising from non-i.i.d. samples. We split the resulting \(25,180\) simulations into a training set of \(20,000\) and a validation set of \(5,180\).
Footnote 5: We supplement the 2,000 QUIJOTE \(N\)-body simulations used in [45] with 518 additionally constructed simulations.
### Test simulations
In order to demonstrate that we can infer accurate and unbiased cosmological constraints, we test our analysis on three different sets of realistic test simulations that differ from the training dataset and have been developed within SimBIG and introduced in [48]: TEST0, TEST1, and TEST2.
TEST0 uses QUIJOTE \(N\)-body simulations that have the same specifications as those arranged in the LHC, but were run at a fiducial cosmology with \(\Omega_{m}=0.3175,\Omega_{b}=0.049,h=0.6711,n_{s}=0.9624,\sigma_{8}=0.834\). The halo finder, HOD framework and survey realism are the same as those used in the training set, but the HOD parameters span a narrower prior. This test dataset contains 500 synthetic galaxy catalogs.
TEST1 involves the same \(N\)-body simulations as TEST0, but a different halo finder: the Friend-of-Friend algorithm [57]. Assembly, concentration, and satellite velocity biases are also not considered in the HOD model. Central velocity bias is implemented, as the halo velocities in FoF halo catalogs correspond to the bulk velocity of the dark matter particles in the halo rather than the
Figure 1: Schematic illustrating the various elements of the SimBIG forward modelling pipeline. First, we generate synthetic galaxy catalogs that mimic the real BOSS observations. Then, we train a data compression step using our CNN to compress the catalog to its cosmological parameters. Next, we train a neural posterior estimator on the estimated and true parameters to estimate posteriors over the cosmological parameters. Once our data compression and neural posterior estimator are trained, we apply our pipeline to infer cosmological parameters from the real BOSS observations.
velocity of the central density peak of the halo. This test dataset contains 500 synthetic galaxy catalogs.
TEST2 uses 25 AbacusSummit\(N\)-body simulations [58] in the "base" configuration of the suite. The simulations contain \(6912^{3}\) particles in a \((2h^{-1}\text{Gpc})^{3}\) volume box. Halo catalogs are constructed from these simulations using the CompaSO halo finder [59] and each of them is divided into 8 boxes of volume \(1\,(h^{-1}\text{Gpc})^{3}\). Halos are populated with galaxies using the same HOD model implemented in the training set, with HOD parameters that sample the same narrower priors used in TEST0. This test dataset contains 1,000 synthetic galaxy catalogs.
All three test datasets incorporate the same survey realism as the training dataset to produce CMASS-like galaxy catalogs.
It would be ideal to have an additional held-out test set, used only after passing the validation tests on TEST0, TEST1, and TEST2. However, due to the high computational cost of our simulations, this was unfeasible.
### Galaxy Density Field
To apply CNNs to our observational and simulated galaxy samples, we mesh the galaxy distribution onto a grid of \(64\times 128\times 128\) voxels. We choose this size because divisibility by two allows for easier downsampling in the CNN. First, we place the distribution into a \([707,1414,1414]\) Mpc\(/h\) box and convert it into a 3D density field using a cloud-in-cell mass assignment [60]. For our observational sample, we include systematics weights for multiple effects [redshift failures, stellar density, and seeing conditions; 24; 61] in the mass assignment.
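For concreteness, the sketch below shows a minimal NumPy implementation of a cloud-in-cell mass assignment of the kind described above: each (optionally weighted) galaxy is deposited onto the eight surrounding voxels with tri-linear weights. The periodic wrapping, the random toy positions and all numerical choices are simplifications for illustration and not the SimBIG implementation.

```python
import numpy as np

def cloud_in_cell(positions, box_size, n_grid, weights=None):
    """Deposit weighted points onto a 3D mesh with cloud-in-cell (tri-linear) weights."""
    box_size, n_grid = np.asarray(box_size, float), np.asarray(n_grid, int)
    field = np.zeros(n_grid)
    if weights is None:
        weights = np.ones(len(positions))
    u = positions / box_size * n_grid - 0.5        # grid coordinates, cell centres at integers
    i0 = np.floor(u).astype(int)
    frac = u - i0                                  # distance to the lower cell centre
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - frac[:, 0]) *
                     np.abs(1 - dy - frac[:, 1]) *
                     np.abs(1 - dz - frac[:, 2]))
                idx = (i0 + np.array([dx, dy, dz])) % n_grid   # periodic wrap (a simplification)
                np.add.at(field, (idx[:, 0], idx[:, 1], idx[:, 2]), w * weights)
    return field

rng = np.random.default_rng(0)
galaxies = rng.uniform(0.0, [707.0, 1414.0, 1414.0], size=(1000, 3))   # toy positions in Mpc/h
mesh = cloud_in_cell(galaxies, box_size=[707.0, 1414.0, 1414.0], n_grid=(64, 128, 128))
print(mesh.shape, mesh.sum())   # (64, 128, 128); total deposited weight = number of galaxies
```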
Since our data occupies a \([577.3,1414,1224]\) Mpc\(/h\) box, we fill some of the box with zero-valued voxels. Our voxels have size \(\sim[11,11,11]\) Mpc\(/h\), thus we impose an effective scale cut of \(k<k_{\text{max}}=0.28\)\(h/\)Mpc. While this is larger than the scale cut imposed in the SimBIG \(P_{\ell}\) analysis [62], we find that it is sufficient to place significant cosmological constraints. Moreover, pushing to even smaller scale cuts presents its own set of challenges. For one, smaller scale cuts present significant computational challenges in terms of the required memory to train on larger forward model sizes. Additionally, we find that models trained on smaller scale cuts tend to overfit on the training dataset significantly, limiting the robustness of their inferred parameters.
## III Methods
Our approach to field-level inference of cosmological parameters consists of two main components: a massive data compression/feature extraction step performed by a CNN, followed by SBI. In the following section, we describe each step in more detail6. We also describe two additional elements of our analysis, designed to ensure accurate posterior estimates: weight marginalization and validation with coverage probability tests.
Footnote 6: We also attempted a one-step approach, where the CNN served as an embedding for the SBI step; however, we found that the constraints were significantly weaker with this approach.
### CNN-based Feature Extraction
CNNs are flexible machine learning models that can be optimized to extract maximally relevant features from their inputs across a wide variety of tasks. They consist of multiple layers of specialized kernels that are convolved across the input to extract features in a hierarchical scheme. These networks are particularly well-suited for image-based tasks due to their ability to (1) exploit local receptive fields, (2) recognize patterns regardless of their position in the input due to translational invariance, and (3) extract increasingly complex features by combining lower-level features from previous layers hierarchically [for a review of CNNs, see 63].
In this study, we train a three-dimensional CNN to compress the galaxy density fields produced by the SimBIG forward models to the cosmological parameters of those models. Specifically, the CNN takes as input the three-dimensional tensor representing the discretized forward model, \(x\in\mathbb{R}^{64\times 128\times 128}\), and outputs a prediction, \(\hat{\mathbf{\theta}}\), of the \(\Lambda\)CDM cosmological parameters, \(\{\Omega_{m},\Omega_{b},h,n_{s},\sigma_{8}\}\), used to generate that forward model.
The CNN architecture consists of 5 convolutional blocks. Each convolutional block begins with a convolutional layer that convolves its input with a number of \(3\times 3\times 3\) kernels. This convolution is performed with 1-voxel zero-padding. This is followed by a rectified linear unit (ReLU). The output of the ReLU unit is then downsampled using max-pooling, which enables the network to learn features at increasing scales by reducing the size of its internal representations. Finally, batch normalization is applied, which typically speeds up training and has been shown to help with generalization [64]. Following the convolutional blocks, the activation maps are flattened and fed into three fully-connected layers that output \(\hat{\mathbf{\theta}}\). These layers also use ReLU activation functions, but do not perform batch normalization.
In order to prevent overfitting on the training simulations, we include in the CNN's final architecture significant levels of dropout. This technique randomly sets to zero a percentage of neuron activations during training. Specifically, we use dropout percentages of \(p=0.15\) for each convolutional block and \(p=0.4\) for each fully connected block. Additionally, we introduce a large \(\ell_{2}\) penalty term with regularization strength \(\lambda=0.0275\) on the network weights. In applying dropout in both the convolutional and fully connected layers, the network is forced
to train on a smaller subset of active neurons, leading to underutilization of the network's capacity. Moreover, with the \(\ell_{2}\) penalty term in the loss function, the network's flexibility, and subsequently its ability to learn specific features, is limited. While these measures ultimately limit the constraining power of the CNN, they ensure robustness and generalizability, and thus protect against the fact that the SimBIG forward models, and in general any forward model, are approximate.
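A minimal PyTorch sketch of a network with the ingredients described above (five convolutional blocks of 3×3×3 kernels with ReLU, max-pooling, batch normalization and dropout, followed by three fully-connected layers with dropout) is given below. The channel counts, layer widths and class name are placeholders rather than the actual SimBIG configuration, which is fixed by the hyperparameter search described next.

```python
import torch
import torch.nn as nn

class FieldCompressorSketch(nn.Module):
    """3D CNN compressing a meshed galaxy field to the five LambdaCDM parameters."""

    def __init__(self, n_params=5, base_channels=8, p_conv=0.15, p_fc=0.4):
        super().__init__()
        blocks, in_ch = [], 1
        for i in range(5):                                   # five convolutional blocks
            out_ch = base_channels * 2**i                    # placeholder channel schedule
            blocks += [
                nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),  # 3x3x3 kernels, 1-voxel zero padding
                nn.ReLU(),
                nn.MaxPool3d(2),                             # downsample by two in each dimension
                nn.BatchNorm3d(out_ch),
                nn.Dropout3d(p_conv),                        # dropout within the convolutional block
            ]
            in_ch = out_ch
        self.conv = nn.Sequential(*blocks)
        # after five poolings a (1, 64, 128, 128) input becomes (in_ch, 2, 4, 4)
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_ch * 2 * 4 * 4, 512), nn.ReLU(), nn.Dropout(p_fc),
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p_fc),
            nn.Linear(512, n_params),                        # normalized (Om, Ob, h, n_s, sigma_8)
        )

    def forward(self, x):
        return self.fc(self.conv(x))

model = FieldCompressorSketch()
out = model(torch.randn(2, 1, 64, 128, 128))                 # a batch of two meshed fields
print(out.shape)                                             # torch.Size([2, 5])
```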
CNN training is performed using a supervised learning approach. We optimize the weights of the network to minimize the mean-squared-error (MSE) loss between \(\mathbf{\hat{\theta}}_{\text{normed}}\) and \(\mathbf{\theta}_{\text{normed}}^{\text{true}}\), where we normalize both \(\mathbf{\hat{\theta}}\) and \(\mathbf{\theta}^{\text{true}}\) to \((0,1)\), to prevent their varying ranges from affecting the loss differently. The optimization is performed using stochastic gradient descent with momentum \(\beta=0.9\). The neural network is trained in mini-batches of 32 galaxy fields. We use the OneCycleLR learning rate scheduler, which involves gradually increasing and then decreasing the learning rate during a single training cycle, and has been shown to lead to faster convergence and improved generalization [65]. We use a maximal learning rate of \(r=0.01\). During training, the input fields are also randomly flipped horizontally and vertically with \(p=0.5\) to further improve network generalization. We train the CNN on a single A100 GPU core until the MSE computed on the validation set has not improved for 20 consecutive epochs. Training the CNN in this context takes roughly 8 hours.
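The optimization setup can be sketched as follows, using a toy stand-in model and downscaled synthetic fields so that the snippet is self-contained: SGD with momentum 0.9, the \(\ell_{2}\) penalty implemented here as weight decay, the OneCycleLR schedule with a maximal learning rate of 0.01, the MSE loss, mini-batches of 32, and random flips of the input field. All sizes and the stand-in model are illustrative only.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# toy stand-in data and model, downscaled so the snippet runs quickly on its own
fields = torch.randn(64, 1, 8, 16, 16)
thetas = torch.rand(64, 5)                                   # parameters normalized to (0, 1)
train_loader = DataLoader(TensorDataset(fields, thetas), batch_size=32, shuffle=True)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(8 * 16 * 16, 5))

n_epochs = 3
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9,
                            weight_decay=0.0275)             # weight decay as the l2 penalty
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.01, total_steps=n_epochs * len(train_loader))
loss_fn = torch.nn.MSELoss()

for epoch in range(n_epochs):
    for field, theta in train_loader:
        # random flips of the density field (p = 0.5 along two axes) for augmentation
        if torch.rand(1) < 0.5:
            field = torch.flip(field, dims=[-1])
        if torch.rand(1) < 0.5:
            field = torch.flip(field, dims=[-2])
        optimizer.zero_grad()
        loss = loss_fn(model(field), theta)
        loss.backward()
        optimizer.step()
        scheduler.step()
print(float(loss))
```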
The CNN's architecture and hyperparameters are determined through experimentation and are roughly modelled on previous successful image classifiers in [66, 67]. To determine the specifics of our network, we train 60 networks using the Optuna hyperparameter framework [68]. Specifically, we vary the number of convolutional blocks between 3 and 6, the number of fully-connected layers between 1 and 6, the base number of channels of the convolutional blocks between 2 and 14, the width of the fully-connected layers between 128 and 1024, the dropout in both convolutional and fully-connected layers between \(p=0\) and \(p=0.5\), the \(\ell_{2}\) penalty between \(\lambda=10^{-4}\) and \(\lambda=10^{-1}\), and the max learning rate between \(r=10^{-5}\) and \(r=10^{-2}\). Ultimately, we aim to maximize the network's ability to extract relevant features from the galaxy density field while maintaining its ability to generalize beyond the SimBIG training simulations. To that end, we select the network configuration that minimizes the network's MSE on the held-out validation set while minimizing the ratio between training MSE and validation MSE. However, in order to pass the validation tests on the out-of-distribution TEST1 and TEST2, we found that it was necessary to impose slightly stricter regularization on the network. Thus, the dropout and \(\ell_{2}\) terms were increased through trial-and-error from the Optuna output to their reported values. In the end, the significant amount of regularization is included due to the model's tendency to overfit on the relatively small training dataset.
### Weight Marginalization
In order to further prevent the CNN from overfitting on the training set, we perform a weight marginalization step, converting our CNN into a Bayesian Neural Network (BNN). In contrast to other neural networks, BNNs train the model weights as a distribution rather than searching for an optimal value. This allows them to capture the uncertainty in the weights and outputs of the model. The ultimate goal of BNNs is to quantify the uncertainty in the model weights and outputs, and thereby the trustworthiness of the predictions.
In this work, we use Stochastic Weight Averaging [SWA; 69, 70]. SWA is predicated on the observation that the parameters of deep neural networks often converge to the edges of low-loss regions. This edge-type convergence is sub-optimal, as these solutions are more susceptible to the shift between train and test error surfaces. SWA approximates the posterior distribution of the weights of the CNN as a Normal distribution, whose mean and covariance are given by
\[\bar{w}=\frac{1}{N_{\text{swa}}}\sum_{n=1}^{N_{\text{swa}}}w_{n},\quad\Sigma= \frac{1}{N_{\text{swa}}}\sum_{n=1}^{N_{\text{swa}}}(w_{n}-\bar{w})(w_{n}-\bar{ w})^{T}, \tag{1}\]
respectively, where \(w\) are the weights of the network, \(n\) is the time step during network optimization/training, and \(N_{\text{swa}}\) are the total steps over which SWA is performed.
By adopting this scheme, SWA solutions tend to converge to the center of flat loss regions, thereby leading to more stable and generalizable solutions. Indeed, SWA has already been shown to lead to better generalization to out-of-distribution data [70], which is expected to improve the robustness of our analysis. Moreover, SWA has been shown to outperform competing methods in multiple tasks [69], and has been previously applied to astrophysics [71] and cosmology [72]. We use the publicly available cosmoSWAG implementation7. The compressed galaxy field that we feed as input to SBI is the output of the SWA network: the CNN predictions evaluated at 10 samples drawn from the posterior distribution over the weights -- a 50-dimensional data vector.
Footnote 7: [https://github.com/Pablo-Lemos/cosmoSWAG](https://github.com/Pablo-Lemos/cosmoSWAG)
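Equation (1) and the subsequent sampling of weights can be illustrated with a minimal NumPy sketch on a toy weight vector; the snapshot values, dimensions and the small jitter added before sampling are illustrative, and the actual analysis relies on the cosmoSWAG implementation referenced above.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy stand-in: snapshots of a small weight vector collected along the optimization trajectory
n_swa, dim = 20, 6
w_snapshots = np.linspace(0.0, 1.0, dim) + 0.1 * rng.standard_normal((n_swa, dim))

w_bar = w_snapshots.mean(axis=0)                 # SWA mean, eq. (1)
diff = w_snapshots - w_bar
sigma = diff.T @ diff / n_swa                    # SWA covariance, eq. (1)

# draw weight samples from the Gaussian approximation N(w_bar, Sigma);
# a small jitter keeps the covariance numerically positive definite
w_samples = rng.multivariate_normal(w_bar, sigma + 1e-8 * np.eye(dim), size=10)
print(w_samples.shape)                           # (10, dim)

# in the analysis, the CNN is evaluated once per weight sample; concatenating the ten
# 5-parameter predictions gives the 50-dimensional summary passed on to the SBI stage
```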
### Simulation-based inference
After training the CNN, we use the SimBIG SBI framework to estimate posterior distributions of the cosmological parameters, \(\mathbf{\theta}\), from the compressed representation of the observables obtained from the CNN, \(\mathbf{\hat{\theta}}\). We represent this posterior as \(p(\mathbf{\theta}|\mathbf{\hat{\theta}})\).
There are multiple existing frameworks for SBI, such as Approximate Bayesian Computation [_e.g._[73, 74, 75, 76, 77], Neural Ratio Estimation [_e.g._[78, 79, 80, 81, 82]], Neural Likelihood Estimation [_e.g._[83, 84, 85]], and Neural Posterior Score Estimation [_e.g._[86, 87]]. We use Neural Posterior Estimation [NPE; _e.g._[88, 89, 90, 91, 92]], which uses a neural density estimator (NDE) to estimate the posterior distribution from a training set. In this case, the training set consists of the ground-truth/CNN-compressed \(\{\mathbf{\theta},\mathbf{\hat{\theta}}\}\) parameter pairs of the SimBIG forward models. We use the publicly available sbi implementation from Tejero-Cantero _et al._[93].
Previous SimBIG analyses employed a Masked Autoregressive Flow [94] as the density estimator. For our density estimator, we instead use Neural Spline Flows [NSF; 95], a more expressive alternative. Denoting our NSF as \(q_{\phi}(\mathbf{\theta}|\mathbf{\hat{\theta}})\), where \(\phi\) represents its hyperparameters, we train \(q_{\phi}\) by minimizing the KL divergence between \(p(\mathbf{\theta},\mathbf{\hat{\theta}})\) and \(q_{\phi}(\mathbf{\theta}|\mathbf{\hat{\theta}})p(\mathbf{\hat{\theta}})\). This is equivalent to maximizing the log-likelihood over the training set of SimBIG forward models. In practice, we split the catalogs into a training and validation set with 90/10 split, and use an early stopping procedure to prevent overfitting by stopping training when the validation log-likelihood has failed to increase after 20 epochs. Additionally, to improve the robustness of our NDE, we use an ensemble of five NSFs, which has been shown to produce more reliable approximations [96, 97].
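A minimal sketch of this stage is given below, assuming the documented interface of the publicly available sbi package (SNPE with the "nsf" density-estimator option) and using synthetic parameter/summary pairs in place of the SimBIG forward models. The ensemble size, data dimensions and toy compression are illustrative only.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# toy stand-in data: theta are 5 (normalized) cosmological parameters,
# x the 50-dimensional SWA-compressed summaries
n_train = 1000
prior = BoxUniform(low=torch.zeros(5), high=torch.ones(5))
theta = prior.sample((n_train,))
x = theta.repeat(1, 10) + 0.05 * torch.randn(n_train, 50)   # fake compressed galaxy fields

# small ensemble of neural spline flow (NSF) posterior estimators
posteriors = []
for seed in range(2):                       # the analysis uses five flows; two keep the demo light
    torch.manual_seed(seed)
    inference = SNPE(prior=prior, density_estimator="nsf")
    density_estimator = inference.append_simulations(theta, x).train()
    posteriors.append(inference.build_posterior(density_estimator))

# pool samples from the ensemble members for one "observation"
x_obs = x[0]
samples = torch.cat([p.sample((1000,), x=x_obs) for p in posteriors])
print(samples.shape)                        # (2000, 5)
```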
### Validation
Before analyzing observations, we first validate our posterior estimation in two stages. First, we validate on the \(5,180\) simulations that were excluded from the training of our pipeline. We refer to this as the "NDE accuracy test". Second, we conduct the SimBIG "mock challenge", where we validate our analysis on the suite of test simulations described in SSII.
For the NDE accuracy test, we use the Tests of Accuracy with Random Point (TARP) expected coverage probability (ECP) test as our metric. ECP is a necessary and sufficient test for the optimality of the estimated posterior, \(q_{\phi}\)[98]8. \(p(\mathbf{\theta}|\,\mathbf{\hat{\theta}})\equiv q_{\phi}(\mathbf{\theta}|\,\mathbf{\hat{ \theta}})\) is only true in the limit of infinite data, and therefore we can only test for approximate equality, which is satisfied if and only if
Footnote 8: [https://github.com/Ciela-Institute/tarp](https://github.com/Ciela-Institute/tarp)
\[\text{ECP}(\hat{p},\alpha)=1-\alpha\qquad\forall\alpha\in[0,1], \tag{10}\]
where \(\text{ECP}(\hat{p},\alpha)\) is the expected coverage probability of the posterior estimate \(\hat{p}\). TARP coverage probabilities are a robust method for estimating ECP that do not rely on evaluations of the posterior estimate. We can use it to calculate ECP for both the full-dimensional parameter space, or for each parameter separately. The latter is equivalent to the Simulation-Based Calibration [99] used in the other SimBIG analyses.
We present the results of our NDE accuracy test using TARP in Fig. 2, where we plot the ECP versus the confidence level \(1-\alpha\)9. We evaluate the TARP ECP over the full dimensionality of our parameter space. If the ECP and confidence level are equal for every \(\alpha\in[0,1]\), _i.e._ it follows a diagonal line, then the estimator is well calibrated since the probability of our posterior estimate containing the true parameter values matches the actual confidence level. We find that the NDE passes the accuracy test, as the ECP curve lies along the diagonal.
Footnote 9: This figure is often referred to as a probability-probability (PP) plot.
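The TARP construction can be written out directly: for each test posterior, draw a random reference point, compute the fraction of posterior samples that lie closer to it than the true parameters do, and compare the distribution of these credibilities with the confidence level. The NumPy sketch below illustrates the idea on a synthetic, perfectly calibrated toy posterior; the actual analysis uses the tarp package referenced above.

```python
import numpy as np

def tarp_ecp(posterior_samples, theta_true, alphas, rng):
    """Expected coverage probability via random reference points (the TARP construction).

    posterior_samples : (n_sims, n_samples, n_dim) samples from the estimated posteriors
    theta_true        : (n_sims, n_dim) true parameters of the test simulations
    """
    n_sims, _, n_dim = posterior_samples.shape
    theta_ref = rng.uniform(0.0, 1.0, size=(n_sims, n_dim))        # random reference points
    d_post = np.linalg.norm(posterior_samples - theta_ref[:, None, :], axis=-1)
    d_true = np.linalg.norm(theta_true - theta_ref, axis=-1)
    credibility = (d_post < d_true[:, None]).mean(axis=1)          # posterior mass "inside" the truth
    return np.array([(credibility < 1.0 - a).mean() for a in alphas])

rng = np.random.default_rng(0)
n_sims, n_samples, n_dim = 500, 1000, 5
centre = rng.uniform(0.2, 0.8, size=(n_sims, n_dim))
# a perfectly calibrated toy posterior: truths and posterior samples share the same Gaussian
posterior_samples = centre[:, None, :] + 0.05 * rng.standard_normal((n_sims, n_samples, n_dim))
theta_true = centre + 0.05 * rng.standard_normal((n_sims, n_dim))

alphas = np.linspace(0.05, 0.95, 10)
print(np.c_[1.0 - alphas, tarp_ecp(posterior_samples, theta_true, alphas, rng)])
```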
We then move on to the SimBIG mock challenge. Fig. 3 (a) shows the marginalized two-dimensional posterior distribution of \(\{\Omega_{m},\sigma_{8}\}\) for 9 randomly selected simulations from each of the test sets: TEST0 (top), TEST1 (center), and TEST2 (bottom). We mark the true \(\Omega_{m}\) and \(\sigma_{8}\) values in each panel (black x). For all three test sets the posteriors appear to be well calibrated and unbiased, which qualitatively demonstrate the robustness of our analysis.
Next, we assess robustness more quantitatively using TEST0, TEST1, and TEST2. For the test simulations, we cannot use the same method as the NDE accuracy test due to the fact that the ECP relies on averaging over the prior distribution, but all of these simulations are run at fixed fiducial cosmologies. Therefore, we follow [48] and
Figure 2: TARP expected coverage probability vs probability level. For an accurate posterior estimator, the line will follow the diagonal, while deviations from the diagonal are indicative of over or under confidence. We show the _NDE Accuracy test_, using \(5,180\) of our base simulations that were not used in the training of our CNN.
we assess robustness by comparing the likelihoods over the three test sets. Specifically, we compute the posterior mean \(\mu\) and standard deviation \(\sigma\) for each \(\Lambda\)CDM parameter for each suite of test simulations. Then, we analyze the difference between \(\mu\) and the true parameter value \(\theta^{\text{fid}}\) in units of \(\sigma\). For a robust pipeline, we expect to find consistency of these estimates across all three datasets. On the other hand, variations between the distributions would be indicative of likelihood variations that come from changing the forward model and imply that our analysis is sensitive to model variations.
In Fig. 3 (b), we present the likelihoods of TEST0 (blue), TEST1 (orange), and TEST2 (green) for each of the \(\Lambda\)CDM parameters. We find consistent distributions for all parameters across test sets. This indicates that our posterior inference is robust to variations in the forward model. It also suggests that our use of weight marginalization led to better generalization properties.
These validation tests form a crucial part of our analysis. We note that it is possible to obtain significantly tighter constraints that pass only the NDE accuracy test. However, in doing so, we would need to assume that our forward model accurately models every aspect of the observations. Given the complexities of galaxy formation, _any_ forward model of galaxy clustering is an approximate model. Hence, validating that we can successfully infer unbiased cosmological constraints from simulated test galaxy catalogs generated with different forward models (TEST1 and TEST2) serves as a powerful test against model misspecification, even if it comes at the expense of significant constraining power. In future work, we will explore additional tests of model misspecification and "blind challenges" where we test our analysis on simulations without knowing the true cosmological parameters or the forward model used to generate them.
Figure 3: Validation of our model on the SimBIG mock challenge data.
## IV Results:
In Fig. 4, we present the posterior distribution of all \(\Lambda\)CDM cosmological parameters inferred from our field-level analysis of the BOSS CMASS SGC using SimBIG (orange). In the right panels, we focus on the growth of structure parameters \(\Omega_{m}\) and \(\sigma_{8}\). The diagonal panels present the 1D marginalized posteriors; the rest of the panels present marginalized 2D posteriors of different parameter pairs. The contours represent the 68 and 95 percentiles and the ranges of the panels match the prior. For comparison, we include posteriors from the SimBIG \(P_{\ell}(k_{\text{max}}<0.5\,h/\text{Mpc})\) analysis [grey; 62] as well as the constraints from the PT based \(P_{\ell}(k_{\text{max}}<0.25\,h/\text{Mpc})\) analysis of the CMASS SGC sample [dashed; 8].
Overall, our field-level analysis using the CNN provides tighter, yet consistent, cosmological constraints to the previous BOSS analyses. Specifically, our constraints on \(\Omega_{m}\) and \(\sigma_{8}\) are \(1.76\times\) and \(1.92\times\) tighter than the SimBIG \(P_{\ell}\) analysis. Moreover, our constraints on \(\Omega_{m}\) are in-line with the PT-based \(P_{\ell}\) analysis, and those on \(\sigma_{8}\) are \(2.65\times\) tighter. This higher constraining power is expected. Indeed, by using the full galaxy field, we are able to exploit non-Gaussian cosmological information on non-linear scales that is inaccessible to \(P_{\ell}\) analyses. Moreover, in using the SimBIG SBI approach, we are able to more robustly account for observational systematics compared to the standard clustering analyses.
In fact, with the added constraining power of our field-level analysis, we can also place significant constraints on \(H_{0}=63.1\pm 4.1\) km/s/Mpc, albeit weaker than those on \(\Omega_{m}\) and \(\sigma_{8}\). This is in contrast to standard \(P_{\ell}\) analyses, which cannot independently constrain \(H_{0}\) and typically rely on priors from Big Bang Nucleosynthesis or CMB experiments. Our constraints support a low value of \(H_{0}\) in good agreement with _Planck_ constraints [100]. However, we do not have enough constraining power to make strong statements. We will further investigate the cosmological implications of this result and how they compare with other surveys and cosmological probes in an accompanying paper.
## V Conclusions:
In this paper, we present cosmological constraints from a field-level analysis of the CMASS galaxy catalogs using simulation-based inference. We demonstrate that our
Figure 4: _Left_: Posterior distributions for all \(\Lambda\)CDM cosmological parameters from our CNN-based field level inference of BOSS observations (orange). For comparison, we include the SimBIG \(P_{\ell}\) analysis (gray). The contours represent the 68% and 95% confidence intervals. Our CNN-based field-level inference produces tighter, yet consistent, constraints to the SimBIG \(P_{\ell}\) analysis. _Right_: Posterior distributions for \(\Omega_{m}\) and \(\sigma_{8}\). For comparison, we include posteriors from the SimBIG \(P_{\ell}\) analysis (gray) and the standard PT-based \(P_{\ell}\) analysis [black dashed; 8]. Our analysis constrains \(\Omega_{m}\) and \(\sigma_{8}\) 1.76 and 1.92\(\times\) tighter than the SimBIG \(P_{\ell}\) analysis. Moreover, our constraints on \(\Omega_{m}\) are in-line with the PT-based \(P_{\ell}\) analysis, and those on \(\sigma_{8}\) are \(2.65\times\) tighter
analysis passes a number of stringent validation tests, including a robustness test based on simulations constructed using different forward models. These test sets provide key validation against model misspecification and demonstrate some robustness against discrepancies between observations and our forward model.
Furthermore, we show that our cosmological parameter constraints are consistent but significantly tighter than those from \(P_{\ell}\) analyses. In particular, our constraints on \(\Omega_{m}\) and \(\sigma_{8}\) are in-line and \(2.65\times\) tighter than the standard PT-based \(P_{\ell}\) analyses. We are even able to produce significant constraints on \(H_{0}\), without any priors from external experiments. These improvements demonstrate that our method successfully extracts additional non-Gaussian and non-linear cosmological information from the galaxy distribution.
As simulations become more realistic and efficient in the future, we will be able to extend our analyses to smaller scales and the larger volumes covered by upcoming surveys such as the Dark Energy Spectroscopic Instrument [102, 103, 104, 105], Subaru Prime Focus Spectrograph [PFS; 105, 106], the ESA _Euclid_ satellite mission [107], and the Nancy Grace Roman Space Telescope [108, 109]. Our results demonstrate that these analyses will be able to produce leading cosmological constraints from galaxy clustering. The methodology and tests presented in this paper lay the groundwork for such analyses.
In accompanying papers we present the SimBIG analysis of galaxy clustering using two summary statistics: the galaxy bispectrum and the wavelet scattering transform statistics. Furthermore, in [101], we present a comparison of the different SimBIG analyses, including the field-level constraints presented in this work. We also discuss their cosmological implications and present forecasts for extending SimBIG to upcoming galaxy surveys.
## Acknowledgements
It is a pleasure to thank Mikhail M. Ivanov for providing us with the posteriors used for comparison, and Ben Wandelt for discussions that greatly helped the papers. We thank the Learning the Universe Collaboration for helpful feedback and stimulating discussions. PL acknowledges support from the Simons Foundation. JH has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 101025187. AMD acknowledges funding from Tomalla Foundation for Research in Gravity.
|
2301.03375 | One-Shot Achievable Secrecy Rate Regions for Quantum Interference
Wiretap Channel | In this paper, we want to derive achievable secrecy rate regions for quantum
interference channel with classical inputs under one-shot setting. The main
idea to this end is to use the combination of superposition and rate splitting
for encoding scheme and constructing a decoding scheme based on simultaneous
decoding. | Hadi Aghaee, Bahareh Akhbari | 2023-01-09T14:27:08Z | http://arxiv.org/abs/2301.03375v1 | # One-Shot Achievable Secrecy Rate Regions for Quantum Interference Wiretap Channel
###### Abstract
In this paper, we derive achievable secrecy rate regions for the quantum interference channel with classical inputs under the one-shot setting. The main idea is to combine superposition coding and rate splitting in the encoding scheme and to construct a decoding scheme based on simultaneous decoding.
Quantum Channel; Mutual Information; Secrecy Capacity; Multiple Access Channel
## I Introduction
Physical layer security was first introduced by Shannon [1]. After that, Wyner presented the wiretap channel, in which a sender transmits its message to a legitimate receiver in the presence of a passive eavesdropper [2]. Later, Csiszar and Korner introduced the broadcast channel with confidential messages [3].
Since then, physical layer security has been extended to multi-terminal channels such as multiple access channels (MACs), interference channels (ICs), relay channels, etc., due to their importance and their use in practical systems [4-10].
In recent decades, with development in quantum data processing and its applications, a significant effort has begun to use the natural features of quantum mechanics to improve communication. Some of these features are as follows: entanglement, uncertainty, no-cloning theorem, superposition, etc. [11]. These natural features help the communication to be faster and more secure.
Moreover, the security problem plays a critical role in quantum communication and accounts for a considerable part of the research in this area. In this regard, the quantum wiretap channel (QWTC) was first introduced in [12] and [13].
Secrecy constraints were then extended to multi-user quantum channels such as the quantum interference channel (QIC) [14] and the quantum multiple access channel (QMAC) [15-18]. The interference phenomenon is one of the major problems in communication systems.
In this paper, we derive some achievable secrecy rate regions for quantum interference channel with classical inputs.
One of the major open problems in quantum information theory is related to the simultaneous decoder for quantum channels with three or more senders (i.e., the jointly typical decoder). However, this problem has been solved in some cases, such as the min-entropy case and the case of quantum multiple access channels (QMACs) in which the output systems commute [19]. Therefore, in the independent and identically distributed (i.i.d.) case, we have to use successive decoding combined with time-sharing. In contrast, in the one-shot case, we use a simultaneous decoder: Sen proved a joint typicality lemma that allows decoding any number of messages simultaneously in the one-shot setting [19].
In this paper, we study secure communication over a classical-quantum interference wiretap channel (C-QI-WTC) under the one-shot setting. To the best of our knowledge, this is the first time this channel has been studied. Even in the classical case, the security problem for the interference channel has been investigated only under a different scenario called the interference channel with confidential messages. Another feature of our problem is that the channel is considered under the one-shot setting. This choice is due to the fact that there is no proven joint typicality lemma in the asymptotic i.i.d. case for general quantum channels (i.e., quantum channels with any number of senders). Therefore, all of the obtained results are new, and the strategies proposed in the paper can also be applied to the classical interference channel.
The paper is organized as follows: In Section II, some seminal definitions are presented. In Section III, the main channel and information processing tasks are presented. In Section IV, the results and main theorems are presented. Section V is dedicated to discussion and future works.
## II Preliminaries
Let A (Alice) and B (Bob) be two quantum systems. These quantum systems can be denoted by their corresponding Hilbert spaces as \(\mathcal{H}^{A}\), \(\mathcal{H}^{B}\). The states of the above quantum systems are presented as density operators \(\rho^{A}\) and \(\rho^{B}\), respectively, while the shared state between Alice and Bob is denoted by \(\rho^{AB}\). A density operator is a positive semidefinite operator with a unit trace. Alice or Bob's state can be defined by a partial trace operator over the shared state. The partial trace is used to model the lack of access to a quantum system. Thus, Alice's density operator using partial trace is \(\rho^{A}=Tr_{B}(\rho^{AB})\), and Bob's density operator is \(\rho^{B}=Tr_{A}\{\rho^{AB}\}\). We use \(\left|\psi\right\rangle^{A}\) to denote the pure state of system A. The corresponding density operator is \(\psi^{A}=\left|\psi\right\rangle\!\left\langle\psi\right|^{A}\). The von Neumann entropy of the state \(\rho^{A}\) is defined by \(H(A)_{\rho}=-Tr\{\rho^{A}\log\rho^{A}\}\). For an arbitrarily state such as \(\sigma^{AB}\), the quantum conditional entropy is defined by \(H(A|B)_{\sigma}=H(A,B)_{\sigma}-H(B)_{\sigma}\). The quantum mutual information is defined by \(I(A;B)_{\sigma}=H(A)_{\sigma}+H(B)_{\sigma}-H(A,B)_{\sigma}\), and the conditional quantum mutual information is defined by:
\[I(A;B|C)_{\sigma}=H(A|C)_{\sigma}+H(B|C)_{\sigma}-H(A,B|C)_{\sigma}\]
Quantum operations can be denoted by _completely positive trace-preserving_ (CPTP) maps \(\mathcal{N}^{A\to B}\). The CPTP maps accept input states in A and output states in B. The distance between two quantum states, such as A and B is defined by trace distance. The trace distance between two arbitrarily states such as \(\sigma\) and \(\rho\) is:
\[\left\|\sigma-\ \rho\right\|_{1}=Tr|\sigma-\ \rho| \tag{1}\]
where \(\left|\Psi\right|=\sqrt{\Psi^{\dagger}\Psi}\). This quantity is zero for two identical states and maximal for two perfectly distinguishable states.
_Fidelity_ is defined as \(F(\rho,\sigma)=\left\|\sqrt{\rho}\sqrt{\sigma}\right\|_{1}^{2}\), and _purified distance_ is a metric on \(\mathcal{D}(\mathcal{H})\) and is defined as \(P(\rho,\sigma)\coloneqq\sqrt{1-F(\rho,\sigma)^{2}}\). Most of the above definitions are given from [20].
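These quantities are straightforward to evaluate numerically for small systems. The sketch below follows the definitions above literally (the trace norm without a 1/2 factor, \(F(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_{1}^{2}\) and \(P(\rho,\sigma)=\sqrt{1-F(\rho,\sigma)^{2}}\)); the example qubit states are arbitrary.

```python
import numpy as np

def psd_sqrt(m):
    # square root of a positive semidefinite (density) matrix via its spectral decomposition
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def trace_distance(rho, sigma):
    # ||rho - sigma||_1 = Tr|rho - sigma| = sum of |eigenvalues| of the Hermitian difference
    return np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def fidelity(rho, sigma):
    # F(rho, sigma) = || sqrt(rho) sqrt(sigma) ||_1^2  (trace norm = sum of singular values)
    return np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False).sum() ** 2

def purified_distance(rho, sigma):
    return np.sqrt(max(0.0, 1.0 - fidelity(rho, sigma) ** 2))

rho = np.diag([0.9, 0.1]).astype(complex)                     # a slightly mixed qubit state
psi = np.array([np.cos(0.2), np.sin(0.2)], dtype=complex)
sigma = np.outer(psi, psi.conj())                             # a nearby pure state

print(trace_distance(rho, sigma))   # 0 for identical states, 2 for perfectly distinguishable ones
print(fidelity(rho, sigma), purified_distance(rho, sigma))
```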
_Definition 1: (Hypothesis testing mutual information)_ Hypothesis testing mutual information is denoted by \(I_{R}^{e}(X;Y)\)\(\coloneqq D_{R}^{e}(\rho_{XY}\left\|\rho_{X}\otimes\rho_{Y}),\epsilon\in(0,1)\) and is considered as _quantum hypothesis testing divergence_[21] where \(D_{R}^{e}(\left\|.\right\|_{\cdot})\) is _hypothesis testing relative entropy_[21]. \(\rho^{\mathcal{H}_{X}\mathcal{H}_{Y}}\) is the joint state of input and output over their Hilbert spaces \((\mathcal{H}_{X},\mathcal{H}_{Y})\), and it can be shown as \(\rho_{XY}\):
\[\rho_{XY}=\sum_{x}p_{X}(x)\left|x\right\rangle\!\left\langle x\right|_{X} \otimes\rho_{Y}^{x}\]
where \(p_{X}\) is the input distribution.
_Definition 2: (Quantum relative entropy [22])_: Consider states \(\rho_{X}\), \(\sigma_{X}\in\mathcal{D}(\mathcal{H}_{X})\). The Quantum relative entropy is defined as:
\[\begin{array}{l}D(\rho_{X}\|\sigma_{X})\\ \coloneqq\begin{cases}Tr\{\rho_{X}[\log_{2}\rho_{X}-\log_{2}\sigma_{X}]\}&supp (\rho_{X})\subseteq supp(\sigma_{X})\\ +\infty&otherwise\end{cases}\end{array}\]
where \(supp(\sigma_{X})\) refers to the _set-theoretic support_\(\sigma\). \(supp(\sigma)\) is the subspace of \(\mathcal{H}\) spanned by all eigenvectors of \(\sigma\) with non-zero eigenvalues.
**Fact**: The following relation exists between the quantum relative entropy and hypothesis testing relative entropy for \(\epsilon\in(0,1)\)[21]:
\[D_{R}^{e}(\rho_{X}\|\sigma_{X})\leq\frac{1}{1-\epsilon}[D(\rho_{X}\|\sigma_{X })+h_{b}(\epsilon)]\]
where \(h_{b}(\epsilon)\coloneqq-\epsilon\log_{2}\epsilon-(1-\epsilon)\log_{2}(1-\epsilon)\) is a binary entropy function.
_Definition 3: (Max mutual information [23])_ Consider a bipartite state \(\rho_{XY}\) and a parameter \(\epsilon\in(0,1)\). The max mutual information can be defined as follows:
\[I_{max}(X;Y)_{\rho}\coloneqq D_{max}(\rho_{XY}\left\|\rho_{X}\otimes\rho_{Y} \right.)_{\rho}\]
where \(\rho\) refers to the state \(\rho_{XY}\) and \(D_{max}(\left|\right|)\) is the _max-relative entropy_[24] for \(\rho_{X},\sigma_{X}\in\mathcal{H}_{X}\):
\[D_{max}(\rho_{X}\left\|\sigma_{X}\right\rangle\coloneqq\inf\{\gamma\in \mathbb{R}\colon\rho_{X}\leq 2^{\gamma}\sigma_{X}\}\]
_Definition 4: (Quantum smooth max relative entropy [24])_ Consider states \(\rho_{X}\), \(\sigma_{X}\in\mathcal{D}(\mathcal{H}_{X})\) and \(\epsilon\in(0,1)\). The quantum smooth max relative entropy is defined as:
\[D_{max}^{e}(\rho_{X}\|\sigma_{X})\coloneqq\inf_{\rho_{X}^{\prime}\in\mathcal{B }^{e}(\rho_{X})}D_{max}(\rho_{X}^{e}\left\|\sigma_{X}\right.)\]
where \(\mathcal{B}^{e}(\rho_{X})\coloneqq\{\rho_{X}^{\prime}\in\mathcal{D}(\mathcal{H }_{X})\colon P(\rho_{X},\rho_{X})\leq\epsilon\}\) is \(\epsilon\)_-ball_ for \(\rho_{XY}\).
_Definition 5: (Quantum smooth max mutual information [23])_ Consider \(\rho_{XY}\coloneqq\sum_{x\in X}P_{X}(x)\left|x\right\rangle\!\left\langle x \right|_{X}\otimes\rho_{Y}^{x}\) as a classical-quantum state and a parameter \(\epsilon\in(0,1)\). The smooth max mutual information between the systems \(X\) and \(Y\) can be defined as follows:
\[\begin{array}{l}I_{max}^{e}(X;Y):=\inf_{\rho_{XY}^{\prime}\in\mathcal{B}^{e }(\rho_{XY})}D_{max}(\rho_{XY}^{e}\left\|\rho_{X}\otimes\rho_{Y}\right.)\\ =\inf_{\rho_{XY}^{\prime}\in\mathcal{B}^{e}(\rho_{XY})}l_{max}(X;Y)_{\rho^{ \prime}}\,\end{array}\]
where \(\mathcal{B}^{e}(\rho_{XY})\coloneqq\{\rho_{XY}^{\prime}\in\mathcal{D}(\mathcal{ H}_{X}\otimes\mathcal{H}_{Y})\colon P(\rho_{XY}^{\prime},\rho_{XY})\leq\epsilon\}\) is \(\epsilon\)-ball for \(\rho_{XY}\).
_Definition 6: (Conditional smooth hypothesis testing mutual information [25])_ Consider \(\rho_{XY2}\coloneqq\sum_{x\in Z}P_{Z}(z)\left|z\right\rangle\!\left\langle z \right|_{Z}\otimes\rho_{XY}^{x}\) be a tripartite classical-quantum state and \(\epsilon\in(0,1)\). We define,
\[I_{R}^{e}(X;Y|Z)_{\rho}\coloneqq\max_{\rho^{\prime}}\min_{z\in supp(\rho_{Z} )}I_{R}^{e}(X;Y)_{\rho_{XY}^{\prime}}\,\]
where maximization is over all \(\rho_{Z}^{\prime}=\sum_{x\in Z}p_{Z}(z)\left|z\right\rangle\!\left\langle z \right|_{Z}\) satisfying \(P(\rho_{Z}^{\prime},\rho_{Z})\leq\epsilon\).
_Definition 7: (Conditional smooth max mutual information [25])_ Consider \(\rho_{XYZ}\coloneqq\sum_{x\in Z}P_{Z}(z)\left|z\right\rangle\!\left\langle z \right|_{Z}\otimes\rho_{XY}^{x}\) be a tripartite classical-quantum state and \(\epsilon\in(0,1)\). We define,
\[I_{max}^{e}(X;Y|Z)_{\rho}\coloneqq\max_{\rho^{\prime}}\min_{z\in supp(\rho_{Z} )}I_{max}^{e}(X;Y)_{\rho_{XY}^{\prime}}\,\]
where maximization is over all \(\rho_{Z}^{\prime}=\sum_{x\in Z}p_{Z}(z)\left|z\right\rangle\!\left\langle z \right|_{Z}\) satisfying \(P(\rho_{Z}^{\prime},\rho_{Z})\leq\epsilon\).
_Definition 8: (Quantum Renyi relative entropy of order \(\alpha\)[21])_ For a state \(\rho\in\mathcal{D}(\mathcal{H})\) and a positive semidefinite operator \(\sigma\), the _quantum Renyi relative entropy of order \(\alpha\)_, where \(\alpha\in[0,1)\cup(1,+\infty)\) is defined as:
\[D_{\alpha}(\rho\|\sigma)\equiv\frac{1}{\alpha-1}\log_{2}Tr\{\rho^{\alpha} \sigma^{1-\alpha}\}\]
Also, _Renyi entropy of order \(\alpha\)_ can be defined as follows:
\[H_{\alpha}(A)_{\rho}\equiv\frac{1}{1-\alpha}\log_{2}Tr\{\rho_{A}^{\alpha}\}\]
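For states with known spectral decompositions these quantities reduce to simple eigenvalue computations. The minimal NumPy sketch below follows the formulas above, with arbitrary example states and a small spectral floor standing in for a careful treatment of supports.

```python
import numpy as np

def mat_power(m, a, floor=1e-15):
    # m^a for a positive semidefinite matrix, with a small spectral floor on the eigenvalues
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.clip(vals, floor, None) ** a) @ vecs.conj().T

def renyi_relative_entropy(rho, sigma, alpha):
    # D_alpha(rho || sigma) = 1/(alpha - 1) log2 Tr{ rho^alpha sigma^(1 - alpha) }
    return np.log2(np.trace(mat_power(rho, alpha) @ mat_power(sigma, 1 - alpha)).real) / (alpha - 1)

def renyi_entropy(rho, alpha):
    # H_alpha(A)_rho = 1/(1 - alpha) log2 Tr{ rho^alpha }
    return np.log2(np.trace(mat_power(rho, alpha)).real) / (1 - alpha)

rho = np.diag([0.7, 0.3]).astype(complex)
sigma = np.diag([0.5, 0.5]).astype(complex)
print(renyi_relative_entropy(rho, sigma, alpha=2))   # ~0.214 bits
print(renyi_entropy(rho, alpha=2))                   # ~0.786 bits
```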
_Definition 9: (One-shot inner bound of a classical-quantum multiple access channel)_[19] A two-user C-QMAC under the one-shot setting is a triple \((\mathcal{X}_{1}\times\mathcal{X}_{2},\mathcal{N}_{X_{1}X_{2}\to Y}(x_{1},x_{2})\equiv\rho_{Y}^{x_{1}x_{2}},\mathcal{H}_{Y})\), where \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\) are the alphabet sets of the two classical inputs, and \(Y\) is the output system. \(\rho_{Y}^{x_{1}x_{2}}\) is a quantum state, and the channel is described by a completely positive trace-preserving (CPTP) map \(\mathcal{N}_{X_{1}X_{2}\to Y}\).
Considering the joint typicality lemma introduced in [Corollary 4, 19], the one-shot inner bound of a C-QMAC is as follows:
\[R_{1}\leq I_{H}^{\varepsilon}(X_{1}\colon X_{2}Y)_{\rho}-2+\log\epsilon\]
\[R_{2}\leq I_{H}^{\varepsilon}(X_{2}\colon X_{1}Y)_{\rho}-2+\log\epsilon\]
\[R_{1}+R_{2}\leq I_{H}^{\varepsilon}(X_{1}X_{2}\colon Y)_{\rho}-2+\log\epsilon\]
where \(I_{H}^{\varepsilon}(\cdot)\) is the hypothesis testing mutual information defined in Definition 1 with respect to the controlling state:
\[\rho_{QX_{1}X_{2}Y}:=\sum_{q,x_{1},x_{2}}p(q)\,p(x_{1}|q)\,p(x_{2}|q)\,|qx_{1}x_{2}\rangle\langle qx_{1}x_{2}|^{QX_{1}X_{2}}\otimes\rho_{Y}^{x_{1}x_{2}}\]
and \(Q\) is a time-sharing variable.
Note that \(I_{H}^{\varepsilon}(\cdot)\) is the difference between a _Renyi entropy_ of order two and a conditional quantum entropy.
## III Channel Model
A two-user C-QI-WTC is a triple \((\mathcal{X}_{1}\times\mathcal{X}_{2},\mathcal{N}^{\mathbf{x}_{1}\mathbf{x}_{2}-\mathbf{ \tau}_{1}\mathbf{\tau}_{2}\mathbf{\tau}_{1}\mathbf{\tau}_{2}\mathbf{\tau}_{1}\mathbf{\tau}_{2}}( \mathbf{x}_{1},\mathbf{x}_{2})\equiv\rho_{\mathbf{x}_{1}\mathbf{x}_{2}}^{\mathbf{\tau}_{1}\mathbf{\tau }_{2}\mathbf{\tau}},\mathcal{H}^{\mathbf{\tau}_{1}}\bigotimes\mathcal{H}^{\mathbf{\tau}_{2} }\bigotimes\mathcal{H}^{\mathbf{\tau}_{2}}\bigotimes\mathcal{H}^{\mathbf{\tau}_{2}})\),
where \(\mathcal{X}_{i},i\in\{1,2\}\) denote the input alphabet sets, and \(Y_{1},Y_{2}\), \(Z\) denote the output systems (\(Y_{1},Y_{2}\) are the channel outputs at the two legitimate receivers and \(Z\) is the channel output at the eavesdropper). \(\rho_{x_{1}x_{2}}^{Y_{1}Y_{2}Z}\) is the quantum state of the system output. Each user wants to transmit its message as securely as possible over the C-QI-WTC to its intended receiver.
The main channel (i.i.d. case) is illustrated in Figure 1.
Consider the main channel illustrated in Figure 1 under the one-shot setting. Each user chooses its message \(m_{i},i\in\{1,2\}\) from its message set \(\mathcal{M}_{i}=[1\colon|\mathcal{M}_{i}|=2^{R_{i}}]\) and sends it over the C-QI-WTC. The users also use two junk variables \(k_{i},i\in\{1,2\}\) from two amplification sets \(\mathcal{K}_{i}=[1\colon|\mathcal{K}_{i}|=2^{R_{i}}]\) for randomizing Eve's knowledge.
We have two doubly indexed codebooks \(x_{1}(m_{1},k_{1})\) and \(x_{2}(m_{2},k_{2})\) for user-1 and user-2, respectively. The above channel can be divided into two sub C-QMA-WTCs (one from both users to \((Y_{1},Z)\) and another from both users to \((Y_{2},Z)\)).
## IV Main Results
In this section, we present the main results.
**Theorem 1**: _(One-shot achievable rate region for C-QI-WTC) Consider a two-user C-QI-WTC which accepts \(X_{1}\) and \(X_{2}\) as inputs and \(Y_{1}\), \(Y_{2}\) and \(Z\) as outputs. \(\rho_{x_{1}x_{2}}^{Y_{1}Y_{2}Z}\) is the channel density operator. For any fixed \(\epsilon\in(0,1)\), \(\epsilon^{\prime}\in(0,\delta^{\prime})\) and \(\delta,\delta^{\prime}>0\), the rate pair \(R_{i}=\log|\mathcal{M}_{i}|+\delta,i\in\{1,2\}\) is achievable provided the following inequalities hold:_
\[R_{1}\leq\min\{I_{H}^{\varepsilon}(X_{1} \colon X_{2}Y_{1}|Q)_{\rho},I_{H}^{\varepsilon}(X_{1}\colon X_{2}Y_ {2}|Q)_{\rho}\}\] \[-I_{max}^{\eta}(X_{1}\colon Z|Q)_{\rho}+\log\epsilon-1-\log\frac{3 }{\epsilon^{\prime 3}}\] \[+\frac{1}{4}\log\delta\] \[R_{2}\leq\min\{I_{H}^{\varepsilon}(X_{2} \colon X_{1}Y_{1}|Q)_{\rho},I_{H}^{\varepsilon}(X_{2}\colon X_{1}Y_{2}|Q)_{\rho}\}\] \[-I_{max}^{\eta}(X_{2}\colon ZX_{1}|Q)_{\rho}+\log\epsilon-1-\log \frac{3}{\epsilon^{\prime 3}}\] \[+\frac{1}{4}\log\delta\] \[R_{1}+R_{2}\leq\min\{I_{H}^{\varepsilon}(X_{1}X_{2}\colon Y_{1}|Q)_ {\rho},I_{H}^{\varepsilon}(X_{1}X_{2}\colon Y_{2}|Q)_{\rho}\}\] \[-I_{max}^{\eta}(X_{1}\colon Z|Q)_{\rho}-I_{max}^{\eta}(X_{2} \colon ZX_{1}|Q)_{\rho}\] \[+\log\epsilon-1-2\log\frac{3}{\epsilon^{\prime 3}}+\frac{1}{2}\log \delta+O(1)\]
_where \(\eta=\delta^{\prime}-\epsilon^{\prime}\), and the union is taken over input distributions of the form \(p(q)p_{X_{1}|Q}(x_{1}|q)p_{X_{2}|Q}(x_{2}|q)\). \(Q\) is the time-sharing random variable, and all of the mutual information quantities are taken with respect to the following state:_
\[\rho^{QX_{1}X_{2}Y_{1}Y_{2}Z}\equiv\sum_{q,x_{1},x_{2}}p(q)p(x_{1}|q)p(x_{2}|q)\,|q\rangle\langle q|^{Q}\otimes|x_{1}\rangle\langle x_{1}|^{X_{1}}\otimes|x_{2}\rangle\langle x_{2}|^{X_{2}}\otimes\rho_{x_{1}x_{2}}^{Y_{1}Y_{2}Z} \tag{4}\]
_Proof_: See Appendix A.
_Sketch of proof_: The channel can be split into two sub C-QMA-WTCs with classical inputs: one from \((X_{1},X_{2})\) to \((Y_{1},Z)\) and another from \((X_{1},X_{2})\) to \((Y_{2},Z)\). The method proposed by El Gamal and Kim [26] then helps to prove this theorem.
Theorem 1 gives the simplest achievable rate region for the C-QI-WTC under the one-shot setting. Without considering secrecy constraints, Han and Kobayashi obtained the best known achievable rate region for the interference channel (i.i.d. setting) using rate splitting, in which the messages are split into common and personal parts. This technique has been extended to the quantum case with some limitations [14]. Using the Han-Kobayashi technique, the message \(X_{i}\) is split into \(X_{i0}\) (common part) and \(X_{ii}\) (personal part), where \(i\in\{1,2\}\).
The structure of the C-QI-WTC under the Han-Kobayashi setting is illustrated in Figure 2. This channel can be divided into two separate sub 3-user C-QMA-WTCs: one from \((X_{10},X_{11},X_{20})\) to \((Y_{1},Z)\) and another from \((X_{20},X_{22},X_{10})\) to \((Y_{2},Z)\).
As mentioned before, there is no proven quantum simultaneous decoder for decoding three or more messages in general, and its existence remains a conjecture (except in some cases, such as commuting output states and the min-entropy case [14]).
Figure 1: The C-QI-WTC model
\[\{I_{max}^{\eta}(m_{10}m_{11}m_{20}m_{22}:Z)_{\rho}\leq\varepsilon_{3}\big{|}I_{max}^{\eta}(m_{10}m_{11}m_{20}:Z)_{\rho}\leq\varepsilon_{1},I_{max}^{\eta}(m_{10}m_{20}m_{22}:Z)_{\rho}\leq\varepsilon_{2}\} \tag{5}\]
where \(\varepsilon_{1},\varepsilon_{2}\) and \(\varepsilon_{3}\) are arbitrarily small numbers.
\[R_{1}\leq I_{H}^{\varepsilon}(X_{10}X_{11}:Y_{1}X_{20})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{10}:Z)_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{11}:ZX_{10}X_{20})_{\rho}-2\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{2}\log\delta^{\prime}+\log\varepsilon-2+\mathcal{O}(1) \tag{6}\]
\[R_{1}\leq I_{H}^{\varepsilon}(X_{11}:Y_{1}X_{10}X_{20})_{\rho}+I_{H}^{\varepsilon}(X_{10}:Y_{2}X_{20}X_{22})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{10}:Z)_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{11}:ZX_{10}X_{20})_{\rho}-2\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{2}\log\delta^{\prime}\] \[\qquad\qquad\qquad\qquad+2\log\epsilon-4+\mathcal{O}(1) \tag{7}\]
\[R_{2}\leq I_{H}^{\varepsilon}(X_{20}X_{22}:Y_{2}X_{10})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{20}:ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{22}:ZX_{10}X_{11}X_{20})_{\rho}-2\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{2}\log\delta^{\prime}+\log\varepsilon-2\] \[\qquad\qquad\qquad+\mathcal{O}(1) \tag{8}\]
\[R_{2}\leq I_{H}^{\varepsilon}(X_{20}:Y_{1}X_{10}X_{11})_{\rho}+I_{H}^{\varepsilon}(X_{22}:Y_{2}X_{10}X_{20})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{20}:ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{22}:ZX_{10}X_{11}X_{20})_{\rho}-2\log\frac{3}{\epsilon^{\prime\,3}} \tag{9}\]
\[R_{1}+R_{2}\leq I_{H}^{\varepsilon}(X_{11}:Y_{2}X_{10}X_{20})_{\rho}+I_{H}^{\varepsilon}(X_{10}X_{11}X_{20}:Y_{2})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{10}:Z)_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{11}:ZX_{10}X_{20})_{\rho}\] \[\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{20}:ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{22}:ZX_{10}X_{11}X_{20})_{\rho}-4\log\frac{3}{\epsilon^{\prime\,3}}+\log\delta^{\prime}+2\log\varepsilon-4+\mathcal{O}(1) \tag{10}\]
\[R_{1}+R_{2}\leq I_{H}^{\varepsilon}(X_{11}:Y_{1}X_{20}X_{10})_{\rho}+I_{H}^{\varepsilon}(X_{22}X_{20}X_{10}:Y_{2})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{10}:Z)_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{11}:ZX_{10}X_{20})_{\rho}\] \[\qquad\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{20}:ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{22}:ZX_{10}X_{11}X_{20})_{\rho}-4\log\frac{3}{\epsilon^{\prime\,3}}+\log\delta^{\prime}+2\log\varepsilon-4+\mathcal{O}(1) \tag{11}\]
\[R_{1}+R_{2}\leq I_{H}^{\varepsilon}(X_{11}X_{20}:Y_{1}X_{10})_{\rho}+I_{H}^{\varepsilon}(X_{22}X_{10}:Y_{2}X_{20})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{10}:Z)_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{11}:ZX_{10}X_{20})_{\rho}\] \[\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{20}:ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{22}:ZX_{10}X_{11}X_{20})_{\rho}-4\log\frac{3}{\epsilon^{\prime\,3}}+\log\delta^{\prime}+2\log\varepsilon-4+\mathcal{O}(1) \tag{12}\]
\[2R_{1}+R_{2}\leq I_{H}^{\varepsilon}(X_{11}:Y_{1}X_{10}X_{20})_{\rho}+I_{H}^{\varepsilon}(X_{10}X_{22}:Y_{2}X_{20})_{\rho}+I_{H}^{\varepsilon}(X_{11}X_{10}X_{20}:Y_{2})_{\rho}-2I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{10}:Z)_{\rho}\] \[\qquad\qquad\qquad-2I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{11}:ZX_{10}X_{20})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{20}:ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{22}:ZX_{10}X_{11}X_{20})_{\rho}-6\log\frac{3}{\epsilon^{\prime\,3}}\] \[\qquad\qquad\qquad+\frac{3}{2}\log\delta^{\prime}+3\log\varepsilon-6+\mathcal{O}(1) \tag{13}\]
\[R_{1}+2R_{2}\leq I_{H}^{\varepsilon}(X_{11}X_{20}:Y_{1}X_{10})_{\rho}+I_{H}^{\varepsilon}(X_{22}:Y_{2}X_{10}X_{20})_{\rho}+I_{H}^{\varepsilon}(X_{22}X_{20}X_{10}:Y_{1})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{10}:Z)_{\rho}\] \[\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{11}:ZX_{10}X_{20})_{\rho}-2I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{20}:ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\varepsilon^{\prime}}(X_{22}:ZX_{10}X_{11}X_{20})_{\rho}-6\log\frac{3}{\epsilon^{\prime\,3}}\] \[\qquad\qquad\qquad+\frac{3}{2}\log\delta^{\prime}+3\log\varepsilon-6+\mathcal{O}(1) \tag{14}\]
_Remark 1_: Note that, to take the intersection of the private regions for the two 3-sender MACs raised in Theorem 1, we used the method of [26]. Another approach is to use _Fourier-Motzkin elimination_ [Appendix D, 26], which gives an achievable rate region similar to the Han-Kobayashi expression.
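For readers unfamiliar with the elimination step mentioned in Remark 1, here is a generic, hedged sketch (ours, not the exact bookkeeping of [26]) of Fourier-Motzkin elimination for linear rate inequalities \(A\mathbf{r}\leq b\).

```python
import numpy as np

def fourier_motzkin_eliminate(A, b, j):
    """Project the polyhedron {r : A @ r <= b} onto the coordinates != j."""
    pos = [i for i in range(len(b)) if A[i, j] > 0]
    neg = [i for i in range(len(b)) if A[i, j] < 0]
    zero = [i for i in range(len(b)) if A[i, j] == 0]
    rows, rhs = [], []
    for i in zero:                               # inequalities untouched by r_j
        rows.append(np.delete(A[i], j)); rhs.append(b[i])
    for i in pos:                                # combine each upper bound on r_j
        for k in neg:                            # with each lower bound on r_j
            rows.append(np.delete(A[i] / A[i, j] - A[k] / A[k, j], j))
            rhs.append(b[i] / A[i, j] - b[k] / A[k, j])
    return np.array(rows), np.array(rhs)

# toy region in (R1, R2, R0): eliminate an auxiliary split rate R0 >= 0
A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [0.0, 0.0, -1.0]])
b = np.array([2.0, 3.0, 0.0])
print(fourier_motzkin_eliminate(A, b, j=2))      # yields R1 <= 2, R2 <= 3
```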
_Remark 2_: The Han-Kobayashi technique is based on rate splitting. It should be noted that the split messages are not independent of each other. Thus, obtaining secrecy against the eavesdropper by Wyner's randomization technique becomes problematic in this setting. In other words, we cannot randomize over a block independently. For example, \(m_{1}\) should be randomized using the product of two junk variables (\(k_{10}\cdot k_{11}\)).
_Conjecture: (An inner bound on the one-shot secrecy capacity region of the C-QI-WTC) Consider the region:_
\[\mathcal{R}(N)=\bigcup_{\pi}\{(R_{1},R_{2})\in\mathbb{R}^{2}\,|\,\text{Eqns. (6)-(14) hold}\}\]
_Proof:_ In Appendix B.
_Sketch of proof:_ We consider two sub C-QMA-WTCs. Therefore, from the perspective of the first receiver (\(Y_{1}\)), there are three messages \(\left(m_{10},m_{11},m_{20}\right)\rightarrow\left(Y_{1},Z\right)\), and for the second receiver, there are three messages \(\left(m_{20},m_{22},m_{10}\right)\rightarrow\left(Y_{2},Z\right)\). The paper [27] introduces the same setting, but it considers a randomized order such as \(m_{10}\to m_{20}\to m_{11}\). For the first C-QMA-WTC, Alice should randomize over a total block of size \(k_{10}\cdot k_{11}\); for the second C-QMA-WTC, Bob should randomize over a total block of size \(k_{20}\cdot k_{22}\). Then, we can analyze both sub-channels.
Figure 2: The structure of the C-QI-WTC under the Han-Kobayashi setting.
**Remark 3**: _The above conjecture holds if and only if condition (5) holds, because taking the intersection of the private regions of the two 3-sender C-QMACs is not enough to obtain a private region for the full C-QI-WTC._
To overcome the above problem, we should change the encoding process, which results in the following theorem.
**Theorem 2**: _(An inner bound on the one-shot secrecy capacity region of the C-QI-WTC) Consider the region:_
\[\mathcal{R}(N)=\bigcup_{\pi}\{(R_{1},R_{2})\in\mathbb{R}^{2}\,|\,\text{Eqns. (15)-(28) hold}\}\]
_Proof_: In Appendix C.
_Sketch of proof_: The overall sketch of the proof is the same as that for the above Conjecture, with one difference: suppose that both receivers want to decode the non-interfering messages. This setting is similar to Theorem 1. It can be helpful for the receivers to decode the intended messages as well as the interfering messages; in other words, \(X_{10}\) and \(X_{20}\) can be used as side information. Therefore, the first sub-channel can be modeled as \(\left(X_{10}X_{11}X_{2}\right)\rightarrow\left(Y_{1},Z\right)\). All steps, such as encoding and decoding, are the same as for the above Conjecture.
_Secrecy criterion_: The secrecy criterion for the channel can be defined as follows:
\[I(M_{1},M_{2};Z)\leq\theta\qquad\text{(for Theorem 1)}\]
\[I(M_{10},M_{11},M_{20},M_{22};Z)\leq\theta\qquad\text{(for the Conjecture and Theorem 2)}\]
This means that the mutual information between the sent messages and the wiretapper should be bounded above by an arbitrarily small number.
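For a classical-quantum state \(\sum_{m}p(m)|m\rangle\langle m|\otimes\rho_{Z}^{m}\), this mutual information equals the Holevo quantity; the small NumPy sketch below (ours, with made-up toy states) evaluates it numerically.

```python
import numpy as np

def von_neumann_entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return -np.sum(vals * np.log2(vals))

def holevo_information(probs, states):
    """I(M;Z) of a cq-state: S(avg state) - sum_m p(m) S(rho_Z^m)."""
    avg = sum(p * s for p, s in zip(probs, states))
    return von_neumann_entropy(avg) - sum(p * von_neumann_entropy(s)
                                          for p, s in zip(probs, states))

# toy example: two messages leaking through nearly identical eavesdropper states
probs = [0.5, 0.5]
states = [np.diag([0.9, 0.1]), np.diag([0.85, 0.15])]
print(holevo_information(probs, states))   # small value: criterion nearly met
```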
## V Discussion And Future Works
In this paper, the problem of secure communication over a quantum interference channel has been studied. The main approach for decoding the sent messages is simultaneous decoding (the one-shot quantum joint typicality lemma) [19]. Also, we used the method of [27] to randomize Eve's knowledge and to calculate the leaked information. The stated Conjecture gives a one-shot achievable rate region for the C-QI-WTC in the form of the Han-Kobayashi rate region. Still, it is not clear how the secrecy requirement for this channel can be concluded from the secrecy criteria of the sub C-QMA-WTCs. However, Theorem 2 solves this problem using a new encoding.
## Appendix
### (Proof of the Theorem 1)
The channel in Figure 1 can be split into two sub C-QMA-WTCs with classical inputs: one from both users to \(\left(Y_{1},Z\right)\) and another from both users to \(\left(Y_{2},Z\right)\). Finally, the overall achievable secrecy rate region can be calculated as:
\[\mathcal{R}_{C-QI-WTC}\leq\min\{\mathcal{R}_{C-QMA-WTC_{1}},\mathcal{R}_{C- QMA-WTC_{2}}\}\]
Consider the first sub-channel. From Sen's jointly typical decoder [19] and [Lemma 3.2, 27], it is clear that:
\[R_{1}\leq I_{H}^{\varepsilon}(X_{1} :X_{2}Y_{1}|Q)_{\rho}-I_{max}^{\eta}(X_{1} :Z|Q)_{\rho}+\log\epsilon-1-\log\frac{3}{\epsilon^{\prime\,3}}\] \[+\frac{1}{4}\log\delta\] \[R_{2}\leq I_{H}^{\varepsilon}(X_{2} :X_{1}Y_{1}|Q)_{\rho}-I_{max}^{\eta}(X_{1} :ZX_{2}|Q)_{\rho}+\log\epsilon-1\] \[-\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{4}\log\delta\] \[R_{1}+R_{2}\leq I_{H}^{\varepsilon}(X_{1} X_{2} :Y_{1}|Q)_{\rho}-I_{max}^{\eta}(X_{1} :Z|Q)_{\rho}\] \[-I_{max}^{\eta}(X_{1} :ZX_{2}|Q)_{\rho}+\log\epsilon-1-2\log\frac{3}{\epsilon^{\prime\,3}}\] \[+\frac{1}{2}\log\delta+\mathcal{O}(1)\]
There are similar rates for the second sub-channel. Taking the intersection of the derived regions for the two sub-channels completes the proof.
_Secrecy criterion:_ The secrecy constraint requires that Eve be able to obtain only a negligible amount of information:
\[I_{max}^{\eta}\left(M_{1}M_{2}\!:\!Z\right)_{\rho}\leq\vartheta\]
It is obvious that [Lemma 3.2, 27] guarantees the secrecy criterion.
### (Proof of the Conjecture)
To bypass the problem raised in Remark 1 and recover the non-corner points in the secrecy rate region, we use rate splitting. We apply the following setting:
We consider two sub C-QMA-WTCs. Therefore, from the perspective of the first receiver \(\left(Y_{1}\right)\), there are three messages \(\left(m_{10},m_{11},m_{20}\right)\rightarrow\left(Y_{1},Z\right)\), and for the second receiver, there are three messages \(\left(m_{20},m_{22},m_{10}\right)\rightarrow\left(Y_{2},Z\right)\). The paper [27] introduces the same setting, but it considers a randomized order such as \(m_{10}\to m_{20}\to m_{11}\). This order has no impact on decoding the messages, but it is helpful for computing the leaked information. It should also be noted that in the one-shot case we do not use the successive decoder, because the time-sharing strategy gives only finitely many achievable rate pairs. Instead, we use the one-shot jointly typical decoder [19] for both sub-channels.
For the first C-QMA-WTC, Alice should randomize over a total block of size \(k_{10}\cdot k_{11}\). This reflects the fact that the split messages are dependent. There is a detailed discussion in [28].
For the C-QI-WTC, the controlling state is as follows:
\[R_{1}\leq\min\{I_{H}^{e}(X_{10}X_{11}\colon Y_{1}X_{2})_{\rho},I_{H}^{e}(X_{1 }\colon Y_{2}X_{20}X_{22})_{\rho}\}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}} (X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{11} \colon ZX_{10}X_{20})_{\rho}-2\log\frac{3}{\epsilon^{\prime\,3}}\] \[\qquad\qquad\qquad\qquad+\frac{1}{2}\log\delta^{\prime}+\log \epsilon-2+\mathcal{O}(1) \tag{15}\]
\[R_{1}\leq\{I_{H}^{e}(X_{11}\colon Y_{1}X_{10}X_{2})_{\rho}+I_{H}^{e}(X_{1 0}\colon Y_{1}X_{11}X_{2})_{\rho},I_{H}^{e}(X_{1}X_{20}\colon Y_{2}X_{22})_{ \rho},I_{H}^{e}(X_{1}X_{22}\colon Y_{2}X_{20})_{\rho}\}-I_{max}^{\delta^{\prime }-\epsilon^{\prime}}(X_{10}\colon Z)_{\rho}\] \[\qquad\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\epsilon^{\prime }}(X_{11}\colon ZX_{10}X_{20})_{\rho}-2\log\frac{3}{\epsilon^{\prime\,3}}+ \frac{1}{2}\log\delta^{\prime}+\log\epsilon-2+\mathcal{O}(1)\] (16-17)
\[R_{2}\leq\min\{I_{H}^{e}(X_{20}X_{22}\colon Y_{2}X_{2})_{\rho},I_{H}^{e}(X_{ 2}\colon Y_{1}X_{10}X_{11})_{\rho}\}-I_{H}^{e}(X_{20}X_{22}\colon Y_{2}X_{10} )_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{20}\colon ZX_{10})_{\rho}\] \[\qquad\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\epsilon^{ \prime}}(X_{22}\colon ZX_{10}X_{11}X_{20})_{\rho}-2\log\frac{3}{\epsilon^{ \prime\,3}}+\frac{1}{2}\log\delta^{\prime}+\log\epsilon-2+\mathcal{O}(1) \tag{18}\]
\[R_{2}\leq\{I_{H}^{e}(X_{22}\colon Y_{2}X_{20}X_{1})_{\rho}+I_{H}^{e}(X_{20} \colon Y_{2}X_{22}X_{1})_{\rho},I_{H}^{e}(X_{2}X_{10}\colon Y_{1}X_{11})_{ \rho},I_{H}^{e}(X_{2}X_{11}\colon Y_{1}X_{10})_{\rho}\}-I_{max}^{\delta^{ \prime}-\epsilon^{\prime}}(X_{20}\colon ZX_{10})_{\rho}\] \[\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X _{22}\colon ZX_{10}X_{11}X_{20})_{\rho}-2\log\frac{3}{\epsilon^{\prime\,3}}+ \frac{1}{2}\log\delta^{\prime}+2\log\epsilon-4+\mathcal{O}(1)\] (19-21)
\[R_{1}+R_{2}\leq\min\{I_{H}^{e}(X_{11}X_{10}X_{2}\colon Y_{1})_{\rho},I_{H}^{e} (X_{22}X_{20}X_{1}\colon Y_{2})_{\rho}\}-I_{max}^{\delta^{\prime}-\epsilon^{ \prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X _{11}\colon ZX_{10}X_{2})_{\rho}\] \[\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}( X_{20}\colon ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{22} \colon ZX_{1}X_{20})_{\rho}-4\log\frac{3}{\epsilon^{\prime\,3}}+\log\delta^{ \prime}+2\log\epsilon-4+\mathcal{O}(1) \tag{22}\]
\[R_{1}+R_{2}\leq\{I_{H}^{e}(X_{11}\colon Y_{1}X_{10}X_{2})_{\rho}+I_{H}^{e}(X_{ 10}X_{20}\colon Y_{1}X_{11})_{\rho},I_{H}^{e}(X_{22}\colon Y_{2}X_{20}X_{1})_{ \rho}+I_{H}^{e}(X_{20}X_{10}\colon Y_{1}X_{11})_{\rho},I_{H}^{e}(X_{22}X_{1} \colon Y_{2}X_{20})_{\rho}\] \[\qquad\qquad\qquad+I_{H}^{e}(X_{20}\colon Z_{22}X_{1}X_{20})_{ \rho},I_{H}^{e}(X_{11}X_{2}\colon Y_{1}X_{10})_{\rho}+I_{H}^{e}(X_{10}\colon X _{11}X_{2}Y_{1})_{\rho}\}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{10} \colon Z)_{\rho}\] \[\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X _{11}\colon ZX_{20}X_{10})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}( X_{20}\colon ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{22} \colon ZX_{10}X_{11}X_{20})_{\rho}-4\log\frac{3}{\epsilon^{\prime\,3}}+\log\delta^{\prime}\] \[\qquad\qquad\qquad+2\log\epsilon-4+\mathcal{O}(1)\] (23-26)
\[2R_{1}+R_{2}\leq I_{H}^{e}(X_{20}X_{1}\colon Y_{2}X_{22})_{\rho}+I_{H}^{e}(X_{ 1}X_{22}\colon Y_{2}X_{20})_{\rho}-2I_{max}^{\delta^{\prime}-\epsilon^{\prime}}( X_{10}\colon Z)_{\rho}-2I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{11} \colon ZX_{10}X_{20})_{\rho}\] \[\qquad\qquad\qquad-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X _{20}\colon ZX_{10})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{22} \colon ZX_{10}X_{11}X_{20})_{\rho}-6\log\frac{3}{\epsilon^{\prime\,3}}+\frac{3}{2} \log\delta^{\prime}+2\log\epsilon-4+\mathcal{O}(1) \tag{27}\]
\[R_{1}+2R_{2}\leq I_{H}^{e}(X_{10}X_{2}\colon Y_{1}X_{11})_{\rho}+I_{H}^{e}(X_{ 2}X_{11}\colon Y_{1}X_{10})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}( X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{11}\colon ZX_{10}X_{2 })_{\rho}\] \[\qquad\qquad\qquad-2I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X _{20}\colon ZX_{10})_{\rho}-2I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{22} \colon ZX_{10}X_{11}X_{20})_{\rho}-6\log\frac{3}{\epsilon^{\prime\,3}}+\frac{3}{2} \log\delta^{\prime}++2\log\epsilon-4\] \[\qquad\qquad\qquad+\mathcal{O}(1) \tag{28}\]
Also, for the second C-QMAC there are similar rates. It should be noted that \(R_{1}=R_{10}+R_{11}\) and \(R_{2}=R_{20}+R_{22}\). After eliminating redundant rates and using _Fourier-Motzkin elimination_, we have:
\[\mathcal{R}_{C-QC}=\] \[R_{1}^{\prime}\leq I_{H}^{e}(X_{10}X_{11}\colon Y_{1}X_{20})_{\rho}+ \log\epsilon-2\] \[R_{1}^{\prime}\leq I_{H}^{e}(X_{11}\colon Y_{1}X_{20}X_{20})_{\rho}+I _{H}^{e}(X_{10}\colon Y_{2}X_{20}X_{22})_{\rho}+2\log\epsilon-4\] \[R_{2}^{\prime}\leq I_{H}^{e}(X_{20}X_{22}\colon Y_{2}X_{10})_{\rho}+ \log\epsilon-2\] \[R_{2}^{\prime}\leq I_{H}^{e}(X_{20}\colon Y_{1}X_{10}X_{11})_{\rho}+ I_{H}^{e}(X_{22}\colon Y_{2}X_{10}X_{20})_{\rho}+2\log\epsilon-4\] \[R_{1}^{\prime}+R_{2}^{\prime}\leq I_{H}^{e}(X_{11}\
\[R_{1}\leq I_{H}^{e}(X_{10}X_{11}\colon Y_{1}X_{2})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{ \prime}-\epsilon^{\prime}}(X_{11}\colon ZX_{10}X_{2})_{\rho}-2\log\frac{3}{ \epsilon^{\prime\,3}}+\frac{1}{2}\log\delta^{\prime}+\log\epsilon-2+\mathcal{O}(1) \tag{30}\]
\[R_{1}\leq I_{H}^{e}(X_{11}\colon Y_{1}X_{10}X_{2})_{\rho}+I_{H}^{e }(X_{10}\colon Y_{1}X_{11}X_{2})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{ \prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_ {11}\colon ZX_{10}X_{2})_{\rho}-2\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{2 }\log\delta^{\prime} \tag{31}\] \[\qquad\qquad\qquad\qquad\qquad+2\log\epsilon-4+\mathcal{O}(1) \tag{32}\]
\[R_{2}\leq I_{H}^{e}(X_{2}\colon Y_{1}X_{10}X_{11})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{20}\colon ZX_{10})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{22}\colon ZX_{10}X_{11}X_{20})_{\rho}-2 \log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{2}\log\delta^{\prime}+\log \epsilon-2 \tag{33}\] \[\qquad\qquad\qquad\qquad+\mathcal{O}(1) \tag{34}\]
\[R_{1}+R_{2}\leq I_{H}^{e}(X_{11}\colon Y_{1}X_{10}X_{2})_{\rho}+I_{H}^{e }(X_{10}X_{20}\colon Y_{1}X_{11})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{ \prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X _{11}\colon ZX_{10}X_{2})_{\rho} \tag{35}\]
\[R_{1}+R_{2}\leq I_{H}^{e}(X_{11}\colon Y_{1}X_{10}X_{2})_{\rho}+I_{H}^{e}(X_{1 0}X_{20}\colon Y_{1}X_{11})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime} }(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{11} \colon ZX_{10}X_{2})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{20 }\colon ZX_{10})_{\rho} \tag{36}\]
\[R_{1}+R_{2}\leq I_{H}^{e}(X_{11}X_{10}X_{2}\colon Y_{1})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{ \prime}-\epsilon^{\prime}}(X_{11}\colon ZX_{10}X_{2})_{\rho}-I_{max}^{\delta^ {\prime}-\epsilon^{\prime}}(X_{20}\colon ZX_{10})_{\rho} \tag{37}\]
\[R_{1}+2R_{2}\leq I_{H}^{e}(X_{10}X_{2}\colon Y_{1}X_{11})_{\rho}+I_{H}^{e}(X_{ 2}X_{11}\colon Y_{1}X_{10})_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime} }(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{11} \colon ZX_{10}X_{2})_{\rho} \tag{38}\]
\[R_{1}+R_{2}\leq I_{H}^{e}(X_{11}X_{10}X_{2}\colon Y_{1})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{ \prime}-\epsilon^{\prime}}(X_{11}\colon ZX_{10}X_{2})_{\rho}-I_{max}^{\delta^ {\prime}-\epsilon^{\prime}}(X_{20}\colon ZX_{10})_{\rho} \tag{39}\]
\[R_{1}+R_{2}\leq I_{H}^{e}(X_{11}X_{10}X_{2}\colon Y_{1}X_{10})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{ \prime}-\epsilon^{\prime}}(X_{11}\colon ZX_{10}X_{2})_{\rho}-I_{max}^{\delta^ {\prime}-\epsilon^{\prime}}(X_{20}\colon ZX_{10})_{\rho} \tag{40}\]
\[R_{1}+R_{2}\leq I_{H}^{e}(X_{11}X_{10}X_{2}\colon Y_{1}X_{10})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{ \prime}-\epsilon^{\prime}}(X_{11}\colon ZX_{10}X_{2})_{\rho} \tag{41}\]
\[R_{1}+R_{2}\leq I_{H}^{e}(X_{11}X_{10}X_{2}\colon Y_{1})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{ \prime}-\epsilon^{\prime}}(X_{11}\colon ZX_{10}X_{2})_{\rho} \tag{42}\]
\[R_{1}+R_{2}\leq I_{H}^{e}(X_{11}X_{10}X_{2}\colon Y_{1}X_{10})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{10}\colon Z)_{\rho}-I_{max}^{\delta^{ \prime}-\epsilon^{\prime}}(X_{11}\colon ZX_{10}X_{2})_{\rho} \tag{43}\]
\[R_{1}+R_{2}^{e}(X_{11}X_{2}\colon Y_{1}X_{20}X_{10})_{\rho}+I_{H}^{e}(X_{22}X_{20 }X_{10}\colon Y_{2})_{\rho} \tag{44}\]
\[R_{1}+2R_{2}\leq I_{H}^{e}(X_{11}X_{2}\colon Y_{1}X_{10}X_{20})_{\rho}-I_{max}^{ \delta^{\prime}-\epsilon^{\prime}}(X_{22}\colon ZX_{10}X_{11}X_{20})_{\rho}-6 \log\frac{3}{\epsilon^{\prime\,3}}+\frac{3}{2}\log\delta^{\prime}+\log\epsilon-2+ \mathcal{O}(1) \tag{45}\]
\[R_{1}+2R_{2}\leq I_{H}^{e}(X_{10}X_{2}\colon Y_{1}X_{11})_{\rho}+I_{H}^{e}(X_{22} X_{20}X_{10}\colon Y_{2})_{\rho} \tag{46}\]
\[R_{1}+R_{2}^{e}\leq I_{H}^{e}(X_{11}X_{2}\colon Y_{1}X_{10})_{\rho}+I_{H}^{e}(X_{22 }X_{20}X_{10}\colon Y_{2})_{\rho} \tag{47}\]
\[2R_{1}^{\prime}+R_{2}^{e}\leq I_{H}^{e}(X_{11}\colon Y_{1}X_{10}X_{20})_{\rho}+I_{H}^{e }(X_{10}X_{22}\colon Y_{2}X_{20})_{\rho} \tag{48}\]
\[R_{1}^{\prime}+2R_{2}^{e}\leq I_{H}^{e}(X_{11}X_{20}\colon Y_{1}X_{10})_{\rho}+I_{H}^{e }(X_{22}\colon Y_{2}X_{10}X_{20})_{\rho} \tag{49}\]
\[\log|\mathcal{K}_{22}|\geq I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{22}:ZX_{10}X_{11}X_{20})_{\rho}+\log\frac{3}{\epsilon^{\prime\,3}}-\frac{1}{4}\log\delta^{\prime}\\ +\mathcal{O}(1)\]
_the following holds,_
\[\mathbb{E}_{\substack{x_{10}\sim p_{X_{10}}\\ x_{11}\sim p_{X_{11}}\\ x_{20}\sim p_{X_{20}}\\ x_{22}\sim p_{X_{22}}}}\left\|\frac{1}{|\mathcal{K}_{1}||\mathcal{K}_{2}|}\sum_{k=1}^{|\mathcal{K}_{22}|}\sum_{l=1}^{|\mathcal{K}_{11}|}\sum_{j=1}^{|\mathcal{K}_{20}|}\sum_{i=1}^{|\mathcal{K}_{10}|}\rho_{x_{10}^{i}x_{20}^{j}x_{11}^{l}x_{22}^{k}}^{Z}-\rho^{Z}\right\|_{1}\leq 60\,\delta^{\frac{1}{9}}\]
_Proof:_ The proof is similar to the two-user case explained in [27].
As mentioned before, let \(k_{1}=k_{10}\cdot k_{11}\) and \(k_{2}=k_{20}\cdot k_{22}\). Note that \(R_{1}=R_{1}^{\prime}-\log k_{1}\) and \(R_{2}=R_{2}^{\prime}-\log k_{2}\). Using the above lemma completes the proof.
### (Proof of the Theorem 2)
As mentioned in Appendix A, the secrecy constraint requires that Eve be able to obtain only a negligible amount of information:
\[I_{max}^{\eta}(m_{10}m_{11}m_{20}m_{22}:Z)_{\rho}\leq\theta \tag{39}\]
_Encoding_: Suppose that both receivers want to decode non-interfering messages. This setting is similar to Theorem 1. It can be helpful for the receivers to decode their messages, including the intended messages and interfering messages. In other words, \(X_{10}\) and \(X_{20}\) can be used as side information. Therefore, the first sub-channel can be modeled as \((X_{10}X_{11}X_{2})\rightarrow(Y_{1},Z)\).
Consider the first C-QMA-WTC \((X_{10}X_{11}X_{2})\rightarrow(Y_{1},Z)\). From [27], we know that an achievable rate region can be calculated as stated in (30)-(38).
For the second C-QMA-WTC \((X_{1}X_{20}X_{22})\rightarrow(Y_{2},Z)\), there are similar achievable rates. Taking the intersection of the secrecy regions for both sub-channels can be calculated as stated in (15)-(28). Against Conjecture, Lemma 1 guarantees that the secrecy constraint for this problem (39) holds. This completes the proof.
|
2306.08287 | MIXALIME: MIXture models for ALlelic IMbalance Estimation in
high-throughput sequencing data | Modern high-throughput sequencing assays efficiently capture not only gene
expression and different levels of gene regulation but also a multitude of
genome variants. Focused analysis of alternative alleles of variable sites at
homologous chromosomes of the human genome reveals allele-specific gene
expression and allele-specific gene regulation by assessing allelic imbalance
of read counts at individual sites. Here we formally describe an advanced
statistical framework for detecting the allelic imbalance in allelic read
counts at single-nucleotide variants detected in diverse omics studies
(ChIP-Seq, ATAC-Seq, DNase-Seq, CAGE-Seq, and others). MIXALIME accounts for
copy-number variants and aneuploidy, reference read mapping bias, and provides
several scoring models to balance between sensitivity and specificity when
scoring data with varying levels of experimental noise-caused overdispersion. | Georgy Meshcheryakov, Sergey Abramov, Aleksandr Boytsov, Andrey I. Buyan, Vsevolod J. Makeev, Ivan V. Kulakovskiy | 2023-06-14T06:49:28Z | http://arxiv.org/abs/2306.08287v6 | # MIXALIME: MIXture models for ALlelic IMbalance Estimation in high-throughput sequencing data
###### Abstract
Modern high-throughput sequencing assays efficiently capture not only gene expression and different levels of gene regulation but also a multitude of genome variants. Focused analysis of alternative alleles of variable sites at homologous chromosomes of the human genome reveals allele-specific gene expression and allele-specific gene regulation by assessing allelic imbalance of read counts at individual sites. Here we formally describe an advanced statistical framework for detecting the allelic imbalance in allelic read counts at single-nucleotide variants detected in diverse omics studies (ChIP-Seq, ATAC-Seq, DNase-Seq, CAGE-Seq, and others). **MIXALIME** accounts for copy-number variants and aneuploidy, reference read mapping bias, and provides several scoring models to balance between sensitivity and specificity when scoring data with varying levels of experimental noise-caused overdispersion.
Footnote 1: Institute of Protein Research, Russian Academy of Sciences, Puschino, Russia
Footnote 2: Altius Institute for Biomedical Sciences, Seattle, WA, United States
Footnote 3: Moscow Institute of Physics and Technology, Moscow, Russia
Footnote 4: Vavilov Institute of General Genetics, Russian Academy of Sciences, Moscow, Russia
## 1 Introduction
Let \(\{c_{i}\}_{i=1}^{n}\) denote random variables (r.vs.) that model a single read emitted from a chromosome carrying a reference allele of a single-nucleotide variant (SNV) in a high-throughput sequencing experiment with a total number of reads at the SNV being \(n\). Straightforward reasoning implies that \(c_{i}\) is a Bernoulli r.v. with some success probability \(p\): \(c_{i}\sim\texttt{Bernoulli}(p)\). Likewise, reads from an alternative allele are distributed as \(\hat{c_{i}}\sim\texttt{Bernoulli}(1-p)\); \(p\) is usually a known fixed value (e.g. in the case of a diploid genome without any copy-number variants, \(p=\frac{1}{2}\)). The number of reads supporting a reference allele \(x\) (or an alternative allele \(y\)) is distributed as a binomial random variable by definition, as a sum of i.i.d. Bernoulli r.vs.:
\[y\sim\texttt{Binom}(n,p),\ \ x\sim\texttt{Binom}(n,1-p). \tag{1}\]
The model assumption holds for the case when there is no allele-specificity (AS), and we expect AS SNVs to deviate from this model, which can be tested with the simple two-tailed binomial test (Collins, 2010). This reasoning sometimes suffices and is employed by existing methods such as **AlleleSeq**(Rozowsky et al., 2011). A more robust approach is to assume that \(p\) is not a fixed known value, but a \(\texttt{Beta}(\alpha,\beta)\) r.v., where \(\alpha,\beta\) are parameters to be estimated. Parameter \(p\) can be marginalized out by integration and then
\[y\sim\texttt{BetaBinom}(n,\alpha_{x},\beta_{x}),\ \ x\sim\texttt{BetaBinom}(n, \alpha_{y},\beta_{y}).\]
For instance, this approach is followed by **StratAS**(Grishin and Gusev, 2022) with parameters estimated locally in continuous genomic regions with copy numbers known from external annotation.
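A hedged sketch (ours) of the naive binomial scoring mentioned above, i.e. a two-tailed binomial test of allelic imbalance at a single SNV; the read counts are made-up illustrative numbers.

```python
from scipy.stats import binomtest

x, y = 32, 14          # reference / alternative allele read counts (illustrative)
p = 0.5                # expected reference fraction for a diploid, CNV-free locus
result = binomtest(x, n=x + y, p=p, alternative='two-sided')
print(result.pvalue)   # a small p-value suggests allelic imbalance at this SNV
```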
One alternative approach is to assume that \(y\) is distributed as a negative binomial random variable conditioned on \(x\), and vice versa (Graze et al., 2012):
\[y\sim\texttt{Avg}(x,p),\ \ x\sim\texttt{Avg}(y,1-p)\]
There are numerous ways to motivate the choice of NB as a read counts distribution:
1. Non-rigorously, we know that there were at least \(r=y\) failures among a total of \(n=x+y\) read "attempts". This corresponds to the way NB is usually introduced intuitively;
2. Read counts \(y\) can be thought of as a Poisson random variable whose expectation itself is a Gamma random variable: \[y\sim\mathpzc{Poisson}(\lambda),\ \ \lambda\sim\mathpzc{Gamma}(x,\frac{p}{1-p}).\] This Poisson-Gamma mixture marginalizes to a negative binomial distribution.
3. Let's revisit the binomial distribution once more. The probability of observing \(y\) given a total number of reads \(n=x+y\) is \(\binom{n}{y}p^{y}(1-p)^{n-y}=\binom{x+y}{y}p^{y}(1-p)^{x}\). After the substitution of \(n=x+y\) this expression, however, is no longer a viable probability mass function (PMF) for a varying \(y\): note that what was the \(n\) parameter increases together with \(y\), and, consequently, the value of the expression is non-zero for all \(y\in\mathbb{Z}_{+}\), whereas for the binomial random variable the support is bounded at the fixed value \(n\). A valid PMF is easily obtained by estimating the normalizing constant: \(c=\sum_{y=0}^{\infty}\binom{x+y}{y}p^{y}(1-p)^{x}=\frac{1}{1-p}\) and then: \[f(y|r=x,p)=\frac{\binom{x+y}{y}p^{y}(1-p)^{x}}{c}=\binom{x+y}{y}p^{y}(1-p)^{x+1}=\binom{x+y}{x}p^{y}(1-p)^{x+1}.\] That's very similar to the negative binomial PMF, albeit with a tiny difference - the \(r=x\) parameter is shifted by \(1\). This is actually reasonable, as a read count on an allele is never below \(1\). (A small numerical check of motivations 2 and 3 is given right after this list.)
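The following sketch (ours) numerically checks motivations 2 and 3: the Poisson-Gamma mixture reproduces the NB mean, and the normalizing constant and shifted-\(r\) PMF derived above match scipy's negative binomial.

```python
import numpy as np
from scipy.special import comb
from scipy.stats import gamma, nbinom, poisson

p, x = 0.4, 7

# (3) c = sum_y C(x+y, y) p^y (1-p)^x equals 1/(1-p); the renormalized PMF
#     coincides with a negative binomial whose r parameter is x + 1.
ys = np.arange(0, 2000)
terms = comb(x + ys, ys) * p**ys * (1 - p)**x
print(np.isclose(terms.sum(), 1 / (1 - p)))                   # True
pmf_manual = comb(x + ys, ys) * p**ys * (1 - p)**(x + 1)
# scipy's nbinom.pmf(k, n, q) = C(k+n-1, k) q^n (1-q)^k, so n = x+1, q = 1-p
print(np.allclose(pmf_manual, nbinom.pmf(ys, x + 1, 1 - p)))  # True

# (2) Monte-Carlo: Poisson(lambda), lambda ~ Gamma(shape=x, scale=p/(1-p)),
#     compared with the NB mean x*p/(1-p).
rng = np.random.default_rng(0)
lam = gamma.rvs(a=x, scale=p / (1 - p), size=200_000, random_state=rng)
draws = poisson.rvs(lam, random_state=rng)
print(draws.mean(), x * p / (1 - p))
```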
However, the naive binomial approach accounts for neither the problem of reference mapping bias nor the possible presence of CNVs. The less naive beta-binomial and negative-binomial approaches do so only indirectly, by increasing the dispersion of the null distribution. Next, we describe a family of alternative approaches that tackle:
* reference mapping bias;
* CNVs and/or aneuploidy;
* other non-attributed e.g. experiment-specific sources of noise and variation in the underlying data.
All members of the proposed family are based on the negative binomial approach explained above. The necessity of choosing NB over binomial distribution is due to the assumption that the mean number of reads mapped to one allele (proportional to \(r\)) is linearly dependent on the read counts mapped to the other allele. We incorporate this assumption into the model by having \(r\) linearly depend on the read count at the other allele:
\[r(x,b,a)=bx+a, \tag{2}\]
where \(b,a\) are some parameters that ought to be estimated from the data. The adequacy of this assumption is evaluated in Appendix B.
Note that, naturally, we should have considered a hypothetical joint distribution of \(x\) and \(y\) with a probability mass function \(f(x,y)\). This, however, is intractable for us, and we limit ourselves to the conditional distributions \(f(y|x=r_{y}),f(x|y=r_{x})\), i.e. a distribution of reference allele read counts given alternative allele read counts and vice versa. One can think of it as considering a distribution of read counts obtained by taking a horizontal slice at a given \(r_{x}\) level (and a vertical slice for \(x\) at a given \(r_{y}\) level). This is illustrated by Figure 1-B. In turn, we try to approximate the joint distribution of \(x,y\) by considering all possible horizontal and vertical slices, with \(r\) varying with each slice as in Equation 2, which effectively links the "slices" together. The key feature of this approach is that it enables separate scoring of allelic imbalance favoring each of the two alleles. This way we model the reference mapping bias implicitly, as a difference between the \(r\) parameters of the reference and alternative distributions.
## 2 Negative binomial model
We assume linear reference bias as in Equation 2 and a negative binomial distribution of \(y\) for a given \(x\) and, symmetrically, of \(x\) for a given \(y\),
\[y \sim\mathpzc{LeftTruncatedNB}(r(x,b_{x},a_{x}),p,l), \tag{3}\] \[x \sim\mathpzc{LeftTruncatedNB}(r(y,b_{y},a_{y}),p,l),\]
where \(\mathpzc{LeftTruncatedNB}\) is the negative binomial distribution left-truncated at \(l\).
As for the left truncation at \(l\): we introduced it because low-coverage SNVs are often noisy due to SNP-calling errors and should be filtered out from the data, which requires augmenting the distribution function accordingly. If the PMF of some distribution \(\mathcal{D}\) is \(g(y)\) and its cumulative distribution function (CDF) is \(G(y)\), then the PMF of the version of \(\mathcal{D}\) left-truncated at \(l\) is \(f(y)\) (Lawless, 2014):
\[f(y,l)=\frac{g(y)\mathbb{1}_{y\geq l}}{1-G(l)}, \tag{4}\]
where 1 is an indicator function.
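A hedged sketch (ours) of Equations 2-4 for scoring one allele with scipy's nbinom as \(g\) and \(G\); note that scipy's success probability corresponds to \(1-p\) in the notation used here, and the parameter values below are made up.

```python
import numpy as np
from scipy.stats import nbinom

def left_truncated_nb_pmf(y, x, b, a, p, l):
    """PMF of y | x under Equation 3 with truncation as written in Equation 4."""
    r = b * x + a                      # linear reference-bias term, Equation 2
    g = nbinom.pmf(y, r, 1 - p)        # untruncated NB pmf g(y)
    # G(l) taken literally from Equation 4; depending on the CDF convention,
    # G(l - 1) may be preferable so that the truncated PMF sums exactly to one.
    G_l = nbinom.cdf(l, r, 1 - p)
    return np.where(y >= l, g / (1 - G_l), 0.0)

y = np.arange(0, 40)
print(left_truncated_nb_pmf(y, x=15, b=1.0, a=0.5, p=0.5, l=5)[:8])
```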
## 3 Beta negative binomial model
Similarly to a beta-binomial model, we can assume that \(p\sim\mathpzc{Beta}(\alpha,\beta)\). We apply a convenient reparametrization of \(\mathpzc{Beta}\) in terms of its mean \(\mu\) and "concentration" \(\kappa\)(Meshcheryakov et al., 2022):
\[p\sim\mathpzc{Beta}(\mu,\kappa),\ \alpha=\mu\kappa,\ \beta=(1-\mu)\kappa. \tag{5}\]
Note that \(\mathbb{E}[p]=\mu\) and \(var[p]=\frac{\mu(1-\mu)}{\kappa+1}\). It follows that \(\mu\) is indeed the mean of \(p\) and \(\kappa\) controls the variance of \(p\): the higher the "concentration" \(\kappa\), the lower the variance. Then, by combining the previous model from Equation 3 with Equation 5 and integrating out \(p\), we obtain the following model:
\[y \sim\mathpzc{LeftTruncatedBetaNB}(r(x,b_{x},a_{x}),\mu,\kappa,l), \tag{6}\] \[x \sim\mathpzc{LeftTruncatedBetaNB}(r(y,b_{y},a_{y}),\mu,\kappa,l),\]
where \(\mathpzc{LeftTruncatedBetaNB}\) is the beta negative binomial distribution (Wang, 2011) left-truncated at \(l\). Likewise, the computation of the truncated distribution is performed with the general Equation 4.
Note that as \(\kappa\to\infty\), the beta negative binomial model converges to the negative binomial model. In practice, due to the limitations of numerical algebra and finiteness of underlying data, the exact convergence does not happen. This might lead to a loss of sensitivity as the beta negative binomial distribution is heavy-tailed. Therefore, the beta negative binomial model should not be used as a general substitute for the regular negative binomial model, but can be used as a very conservative test.
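A hedged numerical sketch (ours) of the \((\mu,\kappa)\) reparametrization in Equation 5 and of the convergence discussed above: the Beta-marginalized NB PMF approaches the plain NB PMF as \(\kappa\) grows.

```python
import numpy as np
from scipy import integrate
from scipy.stats import beta, nbinom

mu = 0.5
for kappa in (5.0, 50.0, 500.0):
    a, b = mu * kappa, (1 - mu) * kappa               # alpha, beta of Equation 5
    prior = beta(a, b)
    assert np.isclose(prior.mean(), mu)
    assert np.isclose(prior.var(), mu * (1 - mu) / (kappa + 1))

    # marginalize p out of the NB pmf by numerical integration
    r, xs = 20, np.arange(0, 80)
    marginal = np.array([integrate.quad(
        lambda p, x=x: nbinom.pmf(x, r, 1 - p) * prior.pdf(p), 0, 1)[0]
        for x in xs])
    # the gap to the plain NB pmf (p fixed at mu) shrinks as kappa grows
    print(kappa, np.abs(marginal - nbinom.pmf(xs, r, 1 - mu)).max())
```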
## 4 Marginalized compound negative binomial model
When formulating NB (Equation 3) and BetaNB (Equation 6) models, we assumed that the read count at the preselected (fixed) allele is precisely known. In practice this is not true, as read counts are prone to errors, i.e. the read counts at a preselected allele should be a random variable themselves. We consider an assumption
Figure 1: Heatmap and density plots for allelic read counts. The heatmap is plotted for counts starting at 5 (as counts below 5 were filtered from the dataset as noisy observations). **A**. Binomial distributions \(P(x|x+y=45),P(y|x+y=45)\) fitted to all counts that lie on the black dashed line. **B**. Negative binomial distributions \(P(x|y=15),P(y|x=15)\) fitted to counts that lie on the red and the orange dashed lines for reference and alternative allele read counts respectively.
that measurements of an alternative allele read count \(y\) are distributed as a zero-truncated binomial random variable (next _ZIPBin_) to be a reasonable one. The zero-truncation is necessary to accommodate for the two facts: allele-specificity does not make sense at homozygous SNVs, and technically \(y>0\) in a \(NB\) distribution.
Let's consider the following model:
\[y\sim\mathpzc{NB}(\hat{x},p),\ \hat{x}\sim\mathpzc{ZIPBin}(r,1-p). \tag{7}\]
It turns out that \(\hat{x}\) can be marginalized out and a marginal distribution of \(y\) can be obtained:
\[f_{\mathpzc{MCNB}}(y|r,p)=\frac{r(p-1)^{2}p^{r+y-1}\,_{2}F_{1}\left(1-r,y+1;2;-\frac{(1-p)^{2}}{p}\right)}{1-p^{r}}, \tag{8}\]
where \(\,{}_{2}F_{1}\) is the Gauss hypergeometric function defined as \(\,{}_{2}F_{1}(a,b;c;z)=\sum_{i=0}^{\infty}\beta_{i},\ \beta_{0}=1,\ \frac{\beta_{i+1}}{\beta_{i}}=\frac{(i+a)(i+b)}{(i+c)(i+1)}z\). We avoid computing it using the definition and instead use the recurrent formulae for \(f_{\mathpzc{MCNB}}\) that we have inferred (see Appendix D). We shall call this law the "Marginalized compound negative binomial" distribution, or MCNB for short (Meshcheryakov et al., 2023). For a proof of why Equation 8 holds, see Appendix C. So, the model proposed in this section is
\[\begin{split}& y\sim\mathpzc{LeftTruncatedMCNB}(r(x,b_{x},a_{x}),p,l),\\ & x\sim\mathpzc{LeftTruncatedMCNB}(r(y,b_{y},a_{y}),p,l).\end{split} \tag{9}\]
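A hedged numerical check (ours) of Equation 8: for integer \(r\), the closed-form PMF computed with scipy's Gauss hypergeometric function agrees with direct marginalization of the compound model in Equation 7 over the zero-truncated binomial variable.

```python
import numpy as np
from scipy.special import hyp2f1
from scipy.stats import binom, nbinom

def mcnb_pmf(y, r, p):
    """f_MCNB(y | r, p), the closed form of Equation 8."""
    return (r * (p - 1) ** 2 * p ** (r + y - 1)
            * hyp2f1(1 - r, y + 1, 2, -(1 - p) ** 2 / p)) / (1 - p ** r)

def mcnb_pmf_by_marginalization(y, r, p):
    """Sum over x_hat ~ zero-truncated Binom(r, 1-p) of NB(y | x_hat, p)."""
    xs = np.arange(1, r + 1)
    w = binom.pmf(xs, r, 1 - p) / (1 - binom.pmf(0, r, 1 - p))   # ZTBin weights
    return np.sum(w * nbinom.pmf(y, xs, 1 - p))

r, p = 10, 0.5                       # illustrative integer r and p
ys = np.arange(0, 15)
closed = np.array([mcnb_pmf(t, r, p) for t in ys])
direct = np.array([mcnb_pmf_by_marginalization(t, r, p) for t in ys])
print(np.allclose(closed, direct))   # True
```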
Note (see Table 1) that the interpretation of the \(r\) parameter for MCNB differs significantly from NB and BetaNB. That is because for the latter two \(r\) can be thought of as a count of the alternative allele, whereas here \(r\) was introduced as the total number \(x+y\) of read counts. It becomes more evident as \(\mathbb{E}[x]\to rp\) for \(r\to\infty\), which is exactly 2 times greater than the mean of NB for \(p=\frac{1}{2}\). We shall deal with this nuisance later in Section 6.
## 5 Mixture model
Suppose that \(p\neq\frac{1}{2}\) (or \(\mu\neq\frac{1}{2}\) in the case of BetaNB), which happens for SNVs located in CNVs or on duplicated chromosomes. For instance, there might be 3 copies of a maternal allele and one copy of a paternal allele in a tetraploid organism, which results in \(p_{m}=\frac{3}{4}\) and \(p_{p}=\frac{1}{4}=1-p_{m}\). Most of the time, the completely phased personal genome and even partial haplotypes are not available, i.e. the exact numbers of copies of the reference and the alternative allele for any particular SNP remain unknown. However, it is often possible to estimate the ratio of the major to the minor allele copy numbers, that is, the relative background allelic dosage (BAD), directly from SNP calls with an unsupervised approach (Abramov and Boytsov, 2022) or from an experimentally obtained CNV map (Abramov et al., 2021).
We tackle this problem by assuming that each read is coming from the one (e.g.'maternal') chromosome with probability \(w\) and from the other chromosome (e.g. paternal) with probability \(1-w\), where the balance between \(w\) and \(1-w\) reflects BAD. This is done naturally with the mixture distribution:
\[f_{mix}(x|p,\hat{\theta})=wf(x|p,\hat{\theta})+(1-w)f(x|1-p,\hat{\theta}),\]
where \(f\) is a distribution function of either NB, BetaNB or MCNB models, \(\hat{\theta}\) is a parameter vector with \(p\) excluded, \(p=\frac{BAD}{BAD+1}\) and \(w\) is a weight in the mixture model (see Figure 2), an active parameter to be estimated from the data.
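A small sketch (ours) of the BAD-aware mixture with a left-truncated NB component \(f\); the parameter values are illustrative only.

```python
import numpy as np
from scipy.stats import nbinom

def nb_component(y, r, p, l):
    """Left-truncated NB component, following Equation 4."""
    g = nbinom.pmf(y, r, 1 - p)
    return np.where(y >= l, g / (1 - nbinom.cdf(l, r, 1 - p)), 0.0)

def f_mix(y, r, bad, w, l):
    p = bad / (bad + 1)                       # major-allele fraction for this BAD
    return w * nb_component(y, r, p, l) + (1 - w) * nb_component(y, r, 1 - p, l)

y = np.arange(5, 60)
print(f_mix(y, r=20, bad=2, w=0.6, l=5)[:5])  # illustrative parameter values
```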
## 6 Regularization by reparametrization of \(r\)
Following the original definition, the \(p\) variable is effectively linked to BAD and can be interpreted as a fraction of copies of a genome segment carrying the major allele of an SNP. This interpretation stands for the binomial (Equation 1) or the NB model (Equation 3).
\begin{table}
\begin{tabular}{c|c|c|c} & NB & MCNB & BetaNB \\ \hline \(\mathbb{E}[x]\) & \(\frac{rp}{1-p}\) & \(\frac{rp}{1-p^{r}}\) & \(\frac{r\mu\kappa}{(1-\mu)\kappa-1}\) \\ \(var[x]\) & \(\frac{rp}{(1-p)^{2}}\) & \(\frac{pr\left(p^{2}+(p^{2}r-p^{2}-pr-1)p^{r}+1\right)}{(1-p)(1-p^{r})^{2}}\) & \(\frac{(\kappa-1)\mu\kappa r(\kappa(\mu-1)-r+1)}{(\kappa(\mu-1)+1)^{2}(\kappa(\mu-1)+2)}\) \\ \(\frac{var[x]}{\mathbb{E}[x]}\) & \(\frac{1}{1-p}\) & \(\frac{pr}{p^{r}-1}+\frac{p^{2}+1}{1-p}+pr\) & \(\frac{(1-\kappa)(\kappa(\mu-1)-r+1)}{(\kappa(\mu-1)+1)(\kappa(\mu-1)+2)}\) \\ \end{tabular}
\end{table}
Table 1: Mean and variances for NB, MCNB and BetaNB distributions. For the derivation of MCNB moments, see Appendix E.
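A Monte-Carlo sanity check (ours) of the MCNB column of Table 1: simulating the compound model of Equation 7 and comparing empirical moments with the tabulated formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
r, p = 12, 0.6
q = 1 - p

# x_hat from a zero-truncated Binomial(r, 1-p), by rejection of zeros
x_hat = rng.binomial(r, q, size=400_000)
x_hat = x_hat[x_hat > 0]
# y | x_hat from an NB with x_hat "successes" (prob q) counting p-events
y = rng.negative_binomial(x_hat, q)

mean_formula = r * p / (1 - p ** r)
var_formula = (p * r * (p ** 2 + (p ** 2 * r - p ** 2 - p * r - 1) * p ** r + 1)
               / ((1 - p) * (1 - p ** r) ** 2))
print(y.mean(), mean_formula)
print(y.var(), var_formula)
```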
However, the BetaNB mean value is shifted relative to NB for small values of \(\kappa\). For MCNB this interpretation is also misleading due to the different nature of the \(r\) parameter, which reflects a total read count rather than the read count supporting an alternative allele (see Table 1). Therefore, to make BetaNB and MCNB work adequately with BADs higher than 1 and maintain interpretability of the \(r\) parameter, we transform it for MCNB and BetaNB so that the expected values of the distributions agree with that of NB:
\[r_{\mathpzc{MCNB}}=r\frac{1-p^{r}}{1-p},\ r_{\mathpzc{BetaNB}}=r\frac{(1-\mu)\kappa-1}{\kappa(1-\mu)}.\]
Figure 2: Graphical representation of the mixture distribution idea for the case when \(BAD=2\). Two components of the \(f_{mix}\) mixture are colored differently. The same logic applies to CNVs as well.
\(r_{\mathpzc{MCNB}}\) merely rescales the \(r\) parameter and does not constrain the parameter space. \(r_{\mathpzc{BetaNB}}\), on the other hand, links \(r\) and \(\kappa\) together; this link is more prominent for low values of the concentration parameter \(\kappa\) and vanishes for high values of \(\kappa\) (which is expected, as BetaNB converges to NB as \(\kappa\to\infty\)).
## 7 Parameter estimation
Model parameters are estimated with the maximum likelihood approach, i.e. by maximizing the log-likelihood
\[\mathcal{L}(\theta,X,Y,l)=\sum_{i=1}^{n}\ln f(y_{i}|\theta,x_{i},l), \tag{10}\]
where \(X=\{x_{i}\}_{i=1}^{n}\), \(Y=\{y_{i}\}_{i=1}^{n}\) (i.e. a pair \((X,Y)\) symbolizes the whole dataset, with \(X,Y\) being alternative allele counts and reference allele counts respectively). Symmetrically, we do the same for \(\mathcal{L}(\theta,Y,X,l)=\sum_{i=1}^{n}\ln f(x_{i}|\theta,y_{i},l)\).
Equation 10 is maximized with the Sequential Least Squares Programming (SLSQP) algorithm (Kraft, 1988) provided by the **scipy** package (Virtanen et al., 2020).
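A stripped-down sketch (ours) of this estimation step on synthetic data: fitting the reference-bias parameters \((b,a)\) of the NB model by minimizing the negative of Equation 10 with SLSQP, with \(p\) fixed and truncation omitted for brevity; the real MIXALIME objective also handles truncation, mixture weights and windowing.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

rng = np.random.default_rng(2)
p = 0.5
x = rng.integers(5, 60, size=2000)                 # fixed (alternative) counts
y = rng.negative_binomial(1.0 * x + 2.0, 1 - p)    # synthetic "reference" counts

def neg_loglik(theta):
    b, a = theta
    r = b * x + a                                  # Equation 2
    return -np.sum(nbinom.logpmf(y, r, 1 - p))     # negative of Equation 10

res = minimize(neg_loglik, x0=np.array([0.5, 1.0]), method='SLSQP',
               bounds=[(1e-6, None), (1e-6, None)])
print(res.x)                                        # should be close to (1.0, 2.0)
```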
### Regularization to enlarge \(\kappa\) in the BetaNB model
The BetaNB model tends to provide ultra-conservative P-value estimates (see Section 9 for details of the scoring procedure). This happens because the beta negative binomial distribution is significantly more heavy-tailed than a negative binomial distribution for small values of \(\kappa\). Therefore, it might be useful to compromise the goodness of fit for greater sensitivity of the model by encouraging higher values of the \(\kappa\) parameter. On the other hand, we also observed that high-coverage data has lower variance, i.e. higher values of \(\kappa\) are expected for high values of \(y\). We introduce a regularization that accommodates this observation by assuming that
\[\frac{1}{\kappa}\sim\mathpzc{Laplace}(0,b(\alpha,y,n)),\ b(\alpha,y,n)=\alpha ny, \tag{11}\]
where the Laplace distribution has the probability density function \(g(z|b)=\frac{1}{2b}e^{-\frac{|z|}{b}}\), \(b\) is a scale parameter, \(\alpha\) is a regularization multiplier/hyperparameter, \(n\) is a total number of observations in a current window, \(y\) is the fixed allele value. Here, the scale parameter gradually increases as we slide farther across the dataset with the window (see Section 7.3), and the window size multiplier \(n\) attempts to make \(\alpha\) more dataset-agnostic.
When applying this regularization, instead of MLE we use the maximum-a-posteriori (MAP) approach: instead of maximizing the log-likelihood \(\sum_{i=1}^{n}\ln f_{\mathpzc{BetaNB}}(x_{i}|\theta,\kappa)\) with respect to \((\theta,\kappa)\) as in Equation 10, we maximize a sum of logarithms of joint densities of \(x\) and \(\kappa\):
\[p(x,\kappa|\theta)=f_{\mathpzc{BetaNB}}(x|\kappa,\theta)h(\kappa|\theta),\]
where \(h(\kappa|\theta)\) is the PDF of \(\kappa\), which can be obtained from the assumption given in Equation 11:
\[h(\kappa|\theta)=\frac{\partial}{\partial\kappa}P(z\leq\kappa)=\frac{\partial}{\partial\kappa}\left(1-P\left(\frac{1}{z}\leq\frac{1}{\kappa}\right)\right)=\frac{1}{\kappa^{2}}g\left(\frac{1}{\kappa}\right).\]
So, maximizing the MAP objective \(\hat{\mathcal{L}}\) is equivalent to maximizing the ML objective \(\mathcal{L}\) in Equation 10 with an extra penalty term:
\[\hat{\mathcal{L}}(\theta,X,Y,l)=\mathcal{L}(\theta,X,Y,l)-n\left(\ln(2b)+ \frac{1}{\kappa b}+2\ln\kappa\right).\]
### Standard errors of parameter estimates
**MIXALIME** can produce standard errors of MLEs on user request. Standard errors are calculated with the help of the Rao-Cramer inequality, which provides a lower bound on the estimates' variance:
\[var(\hat{\theta})\geq\mathcal{I}(\theta)^{-1},\]
where \(\hat{\theta}\) is a vector of MLEs, and \(\mathcal{I}\) is a Fisher information matrix. The theoretical or "expected" Fisher information is intractable for NB, BetaNB, and MCNB, thus we use the observed Fisher information \(\hat{\mathcal{I}}\) instead, defined as
\[\hat{\mathcal{I}}=-\sum_{i=1}^{n}\frac{\partial^{2}}{\partial\theta^{2}}\ln f( x_{i}|\theta,y_{i},l)=-\frac{\partial^{2}}{\partial\theta^{2}}\mathcal{L}( \theta,X,Y,l). \tag{12}\]
In other words, we use the negative Hessian matrix as an approximation to the expected Fisher information matrix. The Hessian is computed by the **JAX** framework.
Nota bene: Although **MIXALIME** will output standard errors when requested for MAP estimates of the BetaNB regularized model (see Section 7.1), they should be ignored.
### Parameter estimation with the sliding window
In particular cases, including pooled datasets, the reference bias of Equation 2 does not adequately reflect the observed read counts at high-coverage SNVs. This happens both due to non-linearity that is not taken into account and due to the fact that high-coverage SNVs occur at a systematically lower frequency, which grants them lower weight in the parameter estimation procedure. For complex data, instead of fitting a single model to the whole dataset, it is more reliable to fit multiple models with a sliding window scanning a range of counts supporting the preselected allele. For example, when estimating the parameters for scoring reference alleles, the subsets are chosen in a sliding window with respect to the alternative allele counts \(y\): the window is expanded in both directions from \(y\) until the number of observations in the window reaches a predefined user-set value \(m\) (\(m=10000\) is enough in practice), see Figure 4.
Given that the parameter estimation is performed with the maximum likelihood estimation (MLE) method, the "windowed" approach corresponds to local likelihood estimation (Tibshirani & Hastie, 1987).
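The window construction can be sketched as follows (illustrative only, not the MIXALIME code): starting from the target fixed-allele value, neighboring values are added symmetrically until at least \(m\) observations are collected.

```
import numpy as np

def build_window(y_values, y0, m=10000):
    """Boolean mask selecting observations whose fixed-allele count lies in a window around y0,
    expanded symmetrically until the window holds at least m points (or the data are exhausted)."""
    y_min, y_max = y_values.min(), y_values.max()
    lo = hi = y0
    mask = (y_values == y0)
    while mask.sum() < m and (lo > y_min or hi < y_max):
        lo, hi = max(lo - 1, y_min), min(hi + 1, y_max)
        mask = (y_values >= lo) & (y_values <= hi)
    return mask

rng = np.random.default_rng(0)
y = rng.geometric(0.2, size=50_000) + 4      # synthetic fixed-allele counts (>= 5)
window = build_window(y, y0=8)
print(window.sum(), y[window].min(), y[window].max())
```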
## 8 Estimating gradient
We rely on the automatic differentiation framework **JAX** (Bradbury et al., 2018) to obtain an analytical gradient of the log-likelihood function in Equation 10. This, obviously, requires the PMF of the model in Equation 4 to be differentiable in the first place. This condition is met if we compute \(G(l)\) straightforwardly according to the definition: \(G(l)=\sum_{n=0}^{l}f(n)\). However, as the truncation boundary \(l\) increases, so does the computational burden: note that each evaluation of \(f\), both for the negative binomial \(f_{\mathfrak{NB}}\) and the beta negative binomial \(f_{\mathfrak{Beta}\mathfrak{NB}}\), requires evaluations of Euler's gamma function \(\Gamma\) and the beta function \(B\):
\[f_{\mathfrak{NB}}(x|\theta)=\frac{\Gamma(x+r)}{\Gamma(r)\Gamma(x+1)}(1-p)^{r}p^{x},\ \ f_{\mathfrak{Beta}\mathfrak{NB}}(x|\theta)=\frac{B(r+x,\kappa)}{B(r,\mu\kappa)}\frac{\Gamma(x+(1-\mu)\kappa)}{x!\Gamma((1-\mu)\kappa)}.\]
In Appendices F and G we propose differentiable numerical schemes to calculate the CDFs \(G_{\mathfrak{NB}}\) and \(G_{\mathfrak{Beta}\mathfrak{NB}}\) of the negative binomial and beta negative binomial distributions, respectively, whose computational complexity does not depend on \(l\).
Figure 4: Schematic explanation of the procedure used for building a window. Here, a window is built for reference allele counts \(x\) around the horizontal slice at \(y=3\). In the heatmaps, pink-colored areas contain "points" inside a window.
## 9 Scoring individual SNVs
The SNV scoring scheme can be outlined as follows:
1. Obtain model parameter estimates using reference allele counts conditioned on the alternative allele counts (and vice versa);
2. Calculate right-sided p-values and effect size estimates for all observations;
3. Combine p-values across samples (e.g. replicates) with the Mudholkar-George logit method (Mudholkar & George, 1983), as illustrated in the sketch after this list;
4. Estimate a weighted-average effect size across samples/replicates (see Section 9.2);
5. For each SNV, select the lesser of the two combined p-values and its corresponding effect size as the final quantitative estimate of allele-specificity.
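A sketch of the logit combination in step 3, using the standard Student-t approximation for the Mudholkar-George statistic (this mirrors, but is not guaranteed to match exactly, the MIXALIME implementation):

```
import numpy as np
from scipy import stats

def combine_logit(pvals):
    """Mudholkar-George logit combination of independent p-values (sketch): the statistic
    -sum(ln(p/(1-p))), rescaled, is approximately Student-t with 5k+4 degrees of freedom."""
    p = np.asarray(pvals, dtype=float)
    k = p.size
    stat = -np.sum(np.log(p) - np.log1p(-p))
    scale = np.sqrt(3.0 * (5.0 * k + 4.0) / (k * np.pi ** 2 * (5.0 * k + 2.0)))
    return stats.t.sf(stat * scale, df=5 * k + 4)

print(combine_logit([0.03, 0.20, 0.01]))
```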
### Computation of p-values
A right-sided p-value is defined as \(p=P(z\geq x)\); that is, one should be able to compute the CDF of a given distribution. Although p-values could be computed directly for all the models following the definition \(\mathrm{cdf}(x)=\sum_{z=0}^{x-1}f(z)\), that approach has two downsides, namely:
* PMFs of all distributions available in **MIXALIME** require computations of gamma, beta and, in the case of MCNB, hypergeometric functions. They can be computed only approximately, and each PMF in the summation introduces an additional error, making the most important low p-values unreliable;
* Excess computations, since the full summation must be evaluated for every observation.
Therefore, we use recurrent formulae for the calculation of p-values. In Appendix D we derive the recurrent formula for the MCNB distribution. For the negative binomial model, we use the Panjer recursion (Sundt & Jewell, 1981), and for the beta negative binomial model we take advantage of the formulae provided by Hesselager (1994). Thus, for the negative binomial model,
\[f_{\mathfrak{NB}}(x|r,p)=p\,\frac{x+r-1}{x}\,f_{\mathfrak{NB}}(x-1|r,p),\ f_{\mathfrak{NB}}(0|r,p)=(1-p)^{r}\]
and for the beta negative binomial model
\[f_{\mathfrak{Beta}\mathfrak{NB}}(x|r,\mu,\kappa)= \frac{(x+r-1)(x+(1-\mu)\kappa-1)}{x(x+\kappa+r-1)}f_{\mathfrak{ Beta}\mathfrak{NB}}(x-1|r,\mu,\kappa),\] \[f_{\mathfrak{Beta}\mathfrak{NB}}(0|r,\mu,\kappa)= \frac{\Gamma(\kappa)\Gamma((1-\mu)\kappa+r)}{\Gamma((1-\mu)\kappa )\Gamma(\kappa+r)}.\]
When computing p-values using the recurrent formulae specified above, we rely on the multiple-precision arithmetic package **gmpy2** for improved numerical stability.
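As an illustration of the recursion-based scoring, the sketch below computes a right-sided p-value for the negative binomial case with plain floats (ignoring the left truncation and without the **gmpy2** multiple-precision arithmetic used in practice):

```
def nb_right_pvalue(x, r, p):
    """Right-sided p-value P(z >= x) for the negative binomial model, using the recursion
    f(k) = p*(k+r-1)/k * f(k-1) with f(0) = (1-p)**r (plain floats, truncation ignored)."""
    f = (1.0 - p) ** r
    cdf = 0.0                                  # accumulates P(z <= x-1)
    for k in range(x):
        cdf += f
        f *= p * (k + r) / (k + 1)             # advance from f(k) to f(k+1)
    return 1.0 - cdf

print(nb_right_pvalue(25, r=10.0, p=0.5))
```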
### Computation of effect size estimates
Let, once again, \(x\) be a random variable distributed according to one of the models discussed above, representing a read count, and let \(\hat{x}\) be a realization of this random variable (i.e. an observed read count from the data). Then, we define the effect size (ES) as:
\[ES_{x}(\hat{x})=\log_{2}(\mathbb{E}[x])-\log_{2}(\hat{x}).\]
We combine effect sizes across replicates/samples as a weighted average, where weights are negative logarithms of the respective p-values.
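A sketch of the effect-size computation and the weighted averaging across replicates; the expected counts, observed counts, and p-values below are placeholders.

```
import numpy as np

def effect_size(expected_count, observed_count):
    # ES = log2(E[x]) - log2(x_hat), as defined above.
    return np.log2(expected_count) - np.log2(observed_count)

# Weighted average across replicates, with -log(p) weights as described in the text.
es = np.array([effect_size(14.0, 22.0), effect_size(15.0, 19.0), effect_size(13.0, 25.0)])
pvals = np.array([0.004, 0.060, 0.001])
combined_es = np.average(es, weights=-np.log(pvals))
print(combined_es)
```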
## 10 Differential allele-specificity
**MIXALIME** also provides machinery to test for differential allele-specificity between two sample groups (i.e. control and test). We employ the Wald or likelihood-ratio test (LRT) to see if there is a difference in parameter estimates between the two groups:
1. We obtain parameter estimates for the whole dataset as explained in the previous sections. Usually, it results in numerous estimates of \(b,\mu,\kappa,w\) - one for each window (see Section 7.3);
2. We take parameter estimates from the windows that correspond to the fixed allele counts present in the control/test group;
3. We fix those parameters, but obtain MLEs of \(p\) using 1D optimization for both the control and test groups: \(p_{control}\) and \(p_{test}\), respectively;
4. We use the chosen test (Wald or LRT; Wald is the default option) to see if the difference between \(p_{control}\) and \(p_{test}\) is statistically significant (a sketch of the Wald comparison is given at the end of this section). Briefly, * **Wald test**: We use the asymptotic distribution of the MLEs of \(p_{control}\) and \(p_{test}\) (a normal distribution with variance equal to the inverse of the Fisher information; here, we use the observed Fisher information instead of the expected one, as in Equation 12) to test whether their difference is significant. * **LRT**: Here, we employ the asymptotic \(\chi^{2}(1)\) distribution of the log-likelihood ratio of the free and constrained (nested) models to see if the constrained model results in a significant decrease of the likelihood. We assume the free model to be a model with two independent parameters \(p_{control}\) and \(p_{test}\) for samples from the control and test groups respectively (practically, its log-likelihood is computed as the sum of the log-likelihoods of two independent models: one estimated on the control group and the other on the test group), and the constrained model is one with a single \(p\) parameter.
Just like in the regular SNV scoring scheme, we apply this algorithm to both \(f(x|y)\) and \(f(y|x)\) to obtain two p-values for each SNV, choosing the smaller of them as the final p-value.
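A sketch of the Wald comparison referenced in step 4, treating the two MLEs as asymptotically normal with variances taken from the inverse observed Fisher information (placeholder numbers):

```
import numpy as np
from scipy import stats

def wald_test(p_control, var_control, p_test, var_test):
    """Two-sided Wald test for a difference between two asymptotically normal MLEs whose
    variances come from the inverse observed Fisher information (sketch)."""
    z = (p_test - p_control) / np.sqrt(var_control + var_test)
    return 2.0 * stats.norm.sf(abs(z))

print(wald_test(p_control=0.52, var_control=2.5e-4, p_test=0.58, var_test=3.0e-4))
```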
## 11 Software implementation
### Technical details
**MIXALIME** is written in the Python programming language. We took advantage of the autodifferentiation and just-in-time compilation provided by the **JAX** framework, and we used optimization routines present in the **scipy** package. For reading and processing input datasets we rely on a combination of the **datatable**, **pandas** (pandas development team, 2020) and **pysam** packages. Implementation-wise, most of the math is done in a separate package named **betanegbinfit** (for the sake of possible usage outside of the task of identifying allele-specific events), whereas the **MIXALIME** package itself is more of a wrapper around it.
This paper covers **MIXALIME** _v2.13.0_ (Meshcheryakov et al., 2023a). It can be installed with the **pip** program:

```
> pip3 install mixalime==2.13
```
### Workflow
A user engages with **MIXALIME** via a command-line interface. The package provides complete documentation of its features, alongside a small tutorial, through the **help** command.
```
> mixalime combine ProjectName group.txt
```
2307.04794 | CHEX-MATE: CLUster Multi-Probes in Three Dimensions (CLUMP-3D), I. Gas
Analysis Method using X-ray and Sunyaev-Zel'dovich Effect Data | Galaxy clusters are the products of structure formation through myriad
physical processes that affect their growth and evolution throughout cosmic
history. As a result, the matter distribution within galaxy clusters, or their
shape, is influenced by cosmology and astrophysical processes, in particular
the accretion of new material due to gravity. We introduce an analysis method
to investigate the 3D triaxial shapes of galaxy clusters from the Cluster
HEritage project with XMM-Newton -- Mass Assembly and Thermodynamics at the
Endpoint of structure formation (CHEX-MATE). In this work, the first paper of a
CHEX-MATE triaxial analysis series, we focus on utilizing X-ray data from XMM
and Sunyaev-Zel'dovich (SZ) effect maps from Planck and ACT to obtain a three
dimensional triaxial description of the intracluster medium (ICM) gas. We
present the forward modeling formalism of our technique, which projects a
triaxial ellipsoidal model for the gas density and pressure to compare directly
with the observed two dimensional distributions in X-rays and the SZ effect. A
Markov chain Monte Carlo is used to estimate the posterior distributions of the
model parameters. Using mock X-ray and SZ observations of a smooth model, we
demonstrate that the method can reliably recover the true parameter values. In
addition, we apply the analysis to reconstruct the gas shape from the observed
data of one CHEX-MATE galaxy cluster, Abell 1689, to illustrate the technique.
The inferred parameters are in agreement with previous analyses for that
cluster, and our results indicate that the geometrical properties, including
the axial ratios of the ICM distribution, are constrained to within a few
percent. With much better precision than previous studies, we thus further
establish that Abell 1689 is significantly elongated along the line of sight,
resulting in its exceptional gravitational lensing properties. | Junhan Kim, Jack Sayers, Mauro Sereno, Iacopo Bartalucci, Loris Chappuis, Sabrina De Grandi, Federico De Luca, Marco De Petris, Megan E. Donahue, Dominique Eckert, Stefano Ettori, Massimo Gaspari, Fabio Gastaldello, Raphael Gavazzi, Adriana Gavidia, Simona Ghizzardi, Asif Iqbal, Scott Kay, Lorenzo Lovisari, Ben J. Maughan, Pasquale Mazzotta, Nobuhiro Okabe, Etienne Pointecouteau, Gabriel W. Pratt, Mariachiara Rossetti, Keiichi Umetsu | 2023-07-10T18:00:06Z | http://arxiv.org/abs/2307.04794v2 | # CHEX-MATE: CLUster Multi-Probes in Three Dimensions (CLUMP-3D)
###### Abstract
Galaxy clusters are the products of structure formation through myriad physical processes that affect their growth and evolution throughout cosmic history. As a result, the matter distribution within galaxy clusters, or their shape, is influenced by cosmology and astrophysical processes, in particular the accretion of new material due to gravity. We introduce an analysis method to investigate the three-dimensional triaxial shapes of galaxy clusters from the Cluster HEritage project with _XMM-Newton_ - Mass Assembly and Thermodynamics at the Endpoint of structure formation (CHEX-MATE). In this work, the first paper of a CHEX-MATE triaxial analysis series, we focus on utilizing X-ray data from _XMM-Newton_ and Sunyaev-Zel'dovich (SZ) effect maps from _Planck_ and Atacama Cosmology Telescope (ACT) to obtain a three dimensional triaxial description of the intracluster medium (ICM) gas. We present the forward modeling formalism of our technique, which projects a triaxial ellipsoidal model for the gas density and pressure to compare directly with the observed two dimensional distributions in X-rays and the SZ effect. A Markov chain Monte Carlo is used to estimate the posterior distributions of the model parameters. Using mock X-ray and SZ observations of a smooth model, we demonstrate that the method can reliably recover the true parameter values. In addition, we apply the analysis to reconstruct the gas shape from the observed data of one CHEX-MATE galaxy cluster, PSZ2 G313.33+61.13 (Abell 1689), to illustrate the technique. The inferred parameters are in agreement with previous analyses for that cluster, and our results indicate that the geometrical properties, including the axial ratios of the ICM distribution, are constrained to within a few percent. With much better precision than previous studies, we thus further establish that Abell 1689 is significantly elongated along the line of sight, resulting in its exceptional gravitational lensing properties.
Key Words.: Galaxies: clusters: general - Galaxies: clusters: intracluster medium - X-rays: galaxies: clusters - Cosmology: observations - (Cosmology:) dark matter - Galaxies: clusters: individual: Abell 1689
## 1 Introduction
Galaxy clusters are useful probes of structure formation, astrophysical processes such as shocks and feedback from active galactic nuclei, and cosmology (Davis et al. 1985; Voit 2005; Allen et al. 2011; Kravtsov & Borgani 2012; Markevitch & Vikhlinin 2007; McNamara & Nulsen 2007). For instance, they are fundamental to the science goals of numerous ongoing and upcoming large survey projects such as _eROSITA_(Predehl et al. 2021), _Euclid_(Euclid Collaboration et al. 2019), and _Rubin Observatory_(Ivezic et al. 2019). In order to maximize the scientific reach of such programs, particularly with regard to cosmological parameter constraints, it is crucial to accurately characterize the ensemble average physical properties of galaxy clusters along with the intrinsic scatter relative to these averages (e.g., Lima & Hu 2005; Zhan & Tyson 2018; Euclid Collaboration et al. 2019). One such example are the scaling relations used to connect global galaxy cluster observables to underlying halo mass (Rozo & Rykoff 2014; Mantz et al. 2016; Pratt et al. 2019). While these scaling relations are generally sensitive to a range of astrophysical processes (e.g., Ansarifard et al. 2020), some observables, including the gravitational weak lensing measurements often used to determine absolute mass, have deviations from average relations that are dominated by projection effects related to asphericity and orientation (Meneghetti et al. 2010; Becker & Kravtsov 2011).
CHEX-MATE (CHEX-MATE Collaboration et al. 2021)1 is an effort to provide a more accurate understanding of the population of galaxy clusters at low-\(z\) and high mass, particularly in the context of cosmology and mass calibration, including the shape of their matter distributions and the effects of the baryonic physics on their global properties. The project is based on a 3 Msec _XMM-Newton_ program to observe 118 galaxy clusters, containing two equal-sized sub-samples selected from the _Planck_ all-sky SZ2 effect survey. The CHEX-MATE Tier-1 and Tier-2 samples each include 61 galaxy clusters with four overlapping clusters and represent
a volume-limited (\(0.05<z\leq 0.2\)) sample in the local universe and mass-limited (\(M_{500}\geq 7.25\times 10^{14}\) M\({}_{\odot}\))3 sample of the most massive objects in the universe, respectively. The X-ray observing program has recently been completed, and initial results from the analyses of these data along with publicly available SZ data have already been published (Campitiello et al. 2022; Oppizzi et al. 2022; Bartalucci et al. 2023).
Footnote 3: The parameter \(M_{500}\) denotes the mass enclosed within a radius (\(R_{500}\)) where the mean overdensity is 500 times the critical density at a specific redshift, and we use the \(M_{500}\) and \(R_{500}\) values from Planck Collaboration et al. (2016a).
We utilize triaxial modeling techniques (e.g., Limousin et al. 2013) to investigate the three-dimensional mass distribution within the CHEX-MATE galaxy clusters to infer their intrinsic properties. This approach is motivated by two reasons: (1) Three-dimensional triaxial shapes provide a better approximation of galaxy clusters than spherical models, and the parameters, such as mass, obtained from such an analysis have lower levels of bias and intrinsic scatter (Becker & Kravtsov 2011; Khatri & Gaspari 2016); (2) A correlation between the triaxial shape of the dark matter (DM) halo and its formation history has been established in simulations (e.g., Ho et al. 2006; Lau et al. 2021; Stapelberg et al. 2022), suggesting that triaxial shape measurements can provide a powerful probe of cosmology independent of other techniques currently in use. For instance, some lensing-based shape measurements have found good agreement with \(\Lambda\)CDM predictions (Oguri et al. 2010; Chiu et al. 2018), while a recent multi-probe triaxial analysis suggests a \(\simeq 2\sigma\) discrepancy between the observed and predicted minor to major axial ratio distributions (Sereno et al. 2018). This discrepancy could indicate that clusters formed more recently than predicted. Alternatively, elevated merger rates (Despali et al. 2017), a reduced influence of baryons on the dark matter (Suto et al. 2017) or enhanced feedback (Kazantzidis et al. 2004) could also explain the observed cluster shapes. CHEX-MATE offers a uniform selection of galaxy clusters with consistent measurements of ICM density and temperature. This clean, well-characterized selection with a large sample size (\(\sim 80\) clusters excluding major mergers; Campitiello et al. 2022) will enable a robust cosmological measurement of the triaxial shape distribution.
For our analysis, we adopted the CLUster Multi-Probes in Three Dimensions (CLUMP-3D; Sereno et al. 2017, 2018; Chiu et al. 2018; Sayers et al. 2021) project and implemented significant updates to the modeling package. CLUMP-3D incorporates multiwavelength data from X-ray (surface brightness and spectroscopic temperature), mm-wave (SZ surface brightness), and optical (gravitational lensing) observations, which are the projected observables. Then, it assumes triaxial distributions of the ICM gas and matter density profiles. Taking advantage of the different dependencies of the X-ray and SZ signals on the gas density and temperature, they probe the line-of-sight extent of the ICM, and gravitational lensing data probes the projected matter distribution. In particular, the X-ray emission observed from the ICM is proportional to the line-of-sight integral of the squared electron density (\(n_{e}\)) multiplied by the X-ray cooling function, \(\Lambda\), represented as \(S_{X}\propto\int n_{e}^{2}\Lambda dl\). Meanwhile, the detected SZ signal is proportional to the line-of-sight integral of the product of electron density and temperature (\(T_{e}\)), denoted as \(B_{\rm SZ}\propto\int n_{e}T_{e}dl\). Given that the ICM temperature (\(T_{X}\)) can be spectroscopically measured using X-ray observations, the line-of-sight elongation (\(\Delta l\)) can subsequently be determined through the combination of these three measurements as \(\Delta l\sim(B_{\rm SZ}^{*}\Lambda)/(S_{X}T_{X}^{*})\). Assuming co-alignement of the triaxial axes of the ICM and dark matter distributions, while still allowing for different axial ratios for the two quantities, our multi-probe analysis can thus constrain the three-dimensional shapes of galaxy clusters. CLUMP-3D was introduced in Sereno et al. (2017), where the authors inferred the triaxial matter and gas distribution of the galaxy cluster MACS J1206.2\(-\)0847. The technique built upon similar methods developed to constrain cluster morphology (e.g., Sereno & Umetsu 2011; Sereno et al. 2012). Then, it was applied to measure the shapes of the Cluster Lensing And Supernova survey with Hubble (CLASH4; Postman et al. 2012) clusters, to probe the ensemble-average three-dimensional geometry (Sereno et al. 2018; Chiu et al. 2018) as well as the radial profile of the non-thermal pressure fraction (Sayers et al. 2021). These results demonstrated the potential of the three-dimensional triaxial shape measurement technique, but they were relatively imprecise due to the sample size, data quality, and systematics related to cluster selection. Thus, the much larger CHEX-MATE galaxy cluster sample, with a well understood selection function and more uniform and higher quality X-ray data, will provide improved statistics and more robust constraints on the shape measurements.
Footnote 4: [https://www.stsci.edu/~postman/CLASH/](https://www.stsci.edu/~postman/CLASH/)
In this paper, we demonstrate several improvements to the original CLUMP-3D formalism while modeling the ICM distributions observed by _XMM-Newton_, _Planck_, and ground-based SZ data from ACT. As detailed below, we have implemented a fully two-dimensional analysis of the X-ray temperature (Lovisari et al. 2023) and SZ data, whereas the original CLUMP-3D only treated the X-ray surface brightness (SB) in two dimensions while using one-dimensional azimuthally-averaged profiles of both the X-ray spectroscopic temperatures and the SZ effect data. In addition, we now model the ICM gas density and pressure instead of its density and temperature. This allows us to fit the data with fewer parameters, thus accelerating the model fitting process. Additionally, we fully rewrote the code in Python to facilitate future public release of the package. In Sec. 2, we summarize the triaxial analysis formalism and describe the model fitting method. In Sec. 3, we introduce the X-ray and SZ data from our program and apply the technique to a CHEX-MATE galaxy cluster. In a subsequent paper, we will include gravitational lensing constraints in a manner that also builds upon, and improves, the existing CLUMP-3D technique. With these X-ray, SZ effect, and gravitational lensing data, we will be able to model the triaxial distributions of both the ICM and the dark matter. Throughout this study, we adopt a \(\Lambda\)CDM cosmology characterized by \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.3\), and \(\Omega_{\Lambda}=0.7\). \(E(z)\) represents the ratio of the Hubble constant at redshift \(z\) to its present value, \(H_{0}\), and \(h_{70}=H_{0}/(100\) s\({}^{-1}\) Mpc\({}^{-1})/0.7\).
## 2 Triaxial Analysis: Formalism and the Model fit
While the mathematical description of a triaxial geometry for astronomical objects and their physical profiles has been introduced in previous studies (e.g., Stark 1977; Binggeli 1980; Binney 1985; Oguri et al. 2003; De Filippis et al. 2005; Corless & King 2007; Sereno et al. 2010, 2012, 2018), these works lack consistency in their notation. To prevent confusion, we present our mathematical formalism for triaxial modeling in this section. We then describe our model fitting procedure and the implementation of the fitting algorithm in our software package.
As this paper focuses on the analysis method to study ICM distributions, we do not include the gravitational lensing data in our fits. Future works in this series will expand our formalism to include total mass density profiles constrained by gravitational lensing measurements. For instance, in the case of a Navarro-Frenk-White (NFW; Navarro et al. 1997) profile the gravitational lensing analysis requires two additional parameters-total mass (\(M_{200}\)) and concentration (\(c_{200}\))-assuming that the gas and matter distributions are co-aligned along the ellipsoidal axes. This assumption is well supported by a 2D weak-lensing and X-ray analysis of 20 high-mass galaxy clusters (Umetsu et al. 2018), as well as by cosmological hydrodynamical simulations (Okabe et al. 2018).
### Geometry and Projection
To connect the intrinsic cluster geometry to the projected properties observed in the plane of the sky, we assume a triaxial ellipsoidal model for the gas distribution, where the thermodynamic profiles of the ICM are represented as a function of \(\zeta\), the ellipsoidal radius. In the intrinsic coordinate system of the ellipsoid \((x_{1},x_{2},x_{3})\), it is defined as:
\[\zeta^{2}=\frac{x_{1}^{2}}{q_{1}^{2}}+\frac{x_{2}^{2}}{q_{2}^{2}}+x_{3}^{2}, \tag{1}\]
where \(q_{1}\) and \(q_{2}\) are minor-to-major and intermediate-to-major axial ratios, respectively (\(0<q_{1}\leq q_{2}\leq 1\)). Given a semi-major axis of the ellipsoid \(l_{s}\), the volume of the ellipsoid is \((4\pi/3)l_{s}^{3}q_{1}q_{2}\). The ellipsoid becomes a prolate shape if \(q_{1}=q_{2}\leq 1\) and an oblate shape if \(q_{1}\leq q_{2}=1\).
Figure 1 illustrates the geometry of the ellipsoid and the involved coordinate systems. It is essential to note that the axes defining the ICM model may not align with the observer's frame. To relate the ellipsoid's intrinsic coordinate system (\(x_{1}^{\rm int}\), \(x_{2}^{\rm int}\), \(x_{3}^{\rm int}\)) to the observer's coordinate frame (\(x_{1}^{\rm obs}\), \(x_{2}^{\rm obs}\), \(x_{3}^{\rm obs}\)), we employ three Euler angles. These angles describe the relationship between the two coordinate systems: (1) the angle between \(x_{3}^{\rm int}\), aligned with the major axis of the ellipsoid, and \(x_{3}^{\rm obs}\), which lies along the observer's line-of-sight (\(\theta\)), (2) the angle between \(x_{1}^{\rm int}\) and the line of nodes (\(\varphi\)), and (3) the angle between \(x_{1}^{\rm obs}\) and the line of nodes (\(\psi\)). The line of nodes is the intersection of the \(x_{1}^{\rm int}\)-\(x_{2}^{\rm int}\) plane and the \(x_{1}^{\rm obs}\)-\(x_{2}^{\rm obs}\) plane, and it is aligned with the vector \(x_{3}^{\rm int}\times x_{3}^{\rm obs}\).
We can derive the geometric properties of the projected ellipse from the intrinsic parameters of the ellipsoid when it is projected onto the plane from any direction. These properties encompass the semi-major axis of the projected ellipse \(l_{p}\), its ellipticity \(\epsilon\), the orientation of the ellipse in the plane of the sky \(\theta_{e}\), and the elongation parameter \(e_{\rm l}\). The projected profiles are expressed as a function of \(\xi\), the elliptical radius of the ellipse in the plane of the sky.
Figure 1: This figure depicts the triaxial ellipsoid model and the coordinate systems used in the triaxial analysis. The intrinsic coordinate system of the ellipsoid is denoted by dotted grey arrows (\(x_{1}^{\rm int}\), \(x_{2}^{\rm int}\), and \(x_{3}^{\rm int}\)), where \(x_{3}^{\rm int}\) represents the major axis. The black arrows (\(x_{1}^{\rm obs}\), \(x_{2}^{\rm obs}\), and \(x_{3}^{\rm obs}\)) correspond to the observer’s coordinate system, where \(x_{3}^{\rm obs}\) is aligned with the observer’s line-of-sight. In other words, an observer views the ellipsoid in the \(-x_{3}^{\rm obs}\) direction. The three Euler angles (\(\theta\), \(\varphi\), \(\psi\)) characterize the intrinsic coordinate system of the ellipsoid in relation to the observer’s coordinate system. The blue line represents the line of nodes, which is the intersection of the \(x_{1}^{\rm int}\)-\(x_{2}^{\rm int}\) plane and the \(x_{1}^{\rm obs}\)-\(x_{2}^{\rm obs}\) plane, and it is aligned with the vector \(x_{3}^{\rm int}\times x_{3}^{\rm obs}\). The red ellipse denotes the projection of the ellipsoid on the sky plane, with \(l_{p}\) representing its semi-major axis. The black dashed line on the ellipse shows the projected major axis of the ellipsoid on the sky plane. The green ellipse is the projection of the ellipsoid onto the plane that is perpendicular to the sky plane, and \(l_{\rm los}\) is the half size of the ellipse along the observer’s line-of-sight. See also Figs. 2 and 3 in Sereno et al. (2012).
The ellipticity of the projected ellipse (\(\epsilon\)) is
\[\epsilon=1-q_{\rm p}, \tag{2}\]
where \(q_{\rm p}\) is the minor-to-major axial ratio of the observed projected isophote (\(q_{\rm p}\leq 1\)), which is the inverse of \(e_{\rm p}\) used in Sereno et al. (2012), and
\[q_{\rm p}=\sqrt{\frac{j+l-\sqrt{(j-l)^{2}+4k^{2}}}{j+l+\sqrt{(j-l)^{2}+4k^{2}}}}, \tag{3}\]
where
\[j = \frac{1}{2}\left[\left(\frac{1}{q_{1}^{2}}+\frac{1}{q_{2}^{2}}\right)-\frac{\sin^{2}\theta\cos^{2}\psi\left(q_{1}^{2}+q_{2}^{2}-2\right)}{q_{1}^{2}q_{2}^{2}}+\left(\frac{1}{q_{1}^{2}}-\frac{1}{q_{2}^{2}}\right)\left\{\cos 2\varphi\left(\cos^{2}\theta\cos^{2}\psi-\sin^{2}\psi\right)-\cos\theta\sin 2\varphi\sin 2\psi\right\}\right],\]
\[k = \frac{1}{4q_{1}^{2}q_{2}^{2}}\left[2\cos\theta\left(q_{1}^{2}-q_{2}^{2}\right)\cos 2\psi\sin 2\varphi+\left\{\sin^{2}\theta\left(q_{1}^{2}+q_{2}^{2}-2\right)+\left(1+\cos^{2}\theta\right)\left(q_{1}^{2}-q_{2}^{2}\right)\cos 2\varphi\right\}\sin 2\psi\right], \tag{4}\]
\[l = \frac{1}{2}\left[\left(\frac{1}{q_{1}^{2}}+\frac{1}{q_{2}^{2}}\right)-\frac{\sin^{2}\theta\sin^{2}\psi\left(q_{1}^{2}+q_{2}^{2}-2\right)}{q_{1}^{2}q_{2}^{2}}+\left(\frac{1}{q_{1}^{2}}-\frac{1}{q_{2}^{2}}\right)\left\{\cos 2\varphi\left(\cos^{2}\theta\sin^{2}\psi-\cos^{2}\psi\right)+\cos\theta\sin 2\varphi\sin 2\psi\right\}\right].\]
It is worth noting that the expressions of \(j\), \(k\), and \(l\) in Stark (1977) and Binggeli (1980) differ from those presented above, as they assumed \(\psi=0\), using only two angles to align the major ellipsoidal axis with the observer's line-of-sight. However, a coordinate transformation requiring \(\psi\) is necessary to align the remaining axes.
The orientation angle in the plane of the sky of the projected ellipse is
\[\theta_{e}=\tan^{-1}\left(\frac{l-j+\sqrt{(j-l)^{2}+4k^{2}}}{2k}\right), \tag{5}\]
and the elongation parameter of the ellipsoid is
\[e_{\rm l}\equiv\frac{l_{\rm los}}{l_{\rm p}}=\sqrt{\frac{q_{\rm p}}{q_{1}q_{2}}} f^{-3/4}, \tag{6}\]
where
\[f=\sin^{2}\theta\left[\left(\frac{\sin\varphi}{q_{1}}\right)^{2}+\left(\frac{ \cos\varphi}{q_{2}}\right)^{2}\right]+\cos^{2}\theta. \tag{7}\]
The elongation parameter, \(e_{\rm l}\), represents the ratio of the size of the ellipsoid along the observer's line-of-sight to the major axis of the projected ellipse in the sky plane, providing a measure of the three dimensional geometry of the triaxial ellipsoid model of the ICM. In the gas analysis presented in Sereno et al. (2012), the orientation angle (\(\theta_{e}\); represented as \(\epsilon\) by Sereno et al. 2012) was determined from the X-ray map, while the elongation parameter (represented as \(e_{\rm\Delta}\), which is equivalent to \(1/e_{\rm l}\)) was estimated from the combined X-ray and SZ analysis. Later, Sereno et al. (2017) simultaneously constrained the individual Euler angles by treating the axial ratios and the three angles as free parameters.
Then, the semi-major axis of the projected ellipse becomes
\[l_{\rm p} = \frac{l_{s}}{e_{\rm l}\sqrt{f}} \tag{8}\] \[= l_{s}\sqrt{\frac{q_{1}q_{2}}{q_{\rm p}}}f^{1/4}, \tag{9}\]
and the projected length scales \(l_{s}\) and \(l_{\rm los}\) are related by the elongation parameter, that is,
\[l_{\rm los}=l_{s}/\sqrt{f}. \tag{10}\]
In the plane of the sky, an elliptical radius \(\xi\) becomes
\[\xi^{2}=\left(x_{1}^{2}+\frac{x_{2}^{2}}{q_{\rm p}^{2}}\right)\left(\frac{l_{s}}{l_{\rm p}}\right)^{2} \tag{11}\]
(Sereno et al. 2010). 5
Footnote 5: Assuming that the ellipse is expressed as \(\frac{x_{1}^{2}}{a_{1}^{2}}+\frac{x_{1}^{2}}{b_{2}^{2}}=1\), \(q_{\rm p}\) is the minor-to-major axial ratio (\(b/a\)), and the elliptical radius, which is the corresponding major axis length, becomes \(\sqrt{x_{1}^{2}+\frac{x_{1}^{2}}{q_{\rm p}^{2}}}\) because \(x_{1}^{2}+\frac{x_{1}^{2}}{b^{2}}x_{2}^{2}=a^{2}\).
Finally, three-dimensional volume density can be projected onto the sky plane by utilizing the geometric parameters,
\[F_{\rm 2D}(\xi;l_{\rm p},p_{i}) = \frac{2}{\sqrt{f}}\int_{\xi}^{\infty}F_{\rm 3D}(\zeta;l_{s},p_{i})\frac{\zeta}{\sqrt{\zeta^{2}-\xi^{2}}}d\zeta, \tag{12}\]
\[F_{\rm 2D}(x_{\xi};l_{\rm p},p_{i}) = 2l_{\rm p}e_{\rm l}\int_{x_{\xi}}^{\infty}F_{\rm 3D}(x_{\zeta};l_{s},p_{i})\frac{x_{\zeta}}{\sqrt{x_{\zeta}^{2}-x_{\xi}^{2}}}dx_{\zeta}, \tag{13}\]
where \(x_{\zeta}=\zeta/l_{s}\), \(x_{\xi}=\xi/l_{\rm p}\), and \(p_{i}\) are the parameters describing the intrinsic density profile (Stark 1977; Sereno 2007; Sereno et al. 2010). Using this projection, we calculate the SZ and X-ray maps on the sky plane from the three-dimensional ellipsoidal distribution of the ICM profiles and fit the model to the data. We describe the analytic profiles (\(F_{\rm 3D}\)) for the physical quantities related to the direct observables (\(F_{\rm 2D}\)) in the next section.
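As a numerical illustration of Eqs. 3 to 7 (a sketch, not the CLUMP-3D implementation), the mapping from intrinsic axial ratios and Euler angles to the projected axial ratio, sky orientation, and elongation can be written as:

```
import numpy as np

def projected_geometry(q1, q2, theta, phi, psi):
    """Projected axial ratio q_p, sky orientation theta_e, and elongation e_l (Eqs. 3-7)
    for a triaxial ellipsoid with axial ratios q1 <= q2 <= 1 and Euler angles in radians."""
    st2, ct = np.sin(theta) ** 2, np.cos(theta)
    sum_q, dif_q = 1.0 / q1 ** 2 + 1.0 / q2 ** 2, 1.0 / q1 ** 2 - 1.0 / q2 ** 2
    j = 0.5 * (sum_q - st2 * np.cos(psi) ** 2 * (q1 ** 2 + q2 ** 2 - 2) / (q1 ** 2 * q2 ** 2)
               + dif_q * (np.cos(2 * phi) * (ct ** 2 * np.cos(psi) ** 2 - np.sin(psi) ** 2)
                          - ct * np.sin(2 * phi) * np.sin(2 * psi)))
    k = (2 * ct * (q1 ** 2 - q2 ** 2) * np.cos(2 * psi) * np.sin(2 * phi)
         + (st2 * (q1 ** 2 + q2 ** 2 - 2)
            + (1 + ct ** 2) * (q1 ** 2 - q2 ** 2) * np.cos(2 * phi)) * np.sin(2 * psi)) \
        / (4 * q1 ** 2 * q2 ** 2)
    l = 0.5 * (sum_q - st2 * np.sin(psi) ** 2 * (q1 ** 2 + q2 ** 2 - 2) / (q1 ** 2 * q2 ** 2)
               + dif_q * (np.cos(2 * phi) * (ct ** 2 * np.sin(psi) ** 2 - np.cos(psi) ** 2)
                          + ct * np.sin(2 * phi) * np.sin(2 * psi)))
    disc = np.sqrt((j - l) ** 2 + 4 * k ** 2)
    q_p = np.sqrt((j + l - disc) / (j + l + disc))                            # Eq. 3
    theta_e = np.arctan((l - j + disc) / (2 * k))                             # Eq. 5
    f = st2 * ((np.sin(phi) / q1) ** 2 + (np.cos(phi) / q2) ** 2) + ct ** 2   # Eq. 7
    e_l = np.sqrt(q_p / (q1 * q2)) * f ** (-0.75)                             # Eq. 6
    return q_p, theta_e, e_l

# Mock-cluster geometry of Sec. 2.5: q1=0.6, q2=0.75, cos(theta)=0.8, phi=-25 deg, psi=60 deg.
print(projected_geometry(0.6, 0.75, np.arccos(0.8), np.deg2rad(-25.0), np.deg2rad(60.0)))
```

For the mock-cluster geometry adopted in Sec. 2.5 this returns \(e_{\rm l}\simeq 1.02\), consistent with the value quoted there.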
### Electron Density and Pressure Profiles
We use smooth analytic functions of the electron density and pressure profiles to describe the thermodynamics and spatial distribution of the ICM, and then use these functions to compute observable quantities, such as the SZ effect map, the X-ray SB map, and the X-ray temperature map. The model lacks the ability to effectively constrain small-scale structures that deviate from its assumptions. However, the three-dimensional description of the profiles provides a better approximation compared to spherical models. After accounting for instrumental effects, such as the point spread function (PSF), these model maps are then compared to the observed data. The original CLUMP-3D package, as detailed in Sereno et al. (2017), instead assumed smooth analytic functions for the gas density and temperature (Vikhlinin et al. 2006; Baldi et al. 2012). However, because the presence (or not) of a cool core alters the overall shape of the temperature profile (e.g., Pratt et al. 2007), the analytic function needs to be sufficiently flexible to allow for either a decrease or increase in temperature at small radii. Pressure profiles are more regular in their global shape (e.g., Arnaud et al. 2010), and therefore a simpler function with fewer free parameters can be used to describe the ICM. Thus, our overall model can be more easily constrained than the one used by Sereno et al. (2017). Table 1 lists the model parameters used in our gas analysis, including the geometric parameters described in the previous section.
The electron density profile is described as
\[n_{e}(\zeta)=n_{0}\left(\frac{\zeta}{\zeta_{c}}\right)^{-\eta_{e}}\left[1+\left(\frac{\zeta}{\zeta_{c}}\right)^{2}\right]^{-3\beta_{e}/2+\eta_{e}/2}\left[1+\left(\frac{\zeta}{\zeta_{t}}\right)^{3}\right]^{-\gamma_{e}/3}, \tag{14}\]
where \(n_{0}\) is the central electron density, \(\zeta_{c}\) is the core radius, and \(\zeta_{t}\) is the tidal radius (\(\zeta_{t}>\zeta_{c}\)). (\(\beta_{e}\), \(\eta_{e}\), \(\gamma_{e}\)) represent the power law exponents of the electron density distribution for the intermediate, inner, and external slopes of the profile, respectively (Vikhlinin et al. 2006; Ettori et al. 2009). The electron pressure profile is modeled using a generalized NFW (gNFW) profile (Navarro et al. 1996; Nagai et al. 2007; Arnaud et al. 2010). It is described as
\[\frac{P_{e}(x)}{P_{500}}=\frac{P_{0}}{(c_{500}x)^{\gamma_{p}}[1+(c_{500}x)^{ \alpha_{p}}]^{(\beta_{p}-\gamma_{p})/\alpha_{p}}}, \tag{15}\]
where \(x=\zeta/R_{500}\), (\(\gamma_{p}\), \(\alpha_{p}\), \(\beta_{p}\)) describes the power law exponent for the central (\(r\ll r_{\rm s}\)), intermediate (\(r\sim r_{\rm s}=R_{500}/c_{500}\)), and outer (\(r\gg r_{\rm s}\)) regions, and the characteristic pressure is
\[P_{500}=1.65\times 10^{-3}E(z)^{8/3}\times\left[\frac{M_{500}}{3\times 10^{14}h_{7 0}^{-1}M_{\odot}}\right]^{2/3}h_{70}^{2}~{}{\rm keV~{}cm^{-3}}. \tag{16}\]
The expressions for \(P_{500}\) provided in Nagai et al. (2007) and Arnaud et al. (2010) represent the gas pressure and the electron pressure, respectively. We opt to use the electron pressure formulation from the latter. In order to convert the electron pressure, \(P_{e}\), into gas pressure, it is necessary to incorporate both the mean molecular weight and the mean molecular weight per free electron into the calculations. As noted by Nagai et al. (2007), strong degeneracies between the pressure profile parameters generally prevent meaningful constraints when all are varied (see also Battaglia et al. 2012). For our baseline fits, we thus fix the values of \(c_{500}\) and \(\gamma_{p}\) to 1.4 and 0.3 as in Sayers et al. (2023). In addition, because \(\beta_{p}\) characterizes the pressure profile in the outer regions, it may not be well-constrained depending on the map size chosen for the actual fit. For the demonstration of our approach using actual CHEX-MATE data in Sec. 3, we restrict the map size of the X-ray and SZ observational data to within \(R_{500}\) to mask out potential spurious signal at large radii that do not originate from a target cluster, and therefore an external constraint on the value of \(\beta_{p}\) is required. In such cases, we use a value that depends on the mass and redshift, given by
\[\beta_{p}=5.495\left(\frac{M_{500}}{10^{15}~{}{\rm M}_{\odot}}\right)^{0.15}(1+ z)^{0.02}~{}. \tag{17}\]
This relation is derived from a combined X-ray and SZ analysis of galaxy clusters with a redshift range of \(0.05\leq z\leq 0.60\) and mass range of \(4\times 10^{14}\leq M_{500}\leq 30\times 10^{14}{\rm M}_{\odot}\)(Sayers et al. 2023). This fit is thus valid for the mass and redshift ranges of the CHEX-MATE clusters, with Tier-1 covering \(0.05<z<0.2\) and \(2\times 10^{14}<M_{500}<9\times 10^{14}{\rm M}_{\odot}\), and Tier-2 encompassing \(z<0.6\) and \(M_{500}>7.25\times 10^{14}{\rm M}_{\odot}\).
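For illustration, the two radial models of Eqs. 14 and 15 can be written as simple Python functions; the sketch below is not the analysis pipeline, and the numerical values (the mock-cluster density parameters of Sec. 2.5 together with placeholder values for \(R_{500}\), \(P_{500}\), and the pressure slopes) are for demonstration only.

```
import numpy as np

def electron_density(zeta, n0, zeta_c, zeta_t, beta_e, eta_e, gamma_e):
    # Eq. 14: Vikhlinin-type density profile; zeta, zeta_c, and zeta_t share the same units.
    return (n0 * (zeta / zeta_c) ** (-eta_e)
            * (1.0 + (zeta / zeta_c) ** 2) ** (-1.5 * beta_e + 0.5 * eta_e)
            * (1.0 + (zeta / zeta_t) ** 3) ** (-gamma_e / 3.0))

def electron_pressure(zeta, R500, P500, P0, c500=1.4, gamma_p=0.3, alpha_p=1.0, beta_p=5.0):
    # Eq. 15: gNFW profile, returned in the units of P500 (keV cm^-3).
    x = c500 * zeta / R500
    return P0 * P500 / (x ** gamma_p * (1.0 + x ** alpha_p) ** ((beta_p - gamma_p) / alpha_p))

zeta = np.logspace(1.0, 3.3, 5)   # ellipsoidal radii in kpc
print(electron_density(zeta, n0=0.002, zeta_c=175.0, zeta_t=1500.0,
                       beta_e=0.6, eta_e=0.3, gamma_e=1.8))
print(electron_pressure(zeta, R500=1500.0, P500=8.0e-3, P0=10.0))
```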
### Sunyaev-Zeldovich Effect and X-ray Observables
In this section, we summarize the observables associated with the SZ effect and the X-ray emissivity, and explain their relationship to the electron density and pressure profiles introduced earlier. The SZ effect is characterized by the Compton-\(y\) parameter, which is proportional to the integrated electron pressure along the line-of-sight.
\[y\equiv\frac{\sigma_{\rm T}}{m_{e}c^{2}}\int_{\parallel}P_{e}dl=\frac{\sigma_{ \rm T}k_{B}}{m_{e}c^{2}}\int_{\parallel}n_{e}T_{e}dl, \tag{18}\]
where \(\sigma_{\rm T}\) is the Thomson cross-section, \(k_{B}\) is the Boltzmann constant, \(n_{e}\) is the electron number density, and \(T_{e}\) is the electron temperature. The X-ray observations are primarily sensitive to the surface brightness,
\[{\rm SB}=\frac{1}{4\pi(1+z)^{3}}\int_{l}n_{e}^{2}\Lambda_{\rm eff}(T_{e},Z)dl \tag{19}\]
(Reese et al., 2010), of the ICM due to thermal Bremsstrahlung, where the cooling function \(\Lambda_{\rm eff}(T_{e},Z)\) quantifies the thermal radiation emitted from a fully ionized plasma due to collisions, taking into account the relative abundance of each chemical element. It can be calculated using software such as XSPEC(Arnaud, 1996). We use a pre-calculated table and interpolate the value in the temperature (\(T_{e}\))-metallicity (\(Z\)) space during the model computation. To calculate the emissivity, the instrument response within the chosen energy band [0.7-1.2] keV and the Galactic hydrogen column density must be taken into account, as explained in Bartalucci et al. (2023), which describes the details of the data analysis used to produce the SB maps. In our software, we perform the calculation using the Python package pyproffit6(Eckert et al., 2020).
Footnote 6: [https://pyproffit.readthedocs.io/en/latest/intro.html](https://pyproffit.readthedocs.io/en/latest/intro.html)
The _XMM-Newton_ data can also be used to derive projected temperature maps of ICM via spectroscopic fits (Lovisari et al., 2023). Within our model, we approximate this spectroscopic temperature based on the formalism of Mazzotta et al. (2004) as follows:
\[T_{\rm sp}=\frac{\int WT_{e}dV}{\int WdV}\;{\rm keV};\;W=\frac{n_{e}^{2}}{T_{e }^{3/4}}, \tag{20}\]
which is valid for Bremsstrahlung (\(T_{e}\geq 3\) keV).
The SZ and X-ray observables (Eqs. 18 and 19) are modeled as projections of the three-dimensional profiles parameterized by the ellipsoidal radius \(\zeta\) (or \(x_{\zeta}\)). The three-dimensional volume density of the models, \(F_{\rm 3D}(x_{\zeta};l_{s},p_{i})\), can be written analytically, and
Table 1: The Gas Model Parameters

| Parameter | Units | Description | Default Prior |
| --- | --- | --- | --- |
| _Geometrical Parameters of a Triaxial Ellipsoid (Eqs. 1 and 4)_ | | | |
| \(q_{\rm ICM,1}\) | | Minor-to-major axial ratio of the ICM distribution | \(\mathcal{U}(0,1)\) |
| \(q_{\rm ICM,2}\) | | Intermediate-to-major axial ratio of the ICM distribution | \(\mathcal{U}(q_{\rm ICM,1},1)\) |
| \(\cos\theta\) | | Cosine of the inclination angle of the ellipsoid major axis | \(\mathcal{U}(0,1)\) |
| \(\varphi\) | deg | Second Euler angle | \(\mathcal{U}(-\pi/2,\pi/2)\) |
| \(\psi\) | deg | Third Euler angle | \(\mathcal{U}(-\pi/2,\pi/2)\) |
| _Electron Density Profile (Eq. 24)_ | | | |
| \(n_{0}\) | cm\({}^{-3}\) | Central scale density of the distribution of electrons | \(\mathcal{U}(10^{-6},10)\) |
| \(\zeta_{c}\) | kpc | Ellipsoidal core radius of the gas distribution | \(\mathcal{U}(0,10^{3})\) |
| \(\zeta_{t}\) | Mpc | Ellipsoidal truncation radius of the gas distribution (\(\zeta_{t}>\zeta_{c}\)) | \(\mathcal{U}(\zeta_{c}/10^{3},3)\) |
| \(\beta_{e}\) | | Slope of the gas distribution (intermediate region) | \(\mathcal{U}(0,3)\) |
| \(\eta_{e}\) | | Slope of the gas distribution (inner region) | \(\mathcal{U}(0,1)\) |
| \(\gamma_{e}\) | | Slope of the gas distribution (outer region) | \(\mathcal{U}(0,5)\) |
| _Gas Pressure Profile (Eq. 15)_ | | | |
| \(P_{0}\) | | Normalization of the gNFW pressure profile | \(\mathcal{U}(0,10^{2})\) |
| \(c_{500}\) | | Pressure profile concentration (\(r\sim r_{\rm s}=R_{500}/c_{500}\)) | \(\delta(1.4)\) |
| \(\gamma_{p}\) | | Slope parameter for the central region (\(r\ll r_{\rm s}\)) | \(\delta(0.3)\) |
| \(\alpha_{p}\) | | Slope parameter for the intermediate region (\(r\sim r_{\rm s}\)) | \(\mathcal{U}(0,5)\) |
| \(\beta_{p}\) | | Slope parameter for the outer region (\(r\gg r_{\rm s}\)) | \(\mathcal{U}(0,15)\)\({}^{(a)}\) |

**Notes.** We consider five geometric parameters (\(q_{\rm ICM,1}\), \(q_{\rm ICM,2}\), \(\theta\), \(\varphi\), \(\psi\)), six electron density parameters (\(n_{0}\), \(\zeta_{c}\), \(\zeta_{t}\), \(\beta_{e}\), \(\eta_{e}\), \(\gamma_{e}\)), and five gas pressure parameters (\(P_{0}\), \(c_{500}\), \(\gamma_{p}\), \(\alpha_{p}\), \(\beta_{p}\)). For the geometric and electron pressure profile parameters, we primarily adopt the priors in Sereno et al. (2018). We also assign delta priors to \(c_{500}\) (=1.4) and \(\gamma_{p}\) (=0.3) by default, resulting in 14 free parameters. In the default prior column, \(\mathcal{U}\) refers to a uniform prior and \(\delta\) refers to a delta function that fixes the parameter for the model fit.

\({}^{(a)}\) For the cluster PSZ2 G313.33+61.13, to which we applied the model fit in this paper (Sec. 3), we employed a delta prior (Eq. 17) because we limited the map size to be within \(R_{500}\), which results in very little sensitivity to \(\beta_{p}\) (Sayers et al., 2023).
the two-dimensional maps are calculated following Eq. 12. The model Compton-\(y\) parameter is
\[y_{\rm model}(x_{\xi};l_{\rm p},p_{i})=\left(2l_{\rm p}e_{\rm l}\right)\left(\frac{\sigma_{\rm T}}{m_{e}c^{2}}\right)\int_{x_{\xi}}^{\infty}P_{e}(x_{\zeta})\frac{x_{\zeta}}{\sqrt{x_{\zeta}^{2}-x_{\xi}^{2}}}dx_{\zeta}, \tag{21}\]
where
\[P_{e}(x_{\zeta})=\frac{P_{0}P_{500}}{\left(c_{500}x_{\zeta}\frac{l_{s}}{R_{500}}\right)^{\gamma_{p}}\left[1+\left(c_{500}x_{\zeta}\frac{l_{s}}{R_{500}}\right)^{\alpha_{p}}\right]^{(\beta_{p}-\gamma_{p})/\alpha_{p}}}\ {\rm keV\ cm^{-3}}. \tag{22}\]
This integration can be computationally expensive, depending on the size of the map. To expedite the calculation, we create a linearly-spaced sample of the (normalized) elliptical radius \(x_{\xi}\) and interpolate the integration results while generating a model map. We apply the same technique in the X-ray observable calculation. Lastly, we convolve the model map with the appropriate PSF shape (e.g., a \(7^{\prime}\) FWHM Gaussian map in the case of _Planck_ and \(1.6^{\prime}\) FWHM in the case of ACT, see Fig. 2).
Similarly, the X-ray SB (Eq. 19) model becomes
\[{\rm SB}_{\rm model}(x_{\xi};l_{\rm p},p_{i})=\left(2l_{\rm p}e_{\rm l}\right)\frac{1}{4\pi(1+z)^{3}}\int_{x_{\xi}}^{\infty}n_{e}^{2}(x_{\zeta})\Lambda_{\rm eff}\left(T_{e}(x_{\zeta}),Z(x_{\zeta})\right)\frac{x_{\zeta}}{\sqrt{x_{\zeta}^{2}-x_{\xi}^{2}}}dx_{\zeta}, \tag{23}\]
where
\[n_{e}(x_{\zeta})=n_{0}\left(x_{\zeta}\frac{l_{s}}{\zeta_{c}}\right)^{-\eta_{e}}\left[1+\left(x_{\zeta}\frac{l_{s}}{\zeta_{c}}\right)^{2}\right]^{-3\beta_{e}/2+\eta_{e}/2}\left[1+\left(x_{\zeta}\frac{l_{s}}{\zeta_{t}}\right)^{3}\right]^{-\gamma_{e}/3}, \tag{24}\]
and the electron temperature is
\[T_{e}(x_{\zeta})=\frac{P_{e}(x_{\zeta})}{n_{e}(x_{\zeta})k_{B}}. \tag{25}\]
We use a radius-dependent metallicity profile \(Z(x_{\zeta})\) obtained from the X-COP galaxy cluster samples (Ghizzardi et al. 2021) for calculating the cooling function.
Upon generating the model, instrumental responses are incorporated to facilitate a direct comparison between the model and the data. For the _XMM-Newton_ X-ray maps, the sky background in the [0.7-1.2] keV band (\(2.49\times 10^{-4}\) cts/s/arcmin\({}^{2}\); Bartalucci et al. 2023) is considered. Specifically, we adopted the sky and particle background measured by the European Photon Imaging Camera (EPIC; Struder et al. 2001; Turner et al. 2001) M2 CCD in the [0.5-2] keV band and converted it for the [0.7-1.2] keV band. After adding the sky background, the vignetting is applied. Subsequently, the resulting map is convolved with a Gaussian profile to account for the PSF. The nominal PSF of _XMM-Newton_ can be closely represented using a Gaussian function with a \(6^{\prime\prime}\) FWHM7. However, the actual FWHM of the PSF is dependent on the angle relative to the optical axis, and combining images from different cameras could potentially deteriorate the final PSF. Therefore, we follow the convention of Bartalucci et al. (2023) and assume the Gaussian has a FWHM of \(10^{\prime\prime}\). The line-of-sight integration of the observed quantities described above is performed to a depth of 10 Mpc in radius.
Footnote 7: [https://xmm-tools.cosmos.esa.int/external/xmm_user_support/documentation/uhb/onaxisrraypsf.html](https://xmm-tools.cosmos.esa.int/external/xmm_user_support/documentation/uhb/onaxisrraypsf.html)
To summarize, the observational data used in our analysis includes two dimensional images of the SZ signal, X-ray SB, and X-ray temperature. Then, we use our triaxial model to generate analogous images based on the model parameters delineated in Table 1. The observed and model-generated images can then be directly compared to facilitate our fitting process, and the method employed for this fitting procedure is elaborated upon in the following section.
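A sketch of the projection in Eq. 21 follows: the line-of-sight integral is evaluated on a coarse grid of normalized elliptical radii and interpolated onto the map pixels, as described above. The substitution used to remove the integrable singularity, the quadrature routine, and the toy pressure profile are illustrative choices rather than the actual implementation.

```
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import interp1d

SIGMA_T_OVER_MEC2 = 1.30e-27   # sigma_T / (m_e c^2) in cm^2 keV^-1 (approximate)

def compton_y_map(pressure_of_xzeta, l_p_cm, e_l, x_xi_pixels, x_max=20.0, n_grid=64):
    """Project an ellipsoidal pressure profile P(x_zeta) [keV cm^-3] to Compton-y following
    Eq. 21. The substitution u = sqrt(x_zeta^2 - x_xi^2) removes the integrable singularity,
    and the integral is evaluated on a coarse grid of x_xi and interpolated onto the pixels."""
    x_grid = np.linspace(x_xi_pixels.min(), x_xi_pixels.max(), n_grid)
    y_grid = np.empty(n_grid)
    for i, x_xi in enumerate(x_grid):
        integrand = lambda u: pressure_of_xzeta(np.sqrt(u ** 2 + x_xi ** 2))
        val, _ = quad(integrand, 0.0, np.sqrt(x_max ** 2 - x_xi ** 2))
        y_grid[i] = 2.0 * l_p_cm * e_l * SIGMA_T_OVER_MEC2 * val
    return interp1d(x_grid, y_grid)(x_xi_pixels)

# Toy usage with an isothermal-beta-like pressure profile and placeholder scales.
pressure = lambda x: 1.0e-2 * (1.0 + (x / 0.3) ** 2) ** (-1.5)
x_pix = np.linspace(0.05, 2.0, 500)
print(compton_y_map(pressure, l_p_cm=3.1e24, e_l=1.0, x_xi_pixels=x_pix)[:3])
```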
### Fitting Formalism
The \(\chi^{2}\) statistic is used to define the likelihood of the model. We use emcee (Foreman-Mackey et al. 2013), a Python-based affine-invariant ensemble Markov Chain Monte-Carlo (MCMC; Goodman & Weare 2010) package, for the model fitting process. By performing MCMC sampling (Hogg & Foreman-Mackey 2018), we determine the posterior distribution of the parameters that describe the triaxial model. When conducting a model fit with the data, we occasionally need to adjust the scale parameter of the stretch move within the affine-invariant ensemble sampling algorithm implemented in the package to enhance performance (Huijser et al. 2015).
We define the \(\chi^{2}\) functions for our analysis below, which are based on two-dimensional maps of the SZ and X-ray data rather than the original one-dimensional radial profiles used in the CLUMP-3D method presented in Sereno et al. (2017). The \(\chi^{2}\) function for the two-dimensional SZ map is
\[\chi^{2}_{\rm SZ}=\sum_{i,j=1}^{N_{y}}\left[y_{i}-\hat{y}_{i}\right]\left(C_{\rm SZ}^{-1}\right)_{ij}\left[y_{j}-\hat{y}_{j}\right], \tag{26}\]
where \(\hat{y}_{i}\) is the model Compton-\(y\) within a pixel, and \(y_{i}\) is the observed value. To deal with the correlated noise in the SZ data, we use the inverse of the uncertainty covariance matrix (\(C_{\rm SZ}^{-1}\)). Similarly, the \(\chi^{2}\) function for the X-ray temperature map becomes
\[\chi_{\rm T}^{2}=\sum_{i=1}^{N_{\rm T}}\left(\frac{T_{{\rm sp},i}-\hat{T}_{{\rm sp },i}}{\delta T_{{\rm sp},i}}\right)^{2}, \tag{27}\]
where \(\hat{T}_{{\rm sp},i}\) is the model spectroscopic temperature within a pixel, and \(T_{{\rm sp},i}\) is the observed value with uncertainty \(\delta T_{{\rm sp},i}\).
For the X-ray SB, we employ a dual approach. We use a two-dimensional model fit within the circular region that encloses 80% of the emission, and a one-dimensional analysis for the outside region, where the background and the source emission are comparable and the signal-to-noise ratio is relatively low. In the exterior region, we compute azimuthal medians in annular bins to mitigate biases in measuring the X-ray SB caused by gas clumping, as suggested by Eckert et al. (2015). While our current analysis solely uses the two-dimensional map of X-ray temperature, in future work we intend to implement an approach that is fully consistent with our treatment of the X-ray SB to also mitigate local deviations from homogeneity in the X-ray temperature data (Lovisari et al., 2023). Then, the combined likelihood becomes
\[\chi_{\rm SB}^{2}=\chi_{{\rm SB},{\rm 1D}}^{2}+\chi_{{\rm SB},{\rm 2D}}^{2} \tag{28}\]
where
\[\chi_{{\rm SB},{\rm 1D}}^{2}=\sum_{i=1}^{N_{\rm SB},{\rm 1D}}\left(\frac{S_{{ \rm X},{\rm 1D},i}-\hat{S}_{{\rm X},{\rm 1D},i}}{\delta S_{{\rm X},{\rm 1D},i}} \right)^{2}, \tag{29}\]
and
\[\chi_{{\rm SB},{\rm 2D}}^{2}=\sum_{i=1}^{N_{\rm SB},{\rm 2D}}\left(\frac{S_{{ \rm X},{\rm 2D},i}-\hat{S}_{{\rm X},{\rm 2D},i}}{\delta S_{{\rm X},{\rm 2D},i}} \right)^{2}, \tag{30}\]
where \(\hat{S}_{{\rm X},i}\) is the model SB, and \(S_{{\rm X},i}\) and \(\delta S_{{\rm X},i}\) are obtained from the observational data. We currently employ SB measurements and the corresponding errors for our 2D analysis assuming Gaussian statistics. This should be a valid assumption, as we define regions with sufficiently large photon counts (i.e., \(\geq 20\)). However, the formally correct approach is to use the Cash statistic, which accounts for Poisson fluctuations in the photon counts (Cash, 1979). Fits using the Cash statistic for photon counting in the low count regime will be explored in future works.
Finally, the total \(\chi^{2}\) statistic becomes
\[\chi_{{\rm X+SZ}}^{2}=\chi_{{\rm SZ}}^{2}+\chi_{\rm T}^{2}+\chi_{\rm SB}^{2}, \tag{31}\]
and the MCMC is used to sample \(\chi_{{\rm X+SZ}}^{2}\) within the parameter space near the best fit.
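A minimal sketch of how the total \(\chi^{2}\) of Eq. 31 enters an **emcee** run, including the adjustable stretch-move scale parameter mentioned above; the quadratic `chi2_total` below is a toy stand-in for the full model-versus-data computation, and only three of the parameters with their Table 1 priors are shown.

```
import numpy as np
import emcee

# Toy stand-in for the total chi^2 of Eq. 31 (chi2_SZ + chi2_T + chi2_SB): a quadratic
# centered on fiducial axial ratios and inclination, used only to exercise the sampler.
fiducial = np.array([0.6, 0.75, 0.8])

def chi2_total(params):
    return np.sum(((params - fiducial) / 0.05) ** 2)

def log_probability(params):
    q1, q2, cos_theta = params
    # Uniform priors of Table 1: 0 < q1 <= q2 <= 1 and 0 <= cos(theta) <= 1.
    if not (0.0 < q1 <= q2 <= 1.0 and 0.0 <= cos_theta <= 1.0):
        return -np.inf
    return -0.5 * chi2_total(params)

ndim, nwalkers = 3, 32
p0 = fiducial + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability,
                                moves=emcee.moves.StretchMove(a=1.5))  # adjustable stretch scale
sampler.run_mcmc(p0, 2000)
print(np.median(sampler.get_chain(discard=500, flat=True), axis=0))
```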
### Parameter Estimation with Mock Data
To validate the accuracy of our model fitting algorithm, we conduct a full analysis using mock observations of a galaxy cluster described by our model from known input parameter values. Using the model parameters outlined in Table 1, we generate model SZ, X-ray SB and temperature maps, incorporating the instrument PSF response. For this test, we assume the mock cluster has the following characteristics: \(z=0.18\), \(M_{500}=8.8\times 10^{14}\)M\({}_{\odot}\), \(R_{500}=7.4^{\prime}\). Additionally, we take the following values for the
Figure 2: The projected, PSF-convolved SZ model map (_left_). PSF convolved, X-ray surface brightness (_middle_) and temperature (_right_) maps. The contours for the models are overlaid to improve the visual representation of the maps. The simulated PSFs used for the SZ and X-ray maps correspond to \(7^{\prime}\) and \(10^{\prime\prime}\) FWHM, respectively. To ensure accurate PSF convolution, we bin the pixels in the maps such that the FWHM of each instrument’s PSF is covered by at least three pixels.
geometric configuration and electron density and pressure parameters, with \((q_{\rm ICM,1},q_{\rm ICM,2},\cos\theta,\varphi,\psi)=(0.6,0.75,0.8,-25,60)\), \((n_{0},\zeta_{c},\zeta_{t},\beta_{e},\eta_{e},\gamma_{e})=(0.002,175,1.5,0.6,0.3,1.8)\), and \((P_{0},\alpha_{p})=(10.0,1.0)\). In this case, \(e_{\rm l}=1.02\).
The model maps generated with the input parameters are presented in Fig. 2. For each pixel based on its coordinates within the observed map, we calculate the observables projected onto the two-dimensional sky plane (Sec. 2.3). Then, instrumental effects, including the PSF response, are applied. As we discuss in the next section, our baseline analysis of the observed data uses the combined _Planck_ and ACT SZ effect map (Aiola et al. 2020), and we assume a PSF with a FWHM of \(1.6^{\prime}\). To ensure adequate angular sampling of the PSF, we require a maximum pixel size equal to the FWHM divided by three.
In addition, we incorporated noise into each mock observation. Using the error maps for the observed data, we randomly sampled Gaussian noise distributions for the SZ, X-ray SB, and X-ray temperature maps, respectively. Figure 3 shows the posterior distribution of the parameters from our fit to this mock observation. The posterior distributions indicate that we can accurately
Figure 3: Posterior distributions estimated from our MCMC for a mock observation generated from a smooth model. Green vertical lines in each plot indicate the input parameters used to generate the mock observation maps, while the red vertical lines represent the median value from the accepted MCMC samples. The values displayed above each histogram show the median of the distribution, along with the \(1\sigma\) (68.3%) credible region, which is indicated by dashed vertical lines in every plot. Additionally, the solid black line in the 2D distributions encloses the 68% credible region for the parameter pairs. As highlighted by Vikhlinin et al. (2006) and Nagai et al. (2007), there exists a correlation among the model parameters related to the ICM radial profiles, and individual parameter values exhibit degeneracy. However, our objective is to ascertain smooth analytic functions that accurately represent the electron density and pressure profiles, thereby providing a comprehensive description of the ICM thermodynamics.
recover most of the varied parameter values within the expected deviations due to noise fluctuations. Thus, our fitting methodology is able to reliably determine the underlying shape and thermodynamics of the observed mock galaxy cluster.
The use of both SZ and X-ray data in our analysis allows us to measure the three-dimensional geometry of the ICM distribution by constraining the elongation parameter (Sec. 2.1), since the two observational probes redundantly measure the thermodynamic properties of the gas along the line-of-sight. However, it should be noted that there may be degeneracies in determining cluster shape through this multi-probe approach depending on the relative orientation of the geometry, especially in inferring the geometric parameters of the 3D structure, as discussed in Sereno (2007). These degeneracies can cause bias in the recovered shape parameters along with multimodality in the posterior distributions. A further exploration of these degeneracies, in particular as they pertain to the observational data available for the CHEX-MATE sample, will be included in a subsequent paper in this series.
## 3 Application to CHEX-MATE Data
In this section, we introduce the X-ray and SZ data collected for our program. We then apply the triaxial analysis technique to the CHEX-MATE galaxy cluster PSZ2 G313.33+61.13 (Abell 1689), which serves as an illustrative example of the method.
### Data
Table 2 summarizes the SZ and X-ray data from CHEX-MATE available for our multiwavelength analysis of the ICM distribution. The foundation of our analysis is the 3 Msec _XMM-Newton_ observing program CHEX-MATE (CHEX-MATE Collaboration et al. 2021), from which we have obtained two-dimensional X-ray SB and temperature maps produced using the Voronoi tessellation method (Cappellari & Copin 2003; Diehl & Statler 2006). The details of the image production are reported in Bartalucci et al. (2023) and Lovisari et al. (2023), and here we report briefly the main analysis steps.
The _XMM-Newton_ observations of the clusters were obtained using the EPIC instrument (Struder et al. 2001; Turner et al. 2001). To create the X-ray SB map, photon-count images in the [0.7-1.2] keV range were extracted from the data acquired using the MOS1, MOS2, and pn cameras on the instrument. The energy band was selected to optimize the contrast between the emission from the source and the background (Ettori et al. 2010). The images from all three cameras were combined to maximize the statistical significance while accounting for the energy-band responses. Additionally, the X-ray maps are instrumental-background subtracted and corrected for the exposure. Point sources are removed from the analysis (Ghirardini et al. 2019) by masking them with circular regions, which appear as empty circles in the X-ray maps in Fig. 4. Furthermore, the SB maps are spatially binned to have at least 20 counts per bin using the Voronoi technique. X-ray temperature maps (Lovisari et al. 2023) were prepared in a similar manner for the data obtained in the [0.3-7] keV band, with background modeling (Lovisari & Reiprich 2019) and spectral fitting performed. The spectral fitting to determine the temperature was done with XSPEC (Arnaud 1996), minimizing the modified Cash statistic (Cash 1979) under the assumption of Asplund et al. (2009) metallicity. Subsequently, Voronoi-binned maps were generated to achieve a high signal-to-noise ratio (\(\sim\)30) for each cell.
_Planck_ SZ maps are available for all of the CHEX-MATE galaxy clusters by definition (Planck Collaboration et al. 2013; Pointecouteau et al. 2021). From these data we have generated a custom \(y\)-map using the Modified Internal Linear Component Algorithm (MILCA; Hurier et al. 2013) with an improved angular resolution of 7\({}^{\prime}\) FWHM compared to the one publicly released by _Planck_ with an angular resolution of 10\({}^{\prime}\) FWHM (Planck Collaboration et al. 2016b). Also, ground-based SZ observations from cosmic microwave background (CMB) surveys, including the ACT and the South Pole Telescope (SPT; Bleem et al. 2022)8, as well as the Caltech Submillimeter Observatory (CSO)'s Bolocam galaxy cluster archive9(Sayers et al. 2013), provide higher angular resolution data for a subset of CHEX-MATE clusters. Some of these ground-based data are currently publicly accessible, while others are slated for future release.
Footnote 8: [https://pole.uchicago.edu/public/data/sptsz_ymap/](https://pole.uchicago.edu/public/data/sptsz_ymap/)
Footnote 9: [https://irsa.ipac.caltech.edu/data/Planck/release_2/ancillary-data/bolocam/bolocam.html](https://irsa.ipac.caltech.edu/data/Planck/release_2/ancillary-data/bolocam/bolocam.html)
In this demonstration paper, we make use of the ACT SZ component-separated maps. The recent data release 4 (DR4) from the ACT provides component-separated maps, one of which is the SZ \(y\)-map (Aiola et al. 2020; Madhavacheril et al. 2020). These maps were generated by analyzing data from a 2,100 square degree area of the sky, captured using the ACTPol receiver (Henderson et al. 2016) at 98 and 150 GHz. These data offer more than four times finer angular resolution compared to the _Planck_ map. Then, the maps were jointly analyzed and combined with _Planck_ data. Rather than using the noise estimate provided with these data, which is
\begin{table}
\begin{tabular}{c c c l} \hline \hline Wavelength & Type & Instrument & Reference \\ \hline X-ray & Surface brightness (SB) & _XMM-Newton_ & Bartalucci et al. (2023) \\ X-ray & Temperature & _XMM-Newton_ & Lovisari et al. (2023) \\ mm-wave & SZ \(y\)-map & _Planck\({}^{a}\)_ & Planck Collaboration et al. (2016b) \\ mm-wave & SZ \(y\)-map & ACT (ACTPol)\({}^{b}\) & Madhavacheril et al. (2020) \\ \hline \end{tabular}
\end{table}
Table 2: List of the X-ray and SZ observation data and instruments used for the analysis of PSZ2 G313.33+61.13.
quantified as a two dimensional power spectral density, we instead follow an approach based on the recent analysis of similar joint ACT and _Planck_ maps in Pointecouteau et al. (2021). Specifically, we randomly sample 10,000 maps, ensuring that their size aligns with that of the input SZ data, in the corresponding ACT region (for instance, the region designated as 'BN' for the cluster Abell 1689 analyzed in the next section). Then, we compute the covariance using these maps to estimate the noise covariance matrix. The resulting noise rms for the \(y\)-map is approximately \(\sim 9\times 10^{-6}\) per 0.5\({}^{\prime}\) square pixel, and the diagonal elements of the noise covariance matrix are shown along with the \(y\)-map in Fig. 4.
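The following sketch illustrates this Monte Carlo noise estimate, assuming a large 2D array covering the relevant ACT region is available; the function name, the choice of region, and the number of samples are illustrative assumptions.

```python
import numpy as np

def estimate_noise_covariance(y_map, cutout_shape, n_samples=10_000, seed=0):
    """Estimate the noise covariance from randomly placed cutouts of the
    cluster-map size drawn from a larger region of the y-map."""
    rng = np.random.default_rng(seed)
    ny, nx = cutout_shape
    H, W = y_map.shape
    samples = np.empty((n_samples, ny * nx))
    for i in range(n_samples):
        y0 = rng.integers(0, H - ny)
        x0 = rng.integers(0, W - nx)
        samples[i] = y_map[y0:y0 + ny, x0:x0 + nx].ravel()
    cov = np.cov(samples, rowvar=False)   # (ny*nx, ny*nx) pixel-pixel covariance
    rms = np.sqrt(np.diag(cov))           # per-pixel noise rms
    return cov, rms
```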
### PSZ2 G313.33+61.13 (Abell 1689)
Using the datasets described above, we demonstrate our fitting method for PSZ2 G313.33+61.13 (Abell 1689), which is a Tier-2 cluster in the CHEX-MATE sample located at \(z=0.1832\) with a _Planck_ SZ estimated mass of \(M_{500}=8.77\times 10^{14}\)M\({}_{\odot}\). We note that the lensing mass measurement of the cluster is \(\sim\)70% higher than the _Planck_ hydrostatic mass estimate; see Umetsu et al. (2015). We conducted a triaxial fit aligning the model center with the X-ray peak (Bartalucci et al. 2023). For a morphologically regular cluster, like Abell 1689, any deviations or offsets between the SZ and X-ray measurements are expected to have minimal impact on the overall model fit. The _Planck_ + ACT SZ \(y\)-maps, along with the _XMM-Newton_ X-ray SB and temperature maps, are shown in Fig. 4. Maps of the rms noise for each observable are also included, and indicate that the cluster is imaged at high signal to noise. This particular cluster was chosen for our demonstration because its triaxial shape has been well studied in the literature (Morandi et al. 2011; Sereno & Umetsu 2011; Sereno et al. 2012; Umetsu et al. 2015). For example, Sereno et al. (2012) performed a gas-only analysis using radial profiles of the X-ray and SZ observations from _Chandra_, _XMM-Newton_, and _WMAP_, along with various ground-based SZ facilities, and constrained the shape and orientation of the cluster's triaxial model with \(q_{\rm ICM,1}=0.70\pm 0.15\), \(q_{\rm ICM,2}=0.81\pm 0.16\), and \(\cos\theta=0.70\pm 0.29\). A subsequent study by Umetsu et al. (2015) presented a combined multiwavelength analysis that included lensing data, with the inferred ICM distribution being \(q_{\rm ICM,1}=0.60\pm 0.14\), \(q_{\rm ICM,2}=0.70\pm 0.16\). Their
Figure 4: The SZ and X-ray maps of the CHEX-MATE galaxy cluster PSZ2 G313.33+61.13 (Abell 1689). The ACT Compton-\(y\) map (_top left_) and its error (_bottom left_) map, the X-ray SB (_top middle_) and its error (_bottom middle_) map, and the X-ray temperature (_top right_) and its error (_bottom right_) map are shown. The ACT SZ map is one of the component-separated map products that were produced using the internal linear combination method and combined with data from the _Planck_ (i.e., this is a joint map from ACT + _Planck_ data, Madhavacheril et al. 2020). The X-ray maps are the data products of Bartalucci et al. (2023) and Lovisari et al. (2023). Bright point sources in the X-ray SB maps are indicated by white circles and masked in the analysis, and the same point-source regions are also removed from the spectral analysis to obtain the temperatures. The regions are excluded in the model fit. The X-ray SB maps are binned using a Voronoi algorithm to ensure an adequate number of photon counts per bin, with smaller bin sizes used in the central region where the count rates are higher. For the temperature maps, Voronoi binning was similarly applied using a fixed signal-to-noise ratio of 30 instead of a fixed number of counts, ensuring a roughly uniform statistical uncertainty per bin. In the X-ray and SZ maps, red circles indicate the two-dimensional map regions included in our analysis. We incorporate a circular region with a radius of \(r=R_{500}\) around the galaxy cluster center for the SZ and the X-ray data by applying a circular mask to the maps, and the radius is shown as red circles. For this particular cluster, \(R_{500}\) is equal to 7.42\({}^{\prime}\) (\(\sim\)1.37 Mpc). In the X-ray SB and its corresponding error maps, red dashed circles represent the region that encompasses 80% of the emission in the SB map, which is where the 2D (inner region) and 1D analyses (outer region) are separated, and it is located at \(r=2.58\)’. We will explore and implement a comparable approach that integrates both 2D and 1D analysis techniques for temperature data, as described by Lovisari et al. (2023), in a forthcoming analysis.
derived value of \(\cos\theta\), obtained from the combined lensing and X-ray/SZ analysis, was found to be \(0.93\pm 0.06\). The large \(\cos\theta\) suggests that the major axis of the triaxial ellipsoid (\(\chi^{\rm int}_{3}\) in Fig. 1) is closely aligned with the observer's line of sight.
Figure 5 shows the posterior of the model parameters that describe our triaxial fit of PSZ2 G313.33+61.13, using the data from _Planck_, ACT, and _XMM-Newton_. We find axial ratios of \(q_{\rm ICM,1}=0.65\pm 0.02\) and \(q_{\rm ICM,2}=0.79\pm 0.02\). These values are consistent with previous results, but an order of magnitude more precise (Table 3). Our fits indicate the major axis of Abell 1689 is almost
\begin{table}
\begin{tabular}{c c c c l} \hline \hline \(q_{\rm ICM,1}\) & \(q_{\rm ICM,2}\) & \(\cos\theta\) & \(e_{\parallel}\) & Reference \\ \hline \(0.70\pm 0.15\) & \(0.81\pm 0.16\) & \(0.70\pm 0.29\) & \(1.68\pm 0.53\) & Sereno et al. (2012) \\ \(0.60\pm 0.14\) & \(0.70\pm 0.16\) & \(0.93\pm 0.06\) & \(1.16\pm 0.10\) & Umetsu et al. (2015) \\ \(0.65\pm 0.02\) & \(0.79\pm 0.02\) & \(\geq 0.96^{a}\) & \(1.24\pm 0.03\) & This work \\ \hline \end{tabular}
\end{table}
Table 3: Parameters describing the triaxial geometry of PSZ2 G313.33+61.13 (Abell 1689)
Figure 5: The posterior distribution of the model fit parameters for the galaxy cluster PSZ2 G313.33+61.13 obtained using SZ data from _Planck_ and ACT, as well as X-ray data from _XMM-Newton_. The red vertical lines indicate the median value from the accepted MCMC samples, with values displayed along with their 68% credible regions above each histogram. Instead of the Euler angles \(\varphi\) and \(\psi\), we present \(e_{\parallel}\), which is a function of five geometric parameters of a triaxial ellipsoid (Eq. 6).
perfectly aligned with the line of sight, with \(\cos\theta\geq 0.96\) at 90% confidence. While previous works also indicated such an alignment, a much wider range of orientations was allowed in those fits. We note that our analysis only includes statistical uncertainties on the fit, and the uncertainty due to data calibration is not taken into account here. Also, the elongation parameter (Eq. 6), which is the ratio of the size of the ellipsoid along the observed line of sight to the major axis of the projected ellipse in the sky plane, quantifies the 3D geometry of the triaxial ellipsoid model of the ICM; we thus present constraints on \(e_{\parallel}\) rather than on \(\varphi\) and \(\psi\). The inferred \(e_{\parallel}\) is well constrained in the fit to a value of \(1.24\pm 0.03\) and is consistent with the gas analysis result of Sereno et al. (2012), who found \(e_{\Delta}=0.66\pm 0.21\), which corresponds to \(1.15\leq e_{\parallel}\leq 2.22\). Figure 6 shows the reconstructed SZ, X-ray SB and temperature maps of PSZ2 G313.33+61.13, incorporating the instrument response, generated using the recovered parameters from Fig. 5. The difference map, which is created by subtracting the reconstructed model from the input data, reveals that the majority of the pixels exhibit relative errors that are spread within a range of \(\pm 4\sigma\) (Fig. 6). The residuals for the SZ, X-ray SB, and X-ray temperatures are distributed around zero. Their respective standard deviations are equivalent to \(1.5\sigma\), \(0.6\sigma\), and \(1.1\sigma\) when fitted by a Gaussian.
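A minimal sketch of the residual diagnostic quoted above, assuming aligned 2D arrays for the data, model, and error maps (the function names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def residual_significance(data_map, model_map, error_map):
    """Per-pixel residual (data - model) in units of the pixel error."""
    return (data_map - model_map) / error_map

def fitted_gaussian_width(residual_map):
    """Mean and standard deviation of a Gaussian fitted to the residual distribution."""
    res = residual_map[np.isfinite(residual_map)].ravel()
    mu, sigma = norm.fit(res)
    return mu, sigma
```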
For comparison, we performed an additional X-ray + SZ fit using only the _Planck_ SZ data, without incorporating the ground-based ACT data. We obtain posteriors that significantly deviate from our baseline fit with ACT data. We attribute this to the coarse angular resolution of _Planck_ which prevents it from resolving morphological features given the angular size of Abell 1689 at z=0.1832. To test this, we generated two sets of mock observations using the recovered parameters from our baseline fit to the observed data from both _Planck_ and ACT (along with _XMM-Newton_). One mock was based on the properties of the \(y\)-map from _Planck_ + ACT, while the other mimicked the \(y\)-map with only _Planck_ SZ data, including the appropriate noise and PSF shape for
Figure 6: Reconstructed SZ and X-ray models of PSZ2 G313.33+61.13 generated using the recovered parameters from Fig. 5 (_top_). The difference between the observational data and the reconstructed model map above, in units of pixel-based error (_middle_). The histogram of the distribution of the relative error in the middle panels (_bottom_). The X-ray SB histogram takes into account both the residuals in the inner 2D region, which includes 80% of the emissions observed in the data, and the outer map region where we implemented 1D analysis using azimuthal medians (see Fig. 4). In all cases, the residuals are distributed within \(\pm 4\sigma\) level compared to the error. When the relative errors of the SZ, X-ray SB, and X-ray temperature are modeled with a Gaussian fit, their standard deviations align with \(1.5\sigma\), \(0.6\sigma\), and \(1.1\sigma\) respectively.
each case. Our fit to the mock multiwavelength data with the _Planck_ + ACT \(y\)-map yields recovered parameters closely aligned with the input model, suggesting these data can accurately recover the input ICM shape. In contrast, the second mock observation based on the _Planck_-only \(y\)-map produces a set of parameters significantly deviating from the input. This suggests that the SZ data from _Planck_ alone are insufficient to reliably fit our triaxial model, at least for a galaxy cluster with this specific shape at this specific redshift. This confirms that our fit to the observed data using the _Planck_-only \(y\)-map is likely biased. In a subsequent paper we will explore this issue in more detail, to better understand which types of galaxy clusters can (or cannot) be reliably reconstructed with the data available for CHEX-MATE.
Furthermore, in order to evaluate how the much higher overall signal to noise of the X-ray SB compared to the SZ and X-ray temperature impacts the results, we carried out an additional fit using the reduced \(\chi^{2}\) of each of the three observables in order to weight them equally in the fit. The results of this fit indicate only a minimal shift in the derived geometric parameters under this equal weighting. Specifically, in the reduced \(\chi^{2}\) fit, \(q_{\rm ICM,1}\) has a value of \(0.70\pm 0.04\), \(q_{\rm ICM,2}\) is \(0.78\pm 0.05\), and \(e_{\parallel}\) stands at \(1.22\pm 0.07\). We also attempted to account for fluctuations in the calibration uncertainty, which can be especially important for the temperature profile (e.g., Schellenberger et al. 2015; Wallbank et al. 2022). We conducted model fits by introducing an additional \(\sim 10\%\) uncertainty on the temperature, but observed little change in the parameters, with posteriors displaying similar levels of variation.
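One plausible implementation of this equal weighting, shown as a hedged sketch (the exact weighting scheme used in the fit is not spelled out here, and the function name is an assumption):

```python
def combined_log_likelihood(chi2_sz, n_sz, chi2_sb, n_sb, chi2_t, n_t):
    """Combine the three observables by summing reduced chi^2 values so that the
    X-ray SB, which has many more high-S/N pixels, does not dominate the fit."""
    return -0.5 * (chi2_sz / n_sz + chi2_sb / n_sb + chi2_t / n_t)
```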
As we will illustrate in subsequent studies, the derived geometric parameters of the ICM distribution, such as the elongation that quantifies the 3D geometry, can be applied in conjunction with gravitational lensing measurements. For these fits, we will work under the assumption that the triaxial axes of the ICM and dark matter are coaligned, but with axial ratios that are allowed to vary. The lensing analysis becomes crucial for discerning the triaxial shapes of dark matter, circumventing the need to rely on hydrostatic equilibrium or simulation-based corrections. Consequently, a comprehensive multi-probe analysis facilitates a characterization of the total matter distribution, which is essential for precise lensing-based mass calibrations (Sereno et al. 2018), along with allowing for a determination of the distribution of non-thermal pressure support (Sayers et al. 2021).
## 4 Conclusions
We have improved a multi-probe analysis package to fit the three-dimensional ellipsoidal shapes of CHEX-MATE galaxy clusters. This package builds upon CLUMP.3D (Sereno et al. 2017), which was employed to analyze the triaxial shapes of CLASH clusters (Sereno et al. 2018; Chiu et al. 2018; Sayers et al. 2021). Specifically, we have made the following improvements: 1) we model 2D distributions of the SZ and X-ray temperature data, in contrast to the 1D azimuthally averaged profiles in these quantities used by Sereno et al. (2017), 2) we parametrize electron density and pressure rather than density and temperature, reducing the number of parameters and speeding up the fit, and 3) we have ported the code to Python to facilitate a future public release. For the two-dimensional map analyses, we have added the capability to include publicly available SZ data from ground-based CMB surveys such as ACT, in addition to the default _Planck_ SZ maps.
We verified the triaxial analysis method through mock data analysis and applied it to the actual CHEX-MATE galaxy cluster, PSZ2 G313.33+61.13 (Abell 1689). The analysis effectively constrains the model geometry, in particular, at the few percent level for the axial ratios. Our results are consistent with previous analyses of Abell 1689 available in the literature. Specifically, we find axial ratios of \(q_{\rm ICM,1}=0.65\pm 0.02\), \(q_{\rm ICM,2}=0.79\pm 0.02\), and elongation parameter \(e_{\parallel}=1.24\pm 0.03\). Compared to the similar gas-only analysis using X-ray and SZ data presented in Sereno et al. (2012), the axial ratios and elongation parameters in our study demonstrate a substantial improvement, with uncertainties an order of magnitude lower. This marked improvement is attributable to multiple factors: our use of deeper new _XMM-Newton_ data not available to Sereno et al. (2012); our use of an _XMM-Newton_ SB image rather than a shallower _Chandra_ SB image; our use of much higher quality SZ data from _Planck_ and ACT rather than from WMAP and SZA/OVRO/BIMA; and our improved analysis formalism making use of fully 2D images for all of the observables rather than a projected elliptically averaged profile of X-ray SB and temperature along with a single aperture photometric measurement of the SZ signal. Our results indicate that Abell 1689 has axial ratios typical of what is expected for the general population of galaxy clusters (Jing & Suto 2002; Lau et al. 2011), but a remarkably close alignment between the major axis and the line of sight. This alignment has resulted in exceptional lensing properties of Abell 1689, such as an abundance of strong lensing features (e.g., Broadhurst et al. 2005; Limousin et al. 2007), one of the largest Einstein radii observed (\(47\arcsec\), Coe et al. 2010), and an extremely large concentration for its mass when fitted to a spherically symmetric model (\(c_{\rm vir}=12.8^{+3.1}_{-2.4}\) or \(c_{200}=10.2^{+2.6}_{-1.9}\), Umetsu et al. 2011; Umetsu 2020). We thus conclude that there is nothing unusual about the triaxial shape of Abell 1689, other than its orientation. In addition, the estimated axial ratios of the cluster yield a triaxiality parameter \(t=0.66\) (Franx et al. 1991). While the incorporation of lensing data is necessary for a direct quantitative comparison with DM axial ratios, the calculated \(t\) classifies this halo as being close to the 'prolate' population that comprises \(\sim\!80\%\) of the total cluster fraction in the DM only simulations (Vega-Ferrero et al. 2017). The integration of lensing data for a comprehensive multi-wavelength analysis, as well as the public release of the software and data products, will be addressed in subsequent papers of this series.
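For reference, a quick check of the quoted triaxiality parameter, assuming the common definition \(t=(1-q_{\rm ICM,2}^{2})/(1-q_{\rm ICM,1}^{2})\) attributed to Franx et al. (1991); the exact convention is an assumption here, and the rounded axial ratios give a value consistent with \(t=0.66\) within the quoted uncertainties.

```python
q1, q2 = 0.65, 0.79                   # minor-to-major and intermediate-to-major axial ratios
t = (1.0 - q2**2) / (1.0 - q1**2)     # triaxiality parameter
print(round(t, 2))                    # ~0.65, consistent with t = 0.66 given the +/-0.02 errors
```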
###### Acknowledgements.
J.K. and J.S. were supported by NASA Astrophysics Data Analysis Program (ADAP) Grant 80NSSC21K571. J.K. is supported by a Robert A. Millikan Fellowship from the California Institute of Technology (Caltech). M.S. acknowledges financial contribution from contract ASI-INAF n. 2017-14-H.0 and from the INAF mainstream project 105.01.86.10. M.E.D. acknowledges partial support from the NASA ADAP, primary award to SAO with a subaward to MSU, SV9-89010. S.E., F.G., and M.R. acknowledge the financial contribution from the contracts ASI-INAF Athena 2019-27-HH.0, "Attivita di Studio per la comunita scientifica di Astrofisica delle Alte Energie e Fisica Astroparticellare" (Accordo Attuativo ASI-INAF n. 2017-14-H.0), and from the European Union's Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158). This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #565 (_Multi-Wavelength Studies of the Culmination of Structure Formation in the Universe_). A.L., E.P., and G.W.P. acknowledge support from CNES, the French space agency. K.U. acknowledges support from the National Science and Technology Council of Taiwan (grant 109-2112-M-001-018-MY3) and from the Academia Sinica (grants AS-IA-107-M01 and AS-IA-112-M04). B.J.M. acknowledges support from STFC grant ST/V000454/1. |
2302.11221 | A q-analog of certain symmetric functions and one of its specializations | Let the symmetric functions be defined for the pair of integers $\left(
n,r\right) $, $n\geq r\geq 1$, by $p_{n}^{\left( r\right) }=\sum m_{\lambda }$
where $m_{\lambda }$ are the monomial symmetric functions, the sum being over
the partitions $\lambda $ of the integer $n$ with length $r$. We introduce by a
generating function, a $q$-analog of $p_{n}^{\left( r\right) }$ and give some
of its properties. This $q$-analog is related to its the classical form using
the $q$-Stirling numbers. We also start with the same procedure the study of a
$p,q$-analog of $p_{n}^{\left( r\right) }$.
By specialization of this $q$-analog in the series $\sum\nolimits_{n=0}^{
\infty }q^{\binom{n}{2}}t^{n}/n!$, we recover in a purely formal way$\ $a class
of polynomials $J_{n}^{\left( r\right) }$ historically introduced as
combinatorial enumerators, in particular of tree inversions. This also results
in a new linear recurrence for those polynomials whose triangular table can be
constructed, row by row, from the initial conditions $ J_{r}^{\left( r\right)
}=1$. The form of this recurrence is also given for the reciprocal polynomials
of $J_{n}^{\left( r\right) }$, known to be the sum enumerators of parking
functions. Explicit formulas for $J_{n}^{\left( r\right) }$ and their
reciprocals are deduced, leading inversely to new representations of these
polynomials as forest statistics. | Vincent Brugidou | 2023-02-22T09:10:04Z | http://arxiv.org/abs/2302.11221v5 | # A q-analog of certain symmetric functions and one of its specializations
# A q-analog of certain symmetric functions and one of its specializations
Vincent Brugidou
_Universite de Lille, 59655 Villeneuve d'Ascq cedex, France_
**Abstract :** Let the symmetric functions be defined for each pair of integers \(\left(n,r\right)\), \(n\geq r\geq 1\), by \(p_{n}^{\left(r\right)}=\sum m_{\lambda}\), where the \(m_{\lambda}\) are the monomial symmetric functions, the sum being over the integer partitions \(\lambda\) of \(n\) of length \(r\). In this article we introduce a \(q\)-analog of \(p_{n}^{\left(r\right)}\) through generating functions and give some of its properties, which are \(q\)-analogs of those of its classical counterpart, in particular when \(r=1\). We then prove that this \(q\)-analog of \(p_{n}^{\left(r\right)}\) can be expressed in terms of the classical \(p_{n}^{\left(j\right)}\), through the \(q\)-Stirling numbers of the second kind. We also begin, with the same procedure, the study of a \(p\),\(q\)-analog of \(p_{n}^{\left(r\right)}\).
In the rest of the article we specialize to the series \(\sum_{n=0}^{\infty}q^{\binom{n}{2}}t^{n}/n!\). We show that \(p_{n}^{\left(r\right)}\) is then related to the \(q^{r}\)-analog of \(p_{n-r}\). We deduce the existence of a double sequence of polynomials, denoted \(J_{n,r}\left(q\right)\), with integer coefficients. We identify these polynomials with the inversion enumerators introduced for specific rooted forests. These polynomials satisfy a "positive" linear recurrence which allows one to build the table of \(J_{n,r}\) row by row from the initial conditions \(J_{r,r}=1\). We also give the form of the linear recurrence for the reciprocal polynomials of \(J_{n,r}\), which are the sum enumerators of parking functions. The linear recurrence yields an explicit formula for \(J_{n,r}\). This formula leads us to introduce new statistics on rooted trees and forests for \(J_{n,r}\) or its reciprocal.
_keywords :_ Symmetric functions, \(q\)-analog, \(q\)-Stirling numbers, inversion enumerator, parking function.
## 1 Introduction
We know that the Macdonald polynomials and their particular cases can be represented as analogs, with one or two parameters, of the usual symmetric functions. A large number of papers continue to be published on the subject, giving rise to various analogs of symmetric functions. Most of these analogs are broad constructions using powerful algebraic tools such as representation theory. More modestly, this article is the first of a series whose object is a \(q\)-analog (or \(q\)-deformation), defined in a rather elementary way, of certain symmetric functions, namely the functions defined for each pair of integers \(\left(n,r\right)\) such that \(n\geq r\geq 1\) by
\[p_{n}^{\left(r\right)}=\sum_{\left|\lambda\right|=\,n,\,l\left(\lambda\right) =r}m_{\lambda} \tag{1.1}\]
where the \(m_{\lambda}\) are the monomial symmetric functions, the sum being over the integer partitions \(\lambda\) of \(n\) with length \(l(\lambda)=r\). The functions \(p_{n}^{\left(r\right)}\) are introduced with this notation in Exercise 19, p. 33 of [10], whose notation we will follow fairly faithfully. Despite its simplicity, this approach turns out to be quite fruitful. The \(q\)-analogs thus defined, which we will denote \(\left[p_{n}^{\left(r\right)}\right]\), possess attractive properties. We present in this article the definition and some properties of this \(q\)-analog. We also give applications by specialization to the formal series
\[E_{xp}(t)=\sum_{n=0}^{\infty}q^{\binom{n}{2}}\frac{t^{n}}{n!} \tag{1.2}\]
sometimes called the \(q\)-deformation of the exponential. We were thus able to revisit polynomials well known as inversion enumerators of rooted forests or, reciprocally, as sum enumerators of parking functions (see Yan's survey article [17] on these notions). We deduce new identities satisfied by these polynomials, from which we extract some combinatorial consequences.
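Since the \(q\)-analog itself is defined later in the paper, the following sketch illustrates only the classical \(p_{n}^{(r)}\) of (1.1), using the observation that it is the sum of all monomials of total degree \(n\) in which exactly \(r\) distinct variables appear; the helper name and the use of sympy are assumptions made for illustration.

```python
from itertools import product
import sympy as sp

def p_nr(n, r, num_vars):
    """Classical p_n^{(r)} of (1.1) in num_vars variables: the sum of all
    monomials of total degree n in which exactly r variables appear."""
    xs = sp.symbols(f"x1:{num_vars + 1}")
    total = sp.Integer(0)
    for exps in product(range(n + 1), repeat=num_vars):
        if sum(exps) == n and sum(e > 0 for e in exps) == r:
            mono = sp.Integer(1)
            for x, e in zip(xs, exps):
                mono *= x**e
            total += mono
    return sp.expand(total)

# p_3^{(2)} in three variables is m_{(2,1)}: the six monomials x_i**2 * x_j with i != j.
print(p_nr(3, 2, 3))
```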
The organization of the article is as follows. In section 2 we set the notations and recall the necessary prerequisites for symmetric functions, integer partitions and the q-calculus. |
2308.04111 | On the stability of Caffarelli-Kohn-Nirenberg inequality in R^2 | Dolbeault, Esteban and Loss [Invent. Math., 2016] obtained an optimal
rigidity result, that is, when $a<0$ and $b_{\mathrm{FS}}(a)\leq b<a+1$ the
extremal function for best constant $\mathcal{S}_{a,b}>0$ of the following
Caffarelli-Kohn-Nirenberg inequality is symmetry, \[
\mathcal{S}_{a,b}\left(\int_{\mathbb{R}^2}|x|^{-qb}|u|^q
\mathrm{d}x\right)^{\frac{2}{q}}
\leq \int_{\mathbb{R}^2}|x|^{-2a}|\nabla u|^2 \mathrm{d}x, \quad \mbox{for
all}\quad u\in C^\infty_0(\mathbb{R}^2), \] where
$b_{\mathrm{FS}}(a):=a-\frac{a}{\sqrt{a^2+1}}$, $q=\frac{2}{b-a}$. An important
task is investigating the stability of extremal functions set $\mathcal{M}$ for
this inequality. Firstly, we classify all solutions of the linearized problem
related to the extremals which fills the work of Felli and Schneider [J. Diff.
Equ., 2003]. When $b_{\mathrm{FS}}(a)< b<a+1$, we investigate the stability of
previous inequality by using spectral estimate combined with a compactness
argument that
\begin{align*}
\int_{\mathbb{R}^2}|x|^{-2a}|\nabla u|^2 \mathrm{d}x
-\mathcal{S}_{a,b}\left(\int_{\mathbb{R}^2}|x|^{-qb}|u|^q
\mathrm{d}x\right)^{\frac{2}{q}}
\geq \mathcal{B}
\mathrm{dist}(u,\mathcal{M})^2,\quad \mbox{for all}\quad u\in
C^\infty_0(\mathbb{R}^2),
\end{align*}
for some $\mathcal{B}>0$, however it is false when $b=b_{\mathrm{FS}}(a)$,
which extends the work of Wei and Wu [Math. Ann., 2022] to $\mathbb{R}^2$.
Furthermore, we obtain the existence of minimizers for $\mathcal{B}$ which
extends the recent work of K\"{o}nig [J. Eur. Math. Soc., to appear]. | Shengbing Deng, Xingliang Tian | 2023-08-08T07:53:21Z | http://arxiv.org/abs/2308.04111v2 | # On the stability of Caffarelli-Kohn-Nirenberg inequality in \(\mathbb{R}^{2}\)
###### Abstract.
Dolbeault, Esteban and Loss [19] obtained an optimal rigidity result, that is, when \(a<0\) and \(b_{\mathrm{FS}}(a)\leq b<a+1\) the extremal function for best constant \(\mathcal{S}_{a,b}>0\) of the following Caffarelli-Kohn-Nirenberg inequality is symmetry,
\[\mathcal{S}_{a,b}\left(\int_{\mathbb{R}^{2}}|x|^{-qb}|u|^{q}\mathrm{d}x\right)^ {\frac{2}{q}}\leq\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla u|^{2}\mathrm{d}x,\quad \text{for all}\quad u\in C_{0}^{\infty}(\mathbb{R}^{2}),\]
where \(b_{\mathrm{FS}}(a):=a-\frac{a}{\sqrt{a^{2}+1}}\), \(q=\frac{2}{b-a}\). An important task is investigating the stability of critical points set \(\mathcal{M}\) for this inequality. Firstly, we classify solutions of the linearized problem related to the extremals which fills the work of Felli and Schneider [22]. When \(b_{\mathrm{FS}}(a)<b<a+1\), we investigate the stability of previous inequality by using spectral estimate combined with a compactness argument that
\[\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla u|^{2}\mathrm{d}x-\mathcal{S}_{a,b} \left(\int_{\mathbb{R}^{2}}|x|^{-qb}|u|^{q}\mathrm{d}x\right)^{\frac{2}{q}} \geq\mathcal{B}\mathrm{dist}(u,\mathcal{M})^{2},\quad\text{for all}\quad u\in C _{0}^{\infty}(\mathbb{R}^{2}),\]
for some \(\mathcal{B}>0\), however it is false when \(b=b_{\mathrm{FS}}(a)\), which extends the work of Wei and Wu [37] to \(\mathbb{R}^{2}\). Furthermore, we obtain the existence of minimizers for \(\mathcal{B}\) which extends the recent work of Konig [25].
Key words and phrases:Caffarelli-Kohn-Nirenberg inequality; Non-degeneracy; Gradient stability; Existence of minimizers.
Introduction
As mentioned above, Dolbeault, Esteban and Loss [19] proved an optimal rigidity result by using the so-called _carre du champ_ method: when \(a<0\) and \(b_{\mathrm{FS}}(a)\leq b<a+1\), for all \(N\geq 2\), the extremal function is symmetric. We refer to [17] for an overall review of this method.
As mentioned previously, once the extremal functions of (1.3) are well understood, it is natural to study the quantitative stability of the (CKN) inequality (1.3) by asking whether the deviation of a given function from attaining equality in (1.3) controls its distance from the family of extremal functions. In the symmetry region, there are many papers concerning the stability of inequalities with potentials for the case \(N\geq 3\). Radulescu et al. [31] gave the remainder terms of the Hardy-Sobolev inequality for the case \(a=0\). Wang and Willem [36] studied (CKN) inequalities with Lebesgue-type remainder terms; see also [1, 13, 14, 33] for remainder terms of weighted Hardy inequalities. Wei and Wu [37] established the stability of the profile decompositions for the (CKN) inequality (1.3) and also gave the gradient-type remainder term; see also our recent works [15] for the \(p\)-Hardy-Sobolev case and [16] for the (CKN) case involving the \(p\)-Laplacian. Therefore, it is natural to consider the stability of inequality (1.3) for the case \(N=2\) in the symmetry region.
### Problem setup and main results
Let us carefully state the work of Dolbeault, Esteban and Loss [19] for the case \(N=2\). Define the weighted space \(\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2}):=\mathcal{D}^{1,2}(\mathbb{R}^{2},|x|^{-2 a}\mathrm{d}x)\) as the completion of \(C_{0}^{\infty}(\mathbb{R}^{2})\) with respect to the inner product
\[\langle u,v\rangle_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}:=\int_{\mathbb{R}^{ 2}}|x|^{-2a}\nabla u\cdot\nabla v\mathrm{d}x,\]
and the norm \(\|u\|_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}:=\langle u,u\rangle_{\mathcal{D} _{a}^{1,2}(\mathbb{R}^{2})}^{1/2}\). If
\[-\infty<a<0,\quad a-\frac{a}{\sqrt{a^{2}+1}}=:b_{\mathrm{FS}}(a)\leq b<a+1, \quad q=\frac{2}{b-a}, \tag{1.4}\]
then the best constant \(\mathcal{S}_{a,b}:=\mathcal{S}(a,b)>0\) of
\[\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla u|^{2}\mathrm{d}x\geq\mathcal{S}_{a,b} \left(\int_{\mathbb{R}^{2}}|x|^{-qb}|u|^{q}\mathrm{d}x\right)^{\frac{2}{q}}, \quad\text{for all}\quad u\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2}), \tag{1.5}\]
is achieved with the extremals being unique (up to scalings) of the form \(CU_{\lambda}\) for \(C\in\mathbb{R}\) and \(\lambda>0\), where
\[U_{\lambda}(x)=\lambda^{-a}U(\lambda x),\quad\text{with}\quad U(x)=C_{a,b} \left(1+|x|^{-a(q-2)}\right)^{-\frac{2}{q-2}}. \tag{1.6}\]
Here
\[C_{a,b}=\left[\tau^{-2}K(K-2)\right]^{\frac{K-2}{4}},\quad\text{with}\quad \tau:=\frac{a-b}{a(1+a-b)}>1\quad\text{and}\quad K:=\frac{2}{1+a-b}>2.\]
Furthermore, \(U_{\lambda}\) is the unique (up to scalings) positive solution of the following problem
\[-\mathrm{div}(|x|^{-2a}\nabla u)=|x|^{-bq}|u|^{q-2}u\quad\text{in}\quad \mathbb{R}^{2},\quad\int_{\mathbb{R}^{2}}|x|^{-bq}|u|^{q}\mathrm{d}x<\infty. \tag{1.7}\]
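As a concrete illustration, the following sympy sketch checks the radial form of (1.7) for one sample admissible pair \((a,b)=(-1,-1/5)\); this particular choice, and the variable names, are assumptions made only for the check.

```python
import sympy as sp

r = sp.symbols("r", positive=True)

a, b = sp.Integer(-1), sp.Rational(-1, 5)        # sample pair with b_FS(-1) < b < a + 1
q   = 2 / (b - a)                                # = 5/2
tau = (a - b) / (a * (1 + a - b))                # = 4
K   = 2 / (1 + a - b)                            # = 10
C   = (tau**-2 * K * (K - 2)) ** ((K - 2) / 4)   # = C_{a,b} = 25

U = C * (1 + r ** (-a * (q - 2))) ** (-2 / (q - 2))   # radial profile of U in (1.6)

# Radial form of (1.7) in R^2: -(1/r) d/dr( r^{1-2a} U'(r) ) = r^{-bq} U^{q-1}.
lhs = -sp.diff(r ** (1 - 2 * a) * sp.diff(U, r), r) / r
rhs = r ** (-b * q) * U ** (q - 1)

print(sp.simplify(lhs - rhs))                         # expected: 0
print(float((lhs - rhs).subs(r, sp.Rational(7, 3))))  # expected: 0.0 up to rounding
```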
Our first result concerns the linearized problem related to (1.7) at the function \(U\). This leads to the study of the problem:
\[-\mathrm{div}(|x|^{-2a}\nabla v)=(q-1)|x|^{-bq}U^{q-2}v\quad\text{in}\quad \mathbb{R}^{2},\quad v\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2}). \tag{1.8}\]
It is easy to verify that \(-aU+x\cdot\nabla U\) (which equals \(\frac{\partial U_{\lambda}}{\partial\lambda}|_{\lambda=1}\)) solves the linear equation (1.8). We say \(U\) is non-degenerate if all the solutions of (1.8) result from the scaling invariance of (1.7). The non-degeneracy of the solution of (1.7) is a key ingredient in analyzing the blow-up phenomena of solutions to various elliptic equations on bounded or unbounded domains in \(\mathbb{R}^{N}\) or on Riemannian manifolds, whose asymptotic behavior is encoded in (1.6). Therefore, it is quite natural to ask the following question:
_is solution \(U\) non-degenerate?_
We give an affirmative answer when \(b_{\rm FS}(a)<b<a+1\), however when \(b=b_{\rm FS}(a)\) there exist new solutions to the linearized problem that "replace" the ones due to the translations invariance.
**Theorem 1.1**.: _Assume that (1.4) holds. If \(b>b_{\rm FS}(a)\), the space of solutions of (1.8) has dimension \(1\) and is spanned by \(-aU+x\cdot\nabla U\), and in this case we say \(U\) is non-degenerate. Otherwise, if \(b=b_{\rm FS}(a)\), the space of solutions of (1.8) has dimension \(3\) and is spanned by_
\[-aU+x\cdot\nabla U,\quad\text{and}\quad Z_{i}(x)=\frac{|x|^{-\frac{a(q-2)}{2}- 1}x_{i}}{(1+|x|^{-a(q-2)})^{\frac{q}{q-2}}},\ i=1,2. \tag{1.9}\]
**Remark 1.2**.: _In the proof of Theorem 1.1, we will use the change \(v(s)=u(s^{\tau})\) with \(\tau=\frac{a-b}{a(1+a-b)}\) (note that assumption (1.4) implies \(\tau>1\)) in the radial case (see [6] for the case \(N\geq 3\)) which transforms this problem into classical Laplacian linearized problem as in [2], in fact, it transforms the dimension \(N=2\) into \(K=\frac{2}{1+a-b}(>2)\). Note that when \(b=b_{\rm FS}(a)\), although \(Z_{i}\) given as in (1.9) solves (1.8) that "replace" the ones due to the translations invariance, \(Z_{i}\not\sim\frac{\partial U}{\partial x_{i}}\) for every \(i\in\{1,2\}\). It is worth mentioning that our method is different from Felli-Schneider [22]._
A direct application of Theorem 1.1 is studying the gradient stability of (CKN) inequality (1.5), which states as the following.
**Theorem 1.3**.: _Assume that \(-\infty<a<0\) and \(b_{\rm FS}(a)<b<a+1\). Then there exists a constant \(\mathcal{B}=\mathcal{B}(a,b)>0\) such that for every \(u\in\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\), it holds that_
\[\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla u|^{2}{\rm d}x-\mathcal{S}_{a,b}\left( \int_{\mathbb{R}^{2}}|x|^{-qb}|u|^{q}{\rm d}x\right)^{\frac{2}{q}}\geq \mathcal{B}{\rm dist}(u,\mathcal{M})^{2}, \tag{1.10}\]
_where \(\mathcal{M}=\{cU_{\lambda}:c\in\mathbb{R},\lambda>0\}\) is the set of extremal functions for (CKN) inequality (1.5), and \({\rm dist}(u,\mathcal{M}):=\inf\limits_{c\in\mathbb{R},\lambda>0}\|u-cU_{ \lambda}\|_{\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})}\)._
**Remark 1.4**.: _It is worth mentioning that when \(b=b_{\rm FS}\), (1.10) does not hold. Indeed, we take \(\{u_{n}=U+\varepsilon_{n}Z_{1}\}\) satisfying \(\varepsilon_{n}\to 0\) as \(n\to\infty\) where \(Z_{1}\) is given as in (1.9). Note that \({\rm dist}(u_{n},\mathcal{M})^{2}\leq\varepsilon_{n}^{2}\|Z_{1}\|^{2}_{ \mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})}\), and from the proof of Lemma 3.1 we know \({\rm dist}(u_{n},\mathcal{M})^{2}=\|U+\varepsilon_{n}Z_{1}-c_{n}U_{\lambda_{n} }\|^{2}_{\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})}\) for some \(c_{n}\in\mathbb{R}\) and \(\lambda_{n}>0\), then we must have \({\rm dist}(u_{n},\mathcal{M})^{2}=\varepsilon_{n}^{2}\|Z_{1}\|^{2}_{\mathcal{D }^{1,2}_{a}(\mathbb{R}^{2})}\) due to \(\langle U,Z_{1}\rangle_{\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})}=\langle U_{ \lambda_{n}},Z_{1}\rangle_{\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})}=0\). Thus it holds_
\[\frac{\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla u_{n}|^{2}{\rm d}x-\mathcal{S}_{a,b} \left(\int_{\mathbb{R}^{2}}|x|^{-qb}|u_{n}|^{q}{\rm d}x\right)^{\frac{2}{q}}}{ {\rm dist}(u_{n},\mathcal{M})^{2}}\to 0,\quad\text{as $n\to\infty$}.\]
_Here we use the fact_
\[\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla Z_{1}|^{2}\mathrm{d}x=(q-1)\int_{\mathbb{R} ^{2}}|x|^{-qb}U^{q-2}Z_{1}^{2}\mathrm{d}x.\]
As mentioned in the beginning of this section, it is natural to consider the minimization problem
\[\mathcal{B}:=\inf_{u\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{ M}}\mathcal{E}(u)>0, \tag{1.11}\]
where
\[\mathcal{E}(u):=\frac{\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla u|^{2}\mathrm{d}x -\mathcal{S}_{a,b}\left(\int_{\mathbb{R}^{2}}|x|^{-qb}|u|^{q}\mathrm{d}x \right)^{\frac{2}{q}}}{\mathrm{dist}(u,\mathcal{M})^{2}},\quad u\in\mathcal{D }_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}. \tag{1.12}\]
Following the arguments established by Konig in [25], we will show:
**Theorem 1.5**.: _Under the assumption of Theorem 1.3. Then \(\mathcal{B}\) is achieved, that is, there is \(u_{0}\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}\) such that \(\mathcal{E}(u_{0})=\mathcal{B}\)._
As an affiliated result, we first show \(\mathcal{B}<1-\frac{q-1}{\mu_{3}}\) as in Corollary 3.3, then combined with Lemma 3.1 it is easy to see that the minimizers of \(\mathcal{B}\) can not be close to \(\mathcal{M}\). This transforms finding minimizers into proving strong convergence of minimizing sequence, see Proposition 4.4. On the other hand, taking a test function \(U+U_{\lambda}\) as \(\lambda\to 0^{+}\), and by using the change of variable \(v(s)=u(s^{\tau})\) stated as in Remark 1.2, from [25, Proposition 3.1] we deduce
\[\mathcal{B}<2-2^{\frac{2}{q}},\]
see Lemma 4.3. This estimate is crucial for the proof of Theorem 1.5 because of the function \(\frac{(1+\eta^{q})^{\frac{2}{q}}-1}{\eta^{2}}\) is strictly increasing in \(\eta\in(0,\infty)\).
Comparing with [25], in present paper there is no translation invariance which leads to \(\mathrm{dist}(u,\mathcal{M})\) can not always be achieved, thus we limit the minimizing sequence \(\{u_{n}\}\) of \(\mathcal{B}\) to satisfy \(\mathrm{dist}(u_{n},\mathcal{M})\) is achieved. Thought we can not obtain the conclusion as [25] which states that all minimizing sequences must converge towards a nontrivial minimizer, we just prove the existence of minimizers for \(\mathcal{B}\).
**Remark 1.6**.: _As mentioned in previous, Wei and Wu [37] obtained the stability conclusion of (1.10) type under the conditions: \(N\geq 3\), \(a<0\) and \(b_{\mathrm{FS}}(a)\leq b<a+1\) or \(0\leq a<a_{c}\) and \(a\leq b<a+1\). However, it should be noticed that when \(a<0\) and \(b=b_{\mathrm{FS}}(a)\) such stability does not hold. Anyway, once the stability conclusion is established, by taking the same arguments, the best stability constant can also be achieved._
### Structure of the paper
The paper is organized as follows. Section 2 is devoted to characterizing all the solutions to (1.8) and proving Theorem 1.1, then we give the related spectral analysis with the help of Theorem 1.1. In Section 3 we study the stability of (CKN) inequality (1.5) by using spectral analysis combined with a compactness theory, and give the proof of Theorem 1.3. Finally, in Section 4 we first show \(\mathcal{B}<2-2^{\frac{2}{q}}\) then prove the existence of minimizers for the best stability constant \(\mathcal{B}\).
## 2. **Linearized problem**
First of all, let us rewrite the linear equation (1.8) as
\[-|x|^{2}\Delta v+2a(x\cdot\nabla v)=(q-1)C_{a,b}^{q-2}\frac{|x|^{-a(q-2)}}{(1+|x| ^{-a(q-2)})^{2}}v\quad\text{in}\quad\mathbb{R}^{2},\quad v\in\mathcal{D}_{a}^{ 1,2}(\mathbb{R}^{2}). \tag{2.1}\]
Then by using the standard spherical decomposition and making the change of variable \(r\mapsto r^{\frac{a-b}{a(1+a-b)}}\), we can characterize all solutions to the linearized problem (2.1).
### Proof of Theorem 1.1
Let us make the standard partial wave decomposition of (2.1), namely
\[v(x)=v(r,\theta)=\sum_{k=0}^{\infty}\sum_{m=1}^{l_{k}}\varphi_{k,m}(r)\Psi_{k, m}(\theta), \tag{2.2}\]
where \(r=|x|\), \(\theta=\frac{x}{|x|}\in\mathbb{S}^{1}\), and
\[\varphi_{k,m}(r)=\int_{\mathbb{S}^{1}}v(r,\theta)\Psi_{k,m}(\theta)\mathrm{d}\theta.\]
Here \(\Psi_{k,m}(\theta)\) denotes the \(k\)-th spherical harmonic, i.e., it satisfies
\[-\Delta_{\mathbb{S}^{1}}\Psi_{k,m}=\lambda_{k}\Psi_{k,m}, \tag{2.3}\]
where \(\Delta_{\mathbb{S}^{1}}\) is the Laplace-Beltrami operator on \(\mathbb{S}^{1}\) with the standard metric and \(\lambda_{k}\) is the \(k\)-th eigenvalue of \(-\Delta_{\mathbb{S}^{1}}\). It is well known that
\[\lambda_{k}=k^{2},\quad k=0,1,2,\ldots, \tag{2.4}\]
whose multiplicity is \(l_{k}\), where
\[l_{0}:=1;\quad l_{k}:=\frac{2k(k-1)!}{k!}=2,\text{ for }k\geq 1,\]
and that
\[\mathrm{Ker}(\Delta_{\mathbb{S}^{1}}+\lambda_{k})=\mathbb{Y}_{k}(\mathbb{R}^{ 2})|_{\mathbb{S}^{1}},\]
where \(\mathbb{Y}_{k}(\mathbb{R}^{2})\) is the space of all homogeneous harmonic polynomials of degree \(k\) in \(\mathbb{R}^{2}\). It is standard that \(\lambda_{0}=0\) and the corresponding eigenfunction of (2.3) is the constant function that is \(\Psi_{0,1}=c\in\mathbb{R}\setminus\{0\}\). The second eigenvalue \(\lambda_{1}=1\) and the corresponding eigenfunctions of (2.3) are \(\Psi_{1,m}=x_{m}/|x|\), \(m=1,2\). In fact, \(e^{\pm ik\theta}=\cos(k\theta)\pm i\sin(k\theta)\) are the eigenfunctions of the Laplace-Beltrami operator in \(\mathbb{S}^{1}\) with respect to the eigenvalue of \(k^{2}\). We refer to the proof of [3, Proposition 1] for details, see also [30].
The following results can be obtained by direct calculation,
\[\Delta(\varphi_{k,m}(r)\Psi_{k,m}(\theta))= \Psi_{k,m}\left(\varphi_{k,m}^{\prime\prime}+\frac{\varphi_{k,m}^ {\prime}}{r}\right)+\frac{\varphi_{k,m}}{r^{2}}\Delta_{\mathbb{S}^{1}}\Psi_{k,m}\] \[= \Psi_{k,m}\left(\varphi_{k,m}^{\prime\prime}+\frac{\varphi_{k,m} ^{\prime}}{r}-\frac{k^{2}}{r^{2}}\varphi_{k,m}\right), \tag{2.5}\]
and
\[x\cdot\nabla(\varphi_{k,m}(r)\Psi_{k,m}(\theta)) =\sum_{j=1}^{2}x_{j}\frac{\partial(\varphi_{k,m}(r)\Psi_{k,m}(\theta ))}{\partial x_{j}} \tag{2.6}\] \[=\varphi_{k,m}^{\prime}r\Psi_{k,m}+\varphi_{k,m}\frac{\partial \Psi_{k,m}}{\partial\theta_{l}}\sum_{j=1}^{2}\frac{\partial\theta_{l}}{ \partial x_{j}}x_{j}=\varphi_{k,m}^{\prime}r\Psi_{k,m},\]
due to
\[\sum_{j=1}^{2}\frac{\partial\theta_{l}}{\partial x_{j}}x_{j}=0\quad\text{for all}\quad l=1,2.\]
Then putting together (2.2), (2.5), and (2.6) into (2.1), the function \(v\) is a solution of (2.1) if and only if \(\varphi_{k,m}\in\mathcal{W}\) is a classical solution of the system
\[\left\{\begin{aligned} &\varphi_{k,m}^{\prime\prime}+\frac{(1-2a) \varphi_{k,m}^{\prime}}{r}-\frac{k^{2}\varphi_{k,m}}{r^{2}}+(q-1)C_{a,b}^{q-2 }\frac{r^{-a(q-2)-2}\varphi_{k,m}}{(1+r^{-a(q-2)})^{2}}=0\quad\text{in}\quad r \in(0,\infty),\\ &\varphi_{k,m}^{\prime}(0)=0\quad\text{if}\quad k=0,\quad\text{ and}\quad\varphi_{k,m}(0)=0\quad\text{if}\quad k\geq 1,\end{aligned}\right. \tag{2.7}\]
for all \(m=1,\ldots,l_{k}\), where \(\mathcal{W}:=\{w\in C^{1}([0,\infty))|\int_{0}^{\infty}|w^{\prime}|^{2}r^{1-2 a}\mathrm{d}r<\infty\}\). We make the change of variable \(r=t^{\tau}\) with
\[\tau=\frac{a-b}{a(1+a-b)}>0, \tag{2.8}\]
and let
\[\eta_{k,m}(t)=\varphi_{k,m}(r), \tag{2.9}\]
that transforms (2.7) into the following equations for all \(\eta_{k,m}\in\widetilde{\mathcal{W}}\), \(k=0,1,2,\ldots\) and \(m=1,\ldots,l_{k}\),
\[\eta_{k,m}^{\prime\prime}+\frac{K-1}{t}\eta_{k,m}^{\prime}-\frac{\tau^{2}k^{2 }}{t^{2}}\eta_{k,m}+\frac{K(K+2)}{(1+t^{2})^{2}}\eta_{k,m}=0. \tag{2.10}\]
where \(\widetilde{\mathcal{W}}:=\{w\in C^{1}([0,\infty))|\int_{0}^{\infty}|w^{\prime }|^{2}t^{K-1}\mathrm{d}t<\infty\}\), and
\[K:=\frac{2}{1+a-b}>2. \tag{2.11}\]
Here we have used the fact \(\tau^{2}(q-1)C_{a,b}^{q-2}=K(K+2)\).
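This identity can be checked directly; a minimal numerical sketch for two sample admissible pairs \((a,b)\) (the specific pairs are assumptions chosen only for the check):

```python
def sides_of_identity(a, b):
    """Return (tau^2 (q-1) C_{a,b}^{q-2}, K (K+2)) for an admissible pair (a, b)."""
    q   = 2.0 / (b - a)
    tau = (a - b) / (a * (1.0 + a - b))
    K   = 2.0 / (1.0 + a - b)
    C   = (tau ** -2 * K * (K - 2.0)) ** ((K - 2.0) / 4.0)
    return tau ** 2 * (q - 1.0) * C ** (q - 2.0), K * (K + 2.0)

print(sides_of_identity(-1.0, -0.2))    # (120.0, 120.0)
print(sides_of_identity(-2.0, -1.05))   # both sides equal 1680 up to rounding
```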
Now, let us consider the linear operator
\[\mathcal{A}_{k}(\eta):=\left(t^{K-1}\eta^{\prime}\right)^{\prime}+(\tilde{2}^ {*}-1)V^{\tilde{2}^{*}-2}t^{K-1}\eta-\tau^{2}k^{2}t^{K-3}\eta,\quad\eta\in \widetilde{\mathcal{W}}, \tag{2.12}\]
where
\[\tilde{2}^{*}=\frac{2K}{K-2},\quad\text{and}\quad V(t)=[K(K-2)]^{\frac{K-2}{4 }}(1+t^{2})^{-\frac{K-2}{2}}.\]
Note that solving the equation (2.10) is equivalent to solve \(\mathcal{A}_{k}(\eta)=0\) for all \(k\geq 0\).
\(\bullet\)_The case \(k=0\)_.
We know that the function
\[\eta_{0}(t)=\frac{1-t^{2}}{(1+t^{2})^{\frac{K}{2}}}\sim\frac{K-2}{2}V(t)+tV^{ \prime}(t)\]
solves the equation (2.10). We claim that all the solutions are given by \(\eta=c\eta_{0}\), \(c\in\mathbb{R}\). Indeed, for \(k=0\) a straightforward computation shows that \(\eta_{0}\in\widetilde{\mathcal{W}}\) and \(\mathcal{A}_{0}(\eta_{0})=0\). We look for a second linearly independent solution of the form
\[w(t)=c(t)\eta_{0}(t).\]
Then we get
\[c^{\prime\prime}(t)\eta_{0}(t)+c^{\prime}(t)\left(2\eta_{0}^{\prime}(t)+\frac{ K-1}{t}\eta_{0}(t)\right)=0,\]
and hence
\[\frac{c^{\prime\prime}(t)}{c^{\prime}(t)}=-2\frac{\eta_{0}^{\prime}(t)}{\eta_{0}(t)}-\frac{K-1}{t}.\]
A direct computation shows that
\[c^{\prime}(t)=\frac{B}{(\eta_{0}(t))^{2}t^{K-1}},\quad\text{for some}\quad B \in\mathbb{R}\setminus\{0\}.\]
Therefore,
\[c(t)\sim Bt^{K-2}\quad\text{and}\quad w(t)=c(t)\eta_{0}(t)\sim B\quad\text{as} \quad t\to+\infty.\]
However, \(w\notin\widetilde{\mathcal{W}}\) due to [29, Lemma 4.1], that is,
\[\int_{0}^{\infty}|w^{\prime}|^{2}t^{K-1}\mathrm{d}t\geq C\left(\int_{0}^{ \infty}|w|^{\widetilde{2}^{*}}t^{K-1}\mathrm{d}t\right)^{\frac{2}{2^{*}}}, \quad\text{for some}\quad C>0. \tag{2.13}\]
\(\bullet\)_The case \(k\geq 1\) and \(b>b_{\mathrm{FS}}(a)\)_.
In this case, we claim that all the solutions in \(\widetilde{\mathcal{W}}\) of \(\mathcal{A}_{k}(\eta)=0\) are identically zero. Assume there exists a function \(\eta_{k}\in\widetilde{\mathcal{W}}\) such that \(\mathcal{A}_{k}(\eta_{k})=0\), that is,
\[\left(t^{K-1}\eta_{k}^{\prime}\right)^{\prime}+(\tilde{2}^{*}-1)V^{\tilde{2}^ {*}-2}t^{K-1}\eta_{k}-\tau^{2}k^{2}t^{K-3}\eta_{k}=0,\quad\text{for all}\quad t>0. \tag{2.14}\]
We claim that \(\eta_{k}\equiv 0\) if \(k\geq 1\). We argue by contradiction. Without loss of generality, we suppose that there exists \(t_{k}>0\) (possibly \(+\infty\)) such that \(\eta_{k}(t)>0\) for any \(t\in(0,t_{k})\) and \(\eta_{k}(t_{k})=0\). In particular, \(\eta_{k}^{\prime}(t_{k})\leq 0\). Note that \(V^{\prime}\in\widetilde{\mathcal{W}}\setminus\{0\}\) satisfies
\[\left(t^{K-1}V^{\prime\prime}\right)^{\prime}+(\tilde{2}^{*}-1)V^{\tilde{2}^{ *}-2}t^{K-1}V^{\prime}-(K-1)t^{K-3}V^{\prime}=0,\quad\text{for all}\quad t>0. \tag{2.15}\]
Multiplying (2.14) by \(V^{\prime}\), (2.15) by \(\eta_{k}\), and integrating between \(0\) and \(t_{k}\) then subtracting the two expressions, we obtain
\[\left[\tau^{2}k^{2}-(K-1)\right]\int_{0}^{t_{k}}t^{K-3}\eta_{k}V^ {\prime}\mathrm{d}t= \int_{0}^{t_{k}}\left(t^{K-1}\eta_{k}^{\prime}\right)^{\prime}V^ {\prime}\mathrm{d}t-\int_{0}^{t_{k}}\left(t^{K-1}V^{\prime\prime}\right)^{ \prime}\eta_{k}\mathrm{d}t\] \[= t_{k}^{K-1}\eta_{k}^{\prime}(t_{k})V^{\prime}(t_{k}). \tag{2.16}\]
Here, we integrate by part due to \(\eta_{k}(t_{k})=0\). Note that under the assumption \(b_{\mathrm{FS}}(a)<b<a+1\) with \(a<0\), it is easy to verify that \(\tau^{2}>K-1\), thus
\[\tau^{2}k^{2}>K-1,\quad\text{for all}\quad k\geq 1.\]
Then a contradiction arises in (2.16) since \(\eta_{k}^{\prime}(t_{k})\leq 0\), \(V^{\prime}(t)<0\) for any \(t>0\) and \(\eta_{k}(t)>0\) in \(t\in(0,t_{k})\). Thus all the solutions in \(\widetilde{\mathcal{W}}\) of \(\mathcal{A}_{k}(\eta_{k})=0\) are \(\eta_{k}\equiv 0\) for \(k\geq 1\).
\(\bullet\)_The case \(k\geq 1\) and \(b=b_{\mathrm{FS}}(a)\)_.
The case \(b=b_{\rm FS}(a)\) implies that \(\tau^{2}=K-1\). Therefore, for \(k=1\), (2.10) reduces to
\[\eta_{1,m}^{\prime\prime}+\frac{K-1}{t}\eta_{1,m}^{\prime}-\frac{K-1}{t^{2}} \eta_{1,m}+\frac{K(K+2)}{(1+t^{2})^{2}}\eta_{1,m}=0,\quad\eta_{1,m}\in\widetilde {\mathcal{W}}. \tag{2.17}\]
It is known that
\[V^{\prime}\sim\frac{t}{(1+t^{2})^{\frac{K}{2}}}\]
is a solution of (2.17). We claim that all the solutions of (2.17) are given by \(cV^{\prime}\), \(c\in\mathbb{R}\). As above, we look for a second linearly independent solution of the form
\[w(t)=c(t)V^{\prime}(t).\]
Since \(V^{\prime}\) is a solution we have
\[\frac{K(K+2)}{(1+t^{2})^{2}}=-\frac{V^{\prime\prime\prime}+\frac{K-1}{t}V^{ \prime\prime}-\frac{K-1}{t^{2}}V^{\prime}}{V^{\prime}},\]
then we get
\[c^{\prime\prime}(t)V^{\prime}(t)+c^{\prime}(t)\left(2V^{\prime\prime}(t)+ \frac{K-1}{t}V^{\prime}(t)\right)=0,\]
and a direct computation shows that
\[c^{\prime}(t)=\frac{B}{(V^{\prime}(t))^{2}t^{K-1}}=B^{\prime}\frac{(1+t^{2})^ {K}}{t^{K+1}},\quad\text{for some}\quad B^{\prime}\in\mathbb{R}\setminus\{0\}.\]
Therefore,
\[c(t)\sim t^{K}\quad\text{and}\quad w(t)=c(t)V^{\prime}(t)\sim t\quad\text{as} \quad t\to+\infty.\]
However, \(w\notin\widetilde{\mathcal{W}}\) because of (2.13). Then for \(k\geq 2\), same as the previous case, we conclude that all the solutions in \(\widetilde{\mathcal{W}}\) of \(\mathcal{A}_{k}(\eta)=0\) are identically zero because of \(\tau^{2}k^{2}>K-1\) for all \(k\geq 2\).
To sum up, let us turn back to (2.7), we obtain the solutions that
\[\varphi_{0}(r)=\frac{1-r^{\frac{2}{\tau}}}{(1+r^{\frac{2}{\tau}})^{\frac{1}{1 +a-b}}},\quad\text{if}\quad b>b_{\rm FS}(a), \tag{2.18}\]
otherwise
\[\varphi_{0}(r)=\frac{1-r^{\frac{2}{\tau}}}{(1+r^{\frac{2}{\tau}})^{\frac{1}{1 +a-b}}},\quad\varphi_{1}(r)=\frac{r^{\frac{1}{\tau}}}{(1+r^{\frac{2}{\tau}})^{ \frac{1}{1+a-b}}},\quad\text{if}\quad b=b_{\rm FS}(a), \tag{2.19}\]
where \(\tau=\frac{a-b}{a(1+a-b)}>0\) is given by (2.8) and satisfies \(-a(q-2)=\frac{2}{\tau}\). Therefore, if \(b>b_{\rm FS}(a)\) then the space of solutions of (2.1) has dimension \(1\) and is spanned by
\[Z_{0}(x)=\frac{1-|x|^{-a(q-2)}}{(1+|x|^{-a(q-2)})^{\frac{q}{q-2}}}.\]
Note that \(Z_{0}\sim\frac{\partial U_{\lambda}}{\partial\lambda}|_{\lambda=1}=-aU+x\cdot \nabla U\), and in this case we say \(U\) is non-degenerate. Otherwise, if \(b=b_{\mathrm{FS}}(a)\) then the space of solutions of (2.1) has dimension \(3\) and is spanned by
\[Z_{0}(x),\quad Z_{1}(x)=\frac{|x|^{-\frac{a(q-2)}{2}-1}x_{1}}{(1+|x|^{-a(q-2)}) ^{\frac{q}{q-2}}},\quad Z_{2}(x)=\frac{|x|^{-\frac{a(q-2)}{2}-1}x_{2}}{(1+|x|^{ -a(q-2)})^{\frac{q}{q-2}}}.\]
That is, when \(b=b_{\mathrm{FS}}(a)\), there exist new solutions to the linearized problem (2.1) that "replace" the ones due to translation invariance; however, it is worth noticing that \(Z_{i}\not\sim\frac{\partial U}{\partial x_{i}}\) for every \(i\in\{1,2\}\). The proof of Theorem 1.1 is now completed.
### Spectral analysis
Furthermore, based on the result of Theorem 1.1, when \(-\infty<a<0\) and \(b_{\mathrm{FS}}(a)<b<a+1\), let us consider the following eigenvalue problem
\[-\mathrm{div}(|x|^{-2a}\nabla v)=\mu|x|^{-qb}U^{q-2}v\quad\text{in}\quad\mathbb{R}^{2},\quad v\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2}). \tag{2.20}\]
It is easy to verify that \(\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\) embeds compactly into \(L^{2}(\mathbb{R}^{2},|x|^{-qb}U^{q-2}\mathrm{d}x)\) (see [16]). Then, following the work of Servadei and Valdinoci [34], we can define the eigenvalues of problem (2.20) as follows.
**Definition 2.1**.: _The first eigenvalue of problem (2.20) can be defined as_
\[\mu_{1}:=\inf_{v\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\{0\}}\frac{ \int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla v|^{2}\mathrm{d}x}{\int_{\mathbb{R}^{2}} |x|^{-qb}U^{q-2}v^{2}\mathrm{d}x}. \tag{2.21}\]
_Moreover, for any \(k\in\mathbb{N}^{+}\) the eigenvalues can be characterized as follows:_
\[\mu_{k+1}:=\inf_{v\in\mathbb{P}_{k+1}\setminus\{0\}}\frac{\int_{\mathbb{R}^{2} }|x|^{-2a}|\nabla v|^{2}\mathrm{d}x}{\int_{\mathbb{R}^{2}}|x|^{-qb}U^{q-2}v^{2 }\mathrm{d}x}, \tag{2.22}\]
_where_
\[\mathbb{P}_{k+1}:=\left\{v\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2}):\int_{ \mathbb{R}^{2}}|x|^{-2a}\nabla v\cdot\nabla e_{i,j}\mathrm{d}x=0,\quad\text{ for all}\quad i=1,\dots,k,\ j=1,\dots,h_{i}\right\},\]
_and \(e_{i,j}\) are the corresponding eigenfunctions to \(\mu_{i}\) with \(h_{i}\) multiplicity._
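As an illustration of Definition 2.1, the following sketch evaluates the quotient in (2.21) at \(v=U\) for the sample pair \((a,b)=(-1,-1/5)\) used above (an assumption made only for the check); the value is \(1\), anticipating Theorem 2.2 below.

```python
import numpy as np
from scipy.integrate import quad

a, b, q, C = -1.0, -0.2, 2.5, 25.0            # sample pair; C = C_{a,b}

U  = lambda r: C * (1.0 + np.sqrt(r)) ** (-4)                       # radial U(r)
dU = lambda r: -2.0 * C * (1.0 + np.sqrt(r)) ** (-5) / np.sqrt(r)   # U'(r)

# In R^2 the angular integration contributes a common factor 2*pi, which cancels in the quotient.
num, _ = quad(lambda r: r ** (1.0 - 2.0 * a) * dU(r) ** 2, 0.0, np.inf)
den, _ = quad(lambda r: r ** (1.0 - q * b) * U(r) ** q, 0.0, np.inf)
print(num / den)   # ~1.0, i.e. the quotient (2.21) equals 1 at v = U
```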
Then we have:
**Theorem 2.2**.: _Assume that \(-\infty<a<0\) and \(b_{\mathrm{FS}}(a)<b<a+1\). Let \(\mu_{i}\), \(i=1,2,\dots,\) denote the eigenvalues of (2.20) in increasing order defined as in Definition 2.1. Then \(\mu_{1}=1\) is simple and the corresponding eigenfunction is spanned by \(U\), \(\mu_{2}=q-1\) and the corresponding eigenfunction is spanned by \(-aU+x\cdot\nabla U\). Furthermore, \(\mu_{3}>\mu_{2}=q-1\)._
Proof.: Choosing \(v=U\) in (2.21), then since \(U\) is the solution of equation (1.7) we have
\[\mu_{1}\leq\frac{\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla U|^{2}\mathrm{d}x}{\int _{\mathbb{R}^{2}}|x|^{-qb}U^{q}\mathrm{d}x}=1. \tag{2.23}\]
Then, by Hölder's inequality and the (CKN) inequality (1.5), we obtain
\[\int_{\mathbb{R}^{2}}|x|^{-qb}U^{q-2}v^{2}\mathrm{d}x\leq \left(\int_{\mathbb{R}^{2}}|x|^{-qb}|U|^{q}\mathrm{d}x\right)^{ \frac{q-2}{q}}\left(\int_{\mathbb{R}^{2}}|x|^{-qb}|v|^{q}\mathrm{d}x\right)^{ \frac{2}{q}}\]
\[= \mathcal{S}_{a,b}\left(\int_{\mathbb{R}^{2}}|x|^{-qb}|v|^{q}\mathrm{d }x\right)^{\frac{2}{q}}\leq\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla v|^{2}\mathrm{ d}x, \tag{2.24}\]
which implies \(\mu_{1}\geq 1\), thus \(\mu_{1}=1\). Furthermore, note that the equality in (2.24) holds if and only if \(v=U\); therefore \(\mu_{1}=1\) is simple, with corresponding eigenfunction \(U\) (up to scalar multiplication).
Note also that \(U\) minimizes the functional
\[v\mapsto\Phi(v)=\frac{1}{2}\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla v|^{2} \mathrm{d}x-\frac{1}{q}\int_{\mathbb{R}^{2}}|x|^{-qb}|v|^{q}\mathrm{d}x, \tag{2.25}\]
on the Nehari manifold
\[\mathcal{N}:=\left\{v\in\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\setminus\{0\}: \langle\Phi^{\prime}(v),v\rangle=0\right\}.\]
Indeed, for \(v\in\mathcal{N}\) we have by (1.5) that
\[\Phi(v)= \left(\frac{1}{2}-\frac{1}{q}\right)\int_{\mathbb{R}^{2}}|x|^{- qb}|v|^{q}\mathrm{d}x=\left(\frac{1}{2}-\frac{1}{q}\right)\left(\frac{\int_{ \mathbb{R}^{2}}|x|^{-2a}|\nabla v|^{2}\mathrm{d}x}{\left(\int_{\mathbb{R}^{2}} |x|^{-qb}|v|^{q}\mathrm{d}x\right)^{\frac{2}{q}}}\right)^{\frac{q}{q-2}}\] \[\geq \left(\frac{1}{2}-\frac{1}{q}\right)\mathcal{S}_{a,b}^{\frac{q}{ q-2}}=\left(\frac{1}{2}-\frac{1}{q}\right)\left(\frac{\int_{\mathbb{R}^{2}}|x|^{-2a }|\nabla U|^{2}\mathrm{d}x}{\left(\int_{\mathbb{R}^{2}}|x|^{-qb}|U|^{q} \mathrm{d}x\right)^{\frac{2}{q}}}\right)^{\frac{q}{q-2}}=\Phi(U).\]
As a consequence, the second derivative \(\Phi^{\prime\prime}(U)\) given by
\[(\phi,\varphi)\mapsto \int_{\mathbb{R}^{2}}|x|^{-2a}\nabla\phi\cdot\nabla\varphi\mathrm{ d}x-(q-1)\int_{\mathbb{R}^{2}}|x|^{-qb}|U|^{q-2}\phi\varphi\mathrm{d}x\]
is a nonnegative quadratic form when restricted to the tangent space \(T_{U}\mathcal{N}\); hence we have
\[\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla\varphi|^{2}\mathrm{d}x\geq(q-1)\int_{ \mathbb{R}^{2}}|x|^{-qb}|U|^{q-2}|\varphi|^{2}\mathrm{d}x,\]
for all \(\varphi\in T_{U}\mathcal{N}\). Since \(T_{U}\mathcal{N}\) has codimension one, we infer that \(\mu_{2}\geq q-1\). Moreover, since \(-aU+x\cdot\nabla U\) is a solution of (2.20) with \(\mu=q-1\), which gives \(\mu_{2}\leq q-1\), we conclude that \(\mu_{2}=q-1\). Finally, by Theorem 1.1, when \(b>b_{\mathrm{FS}}(a)\) the space of solutions of the linearized problem (2.1) is one-dimensional, so the eigenvalue \(q-1\) is simple and \(\mu_{3}>q-1\). This completes the proof.
Indeed, as in Section 2, given an eigenfunction of the form \(v(x)=\varphi(r)\Psi(\theta)\) where \(r=|x|\) and \(\theta=\frac{x}{|x|}\in\mathbb{S}^{N-1}\), the eigenvalue problem corresponds to the following system
\[0=\Delta_{\mathbb{S}^{N-1}}\Psi+\lambda\Psi\quad\text{on }\mathbb{S}^{N-1}, \tag{2.26}\] \[0=\varphi^{\prime\prime}+\frac{(1-2a)\varphi^{\prime}}{r}-\frac{k^{2}\varphi}{r^{2}}+\mu C_{a,b}^{q-2}\frac{r^{-a(q-2)-2}\varphi}{(1+r^{-a(q-2)})^{2}}\quad\text{in }r\in(0,\infty). \tag{2.27}\]
Making the change of variables \(r=t^{\tau}\) with \(\tau:=(a-b)/[a(1+a-b)]\), and letting
\[\eta(t)=\varphi(r),\]
then multiplying by the integrating factor \(t^{K-1}\), (2.27) is equivalent to
\[0=\left(t^{K-1}\eta^{\prime}\right)^{\prime}-\tau^{2}\lambda t^{K-3}\eta+\mu V^{\tilde{2}^{*}-2}t^{K-1}\eta\quad\text{on }t\in[0,\infty), \tag{2.28}\]
where \(V(t)=[K(K-2)]^{\frac{K-2}{4}}(1+t^{2})^{-\frac{K-2}{2}}\sim U(x)\) and \(K=2/(1+a-b)\), which satisfies \(q=\frac{2K}{K-2}=\tilde{2}^{*}\). For each \(\lambda\), the ordinary differential equation (2.28) takes the form of the Sturm-Liouville eigenvalue problem
\[L\eta+\mu\eta=0\quad\text{on }[0,\infty), \tag{2.29}\]
where
\[L\eta=\frac{1}{\mathfrak{W}}[(\mathfrak{P}\eta^{\prime})^{\prime}-\mathfrak{Q}\eta]\]
with
\[\mathfrak{P}(t)=t^{K-1},\quad\mathfrak{Q}(t)=\tau^{2}\lambda t^{K-3},\quad\mathfrak{W}(t)=V^{\tilde{2}^{*}-2}t^{K-1},\]
and the eigenfunctions belong to
\[\mathcal{H}:=\{g:[0,\infty)\mapsto\mathbb{R}:g\in L^{2}([0,\infty);\mathfrak{ W}),g^{\prime}\in L^{2}([0,\infty);\mathfrak{P})\}.\]
When \(K\) is an integer, [23, Lemma B.3] states that,
1. if \(\eta_{1}\) and \(\eta_{2}\) are two eigenfunctions corresponding to the same eigenvalue \(\alpha\), then \(\eta_{1}=c\eta_{2}\) for some \(c\in\mathbb{R}\);
2. the \(i\)-th eigenfunction of \(L\) has \(i-1\) interior zeros.
On the other hand, since (2.29) is an ODE, the conclusion also holds even if \(K\) is not an integer. Note that the functions \(U\) and \(-aU+x\cdot\nabla U\) correspond to \(\mu=1\) and \(\mu=q-1\), respectively; furthermore, since \(U\) is positive and \(-aU+x\cdot\nabla U\) has only one zero, the classical Sturm-Liouville theory recalled above ensures that \(1\) and \(q-1\) are the first two eigenvalues. Then, from inequality (1.5) and Theorem 1.1, we can also deduce Theorem 2.2.
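For later reference, we also record the elementary algebraic identities relating \(\tau\), \(K\) and \(q\); they follow directly from the definitions \(\tau=\frac{a-b}{a(1+a-b)}\) and \(K=\frac{2}{1+a-b}\) used above (no further assumptions are needed), and the last one, \(\frac{K-2}{2\tau}=-a\), is invoked again in the proof of Lemma 4.2:

\[K-2=\frac{2(b-a)}{1+a-b},\qquad q=\frac{2K}{K-2}=\frac{2}{b-a},\qquad\frac{K-2}{2\tau}=\frac{2(b-a)}{1+a-b}\cdot\frac{a(1+a-b)}{2(a-b)}=-a,\]

and in particular \(-a(q-2)=-a\cdot\frac{2(1+a-b)}{b-a}=\frac{2}{\tau}\), consistent with the relation noted after (2.19).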
Since the set of extremals \(\mathcal{M}=\{cU_{\lambda}:c\in\mathbb{R},\lambda>0\}\) is a two-dimensional manifold embedded in \(\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\) via the map
\[(c,\lambda)\in\mathbb{R}\times\mathbb{R}^{+}\to cU_{\lambda}\in\mathcal{D}_{a }^{1,2}(\mathbb{R}^{2}),\]
we deduce that the tangent space at \((1,1)\) is given by
\[T_{U}\mathcal{M}=\operatorname{Span}\left\{U,\ \frac{\partial U_{\lambda}}{ \partial\lambda}\Big{|}_{\lambda=1}\right\}. \tag{2.30}\]
Note that \(\frac{\partial U_{\lambda}}{\partial\lambda}|_{\lambda=1}=-aU+x\cdot\nabla U\). Then from the definition of eigenvalues, combining with Theorem 2.2 we deduce the following important spectral gap conclusion.
**Proposition 2.3**.: _Assume that \(-\infty<a<0\) and \(b_{\rm FS}(a)<b<a+1\). It holds that_
\[\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla v|^{2}{\rm d}x\geq\mu_{3}\int_{\mathbb{ R}^{2}}|x|^{-qb}U^{q-2}v^{2}{\rm d}x,\quad\text{for all }v\in(T_{U}\mathcal{M})^{\perp},\]
_where \(\mu_{3}>q-1\) is the third eigenvalue of (2.20) given in Theorem 2.2. Moreover, equality holds if and only if \(v\) is an eigenfunction corresponding to \(\mu_{3}\)._
Note that when \(K\) is an integer, from the work of Rey [32, Appendix D], (2.28) indicates
\[\mu_{k}=\frac{4[\tau^{2}(k-1)^{2}+(k-1)]+K(K-2)}{K(K-2)},\quad k\geq 3.\]
In particular,
\[\mu_{3}=\frac{8(\tau^{2}+1)+K(K-2)}{K(K-2)}>q-1,\quad\tau=\frac{a-b}{a(1+a-b)}, \tag{2.31}\]
moreover, it is achieved by
\[\varrho(x)=U(x)\Psi_{2}(x),\]
with \(\Psi_{2}\) a spherical harmonic of degree \(k=2\) for \(-\Delta_{\mathbb{S}^{2}}\). On the other hand, since (2.28) is an ODE, even if \(K\) is not an integer the conclusion also holds.
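As a quick numerical illustration of the spectral gap in (2.31), the following short Python snippet evaluates \(\tau\), \(K\), \(q\) and \(\mu_{3}\) for a sample pair \((a,b)\); the chosen values are an assumption, intended (but not verified here) to lie in the admissible range \(-\infty<a<0\), \(b_{\mathrm{FS}}(a)<b<a+1\), and the formula for \(\mu_{3}\) is taken directly from (2.31).

```python
# Numerical sanity check of the identities and of the spectral gap mu_3 > q - 1.
# The sample pair (a, b) below is an assumption: it is only intended to lie in the
# admissible range -infinity < a < 0, b_FS(a) < b < a + 1 considered in the text.
a, b = -1.0, -0.2

tau = (a - b) / (a * (1 + a - b))   # tau as in (2.8)
K = 2 / (1 + a - b)                 # effective dimension K = 2/(1+a-b)
q = 2 * K / (K - 2)                 # q = 2K/(K-2)

# identity -a(q-2) = 2/tau noted after (2.19)
assert abs(-a * (q - 2) - 2 / tau) < 1e-12

# third eigenvalue from (2.31)
mu3 = (8 * (tau**2 + 1) + K * (K - 2)) / (K * (K - 2))

print(f"tau = {tau:.4f}, K = {K:.4f}, q = {q:.4f}")
print(f"mu_3 = {mu3:.4f}, q - 1 = {q - 1:.4f}, gap = {mu3 - (q - 1):.4f}")
assert mu3 > q - 1
```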
## 3. **Stability of (CKN) inequality**
The main ingredient in the stability of the (CKN) inequality (1.5) is contained in the following lemma, in which the behavior near the set of extremal functions \(\mathcal{M}\) is studied. In order to shorten formulas, for \(u\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\) we denote
\[\|u\|:=\left(\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla u|^{2}\mathrm{d}x\right)^{ \frac{1}{2}},\quad\|u\|_{*}:=\left(\int_{\mathbb{R}^{2}}|x|^{-qb}|u|^{q} \mathrm{d}x\right)^{\frac{1}{q}}, \tag{3.1}\]
and
\[d_{n}:=\mathrm{dist}(u_{n},\mathcal{M})=\inf_{c\in\mathbb{R},\lambda>0}\|u_{n }-cU_{\lambda}\|.\]
**Lemma 3.1**.: _Assume that \(-\infty<a<0\) and \(b_{\mathrm{FS}}(a)<b<a+1\). Then for any sequence \(\{u_{n}\}\subset\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}\) satisfying \(\inf_{n\in\mathbb{N}}\|u_{n}\|>0\) and \(d_{n}\to 0\), it holds that_
\[\liminf_{n\to\infty}\frac{\|u_{n}\|^{2}-\mathcal{S}_{a,b}\|u_{n}\|_{*}^{2}}{d _{n}^{2}}\geq 1-\frac{q-1}{\mu_{3}}, \tag{3.2}\]
_where \(\mu_{3}>q-1\) is given as in Theorem 2.2._
Proof.: Following the same arguments as in [37, Proposition 4.1], we know that for each \(u_{n}\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}\) satisfying \(\inf_{n\in\mathbb{N}}\|u_{n}\|>0\) and \(d_{n}\to 0\), when \(n\) is sufficiently large there exist \(c_{n}\in\mathbb{R}\setminus\{0\}\) and \(\lambda_{n}>0\) such that \(d_{n}=\|u_{n}-c_{n}U_{\lambda_{n}}\|\). In fact,
\[\begin{split}\|u_{n}-cU_{\lambda}\|^{2}=&\|u_{n}\|^ {2}+c^{2}\|U_{\lambda}\|^{2}-2c\langle u_{n},U_{\lambda}\rangle_{\mathcal{D}_{ a}^{1,2}(\mathbb{R}^{2})}\\ \geq&\|u_{n}\|^{2}+c^{2}\|U\|^{2}-2|c|\|u_{n}\|\|U\|. \end{split} \tag{3.3}\]
Thus the minimizing sequence for \(d_{n}^{2}\), say \(\{c_{n,m},\lambda_{n,m}\}\), must satisfy \(|c_{n,m}|\leq C\) for some \(C\geq 1\) independent of \(m\), which means that \(\{c_{n,m}\}\) is bounded; furthermore, \(\{c_{n,m}\}\) stays away from zero because \(\inf_{n\in\mathbb{N}}\|u_{n}\|>0\). On the other hand, by using Vitali's convergence theorem we deduce
\[\left|\int_{|\lambda x|\geq\rho}|x|^{-2a}\nabla u_{n}\nabla U_{\lambda} \mathrm{d}x\right|\leq \|U\|\left(\int_{|x|\geq\frac{\rho}{\lambda}}|x|^{-2a}|\nabla u _{n}|^{2}\mathrm{d}x\right)^{1/2}=o_{\lambda}(1)\]
as \(\lambda\to 0^{+}\) for any fixed \(\rho>0\). Moreover, by the explicit form of \(U\) we have
\[\left|\int_{|\lambda x|\leq\rho}|x|^{-2a}\nabla u_{n}\nabla U_{\lambda} \mathrm{d}x\right|\leq \int_{|y|\leq\rho}|y|^{-2a}|\nabla(u_{n})_{\frac{1}{\lambda}}(y)|| \nabla U(y)|\mathrm{d}y\]
\[\leq \|u_{n}\|\left(\int_{|y|\leq\rho}|y|^{-2a}|\nabla U|^{2}\mathrm{d}y \right)^{1/2}=O(\rho^{\frac{-2a}{b-a}})=o_{\rho}(1)\]
as \(\rho\to 0^{+}\) which is uniform for \(\lambda>0\), where \((u_{n})_{\frac{1}{\lambda}}(y)=\lambda^{a}u_{n}(\lambda^{-1}y)\). Thus by taking \(\lambda\to 0^{+}\) and then \(\rho\to 0^{+}\), we obtain
\[\left|\int_{\mathbb{R}^{2}}|x|^{-2a}\nabla u_{n}\nabla U_{\lambda}\mathrm{d}x\right|\to 0\quad\text{as}\quad\lambda\to 0^{+}. \tag{3.4}\]
Moreover,
\[\left|\int_{|\lambda x|\leq R}|x|^{-2a}\nabla u_{n}\nabla U_{\lambda}\mathrm{d }x\right|\leq \|U\|\left(\int_{|x|\leq\frac{R}{\lambda}}|x|^{-2a}|\nabla u_{n}|^ {2}\mathrm{d}x\right)^{1/2}=o_{\lambda}(1)\]
as \(\lambda\to+\infty\) for any fixed \(R>0\), and
\[\left|\int_{|\lambda x|\geq R}|x|^{-2a}\nabla u_{n}\nabla U_{ \lambda}\mathrm{d}x\right|\leq \int_{|y|\geq R}|y|^{-2a}|\nabla(u_{n})_{\frac{1}{\lambda}}(y) ||\nabla U(y)|\mathrm{d}y\] \[\leq \|u_{n}\|\left(\int_{|y|\geq R}|y|^{-2a}|\nabla U|^{2}\mathrm{d} y\right)^{1/2}=O(R^{\frac{2a}{b-a}})=o_{R}(1)\]
as \(R\to+\infty\) which is uniform for \(\lambda>0\). Thus by taking first \(\lambda\to+\infty\) and then \(R\to+\infty\), we also obtain
\[\left|\int_{\mathbb{R}^{2}}|x|^{-2a}\nabla u_{n}\nabla U_{\lambda}\mathrm{d}x\right|\to 0\quad\text{as}\quad\lambda\to+\infty. \tag{3.5}\]
Combining (3.4) and (3.5) with (3.3), and using \(d_{n}\to 0\) and \(\inf_{n}\|u_{n}\|>0\), it follows that the minimizing sequence \(\{c_{n,m},\lambda_{n,m}\}\) must satisfy \(1/C\leq|\lambda_{n,m}|\leq C\) for some \(C\geq 1\) independent of \(m\), which means that \(\{\lambda_{n,m}\}\) is also bounded. Thus for each \(u_{n}\in\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\setminus\mathcal{M}\), when \(n\) is sufficiently large, \(d_{n}^{2}\) is attained by some \(c_{n}\in\mathbb{R}\) and \(\lambda_{n}>0\). Note that \(\{c_{n}\}\) is bounded away from zero for \(n\) sufficiently large.
Since \(\mathcal{M}\) is a two-dimensional manifold embedded in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\) via the map
\[(c,\lambda)\in\mathbb{R}\times\mathbb{R}^{+}\to cU_{\lambda}\in\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2}),\]
as in (2.30), under a suitable transformation we deduce that the tangent space at \((c_{n},\lambda_{n})\) is given by
\[T_{c_{n}U_{\lambda_{n}}}\mathcal{M}=\mathrm{Span}\left\{U_{\lambda_{n}}, \frac{\partial U_{\lambda}}{\partial\lambda}\Big{|}_{\lambda=\lambda_{n}} \right\},\]
and we must have that \((u_{n}-c_{n}U_{\lambda_{n}})\) is perpendicular to \(T_{c_{n}U_{\lambda_{n}}}\mathcal{M}\), in particular,
\[\int_{\mathbb{R}^{2}}|x|^{-2a}\nabla U_{\lambda_{n}}\cdot\nabla(u_{n}-c_{n}U_ {\lambda_{n}})\mathrm{d}x=\int_{\mathbb{R}^{2}}|x|^{-qb}U_{\lambda_{n}}^{q-1}( u_{n}-c_{n}U_{\lambda_{n}})\mathrm{d}x=0.\]
Furthermore, from Proposition 2.3 we have
\[\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla(u_{n}-c_{n}U_{\lambda_{n}})|^{2}\mathrm{ d}x\geq\mu_{3}\int_{\mathbb{R}^{2}}|x|^{-qb}U_{\lambda_{n}}^{q-2}(u_{n}-c_{n}U_{ \lambda_{n}})^{2}\mathrm{d}x. \tag{3.6}\]
Let \(u_{n}=c_{n}U_{\lambda_{n}}+d_{n}w_{n}\), then \(w_{n}\) is perpendicular to \(T_{c_{n}U_{\lambda_{n}}}\mathcal{M}\),
\[\|w_{n}\|=1\quad\text{and}\quad\|u_{n}\|^{2}=d_{n}^{2}+c_{n}^{2}\|U\|^{2},\]
in particular,
\[\int_{\mathbb{R}^{2}}|x|^{-2a}\nabla U_{\lambda_{n}}\cdot\nabla w_{n}\mathrm{d}x= \int_{\mathbb{R}^{2}}|x|^{-qb}U_{\lambda_{n}}^{q-1}w_{n}\mathrm{d}x=0. \tag{3.7}\]
Then we can rewrite (3.6) as follows:
\[\int_{\mathbb{R}^{2}}|x|^{-qb}U_{\lambda_{n}}^{q-2}w_{n}^{2}\mathrm{d}x\leq \frac{1}{\mu_{3}}. \tag{3.8}\]
By using Taylor's expansion, we deduce
\[\int_{\mathbb{R}^{2}}|x|^{-qb}|u_{n}|^{q}\mathrm{d}x= \int_{\mathbb{R}^{2}}|x|^{-qb}|c_{n}U_{\lambda_{n}}+d_{n}w_{n}|^{q }\mathrm{d}x\] \[= |c_{n}|^{q}\int_{\mathbb{R}^{2}}|x|^{-qb}U_{\lambda_{n}}^{q} \mathrm{d}x+qd_{n}|c_{n}|^{q-1}\int_{\mathbb{R}^{2}}|x|^{-qb}U_{\lambda_{n}}^ {q-1}w_{n}\mathrm{d}x\] \[+\frac{q(q-1)d_{n}^{2}|c_{n}|^{q-2}}{2}\int_{\mathbb{R}^{2}}|x|^{ -qb}U_{\lambda_{n}}^{q-2}w_{n}^{2}\mathrm{d}x+o(d_{n}^{2}) \tag{3.9}\] \[= |c_{n}|^{q}\|U\|^{2}+\frac{q(q-1)d_{n}^{2}|c_{n}|^{q-2}}{2}\int_{ \mathbb{R}^{2}}|x|^{-qb}U_{\lambda_{n}}^{q-2}w_{n}^{2}\mathrm{d}x+o(d_{n}^{2}).\]
Then, combining (3.8) and (3.9) with the concavity of \(t\mapsto t^{\frac{2}{q}}\) (recall that \(2<q<\infty\)), we obtain
\[\|u_{n}\|_{*}^{2}= \left(\int_{\mathbb{R}^{2}}|x|^{-qb}|u_{n}|^{q}\mathrm{d}x\right) ^{\frac{2}{q}}\] \[\leq c_{n}^{2}\left(\|U\|^{2}+\frac{q(q-1)d_{n}^{2}c_{n}^{-2}}{2\mu_{3} }+o(d_{n}^{2})\right)^{\frac{2}{q}}\] \[= c_{n}^{2}\left(\|U\|^{\frac{4}{q}}+\frac{2}{q}\frac{q(q-1)d_{n}^ {2}c_{n}^{-2}}{2\mu_{3}}\|U\|^{\frac{4}{q}-2}+o(d_{n}^{2})\right) \tag{3.10}\] \[= c_{n}^{2}\|U\|^{\frac{4}{q}}+\frac{d_{n}^{2}(q-1)}{\mu_{3}}\|U\| ^{\frac{4}{q}-2}+o(d_{n}^{2}).\]
Therefore, for \(n\) sufficiently large,
\[\|u_{n}\|^{2}-\mathcal{S}_{a,b}\|u_{n}\|_{*}^{2}\geq d_{n}^{2}+c_{n}^{2}\|U\|^{2}-\mathcal{S}_{a,b}\left[c_{n}^{2}\|U\|^{ \frac{4}{q}}+\frac{d_{n}^{2}(q-1)}{\mu_{3}}\|U\|^{\frac{4}{q}-2}+o(d_{n}^{2})\right]\] \[= d_{n}^{2}\left(1-\frac{q-1}{\mu_{3}}\mathcal{S}_{a,b}\|U\|^{ \frac{4}{q}-2}\right)+c_{n}^{2}\left(\|U\|^{2}-\mathcal{S}_{a,b}\|U\|^{\frac{4 }{q}}\right)+o(d_{n}^{2})\] \[= d_{n}^{2}\left(1-\frac{q-1}{\mu_{3}}\right)+o(d_{n}^{2}),\]
due to \(\mathcal{S}_{a,b}\|U\|_{*}^{2}=\|U\|^{2}=\|U\|_{*}^{q}\), which implies \(\mathcal{S}_{a,b}=\|U\|^{2-\frac{4}{q}}\). Then (3.2) follows immediately.
As for the Sobolev and Hardy-Sobolev inequalities, we establish the following Lions-type concentration-compactness principle (see [26]) for minimizing sequences of the best constant \(\mathcal{S}_{a,b}\), which is crucial for our result.
**Proposition 3.2**.: _Assume that \(-\infty<a<0\) and \(b_{\mathrm{FS}}(a)<b<a+1\). Let \(\{u_{n}\}\subset\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\) be a minimizing sequence of the best constant \(\mathcal{S}_{a,b}\) in (1.5). Then there exists \(\lambda_{n}\subset(0,+\infty)\) such that \((u_{n})_{\lambda_{n}}\to u_{0}\) strongly in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\) as \(n\to\infty\) up to a subsequence, where \(u_{0}\) is a minimizer of \(\mathcal{S}_{a,b}\). Here \((u_{n})_{\lambda_{n}}(\cdot):=\lambda_{n}^{-a}u_{n}(\lambda_{n}\cdot)\)._
Proof.: We follow the arguments as those in [37, Proposition 3.1]. Without loss of generality, we may assume that \(\|u_{n}\|_{*}=1\), that is,
\[\|u_{n}\|^{2}\to\mathcal{S}_{a,b}.\]
Then, \(\{u_{n}\}\) is bounded in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\). Thus up to a subsequence, still labeled by \(\{u_{n}\}\), \(u_{n}\rightharpoonup u\) weakly in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\) as \(n\to\infty\) for some \(u\in\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\). As in [10, Proposition 2.2], let us make the change
\[u_{n}(x)=|x|^{a}v_{n}\left(-\ln|x|,\frac{x}{|x|}\right), \tag{3.11}\]
then \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\) is isomorphic to the Hilbert space \(H^{1}(\mathcal{C})\) under the above transformation, where \(\mathcal{C}:=\mathbb{R}\times\mathbb{S}^{1}\) is the standard cylinder and the inner product in \(H^{1}(\mathcal{C})\) is given by
\[\langle\phi,\varphi\rangle_{H^{1}(\mathcal{C})}=\int_{\mathcal{C}}(\nabla \phi\cdot\nabla\varphi+a^{2}\phi\varphi)\mathrm{d}\mu\]
with \(\mathrm{d}\mu\) being the volume element on \(\mathcal{C}\). Then \(v_{n}\rightharpoonup v\) weakly in \(H^{1}(\mathcal{C})\). Under the assumption (1.4), it holds that \(2<q<\infty\), then by [10, Lemma 4.1], there exists \(\{\lambda_{n}\}\subset\mathbb{R}^{+}\) such that
\[\overline{v}_{n}=v_{n}(t-\lambda_{n},\theta)\rightharpoonup\overline{v}\quad \text{weakly in}\quad H^{1}(\mathcal{C}).\]
Here \(\theta=\frac{x}{|x|}\in\mathbb{S}^{1}\). It follows from the Brezis-Lieb lemma and the concavity of the function \(t^{\frac{2}{q}}\) for \(t\in(0,1)\) with \(q>2\) that
\[\mathcal{S}_{a,b}(1+o_{n}(1))= \|\overline{v}_{n}-\overline{v}\|^{2}_{H^{1}(\mathcal{C})}+\| \overline{v}\|^{2}_{H^{1}(\mathcal{C})}\] \[\geq \mathcal{S}_{a,b}\left[\left(1-\|\overline{v}\|^{q}_{L^{q}( \mathcal{C})}+o_{n}(1)\right)^{\frac{2}{q}}+\|\overline{v}\|^{2}_{L^{q}( \mathcal{C})}\right]\] \[\geq \mathcal{S}_{a,b}(1+o_{n}(1))\]
which implies that \(\overline{v}_{n}\to\overline{v}\) strongly in \(L^{q}(\mathcal{C})\) as \(n\to\infty\). Correspondingly, by (3.11), we obtain \(\|(u_{n})_{\lambda_{n}}\|_{*}\to\|u_{0}\|_{*}\) for some nontrivial \(u_{0}\in\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\). Then, by the weak lower semicontinuity of the norm,
\[0\leq\|u_{0}\|^{2}-\mathcal{S}_{a,b}\|u_{0}\|_{*}^{2}\leq\lim_{n\to\infty}\|( u_{n})_{\lambda_{n}}\|^{2}-\mathcal{S}_{a,b}\lim_{n\to\infty}\|(u_{n})_{ \lambda_{n}}\|_{*}^{2}=0\]
which implies \((u_{n})_{\lambda_{n}}\to u_{0}\) strongly in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\) thus \(u_{0}\) is a minimizer of \(\mathcal{S}_{a,b}\).
Now, we are ready to prove our first main result.
**Proof of Theorem 1.3.** Assume that the theorem is not true. Then we could find a sequence \(\{u_{n}\}\subset\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\setminus\mathcal{M}\) such that
\[\liminf_{n\to\infty}\frac{\|u_{n}\|^{2}-\mathcal{S}_{a,b}\|u_{n}\|_{*}^{2}}{d _{n}^{2}}=0.\]
By homogeneity, we can assume that \(\|u_{n}\|=1\), and after selecting a subsequence we can assume that \(d_{n}\to\varpi\in[0,1]\) since \(d_{n}=\inf_{c\in\mathbb{R},\lambda>0}\|u_{n}-cU_{\lambda}\|\leq\|u_{n}\|\). If \(\varpi=0\), then we deduce a contradiction by Lemma 3.1.
The only other possibility is that \(\varpi>0\), that is, \(d_{n}\to\varpi>0\); then we must have
\[\|u_{n}\|^{2}-\mathcal{S}_{a,b}\|u_{n}\|_{*}^{2}\to 0,\quad\|u_{n}\|=1. \tag{3.12}\]
Then from Proposition 3.2, going if necessary to a subsequence, there exists a sequence of positive numbers \(\{\lambda_{n}\}\) such that
\[(u_{n})_{\lambda_{n}}\to u_{0}\quad\text{in}\quad\mathcal{D}_{a}^{1,2}( \mathbb{R}^{2})\quad\text{as}\quad n\to\infty,\]
for some \(u_{0}\in\mathcal{M}\), where \((u_{n})_{\lambda_{n}}(\cdot)=\lambda_{n}^{-a}u_{n}(\lambda_{n}\cdot)\), which implies
\[d_{n}=\operatorname{dist}(u_{n},\mathcal{M})=\operatorname{dist}\left((u_{n}) _{\lambda_{n}},\mathcal{M}\right)\to 0\quad\text{as}\quad n\to\infty,\]
which leads to a contradiction.
Therefore, the proof of Theorem 1.3 is completed.
From the proof of Lemma 3.1, it is easy to verify that the best stability constant \(\mathcal{B}\) defined in (1.11) satisfies
\[\mathcal{B}\leq 1-\frac{q-1}{\mu_{3}},\]
where \(\mu_{3}\,(>q-1)\) is the third eigenvalue of (2.20) given in (2.31). Then, following the arguments established by König in [24], we will show that the inequality is strict.
**Corollary 3.3**.: _Under the assumptions of Theorem 1.3, the best stability constant \(\mathcal{B}\) defined in (1.11) satisfies_
\[\mathcal{B}<1-\frac{q-1}{\mu_{3}}.\]
Proof.: We build a family of test functions of the form \(U+\varepsilon\varrho\), where \(\varepsilon>0\) is sufficiently small and \(\varrho\in(T_{U}\mathcal{M})^{\perp}\) is an eigenfunction corresponding to the third eigenvalue \(\mu_{3}\) of (2.20), that is,
\[\int_{\mathbb{R}^{2}}|x|^{-2a}|\nabla\varrho|^{2}\mathrm{d}x=\mu_{3}\int_{ \mathbb{R}^{2}}|x|^{-qb}U^{q-2}\varrho^{2}\mathrm{d}x,\]
particularly,
\[\int_{\mathbb{R}^{2}}|x|^{-2a}\nabla U\cdot\nabla\varrho\mathrm{d}x=0.\]
See \(T_{U}\mathcal{M}\) as in (2.30).
The orthogonality relations, and the fact \(-\mathrm{div}(|x|^{-2a}\nabla U)=|x|^{-bq}U^{q-1}\) imply
\[\|U+\varepsilon\varrho\|^{2}=\|U\|^{2}+\varepsilon^{2}\|\varrho\|^{2},\]
and
\[\int_{\mathbb{R}^{2}}|x|^{-2a}\nabla U\cdot\nabla\varrho\mathrm{d}x=\int_{ \mathbb{R}^{2}}|x|^{-qb}U^{q-1}\varrho\mathrm{d}x=0.\]
On the other hand, a Taylor expansion yields
\[(U+\varepsilon\varrho)^{q}=U^{q}+\varepsilon qU^{q-1}\varrho+\varepsilon^{2} \frac{q(q-1)}{2}U^{q-2}\varrho^{2}+\varepsilon^{3}\frac{q(q-1)(q-2)}{6}U^{q-3 }\varrho^{3}+o(\varepsilon^{3}). \tag{3.13}\]
Note that the Taylor expansion up to third order is justified regardless of the value of \(q\), because \(\varepsilon|\varrho|\ll U\) at every point \(x\in\mathbb{R}^{2}\) as \(\varepsilon\to 0^{+}\). Hence, as in the proof of Lemma 3.1,
\[\|U+\varepsilon\varrho\|_{*}^{2}= \|U\|_{*}^{2}+(q-1)\varepsilon^{2}\|U\|_{*}^{2-q}\int_{\mathbb{R} ^{2}}|x|^{-qb}U^{q-2}\varrho^{2}\mathrm{d}x\] \[+\frac{(q-1)(q-2)}{3}\varepsilon^{3}\|U\|_{*}^{2-q}\int_{\mathbb{ R}^{2}}|x|^{-qb}U^{q-3}\varrho^{3}\mathrm{d}x+o(\varepsilon^{3}).\]
Therefore,
\[\|U+\varepsilon\varrho\|^{2}-\mathcal{S}_{a,b}\|U+\varepsilon \varrho\|_{*}^{2}= \left(\|U\|^{2}-\mathcal{S}_{a,b}\|U\|_{*}^{2}\right)\] \[+\varepsilon^{2}\left[\|\varrho\|^{2}-(q-1)\mathcal{S}_{a,b}\|U \|_{*}^{2-q}\int_{\mathbb{R}^{2}}|x|^{-qb}U^{q-2}\varrho^{2}\mathrm{d}x\right]\] \[-\varepsilon^{3}\frac{(q-1)(q-2)}{3}\mathcal{S}_{a,b}\|U\|_{*}^{2 -q}\int_{\mathbb{R}^{2}}|x|^{-qb}U^{q-3}\varrho^{3}\mathrm{d}x+o(\varepsilon^ {3})\] \[= \varepsilon^{2}\left(1-\frac{q-1}{\mu_{3}}\right)\|\varrho\|^{2} -\varepsilon^{3}\frac{(q-1)(q-2)}{3}\int_{\mathbb{R}^{2}}|x|^{-qb}U^{q-3} \varrho^{3}\mathrm{d}x \tag{3.14}\] \[+o(\varepsilon^{3}),\]
due to \(\mathcal{S}_{a,b}\|U\|_{*}^{2}=\|U\|^{2}=\|U\|_{*}^{q}\). See the definitions of \(\|\cdot\|\) and \(\|\cdot\|_{*}\) as in (3.1).
Note that
\[\mathrm{dist}(U+\varepsilon\varrho,\mathcal{M})^{2}=\inf_{c\in\mathbb{R}, \lambda>0}\|U+\varepsilon\varrho-cU_{\lambda}\|^{2}\leq\varepsilon^{2}\| \varrho\|^{2}\to 0\]
as \(\varepsilon\to 0^{+}\), and \(\|U+\varepsilon\varrho\|^{2}=\|U\|^{2}+\varepsilon^{2}\|\varrho\|^{2}\geq c_{1}\) for some constant \(c_{1}>0\). Then from the proof of Lemma 3.1, for each \(\varepsilon>0\) sufficiently small, there exist \(c_{\varepsilon}\in\mathbb{R}\backslash\{0\}\) and \(\lambda_{\varepsilon}\in(0,+\infty)\) such that
\[\mathrm{dist}(U+\varepsilon\varrho,\mathcal{M})^{2}=\|U+\varepsilon\varrho-c_ {\varepsilon}U_{\lambda_{\varepsilon}}\|^{2}.\]
We claim that \(c_{\varepsilon}\to 1\) and \(\lambda_{\varepsilon}\to 1\) as \(\varepsilon\to 0^{+}\). Then by this claim and the implicit function theorem, for \(\varepsilon>0\) sufficiently small we have
\[\mathrm{dist}(U+\varepsilon\varrho,\mathcal{M})^{2}=\varepsilon^{2}\|\varrho \|^{2}. \tag{3.15}\]
Now, we prove this claim. Note that \(\|U+\varepsilon\varrho-c_{\varepsilon}U_{\lambda_{\varepsilon}}\|^{2}\leq \varepsilon^{2}\|\varrho\|^{2}\) which implies
\[\|U\|^{2}+c_{\varepsilon}^{2}\|U\|^{2}\leq 2c_{\varepsilon}\langle U,U_{\lambda_{\varepsilon}}\rangle_{ \mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}+2\varepsilon c_{\varepsilon}\langle \varrho,U_{\lambda_{\varepsilon}}\rangle_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2 })} \tag{3.16}\] \[\leq 2c_{\varepsilon}\int_{\mathbb{R}^{2}}|x|^{-qb}U^{q-1}U_{\lambda_{ \varepsilon}}\mathrm{d}x+2\varepsilon|c_{\varepsilon}|\|\varrho\|\|U\|,\]
thus \(c_{\varepsilon}\) is positive and bounded uniformly for \(\varepsilon>0\) sufficiently small. Furthermore, from (3.16) we also obtain
\[0\leq(1-c_{\varepsilon})^{2}\|U\|^{2}\leq 2\varepsilon c_{\varepsilon}\|\varrho\|\|U\|\to 0,\]
then \(c_{\varepsilon}\) must tend to \(1\), and \(c_{\varepsilon}=1+O(\varepsilon^{1/2})\). On the other hand, from (3.16) we deduce
\[(1-c_{\varepsilon})^{2}\|U\|^{2}\leq (1+c_{\varepsilon}^{2})\|U\|^{2}-2c_{\varepsilon}\langle U,U_{ \lambda_{\varepsilon}}\rangle_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}\leq 2 \varepsilon c_{\varepsilon}\|\varrho\|\|U\|,\]
then it must be that
\[\langle U,U_{\lambda_{\varepsilon}}\rangle_{\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})} \rightarrow\|U\|^{2}. \tag{3.17}\]
Note that
\[\langle U,v\rangle_{\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})}\leq\|U\|\|v\|,\]
and the equality holds if and only if \(v=U\), thus (3.17) indicates \(\lambda_{\varepsilon}\to 1\).
Then, the expansion (3.14) and (3.15) yield, for \(\varepsilon>0\) sufficiently small, the desired strict inequality
\[\mathcal{B}\leq\mathcal{E}(U+\varepsilon\varrho)<1-\frac{q-1}{\mu_{3}},\]
where \(\mathcal{E}\) is defined in (1.12), provided we can choose \(\varrho\) in a way such that
\[\int_{\mathbb{R}^{2}}|x|^{-qb}U^{q-3}\varrho^{3}\mathrm{d}x>0. \tag{3.18}\]
To achieve this, as in [24], we make the choice
\[\Psi_{2}(\theta)=\theta_{1}\theta_{2}+\theta_{2}\theta_{3}+\theta_{3}\theta_{ 1},\]
which is clearly a spherical harmonic of degree \(k=2\) for \(-\Delta_{\mathbb{S}^{2}}\). From Subsection 2.2, we can write \(\varrho\) as \(\varrho(x)=U(x)\Psi_{2}(\theta)\), then
\[\int_{\mathbb{R}^{2}}|x|^{-qb}U^{q-3}\varrho^{3}\mathrm{d}x\sim\int_{\mathbb{ S}^{2}}\theta_{1}^{2}\theta_{2}^{2}\theta_{3}^{2}\mathrm{d}\theta>0.\]
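For the angular factor, the reduction to the purely even monomial is a symmetry observation: expanding \(\Psi_{2}^{3}\), every monomial that is odd in some coordinate \(\theta_{i}\) integrates to zero over \(\mathbb{S}^{2}\), and only the mixed term survives, so that

\[\int_{\mathbb{S}^{2}}\Psi_{2}^{3}\,\mathrm{d}\theta=\int_{\mathbb{S}^{2}}\left(\theta_{1}\theta_{2}+\theta_{2}\theta_{3}+\theta_{3}\theta_{1}\right)^{3}\mathrm{d}\theta=6\int_{\mathbb{S}^{2}}\theta_{1}^{2}\theta_{2}^{2}\theta_{3}^{2}\,\mathrm{d}\theta>0.\]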
Now, the proof is completed.
## 4. **Minimizers for best stability constant**
Hereafter, we always assume \(-\infty<a<0\) and \(b_{\mathrm{FS}}(a)<b<a+1\).
Following the arguments in [25], we start by introducing some more notation. Firstly, we denote the standard \(\|\cdot\|_{*}\)-normalized Talenti type bubble by
\[B(x)=c_{a,b}\left(1+|x|^{-a(q-2)}\right)^{-\frac{2}{q-2}}, \tag{4.1}\]
with \(c_{a,b}>0\) chosen such that \(\|B\|_{*}=1\). See the definition of \(\|\cdot\|_{*}\) as in (3.1). As usual, \(B_{\lambda}(x):=\lambda^{-a}B(\lambda x)\) for \(\lambda>0\). Notice that for all \(\lambda>0\),
\[\|B_{\lambda}\|_{*}=\|B\|_{*}=1,\quad\|B_{\lambda}\|^{2}=\mathcal{S}_{a,b}, \quad-\mathrm{div}(|x|^{-2a}\nabla B_{\lambda})=\mathcal{S}_{a,b}|x|^{-bq}B_{ \lambda}^{q-1}.\]
See the definition of \(\|\cdot\|\) as in (3.1). We denote by
\[\mathcal{M}_{1}:=\{B_{\lambda}:\lambda>0\}\subset\mathcal{M}\]
the submanifold of \(\mathcal{M}\) consisting of the \(\|\cdot\|_{*}\)-normalized Talenti type bubbles.
It does not seem clear whether the functional \(\mathcal{E}(u)\) defined in (1.12) decreases or increases under rearrangement. The next lemma gives a convenient reformulation of the distance \(\mathrm{dist}(u,\mathcal{M})\) in terms of a new optimization problem
\[\mathbf{m}(u):=\sup_{v\in\mathcal{M}_{1}}(u,|x|^{-qb}v^{q-1})^{2}, \tag{4.2}\]
which can be considered simpler since it is taken over the smaller set \(\mathcal{M}_{1}\) and involves no derivatives; it is a variant of the type first introduced in [18]. Here,
\[(u,|x|^{-qb}v^{q-1})=\int_{\mathbb{R}^{2}}|x|^{-qb}v^{q-1}u\mathrm{d}x\]
denotes the pairing between \(L^{q}(\mathbb{R}^{2},|x|^{-qb}\mathrm{d}x)\) and its dual. We will mostly work with this reformulation when proving our results below.
**Lemma 4.1**.: _For each \(u\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}\), it holds that_
\[\mathrm{dist}(u,\mathcal{M})^{2}=\|u\|^{2}-\mathcal{S}_{a,b}\mathbf{m}(u). \tag{4.3}\]
_Moreover, if \(\mathrm{dist}(u,\mathcal{M})<\|u\|\) then \(\mathrm{dist}(u,\mathcal{M})\) is achieved. The function \((u,|x|^{-qb}v^{q-1})v\) optimizes \(\mathrm{dist}(u,\mathcal{M})\) if and only if \(v\in\mathcal{M}_{1}\) optimizes \(\mathbf{m}(u)\)._
Proof.: For any \(v\in\mathcal{M}_{1}\) and \(c\in\mathbb{R}\), we have
\[\|u-cv\|^{2}= \|u\|^{2}-2c\mathcal{S}_{a,b}(u,|x|^{-qb}v^{q-1})+c^{2}\mathcal{ S}_{a,b}\] \[= \|u\|^{2}-\mathcal{S}_{a,b}(u,|x|^{-qb}v^{q-1})^{2}+\mathcal{S}_{ a,b}\left(c-(u,|x|^{-qb}v^{q-1})\right)^{2},\]
due to \(-\mathrm{div}(|x|^{-2a}\nabla v)=\mathcal{S}_{a,b}|x|^{-bq}v^{q-1}\). Hence
\[\mathrm{dist}(u,\mathcal{M})^{2}=\inf_{v\in\mathcal{M}_{1}}\inf_{c\in\mathbb{R }}\|u-cv\|^{2}=\|u\|^{2}-\mathcal{S}_{a,b}\sup_{v\in\mathcal{M}_{1}}(u,|x|^{- qb}v^{q-1})^{2},\]
which proves (4.3). The relation between the optimizers of \(\mathrm{dist}(u,\mathcal{M})\) and \(\mathbf{m}(u)\) is now immediate from the fact that
\[\inf_{c\in\mathbb{R}}\left(c-(u,|x|^{-qb}v^{q-1})\right)^{2}\]
is attained uniquely at \(c=(u,|x|^{-qb}v^{q-1})\).
By this relation between optimizers, it only remains to prove that \(\mathbf{m}(u)\) is always achieved if \(\mathbf{m}(u)>0\). Let \(\{B_{\lambda_{n}}\}\) be a maximizing sequence for \(\mathbf{m}(u)\). This sequence converges to some \(B_{\lambda_{0}}\), which plainly is a maximizer, unless \(\lambda_{n}\to 0\) or \(\lambda_{n}\to+\infty\) as \(n\to\infty\). As in the proof of Lemma 3.1, in these two cases it is easy to verify that \((u,|x|^{-qb}B_{\lambda_{n}}^{q-1})=\mathcal{S}_{a,b}^{-1}\langle u,B_{\lambda_{n}}\rangle_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}\to 0\) as \(n\to\infty\). This completes the proof.
Then, we will show that the best stability constant satisfies
\[\mathcal{B}<2-2^{\frac{2}{q}}.\]
We will do so by considering a sequence of test functions of the form
\[u^{\lambda}(x):=B(x)+B_{\lambda}(x), \tag{4.4}\]
as \(\lambda\to 0^{+}\). Recall the definitions of \(B(x)\) and \(B_{\lambda}(x)\) as in the beginning of this section. The following proposition contains the needed expansion of the terms appearing in \(\mathcal{E}(u^{\lambda})\), where \(\mathcal{E}\) is defined as in (1.12), that is,
\[\mathcal{E}(u^{\lambda}):=\frac{\|u^{\lambda}\|^{2}-\mathcal{S}_{a,b}\|u^{ \lambda}\|_{*}^{2}}{\mathrm{dist}(u^{\lambda},\mathcal{M})^{2}}.\]
**Lemma 4.2**.: _As \(\lambda\to 0^{+}\), the following holds:_
* \((i)\) \(\|u^{\lambda}\|^{2}=2\mathcal{S}_{a,b}+2\mathcal{S}_{a,b}d_{a,b}\lambda^{-a}+o(\lambda^{-a})\);
* \((ii)\) \(\|u^{\lambda}\|_{*}^{2}=2^{\frac{2}{q}}+2^{\frac{2}{q}+1}d_{a,b}\lambda^{-a}+o(\lambda^{-a})\);
* \((iii)\) \(\operatorname{dist}(u^{\lambda},\mathcal{M})^{2}=\mathcal{S}_{a,b}+o(\lambda^{-a})\).
_Here \(d_{a,b}=\int_{\mathbb{R}^{2}}|y|^{-qb}B^{q-1}\mathrm{d}y\) is a positive constant._
Proof.: Let us first prove \((i)\). Clearly,
\[\|u^{\lambda}\|^{2}=\|B\|^{2}+\|B_{\lambda}\|^{2}+2\langle B,B_{\lambda} \rangle_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}=2\mathcal{S}_{a,b}+2\langle B,B_{\lambda}\rangle_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}.\]
Now integrating by parts and using the equation \(-\mathrm{div}(|x|^{-2a}\nabla B)=\mathcal{S}_{a,b}|x|^{-bq}B^{q-1}\), we obtain
\[\langle B,B_{\lambda}\rangle_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}= \mathcal{S}_{a,b}\int_{\mathbb{R}^{2}}|x|^{-bq}B^{q-1}B_{\lambda} \mathrm{d}x=\mathcal{S}_{a,b}d_{a,b}\lambda^{-a}+o(\lambda^{-a}).\]
Next, let us prove \((ii)\). Note that \(B\) and \(B_{\lambda}\) are radially symmetric; then
\[\int_{\mathbb{R}^{2}}|x|^{-qb}(B+B_{\lambda})^{q}\mathrm{d}x= 2\pi\int_{0}^{\infty}(B(r)+\lambda^{-a}B(\lambda r))^{q}r^{1-qb} \mathrm{d}r.\]
As in the proof of Theorem 1.1, let us make the change \(V(s):=B(s^{\tau})\), where \(\tau=\frac{a-b}{a(1+a-b)}>0\); then \(V(s)=c(1+s^{2})^{-\frac{K-2}{2}}\) for some suitable \(c>0\), with \(K=\frac{2}{1+a-b}>2\). Also, letting \(\lambda=\zeta^{\tau}\), we define
\[V_{\zeta}(s):=\zeta^{\frac{K-2}{2}}V(\zeta s),\quad\text{then}\quad V_{\zeta} (s)=B_{\lambda}(r).\]
Therefore, from the result of [25, Proposition 3.1\((ii)\)], we have
\[\int_{\mathbb{R}^{2}}|x|^{-qb}(B+B_{\lambda})^{q}\mathrm{d}x= 2\pi\tau^{-1}\int_{0}^{\infty}(V(s)+V_{\zeta}(s))^{\tilde{2}^{*} }s^{K-1}\mathrm{d}s\] \[= 2+2\cdot\tilde{2}^{*}d_{a,b}\zeta^{\frac{K-2}{2}}+o(\zeta^{\frac {K-2}{2}})\] \[= 2+2qd_{a,b}\lambda^{-a}+o(\lambda^{-a}),\]
where \(\tilde{2}^{*}=\frac{2K}{K-2}=q\) and \(\frac{K-2}{2\tau}=-a\). Now \((ii)\) follows from a first-order Taylor expansion of \(t\mapsto t^{\frac{2}{q}}\) at \(t=2\).
We now turn to the proof of \((iii)\). By Lemma 4.1 we can write
\[\operatorname{dist}(u^{\lambda},\mathcal{M})^{2}=\|u^{\lambda}\|^{2}- \mathcal{S}_{a,b}\mathbf{m}(u^{\lambda}). \tag{4.5}\]
Note that
\[\mathbf{m}(u^{\lambda})=\sup_{\mu>0}\left(\int_{\mathbb{R}^{2}}|x|^{-qb}B_{\mu }^{q-1}u^{\lambda}\mathrm{d}x\right)^{2},\]
where \(B_{\mu}(x)=\mu^{-a}B(\mu x)\). Since \(u^{\lambda}\) is positive and radially symmetric-decreasing, making the same change of variables \(V(s)=B(r)\) with \(r=s^{\tau}\) and \(\lambda=\zeta^{\tau}\) as in the proof of \((ii)\), and using the result of [25, Proposition 3.1\((iii)\)], we have
\[\sup_{v\in\mathcal{M}_{1}}(u^{\lambda},|x|^{-qb}v^{q-1})^{2}= 1+d_{a,b}\zeta^{\frac{K-2}{2}}+o(\zeta^{\frac{K-2}{2}})\] \[= 1+d_{a,b}\lambda^{-a}+o(\lambda^{-a}).\]
As a consequence, by (4.5) and the already established part \((i)\) of this lemma, we deduce that
\[\mathrm{dist}(u^{\lambda},\mathcal{M})^{2}= \|u^{\lambda}\|^{2}-\mathcal{S}_{a,b}\mathbf{m}(u^{\lambda})\] \[= 2\mathcal{S}_{a,b}+2\mathcal{S}_{a,b}d_{a,b}\lambda^{-a}- \mathcal{S}_{a,b}(1+d_{a,b}\lambda^{-a})^{2}+o(\lambda^{-a})\] \[= \mathcal{S}_{a,b}+o(\lambda^{-a}).\]
Now, the proof is completed.
**Lemma 4.3**.: _We have \(\mathcal{B}<2-2^{\frac{2}{q}}\)._
Proof.: By Lemma 4.2, as \(\lambda\to 0^{+}\), we have
\[\mathcal{E}(u^{\lambda})= \frac{2\mathcal{S}_{a,b}+2\mathcal{S}_{a,b}d_{a,b}\lambda^{-a}+o( \lambda^{-a})-\mathcal{S}_{a,b}\left(2^{\frac{2}{q}}+2^{\frac{2}{q}+1}d_{a,b} \lambda^{-a}+o(\lambda^{-a})\right)}{\mathcal{S}_{a,b}+o(\lambda^{-a})}\] \[= \frac{(2-2^{\frac{2}{q}})\mathcal{S}_{a,b}+2\mathcal{S}_{a,b}d_{ a,b}(1-2^{\frac{2}{q}})\lambda^{-a}}{\mathcal{S}_{a,b}}+o(\lambda^{-a})\] \[= (2-2^{\frac{2}{q}})-2(2^{\frac{2}{q}}-1)\lambda^{-a}+o(\lambda^{ -a}).\]
Since \(2<q<\infty\), then for \(\lambda>0\) small enough, it holds that
\[\mathcal{E}(u^{\lambda})<2-2^{\frac{2}{q}}.\]
Moreover, from Lemma 4.2\((iii)\) we know \(u^{\lambda}\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}\) for \(\lambda>0\) small enough, therefore \(\mathcal{B}\leq\mathcal{E}(u^{\lambda})<2-2^{\frac{2}{q}}\).
As a consequence of Lemma 4.3, together with some further analysis, we are going to show that there is a minimizer for the best stability constant \(\mathcal{B}\).
It is easy to verify that
\[\inf_{u\in\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}}\frac{\|u \|^{2}-\mathcal{S}_{a,b}\|u\|_{*}^{2}}{\|u\|^{2}}=0.\]
Indeed, for \(w_{n}(x)=U(x)\frac{x_{1}}{|x_{1}|+\frac{1}{n}}\) we have \(\mathbf{m}(w_{n})=0\) and
\[\frac{\|w_{n}\|^{2}-\mathcal{S}_{a,b}\|w_{n}\|_{*}^{2}}{\|w_{n}\|^{2}}\to 0, \quad\text{as }n\to\infty.\]
Therefore, from Lemma 4.1 and since \(\mathcal{B}>0\), we can always assume \(\mathbf{m}(u)>0\) along a minimizing sequence for \(\mathcal{B}\); that is, let \(\{u_{n}\}\subset\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}\) be a \(\|\cdot\|_{*}\)-normalized minimizing sequence for \(\mathcal{B}\) satisfying
\[\mathcal{E}(u_{n})=\mathcal{B}+o_{n}(1),\quad\|u_{n}\|_{*}=1,\quad\mathbf{m}(u_ {n})>0. \tag{4.6}\]
Here \(o_{n}(1)\) denotes that \(o_{n}(1)\to 0\) as \(n\to\infty\). Then
\[\|u_{n}\|^{2}=(\mathcal{B}+o_{n}(1))\mathrm{dist}(u_{n},\mathcal{M})^{2}+ \mathcal{S}_{a,b}\leq(\mathcal{B}+o_{n}(1))\|u_{n}\|^{2}+\mathcal{S}_{a,b}.\]
Lemma 4.3 indicates \(\mathcal{B}<2-2^{\frac{2}{q}}<1\) due to \(2<q<\infty\), thus \(\{u_{n}\}\) is bounded in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\). By a theorem of Lions [27], up to rescaling, we may assume that \(u_{n}\rightharpoonup u_{0}\not\equiv 0\) weakly in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\). Letting \(v_{n}:=u_{n}-u_{0}\), we can thus rewrite
\[u_{n}=v_{n}+u_{0},\quad\text{for some }u_{0}\in\mathcal{D}^{1,2}_{a}(\mathbb{R}^{ 2})\setminus\{0\},\quad v_{n}\rightharpoonup 0\quad\text{weakly in }\mathcal{D}^{1,2}_{a}( \mathbb{R}^{2}). \tag{4.7}\]
We first check that if the convergence is strong, then a minimizer of \(\mathcal{B}\) must exist.
**Proposition 4.4**.: _Let \(\{u_{n}\}\subset\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\setminus\mathcal{M}\) satisfy (4.6) and (4.7), and suppose that \(v_{n}\to 0\) strongly in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\). Then \(u_{0}\) is a minimizer for the best stability constant \(\mathcal{B}\)._
Proof.: If \(v_{n}\to 0\) strongly in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\), that is, \(u_{n}\to u_{0}\) strongly in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\), then it is clear that \(\|u_{n}\|^{2}\to\|u_{0}\|^{2}\). By the (CKN) inequality (1.5), it also holds that \(\|u_{n}\|^{2}_{*}\to\|u_{0}\|^{2}_{*}\), since \(\left|\|u_{n}\|_{*}-\|u_{0}\|_{*}\right|\leq\|u_{n}-u_{0}\|_{*}\to 0\). Note that by Hölder's inequality, we have
\[\sup_{\lambda>0}\left|(u_{n},|x|^{-qb}B^{q-1}_{\lambda})^{2}-(u_{ 0},|x|^{-qb}B^{q-1}_{\lambda})^{2}\right|\] \[\qquad=\sup_{\lambda>0}\left|\int_{\mathbb{R}^{2}}|x|^{-qb}B^{q-1 }_{\lambda}(u_{n}+u_{0})\mathrm{d}x\int_{\mathbb{R}^{2}}|x|^{-qb}B^{q-1}_{ \lambda}(u_{n}-u_{0})\mathrm{d}x\right|\] \[\qquad\leq\|u_{n}+u_{0}\|_{*}\|u_{n}-u_{0}\|_{*}\leq(1+\|u_{0}\| _{*})\|u_{n}-u_{0}\|_{*}\to 0,\]
and from the definition of \(\mathbf{m}\) as in (4.2),
\[\mathbf{m}(u_{n}) \leq\mathbf{m}(u_{0})+\sup_{\lambda>0}\left[(u_{n},|x|^{-qb}B^{q- 1}_{\lambda})^{2}-(u_{0},|x|^{-qb}B^{q-1}_{\lambda})^{2}\right],\] \[\mathbf{m}(u_{0}) \leq\mathbf{m}(u_{n})+\sup_{\lambda>0}\left[(u_{0},|x|^{-qb}B^{q- 1}_{\lambda})^{2}-(u_{n},|x|^{-qb}B^{q-1}_{\lambda})^{2}\right],\]
due to \(\sup P\leq\sup Q+\sup(P-Q)\), then we can deduce that \(\mathbf{m}(u_{n})\to\mathbf{m}(u_{0})\). Thus from Lemma 4.1, we also have \(\mathrm{dist}(u_{n},\mathcal{M})^{2}\to\mathrm{dist}(u_{0},\mathcal{M})^{2}\). Therefore \(\mathcal{E}(u_{n})\to\mathcal{E}(u_{0})\) and \(u_{0}\) is a minimizer, provided that \(u_{0}\notin\mathcal{M}\) i.e. \(\mathrm{dist}(u_{0},\mathcal{M})\neq 0\).
But for sequence \(\{u_{n}\}\) such that \(\mathrm{dist}(u_{n},\mathcal{M})\to 0\) satisfying \(\|u_{n}\|\geq\mathcal{S}^{1/2}_{a,b}\|u_{n}\|_{*}=\mathcal{S}^{1/2}_{a,b}\), it is known from Lemma 3.1 that \(\liminf_{n\to\infty}\mathcal{E}(u_{n})\geq 1-\frac{q-1}{\mu_{3}}\). On the other hand, Corollary 3.3 indicates \(\lim_{n\to\infty}\mathcal{E}(u_{n})=\mathcal{B}<1-\frac{q-1}{\mu_{3}}\). Hence the minimizing sequence \(\{u_{n}\}\) cannot satisfy \(\mathrm{dist}(u_{n},\mathcal{M})\to 0\). As explained above, the proof is now completed.
Therefore, the proof of Theorem 1.5 now consists in showing that \(v_{n}\to 0\) strongly in \(\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\) must in fact be the case.
To do so, let us investigate how the components of \(\mathcal{E}(u_{n})\) behave under the decomposition (4.7). It is standard to check that the weak convergence implies
\[\|u_{n}\|^{2}=\|u_{0}\|^{2}+\|v_{n}\|^{2}+o_{n}(1), \tag{4.8}\]
and the Brezis-Lieb lemma [7] indicates
\[1=\|u_{n}\|^{q}_{*}=\|u_{0}\|^{q}_{*}+\|v_{n}\|^{q}_{*}+o_{n}(1). \tag{4.9}\]
Finally, the following lemma gives the important information on how the distance \(\mathrm{dist}(u_{n},\mathcal{M})^{2}\) decomposes. Recall that by definition \(\mathbf{m}(u):=\sup_{v\in\mathcal{M}_{1}}(u,|x|^{-qb}v^{q-1})^{2}\).
**Lemma 4.5**.: _Let \(\{u_{n}\}\subset\mathcal{D}^{1,2}_{a}(\mathbb{R}^{2})\setminus\mathcal{M}\) satisfy (4.6) and (4.7), then it holds that_
\[\mathbf{m}(u_{n})=\max\{\mathbf{m}(u_{0}),\mathbf{m}(v_{n})\}+o_{n}(1). \tag{4.10}\]
_In particular,_
\[\operatorname{dist}(u_{n},\mathcal{M})^{2}=\|u_{0}\|^{2}+\|v_{n}\|^{2}- \mathcal{S}_{a,b}\max\{\mathbf{m}(u_{0}),\mathbf{m}(v_{n})\}+o_{n}(1). \tag{4.11}\]
Proof.: We follow the arguments in [25]. By Lemma 4.1, \(\mathbf{m}(u_{n})\) has an optimizer \(B_{\lambda_{n}}\in\mathcal{M}_{1}\), since we have assumed \(\mathbf{m}(u_{n})>0\). We consider two different cases.
Suppose first that \(\lambda_{n}\) is bounded away from zero and infinity. Then up to a subsequence \(\lambda_{n}\to\lambda_{0}\) for some \(\lambda_{0}\in(0,\infty)\), and consequently \(B_{\lambda_{n}}\to B_{\lambda_{0}}\) strongly in \(L^{q}(\mathbb{R}^{2},|x|^{-qb}\mathrm{d}x)\). But this implies \((v_{n},|x|^{-qb}B_{\lambda_{n}}^{q-1})\to 0\) by weak convergence \(v_{n}\rightharpoonup 0\). Thus
\[\mathbf{m}(u_{n})= \left((u_{0},|x|^{-qb}B_{\lambda_{n}}^{q-1})+(v_{n},|x|^{-qb}B_{ \lambda_{n}}^{q-1})\right)^{2}=(u_{0},|x|^{-qb}B_{\lambda_{n}}^{q-1})^{2}+o_{n }(1) \tag{4.12}\] \[\leq \mathbf{m}(u_{0})+o_{n}(1).\]
In the remaining, second case, we have \(\lambda_{n}\to 0\) or \(\lambda_{n}\to\infty\) along a subsequence. As in the proof of Lemma 3.1, this can be easily checked to yield
\[(u_{0},|x|^{-qb}B_{\lambda_{n}}^{q-1})=\mathcal{S}_{a,b}^{-1}\langle u_{0},B_ {\lambda_{n}}\rangle_{\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})}\to 0,\]
thus we get
\[\mathbf{m}(u_{n})=(v_{n},|x|^{-qb}B_{\lambda_{n}}^{q-1})^{2}+o_{n}(1)\leq \mathbf{m}(v_{n})+o_{n}(1). \tag{4.13}\]
Combining (4.12) and (4.13), we get
\[\mathbf{m}(u_{n})\leq\max\{\mathbf{m}(u_{0}),\mathbf{m}(v_{n})\}+o_{n}(1), \tag{4.14}\]
at least along some subsequence. But our argument shows that from any subsequence we can extract a further subsequence along which the inequality (4.14) holds. Thus (4.14) must in fact hold for the entire sequence \(\{u_{n}\}\).
In order to establish (4.10), we will now prove the converse inequality by a similar argument. If \(\mathbf{m}(u_{0})=0\), then it is trivial that \(\mathbf{m}(u_{n})\geq\mathbf{m}(u_{0})+o_{n}(1)\). The other case is \(\mathbf{m}(u_{0})>0\), thus from Lemma 4.1 we let \(B_{\lambda_{u_{0}}}\) be an optimizer for \(\mathbf{m}(u_{0})\), then \((v_{n},|x|^{-qb}B_{\lambda_{u_{0}}}^{q-1})\to 0\) by weak convergence \(v_{n}\rightharpoonup 0\) and thus
\[\mathbf{m}(u_{n})\geq(u_{n},|x|^{-qb}B_{\lambda_{u_{0}}}^{q-1})^{2}=(u_{0},|x| ^{-qb}B_{\lambda_{u_{0}}}^{q-1})^{2}+o_{n}(1)=\mathbf{m}(u_{0})+o_{n}(1). \tag{4.15}\]
If \(\mathbf{m}(v_{n})\to 0\) along some subsequence, then \(\mathbf{m}(u_{n})\geq\mathbf{m}(v_{n})+o_{n}(1)\) holds trivially along it. The other case is that \(\mathbf{m}(v_{n})>0\) for \(n\) sufficiently large; then let \(B_{\lambda_{v_{n}}}\) be an optimizer for \(\mathbf{m}(v_{n})\). Suppose first that \(\lambda_{v_{n}}\to 0\) or \(\lambda_{v_{n}}\to\infty\) along a subsequence. Then, as above, \((u_{0},|x|^{-qb}B_{\lambda_{v_{n}}}^{q-1})\to 0\), and thus we obtain in that case
\[\mathbf{m}(u_{n})\geq(u_{n},|x|^{-qb}B_{\lambda_{v_{n}}}^{q-1})^{2}=(v_{n},|x| ^{-qb}B_{\lambda_{v_{n}}}^{q-1})^{2}+o_{n}(1)=\mathbf{m}(v_{n})+o_{n}(1). \tag{4.16}\]
If, on the other hand, \(\{\lambda_{v_{n}}\}\) is bounded away from zero and infinity, then up to a subsequence \(\lambda_{v_{n}}\to\lambda_{v_{0}}\) for some \(\lambda_{v_{0}}\in(0,\infty)\). But then \(\mathbf{m}(v_{n})=(v_{n},|x|^{-qb}B_{\lambda_{v_{n}}}^{q-1})^{2}\to 0\) by weak convergence \(v_{n}\rightharpoonup 0\), and so (4.16) holds trivially.
By the same remark as in the first part of the proof, (4.16) holds in fact along the whole sequence \(\{u_{n}\}\). Now, by combining (4.15) with (4.16) and (4.14), the inequality (4.10) follows.
Finally, (4.11) is immediate from Lemma 4.1 together with (4.8) and (4.10).
The next lemma serves as an important preparation for our main argument. In contrast to (4.8), (4.9) and (4.10), here the minimizing property of \(\{u_{n}\}\) comes into play.
**Lemma 4.6**.: _Let \(\{u_{n}\}\subset\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}\) satisfy (4.6) and (4.7), if there is \(c>0\) such that \(\|v_{n}\|\geq c\) when \(n\) is sufficiently large then \(\lim_{n\to\infty}\mathbf{m}(v_{n})=\mathbf{m}(u_{0})\)._
Proof.: Assume firstly that, up to extracting a subsequence, still labeled by \(\{v_{n}\}\),
\[\lim_{n\to\infty}\mathbf{m}(v_{n})>\mathbf{m}(u_{0}). \tag{4.17}\]
Note that
\[1=\|u_{n}\|_{*}^{q}=\|u_{0}\|_{*}^{q}+\|v_{n}\|_{*}^{q}+o_{n}(1),\]
and (4.17) implies
\[0<\lim_{n\to\infty}\mathbf{m}(v_{n})\leq\mathcal{S}_{a,b}\lim_{n\to\infty}\|v _{n}\|_{*}^{2},\]
thus \(\{\|v_{n}\|_{*}\}\) is bounded away from zero and infinity for \(n\) sufficiently large. Multiplying by a constant, we may equivalently consider
\[\tilde{u}_{n}=\frac{v_{n}}{\|v_{n}\|_{*}}+\frac{u_{0}}{\|v_{n}\|_{*}}=:\tilde {v}_{n}+\tilde{u}_{0n},\quad\text{with }\|\tilde{v}_{n}\|_{*}=1.\]
Then by (4.8), (4.9) and Lemma 4.1 we have
\[\mathcal{E}(\tilde{u}_{n})= \frac{\|\tilde{u}_{n}\|^{2}-\mathcal{S}_{a,b}\|\tilde{u}_{n}\|_{ *}^{2}}{\mathrm{dist}(\tilde{u}_{n},\mathcal{M})^{2}}=\frac{\|\tilde{v}_{n}\| ^{2}-\mathcal{S}_{a,b}+\|\tilde{u}_{0n}\|^{2}-\mathcal{S}_{a,b}\left[(1+\| \tilde{u}_{0n}\|_{*}^{q})^{\frac{2}{q}}-1\right]}{\|\tilde{v}_{n}\|^{2}+\| \tilde{u}_{0n}\|^{2}-\mathcal{S}_{a,b}\mathbf{m}(\tilde{u}_{n})}+o_{n}(1)\] \[= \frac{\|\tilde{v}_{n}\|^{2}-\mathcal{S}_{a,b}+\|\tilde{u}_{0n}\|^ {2}-\mathcal{S}_{a,b}\left[(1+\|\tilde{u}_{0n}\|_{*}^{q})^{\frac{2}{q}}-1 \right]}{\|\tilde{v}_{n}\|^{2}-\mathcal{S}_{a,b}\mathbf{m}(\tilde{v}_{n})+\| \tilde{u}_{0n}\|^{2}}+o_{n}(1),\]
since the assumption (4.17) and Lemma 4.5 imply \(\mathbf{m}(\tilde{u}_{n})=\mathbf{m}(\tilde{v}_{n})+o_{n}(1)\). Our goal is now to estimate the quotient using [25, Lemma 2.4].
Suppose for the moment that \(\tilde{v}_{n}\notin\mathcal{M}\) for \(n\) sufficiently large. Then set
\[A:=\lim_{n\to\infty}\|\tilde{v}_{n}\|^{2}-\mathcal{S}_{a,b},\quad B :=\lim_{n\to\infty}\left[\|\tilde{v}_{n}\|^{2}-\mathcal{S}_{a,b}\mathbf{m}( \tilde{v}_{n})\right]\] \[C:=\lim_{n\to\infty}\left\{\|\tilde{u}_{0n}\|^{2}-\mathcal{S}_{ a,b}\left[(1+\|\tilde{u}_{0n}\|_{*}^{q})^{\frac{2}{q}}-1\right]\right\},\quad D:= \lim_{n\to\infty}\|\tilde{u}_{0n}\|^{2}.\]
Notice that \(A,B,C,D>0\) because we assume \(\tilde{v}_{n}\notin\mathcal{M}\) for \(n\) sufficiently large and because \(\|\tilde{u}_{0n}\|\) is bounded away from zero. Since \(\mathcal{B}=\lim_{n\to\infty}\mathcal{E}(u_{n})=\lim_{n\to\infty}\mathcal{E}(\tilde{u}_{n})=\frac{A+C}{B+D}\) and \(\frac{A}{B}=\lim_{n\to\infty}\mathcal{E}(\tilde{v}_{n})=\lim_{n\to\infty}\mathcal{E}(v_{n})\geq\mathcal{B}\), we must have \(\frac{C}{D}\leq\mathcal{B}\) and hence \(\frac{A}{B}\geq\frac{C}{D}\).
Now let \(F_{n}\) be the scalar multiple of \(\tilde{u}_{0n}\) such that \(\mathbf{m}(F_{n})=\mathbf{m}(\tilde{v}_{n})\), that is, \(F_{n}=c_{n}\tilde{u}_{0n}\) where \(c_{n}^{2}=\frac{\mathbf{m}(\tilde{v}_{n})}{\mathbf{m}(\tilde{u}_{0n})}=\frac{\mathbf{m}(v_{n})}{\mathbf{m}(u_{0})}>0\). Then, as a consequence of (4.17), we have
\[\lim_{n\to\infty}\|F_{n}\|_{*}>\lim_{n\to\infty}\|\tilde{u}_{0n}\|_{*}.\]
By [25, Lemma 2.3], which states that
\[\text{the function }\eta\mapsto\frac{(1+\eta^{q})^{\frac{2}{q}}-1}{\eta^{2}}\ \ \text{is strictly increasing in }\eta\in(0,\infty),\]
we obtain
\[\frac{C}{D}=1-\lim_{n\to\infty}\frac{\mathcal{S}_{a,b}\left[(1+\|\tilde{u}_{0n} \|_{*}^{q})^{\frac{2}{q}}-1\right]}{S_{[\tilde{u}_{0n}]}\|\tilde{u}_{0n}\|_{* }^{2}}>1-\lim_{n\to\infty}\frac{\mathcal{S}_{a,b}\left[(1+\|F_{n}\|_{*}^{q})^{ \frac{2}{q}}-1\right]}{S_{[\tilde{u}_{0n}]}\|F_{n}\|_{*}^{2}}=:\frac{E}{F},\]
where \(S_{[\tilde{u}_{0n}]}=\|\tilde{u}_{0n}\|^{2}/\|\tilde{u}_{0n}\|_{*}^{2}\), and
\[E=\lim_{n\to\infty}\left\{S_{[\tilde{u}_{0n}]}\|F_{n}\|_{*}^{2}-\mathcal{S}_{ a,b}\left[(1+\|F_{n}\|_{*}^{q})^{\frac{2}{q}}-1\right]\right\},\quad F=\lim_{n \to\infty}S_{[\tilde{u}_{0n}]}\|F_{n}\|_{*}^{2}.\]
It is easy to verify that \(\{F_{n}+\tilde{v}_{n}\}\subset\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus \mathcal{M}\) for \(n\) sufficiently large due to
\[\text{dist}(F_{n}+\tilde{v}_{n},\mathcal{M})^{2}=\frac{\|v_{n}\|^{2}+\|u_{0}\|^{2}\big{(}\frac{\mathbf{m}(v_{n})}{\mathbf{m}(u_{0})}\big{)}-\mathbf{m}(v_{n})}{\|v_{n}\|_{*}^{2}}\geq\frac{\text{dist}(u_{n},\mathcal{M})^{2}}{\|v_{n}\|_{*}^{2}},\]
and
\[\lim_{n\to\infty}\mathcal{E}(F_{n}+\tilde{v}_{n})=\frac{A+E}{B+F},\]
due to \(\frac{\|F_{n}\|^{2}}{\|F_{n}\|_{*}^{2}}=\frac{\|\tilde{u}_{0n}\|^{2}}{\|\tilde{u}_{0n}\|_{*}^{2}}\). Note that \(D\leq F\); then, since \(\frac{A}{B}\geq\frac{C}{D}>\frac{E}{F}\), from [25, Lemma 2.4] we obtain
\[\mathcal{B}=\lim_{n\to\infty}\mathcal{E}(u_{n})=\lim_{n\to\infty}\mathcal{E}( \tilde{u}_{n})=\frac{A+C}{B+D}>\frac{A+E}{B+F}=\lim_{n\to\infty}\mathcal{E}(F _{n}+\tilde{v}_{n}).\]
But this contradicts the definition of \(\mathcal{B}\). Hence (4.17) is impossible.
If, on the other hand, \(\tilde{v}_{n}\in\mathcal{M}\) along some subsequence, then \(A=B=0\) in the above and we directly conclude a contradiction in the same way from \(\frac{C}{D}>\frac{E}{F}\).
The remaining case to treat is that where, up to a subsequence,
\[\mathbf{m}(u_{0})>\lim_{n\to\infty}\mathbf{m}(v_{n}).\]
But here one arrives at a contradiction in a similar fashion, with the roles of \(u_{0}\) and \(v_{n}\) reversed and considering
\[\hat{u}_{n}=\frac{v_{n}}{\|u_{0}\|_{*}}+\frac{u_{0}}{\|u_{0}\|_{*}}=:\hat{v}_ {n}+\hat{u}_{0},\quad\text{with }\|\hat{u}_{0}\|_{*}=1.\]
The fact that \(\hat{D}:=\lim_{n\to\infty}\|\hat{v}_{n}\|^{2}=\lim_{n\to\infty}\|v_{n}\|^{2}/ \|u_{0}\|_{*}^{2}>0\) is guaranteed here by assumption. The rest of the proof is identical to the above.
We are now ready to prove our second main result.
**Proof of Theorem 1.5.** Let \(\{u_{n}\}\subset\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\setminus\mathcal{M}\) satisfy (4.6) and (4.7). Suppose for contradiction that \(v_{n}=u_{n}-u_{0}\) does not converge strongly to zero in \(\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\). Then, after passing to a subsequence, still labeled by \(\{v_{n}\}\), we have \(\|v_{n}\|\geq c\) for some \(c>0\). Thus Lemma 4.6 indicates
\[\mathbf{m}(u_{0})=\mathbf{m}(v_{n})+o_{n}(1). \tag{4.18}\]
Suppose first that \(\|v_{n}\|_{*}\leq\|u_{0}\|_{*}+o_{n}(1)\). As in the proof of Lemma 4.6, let us consider
\[\hat{u}_{n}=\frac{v_{n}}{\|u_{0}\|_{*}}+\frac{u_{0}}{\|u_{0}\|_{*}}=:\hat{v}_{n} +\hat{u}_{0},\quad\text{with }\|\hat{u}_{0}\|_{*}=1.\]
Due to (4.18) and Lemma 4.5 we may write
\[\operatorname{dist}(\hat{u}_{n},\mathcal{M})^{2}=\|\hat{u}_{0}\|^{2}- \mathcal{S}_{a,b}\mathbf{m}(\hat{u}_{0})+\|\hat{v}_{n}\|^{2}+o_{n}(1).\]
Combining (4.8) and (4.9) with Lemma 4.1, we deduce that
\[\mathcal{B}+o_{n}(1)=\frac{\|\hat{u}_{0}\|^{2}-\mathcal{S}_{a,b}+\|\hat{v}_{n }\|^{2}-\mathcal{S}_{a,b}\left[(1+\|\hat{v}_{n}\|_{*}^{q})^{\frac{2}{q}}-1 \right]}{\|\hat{u}_{0}\|^{2}-\mathcal{S}_{a,b}\mathbf{m}(\hat{u}_{0})+\|\hat{ v}_{n}\|^{2}}.\]
Similarly to the proof of Lemma 4.6, by the definition of \(\mathcal{B}\) we have
\[\mathcal{B}\leq\mathcal{E}(\hat{u}_{0})=\frac{\|\hat{u}_{0}\|^{2}-\mathcal{S }_{a,b}}{\|\hat{u}_{0}\|^{2}-\mathcal{S}_{a,b}\mathbf{m}(\hat{u}_{0})},\quad \text{if }\hat{u}_{0}\not\in\mathcal{M},\]
and
\[\mathcal{B}+o_{n}(1)=\frac{\|\hat{v}_{n}\|^{2}-\mathcal{S}_{a,b} \left[(1+\|\hat{v}_{n}\|_{*}^{q})^{\frac{2}{q}}-1\right]}{\|\hat{v}_{n}\|^{2} },\quad\text{if }\hat{u}_{0}\in\mathcal{M}.\]
In either case, from [25, Lemma 2.4] we deduce that
\[\mathcal{B}+o_{n}(1)\geq \frac{\|\hat{v}_{n}\|^{2}-\mathcal{S}_{a,b}\left[(1+\|\hat{v}_{n} \|_{*}^{q})^{\frac{2}{q}}-1\right]}{\|\hat{v}_{n}\|^{2}}=1-\frac{\mathcal{S}_{ a,b}\left[(1+\|\hat{v}_{n}\|_{*}^{q})^{\frac{2}{q}}-1\right]}{S_{[\hat{v}_{n}]}\| \hat{v}_{n}\|_{*}^{2}},\]
since (4.18) implies that \(\{\|\hat{v}_{n}\|_{*}\}\) is bounded away from zero for \(n\) sufficiently large, where \(S_{[\hat{v}_{n}]}=\|\hat{v}_{n}\|^{2}/\|\hat{v}_{n}\|_{*}^{2}\). Note that the assumption \(\|v_{n}\|_{*}\leq\|u_{0}\|_{*}+o_{n}(1)\) implies \(\|\hat{v}_{n}\|_{*}\leq 1+o_{n}(1)\); then from [25, Lemma 2.3] we must have
\[\mathcal{B}\geq 1-\frac{\mathcal{S}_{a,b}\left(2^{\frac{2}{q}}-1\right)}{S_{[ \hat{v}_{n}]}}+o_{n}(1).\]
Since we know by Lemma 4.3 that \(\mathcal{B}<2-2^{\frac{2}{q}}\) with strict inequality, we find, for \(n\) sufficiently large, that
\[1-\frac{\mathcal{S}_{a,b}\left(2^{\frac{2}{q}}-1\right)}{S_{[\hat{v}_{n}]}}<2- 2^{\frac{2}{q}},\]
which is equivalent to \(S_{[\hat{v}_{n}]}<\mathcal{S}_{a,b}\). But this contradicts the definition of \(\mathcal{S}_{a,b}\).
If we assume instead the reverse inequality \(\|u_{0}\|_{*}\leq\|v_{n}\|_{*}+o_{n}(1)\), we obtain a contradiction by writing
\[\operatorname{dist}(u_{n},\mathcal{M})^{2}=\|u_{0}\|^{2}+\|v_{n}\|^{2}-\mathcal{S}_{a,b}\mathbf{m}(v_{n})+o_{n}(1),\]
which holds due to (4.18) and Lemma 4.5, and then arguing in exactly the same way with the roles of \(u_{0}\) and \(v_{n}\) reversed.
Thus we have shown that \(v_{n}\) must converge strongly to zero in \(\mathcal{D}_{a}^{1,2}(\mathbb{R}^{2})\). By Proposition 4.4, the proof of Theorem 1.5 is now completed.
## Acknowledgements
The research has been supported by National Natural Science Foundation of China (No. 11971392).
|
2307.14587 | Artificial intelligence-aided protein engineering: from topological data
analysis to deep protein language models | Protein engineering is an emerging field in biotechnology that has the
potential to revolutionize various areas, such as antibody design, drug
discovery, food security, ecology, and more. However, the mutational space
involved is too vast to be handled through experimental means alone. Leveraging
accumulative protein databases, machine learning (ML) models, particularly
those based on natural language processing (NLP), have considerably expedited
protein engineering. Moreover, advances in topological data analysis (TDA) and
artificial intelligence-based protein structure prediction, such as AlphaFold2,
have made more powerful structure-based ML-assisted protein engineering
strategies possible. This review aims to offer a comprehensive, systematic, and
indispensable set of methodological components, including TDA and NLP, for
protein engineering and to facilitate their future development. | Yuchi Qiu, Guo-Wei Wei | 2023-07-27T02:14:09Z | http://arxiv.org/abs/2307.14587v1 | Artificial intelligence-aided protein engineering: from topological data analysis to deep protein language models
###### Abstract
Protein engineering is an emerging field in biotechnology that has the potential to revolutionize various areas, such as antibody design, drug discovery, food security, ecology, and more. However, the mutational space involved is too vast to be handled through experimental means alone. Leveraging accumulative protein databases, machine learning (ML) models, particularly those based on natural language processing (NLP), have considerably expedited protein engineering. Moreover, advances in topological data analysis (TDA) and artificial intelligence-based protein structure prediction, such as AlphaFold2, have made more powerful structure-based ML-assisted protein engineering strategies possible. This review aims to offer a comprehensive, systematic, and indispensable set of methodological components, including TDA and NLP, for protein engineering and to facilitate their future development.
Topological data analysis; Protein language models; Protein engineering; Deep learning and machine learning
Y. Qiu and Guo-Wei Wei
Department of Mathematics, Michigan State University, East Lansing, MI 48824, USA
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA
Corresponding author: [email protected]
## 1 Introduction
Protein engineering aims to design and discover proteins with desirable functions, such as improving the phenotype of living organisms, enhancing enzyme catalysis, and boosting antibody efficacy [1]. It has tremendous impacts on drug discovery, enzyme development and applications, the development of biosensors, diagnostics, and other biotechnology, as well as understanding the fundamental principles of the protein structure-function relationship and achieving environmental sustainability and diversity. Protein engineering has the potential to continue to drive innovation and improve our lives in the future.
Two traditional protein engineering approaches are directed evolution [2] and rational design [3, 4]. Directed evolution is a process used to create proteins or enzymes with improved or novel functions [5]. The method involves introducing mutations into the genetic code of a target protein and screening the resulting variants for improved function. The process is "directed" because it is guided by the desired outcome, such as increased activity, stability, specificity, binding affinity, or fitness. Rational design involves using knowledge of protein structure and function to engineer specific, desirable changes to the protein sequence and/or structure [4, 6]. Both approaches resort to experimental screening of an astronomically large mutational space, i.e., \(20^{N}\) for a protein of \(N\) amino acid residues, which is expensive, time-consuming, and intractable [7]. As a result, only a small fraction of the mutational space can be explored experimentally, even with the most advanced high-throughput screening technology.
Recently, data-driven machine learning has emerged as a new approach for directed evolution and protein engineering [8, 9]. Machine learning-assisted protein engineering (MLPE) refers to the use of machine learning models and techniques to improve the efficiency and effectiveness of protein engineering. MLPE not only reduces the cost and expedites the process of protein engineering, but also optimizes the screening and selection of protein variants [10], leading to higher efficiency and productivity. Specifically, by using machine learning to analyze and predict the effects of mutations on protein function, researchers can rapidly generate and test large numbers of variants, thereby establishing the protein-to-fitness map (i.e., fitness
landscape) from sparsely sampled experimental data [11, 12]. This approach accelerates the process of protein engineering.
The process of data-driven MLPE typically involves several elements, including data collection and preprocessing, model design, feature extraction and selection, algorithm selection and design, model training and validation, experimental validation, and iterative model optimization. Driven by technological advancements in high-throughput sequencing and screening technologies, there has been a substantial accumulation of general-purpose experimental datasets on protein sequences, structures, and functions [13, 14]. These datasets, along with numerous protein-engineering specific deep mutational scanning (DMS) libraries [15], provide valuable resources for machine learning training and validation.
Data representation and feature extraction are crucial steps in the design of machine learning models, as they help to reduce the complexity of biological data and enable more effective model training and prediction. There are several typical types of feature embedding methods, including sequence-based, structure-based [16, 17], physics-based [18, 19], and hybrid methods [5]. Among them, sequence-based embeddings have been dominant due to the success of various natural language processing (NLP) methods such as long short-term memory (LSTM) [21], autoencoders [22], and Transformers [23], which allow unsupervised pre-training on large-scale sequence data. Structure-based embeddings take advantage of existing protein three-dimensional (3D) structures in the Protein Data Bank (PDB) [13] and advanced structure predictions such as AlphaFold2 [24]. These methods further exploit advanced mathematical tools, such as topological data analysis (TDA) [25, 26], differential geometry [27, 28], or graph approaches [29]. Physics-based methods utilize physical models, such as density functional theory [30], molecular mechanics [31], Poisson-Boltzmann model [32], etc. While these methods are highly interpretable, their performance often depends on model parametrization. Hybrid methods may select a combination of two or more types of features.
The design and selection of MLPE algorithms depend on the availability of data and the efficiency of experiments. In real-world scenarios, where small training datasets are prevalent, simpler machine learning algorithms such as support vector machines and ensemble methods are often employed. In contrast, deep neural networks are more suitable for larger training datasets. Although regression tasks are typically used to distinguish one set of mutations from another [8], unsupervised zero-shot learning methods can also be utilized to address scenarios with limited data availability [33, 34]. The iterative interplay between experiments and models is another crucial component of MLPE, in which newly screened data are used to refine the models. Consequently, the selection of an appropriate MLPE model is influenced by factors like experimental frequency and throughput. This iterative refinement process enables MLPE to deliver optimized protein engineering outcomes.
MLPE has the potential to significantly accelerate the development of new and improved proteins, revolutionizing numerous areas of science and technology (Figure 1). Despite considerable advances in MLPE, challenges remain in many aspects, such as data preprocessing, feature extraction, integration with advanced algorithms, and iterative optimization through experimental validation. This review examines published works and offers insights into these technical advances. We place particular emphasis on the advanced mathematical TDA approaches, aiming to make them accessible to general readers. Furthermore, we review current advanced NLP-based models and efficient MLPE approaches. Last, we discuss potential future directions in the field.
### Sequence-based deep protein language models
In artificial intelligence, natural language processing (NLP) has recently gained much attention for representing and analyzing human language computationally [35]. NLP covers a wide range of tasks, including language translation, sentiment analysis, chatbot development, speech recognition, and information extraction, among others. The development and advancement of various machine learning models have been instrumental in tackling the complex challenges posed by NLP tasks.
Similar to human language, the primary structure of a protein is represented by a string of amino acids drawn from the 20 canonical amino acids. The analogy between protein sequences and human languages has inspired the development of computational methods for analyzing and understanding proteins using models adopted from NLP (Figure 1a). Self-supervised, sequence-based protein language models have been applied to study the underlying patterns and relationships within protein sequences, predict their structural and functional properties, and facilitate protein engineering. These language models are pretrained on a given body of data, allowing protein properties to be modeled for each given protein. There are two major types of protein language models utilizing different resources of protein data [33] (Table 1). The first is local evolutionary models, which focus on homologs of the target protein, such as multiple sequence alignments (MSAs), to learn evolutionary information from closely related mutations. The second is global evolutionary models, which learn from large protein sequence databases such as UniProt [14] and Pfam [36].
Figure 1: **Machine learning-assisted protein engineering (MLPE).** (a). Machine learning models build fitness predictor using structure and sequence protein data. (b). Zero-shot predictors navigate fitness landscape without labeled data. (c). Greedy acquisition exploits fitness using fitness predictions. (d). Uncertainty acquisition balances exploitation and exploration. The example shows a Gaussian upper confidence bound (UCB) acquisition. (e). Experimental measurements query fitness of candidate proteins in sequential optimization.
### Local evolutionary models
To train a local evolutionary model, MSA search strategies such as jackhmmer [51] and EvCouplings [52] are first employed. Taking MSAs as inputs, local evolutionary models learn the probabilistic distribution of mutations for a target protein. Probabilistic models, including hidden Markov models (HMMs) [53, 37] and Potts-based models [38], are popular for modeling mutational effects. Transformer models have also been introduced to learn distributions from MSAs; the MSA Transformer [39] introduces a row- and column-attention mechanism. In recent years, variational autoencoders (VAEs) [54] have served as an alternative for modeling MSAs by including the dependency between residues and aligning all sequences to a probability distribution. The VAE model DeepSequence [22] and the Bayesian VAE model EVE [40] exhibit excellent performance in modeling mutational effects [55, 33, 5].
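As a minimal illustration of how a local evolutionary model turns an MSA into a mutational distribution, the sketch below builds a per-position amino acid profile with pseudocounts and scores a candidate mutation by its log-odds against the wild type. This is a deliberately simplified stand-in for the HMM, Potts, and VAE models cited above; the toy alignment and function names are illustrative rather than taken from any specific tool.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def msa_profile(msa, pseudocount=1.0):
    """Column-wise amino acid frequencies of an MSA (list of equal-length strings)."""
    length = len(msa[0])
    counts = np.full((length, 20), pseudocount)
    for seq in msa:
        for pos, aa in enumerate(seq):
            if aa in AA_INDEX:          # skip gaps and non-canonical symbols
                counts[pos, AA_INDEX[aa]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def mutation_log_odds(profile, wild_type, pos, new_aa):
    """Log-odds of mutating position `pos` (0-based) from the wild-type residue to `new_aa`."""
    return np.log(profile[pos, AA_INDEX[new_aa]]) - np.log(profile[pos, AA_INDEX[wild_type[pos]]])

# Toy example: three aligned homologs of a 5-residue fragment
msa = ["ACDEF", "ACDEY", "ACNEF"]
profile = msa_profile(msa)
print(mutation_log_odds(profile, wild_type="ACDEF", pos=2, new_aa="N"))
```

Full local models such as EVmutation and DeepSequence go beyond this independent-site picture by modeling couplings between positions.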
### Global evolutionary models
With large-scale data, global evolutionary models usually adopt large NLP architectures. Convolutional neural networks (CNNs) [56] and residual networks (ResNet) [57] have been employed for protein sequence analysis [41]. Large-scale models based on long short-term memory (LSTM) [58] have also gained popularity, as seen in Bepler [42], UniRep [21], and eUniRep [43]. In recent years, the Transformer architecture has achieved state-of-the-art performance in NLP by introducing the attention mechanism and self-supervised learning via the masked-filling training strategy [59, 60]. Inspired by these advances, Transformer-based protein language models provide new opportunities for building global evolutionary models. A variety of Transformer-based models have been developed, such as evolutionary scale modeling (ESM) [23, 44], ProGen [47], ProteinBERT [49], Tranception [15], and ESM-2 [50].
### Hybrid approach via fine-tune pre-training
Although global evolutionary models can learn from the wide variety of sequences derived from natural evolution, they face challenges in concentrating on local information when predicting the effects of site-specific mutations in a target protein. To enhance the performance of global evolutionary models, fine-tuning strategies are subsequently implemented. Specifically, the fine-tuning strategy further refines the pre-trained global models with local information using MSAs or target training data. The fine-tuned eUniRep [43] shows significant improvement over UniRep [21]. Similar improvements were also reported for ESM models [23, 44]. The Tranception model proposed a hybrid approach combining global autoregressive inference and local retrieval inference from MSAs [15], achieving better performance than other global and local models.
With various language models available, comprehensive studies of these models and of strategies for building downstream models are necessary. One study explored different approaches that utilize sequence embeddings to build downstream models [61]. Two other studies further benchmarked
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Architecture**} & \multirow{2}{*}{**Max len**} & \multirow{2}{*}{**Dim**} & \multirow{2}{*}{**\# para**} & \multicolumn{2}{c|}{**Pretrained data**} & \multirow{2}{*}{**Time\({}^{1}\)**} \\ \cline{5-8} \cline{7-8} & & & & & & **Source** & **Size** & \\ \hline \multicolumn{8}{|c|}{**Local Models**} \\ \hline Profile HMMs [37] & Hidden Markov & – & – & – & MSAs & – & Oct 2012 \\ \hline EvMutation [38] & Potts Models & – & – & – & MSAs & – & Jan 2017 \\ \hline MSA Transformer [39] & Transformer & 1024 & 768 & 100M & UniRef50 [14] & 26M & Feb 2021 \\ \hline DeepSequence [22] & VAEs & – & – & – & MSAs & – & Dec 2017 \\ \hline EVE [40] & Bayesian VAEs & – & – & – & MSAs & – & Oct 2021 \\ \hline \multicolumn{8}{|c|}{**Global Models**} \\ \hline TAPE ResNet [41] & ResNet & 1024 & 256 & 38M & Pfam [36] & 31M & Jun 2019 \\ \hline TAPE LSTM [41] & LSTM & 1024 & 2048 & 38M & Pfam [36] & 31M & Jun 2019 \\ \hline TAPE Transformer [41] & Transformer & 1024 & 512 & 38M & Pfam [36] & 31M & Jun 2019 \\ \hline Bepler [42] & LSTM & 512 & 100 & 22M & Pfam [36] & 31M & Feb 2019 \\ \hline UniRep [21] & LSTM & 512 & 1900 & 18M & UniRef50 [14] & 24M & Mar 2019 \\ \hline eUniRep [43] & LSTM & 512 & 1900 & 18M & UniRef50 [14]; & 24M & Jan 2020 \\ \hline ESM-1b [23] & Transformer & 1024 & 1280 & 650M & UniRef50 [14] & 250M & Dec 2020 \\ \hline ESM-1v [44] & Transformer & 1024 & 1280 & 650M & UniRef90 [14] & 98M & Jul 2021 \\ \hline ESM-1F1 [45] & Transformer & – & 512 & 124M & UniRef50 [14]; & 12M sequences; & Sep 2022 \\ \hline ProGen [47] & Transformer & 512 & – & 1.2B & UniParc [14]; & 281M & Jul 2021 \\ & & & & & UniprotKB [14]; & & \\ & & & & & Pfam [36]; NCBI & & \\ & & & & & Taxonomy [48] & & \\ \hline ProteinBERT [49] & Transformer & 1024 & – & 16M & UniRef90 [14] & 106M & May 2021 \\ \hline Tranception [15] & Transformer & 1024 & 1280 & 700M & UniRef100 [14] & 250M & May 2022 \\ \hline ESM-2 [50] & Transformer & 1024 & 5120 & 15B & UniRef90 [14] & 65M & Oct 2022 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of protein language models. # para: number of parameters which are only provided for deep learning models. Max len: maximum length of input sequence. Dim: latent space dimension. Size: pre-trained data size where it refers to number of sequences without specification except MSA transformer includes 26 millions of MSAs. K: thousands; M: millions; B: billions. \({}^{1}\): Time for the first preprint. The input data size, hidden layer dimension, and number of parameters are only provided for global models.
many unsupervised and supervised models in predicting protein fitness [55, 33].
### Structure-based topological data analysis (TDA) models
Aided by advanced NLP algorithms, sequence-based models have become the dominant approach in MLPE [12, 11]. However, sequence-based models lack an appropriate description of stereochemical information, such as cis-trans isomerism, conformational isomerism, and enantiomers. Therefore, sequence embeddings cannot distinguish stereoisomers, which are widely present in biological systems and play a crucial role in many chemical and biological processes. Structure-based models offer a solution to this problem, and TDA has become a successful tool for building structure-based models for MLPE [5].
TDA is a mathematical framework based on algebraic topology [62, 63], which allows us to characterize complex geometric data, identify underlying geometric shapes, and uncover topological structures present in the data. TDA finds its applications in a wide range of fields, including neuroscience, biology, materials science, and computer vision. It is especially useful in situations where the data is complex, high-dimensional, and noisy, and where traditional statistical methods may not be effective. In this section, we provide an overview of various types of TDA methods (Table 2). In addition, we review graph neural networks, which are deep learning frameworks cognizant of topological structures, along with their applications in protein engineering. For those readers who are interested in the deep mathematical details of TDA, we have added a supplementary section dedicated to two TDA methods - persistent homology and persistent spectral graph (PSG) in Supplementary Methods.
### Homology
The basic idea behind TDA is to represent the data as a point cloud in a high-dimensional topological space, and then study the topological invariants of this space, such as the genus number, Betti number, and Euler characteristic. Among them, the Betti numbers, specifically Betti zero, Betti one, and Betti two, can be interpreted as representing connectedness, holes, and voids, respectively [76, 77]. These numbers can be computed as the ranks of the corresponding homology groups in appropriate dimensions.
Homology groups are algebraic structures that are associated with topological spaces [76]. They provide information about the topological connectivity of geometric objects. The basic idea behind homology is to consider the cycles and boundaries of a space. Loosely speaking, a cycle is a set of points in the space that form a closed loop, while a boundary is a set of points that form the boundary of some region in the space. The homology group of a space is defined as the group of cycles modulo the group of boundaries. That is, we identify two cycles that differ by a boundary and consider them to be equivalent. The resulting homology group encodes information about the Betti numbers of the space.
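In the standard notation of algebraic topology, for a chain complex with boundary maps \(\partial_{k}\), the \(k\)-th homology group and Betti number described above are

\[
H_{k}=\ker\partial_{k}\,/\,\operatorname{im}\partial_{k+1},\qquad \beta_{k}=\operatorname{rank}H_{k}=\dim\ker\partial_{k}-\dim\operatorname{im}\partial_{k+1},
\]

so that \(\beta_{0}\), \(\beta_{1}\), and \(\beta_{2}\) count connected components, loops, and voids, respectively.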
Homology theory has many applications in mathematics and science. It is used to classify topological spaces in category theory, to study the properties of manifolds in differential geometry and algebraic geometry, and to analyze data in various scientific fields [76]. However, the original homology groups offer truly geometry-free representations and are too abstract to carry sufficient geometric information of data. Persistent homology was designed to improve homology groups' ability for data analysis.
### Persistent homology
Persistent homology is a relatively new tool in algebraic topology that is designed to incorporate multiscale topological analysis of data [62, 63]. The basic idea behind persistent homology is to construct a family of geometric shapes of the original data by filtration (Figure 2c). Filtration systematically enlarges the radius of each data point in a point cloud, leading to a family of topological spaces with distinct topological dimensions and connectivity. Homology groups are built from the family of shapes, giving rise to systematic changes in topological invariants, or Betti numbers, at various topological dimensions and geometric scales. Topological invariants based on Betti numbers are expressed in terms of persistence barcodes [78] (Figure 2d), persistence diagrams [79], persistence landscapes [80], or persistence images [81]. Persistent topological representations are widely used in applications, particularly in association with machine learning models [82].
Persistent homology is the most important approach in TDA (see Table 2 for a summary of major TDA approaches). It reveals the shape of data in terms of the topological invariants and has had tremendous success in scientific applications, including image and signal processing [83], machine learning [84], biology [82], and neuroscience [85]. Nonetheless, to effectively analyze complex biomolecular data, persistent homology requires further refinement and adjustment [86].
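As a concrete sketch of this pipeline, the snippet below computes persistence diagrams for a point cloud with the open-source `ripser` package, assuming the points stand in for C-alpha coordinates parsed from a PDB file or an AlphaFold2 model. A plain Vietoris-Rips filtration is used purely for illustration; production models often employ alpha complexes or the element-specific variants discussed below.

```python
import numpy as np
from ripser import ripser          # pip install ripser

# Placeholder point cloud, shape (n_points, 3); in practice these would be
# C-alpha coordinates parsed from a PDB file or a predicted structure.
points = np.random.default_rng(0).normal(size=(120, 3)) * 10.0

# Vietoris-Rips filtration up to dimension 2 (captures Betti-0, Betti-1, Betti-2).
diagrams = ripser(points, maxdim=2)["dgms"]

for dim, dgm in enumerate(diagrams):
    # Each row of dgm is a (birth, death) pair; long bars indicate robust features.
    persistence = dgm[:, 1] - dgm[:, 0]
    finite = persistence[np.isfinite(persistence)]
    if len(finite):
        print(f"H{dim}: {len(dgm)} bars, longest finite bar = {finite.max():.2f}")
    else:
        print(f"H{dim}: {len(dgm)} bars")
```

The resulting (birth, death) pairs are the persistence diagrams that downstream models convert into barcodes, landscapes, or images for machine learning.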
### Persistent cohomology and element-specific persistent homology
One major limitation of persistent homology is that it fails to describe the heterogeneous information of data points [64]. In other words, it treats all entries in the point cloud equally without considering other important information about the data. Biomolecules, for example, contain many different element types, and each atom may have a different atomic partial charge, atomic interaction environment, and electrostatic potential function that cannot be captured by persistent homology. Thus, it is crucial to have a topological technique that can incorporate both geometric and nongeometric information into a unified framework.

Figure 2: **Conceptual illustration of the TDA-based protein modeling.** (a). A three-dimensional protein structure. (b). Point cloud representation of protein structure. (c). Simplicial complexes and filtration provide multiscale topological representation of the point cloud. (d). Persistent homology characterizes topological evolution of the point cloud. (e). Persistent Laplacian characterizes shape evolution of the point cloud.
Persistent cohomology was developed to provide such a mathematical paradigm [64]. In this framework, nongeometric information can either be prescribed globally or reside locally on atoms, bonds, or many-body interactions. In topological terminology, nongeometric information is defined on simplicial complexes. This persistent cohomology-based approach can capture multiscale geometric features and reveal non-geometric interaction patterns through topological invariants, or enriched persistence barcodes. It has been demonstrated that persistent cohomology outperforms other methods in benchmark protein-ligand binding affinity prediction datasets [64], which is a non-trivial problem in computational drug discovery.
An alternative approach for addressing the limitation of persistent homology is to use element-specific persistent homology (ESPH) [16]. The motivation behind ESPH is the same as that for persistent cohomology, but ESPH is relatively simple. Basically, atoms in the original biomolecule are grouped according to their element types, such as C, N, O, S, H, etc. Then, their combinations, such as CC, CN, CO, etc., are identified, and persistent homology analysis is applied to the atoms in each element combination, resulting in ESPH analysis. As a result, ESPH reduces geometric and biological complexities and embeds chemical and biological information into topological abstraction. The ESPH approach was used to win the D3R Grand Challenges, a worldwide competition series in computer-aided drug design [87].
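A minimal sketch of the ESPH idea, assuming the same `ripser` backend and a toy atom list, is shown below: atoms are grouped by element-pair combination and persistent homology is computed separately on each group, yielding one set of barcodes per pair. The full ESPH construction in [16] involves additional filtration and featurization choices that are omitted here.

```python
import numpy as np
from itertools import combinations_with_replacement
from ripser import ripser

def element_specific_barcodes(atoms, elements=("C", "N", "O", "S"), maxdim=1):
    """atoms: list of (element_symbol, xyz) tuples.
    Returns {("C","C"): diagrams, ("C","N"): diagrams, ...}."""
    barcodes = {}
    for e1, e2 in combinations_with_replacement(elements, 2):
        coords = np.array([xyz for elem, xyz in atoms if elem in (e1, e2)])
        if len(coords) > maxdim + 2:           # need enough points to form simplices
            barcodes[(e1, e2)] = ripser(coords, maxdim=maxdim)["dgms"]
    return barcodes

# Toy input: a handful of atoms with fake coordinates
rng = np.random.default_rng(1)
atoms = [(e, rng.normal(size=3)) for e in "CCNNOOCSNC"]
esph = element_specific_barcodes(atoms)
print(sorted(esph.keys()))
```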
### Persistent topological Laplacians
However, the aforementioned TDA methods are still limited in describing complex data: they lack a description of non-topological changes (i.e., homotopic shape evolution) [5], they cannot cope with directed networks and digraphs (e.g., atomic partial charges and polarizations, gene regulation networks), and they cannot characterize structured data (e.g., functional groups, binding domains, and motifs) [86]. These limitations necessitate the development of innovative strategies.
Persistent topological Laplacians (PTLs) are a new class of mathematical tools designed to overcome the aforementioned challenges in TDA [86]. One of the first methods in this class is the PSG [3], also known as persistent combinatorial Laplacians [3] or persistent Laplacians [4]. PSGs have both harmonic spectra with zero eigenvalues and non-harmonic spectra with non-zero eigenvalues (Figure 2e). The harmonic spectra recover all the topological invariants from persistent homology, while the non-harmonic spectra capture the homotopic shape evolution of data that cannot be described by persistent homology [86]. PSGs have been used for accurate forecasting of emerging dominant SARS-CoV-2 variants BA.4/BA.5 [88], facilitating machine learning-assisted protein engineering predictions [5], and other applications [89].
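The toy sketch below conveys the flavor of the harmonic/non-harmonic split for the 0-th spectrum: at each filtration radius a graph Laplacian is built on the Vietoris-Rips 1-skeleton, the multiplicity of the zero eigenvalue recovers Betti-0 (the harmonic part), and the smallest non-zero eigenvalue tracks homotopic shape changes (the non-harmonic part). The full persistent Laplacian of [3, 4] also couples boundary operators across filtration steps and higher dimensions, which this simplification omits.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rips_graph_laplacian_spectra(points, radii):
    """0-th (graph) Laplacian spectrum of the Vietoris-Rips 1-skeleton at each radius."""
    dist = squareform(pdist(points))
    spectra = {}
    for r in radii:
        adj = (dist <= r).astype(float) - np.eye(len(points))   # edges within radius r
        lap = np.diag(adj.sum(axis=1)) - adj                     # combinatorial Laplacian
        eig = np.sort(np.linalg.eigvalsh(lap))
        betti0 = int(np.sum(eig < 1e-8))            # harmonic spectrum -> connected components
        fiedler = eig[betti0] if betti0 < len(eig) else 0.0      # first non-harmonic eigenvalue
        spectra[r] = (betti0, fiedler)
    return spectra

points = np.random.default_rng(2).normal(size=(40, 3))
for r, (b0, lam) in rips_graph_laplacian_spectra(points, radii=[0.5, 1.0, 2.0, 4.0]).items():
    print(f"radius {r}: Betti-0 = {b0}, first non-zero eigenvalue = {lam:.3f}")
```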
Like persistent homology, persistent Laplacians are limited in their ability to handle directed networks and atomic polarizations. To address these limitations, persistent path Laplacians have been developed [73]. Their harmonic spectra recover the topological invariants of persistent path homology [65], while their non-harmonic spectra capture homotopic shape evolution. Both persistent path Laplacians and persistent path homology were developed as a generalization of the path complex [90].
None of the PTLs mentioned above are capable of handling different types of elements in a molecule as persistent cohomology does. To overcome this limitation, persistent sheaf Laplacians [72] were designed, inspired by persistent cohomology [64], persistent Laplacians [3], and sheaf Laplacians for cellular sheaves [91]. The aim of persistent sheaf Laplacians is to discriminate between different objects in a point cloud. By associating a set of non-trivial labels with each point in a point cloud, a persistent module of sheaf cochain complexes is created, and the spectra of persistent sheaf Laplacians encode both geometrical and non-geometrical information [72]. The theory of persistent sheaf Laplacians is an elegant method for the fusion of different types of data and opens the door to future developments in TDA, geometric data analysis, and algebraic data analysis.
Persistent hypergraph Laplacians enable the topological description of internal structures or organizations in data [74]. Persistent hyperdigraph Laplacians further allow for the topological Laplacian modeling of directed hypergraphs [75]. These persistent topological Laplacians can be utilized to describe intermolecular and intramolecular interactions. As protein structures are inherently multiscale, it is natural to apply persistent hypergraph Laplacians and persistent
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Method** & **Topological space** & **Node attribute** & **Edge attribute** \\ \hline \multicolumn{4}{|c|}{**Homology-based**} \\ \hline Persistent Homology [62, 63] & Simplicial complex & None & None \\ \hline Element-specific PH (ESPH) [16] & Simplicial complex & Group labeled & Group labeled \\ \hline Persistent Cohomology [64] & Simplicial complex & Labeled & Labeled \\ \hline Persistent Path Homology [65] & Path complex & Path & Directed \\ \hline Persistent Flag Homology [66] & Flag complex & None & Directed \\ \hline Evolutionary homology [67] & Simplicial complex & Weighted & Weighted \\ \hline Weighted persistent homology [68] & Simplicial complex & Weighted & Weighted \\ \hline \multicolumn{4}{|c|}{**Laplacian-based**} \\ \hline Persistent Spectral Graph [3, 4] & Simplicial complex & None & None \\ \hline Persistent Hodge Laplacians [71] & Manifold & Continuum & Continuum \\ \hline Persistent Sheaf Laplacians [72] & Cellular complex & Labeled & Sheaf relation \\ \hline Persistent Path Laplacians [73] & Path complex & Path & Direction \\ \hline Persistent Hypergraph [74] & Hypergraph & Hypernode & Hyperedge \\ \hline Persistent Directed Hypergraphs [75] & Hypergraph & Hypernode & Directed hyperedge \\ \hline \end{tabular}
\end{table}
Table 2: Summary of topological data analysis (TDA) methods for structures.
hyperdigraph Laplacians to delineate the protein structure-function relationship.
Finally, unlike all the aforementioned PTLs, evolutionary de Rham-Hodge Laplacians, or persistent Hodge Laplacians, are defined on a family of filtration-induced differentiable manifolds [71]. They are particularly valuable for the multiscale topological analysis of volumetric data. Technically, a similar algebraic topology structure is shared by persistent Hodge Laplacians and persistent Laplacians, but the former is a continuum theory for volumetric data while the latter is a discrete formulation for point clouds. As such, their underlying mathematical definitions, i.e., differential forms on manifolds versus simplicial complexes on graphs, are sharply different.
### Deep graph neural networks and topological deep learning
Similar to topological data analysis, graph- and topology-based deep learning models have been proposed to capture connectivity and shape information of protein structure data. Graph neural networks (GNNs) consider the low-order interactions between vertices by aggregating information from neighboring vertices. A variety of popular graph neural network layers have been proposed, such as graph convolutional networks (GCN) [92], graph attention networks (GAT) [93], graph sample and aggregate (GraphSAGE) [94], graph isomorphism networks (GIN) [95], and gated graph neural networks [96].
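For readers new to GNNs, the snippet below sketches the propagation rule of a single GCN layer [92], \(H^{\prime}=\sigma(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}HW)\), applied to a toy residue-contact graph; the adjacency matrix, features, and weights are placeholders, and practical models stack several such layers inside a dedicated library.

```python
import numpy as np

def gcn_layer(adjacency, features, weight):
    """One GCN propagation step: symmetric normalization, aggregation, linear map, ReLU."""
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt              # D^{-1/2} A D^{-1/2}
    return np.maximum(a_norm @ features @ weight, 0.0)    # ReLU activation

# Toy residue-contact graph: 4 residues, edges between contacting residues
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
features = np.random.default_rng(3).normal(size=(4, 8))   # per-residue input features
weight = np.random.default_rng(4).normal(size=(8, 16))    # learned in practice
hidden = gcn_layer(adjacency, features, weight)
print(hidden.shape)   # (4, 16)
```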
With this variety of GNN layer architectures, self-supervised learning models are widely used for representation learning of graph-based data. The graph autoencoder (GAE) and variational graph autoencoder (VGAE) consist of both an encoder and a decoder, where the decoder employs a linear inner product to reconstruct the adjacency matrix [97]; most other graph-based self-supervised models only have an encoder. Deep graph infomax (DGI) maximizes mutual information between a graph's local and global features to achieve self-supervised learning [98]. Graph contrastive learning (GRACE) constructs positive and negative pairs from a single graph and trains a GNN to differentiate between them [99]. The self-supervised graph transformer (SSGT) uses masked node prediction to train the model: given a masked graph, it tries to predict the masked node's attributes from the unmasked nodes [100].
In applications to learning protein structures, GCNs have been widely applied to building structure-to-function maps of proteins [101, 102]. Moreover, self-supervised models provide powerful pre-trained models for learning representations of protein structures. GeoPPI [103] proposed a graph neural network-based autoencoder to extract structural embeddings at the protein-protein binding interface. The subsequent downstream models allow accurate predictions of protein-protein binding affinity changes upon mutation [103] and have further been used to design effective antibodies against SARS-CoV-2 variants [104]. GRACE has been applied to learn geometric representations of protein structures [105]. To incorporate the critical biophysical properties and interactions between residues and atoms in protein structures, graph-based self-supervised learning models have been customized for specific functions. The inverse protein folding protocol was proposed to capture the complex structural dependencies between residues in its representation learning [106, 45]. OAGNNs were proposed to better sense geometric characteristics such as intra-residue torsion angles and inter-residue orientations in their representation learning [107].
Topological deep learning, proposed by Cang and Wei in 2017 [108], is an emerging paradigm. It integrates topological representations with deep neural networks for protein fitness learning and prediction [108, 87, 5]. Similar graph and topology-based deep learning architectures have also been proposed to capture connectivity and shape information of protein structure data [88, 75]. Inspired by TDA, high-order interactions among neural nodes were proposed in \(k\)-GNNs [109] and simplicial neural networks [110].
### Artificial intelligence-aided protein engineering
Protein engineering is a typical black-box optimization problem, which focuses on finding the optimal solution without explicitly knowing the objective function and its gradient. In protein engineering, the goal in designing algorithms for this problem is to efficiently search for the best sequence within a large search space:
\[x^{*}=\operatorname*{arg\,max}_{x\in\mathcal{S}}f(x), \tag{1}\]
where \(\mathcal{S}\) is an unlabeled candidate sequence library, \(x\) is a sequence in the library and \(f(x)\) is the unknown sequence-to-fitness map for optimization. The fitness landscape, \(f(\mathcal{S})\), is a high-dimensional surface that maps amino acid sequences to properties such as activity, selectivity, stability, and other physicochemical features.
There are two practical challenges in protein engineering. First, the fitness landscape is usually epistatic [111, 112]: the contributions of individual amino acid residues to protein fitness depend on one another. This interdependence leads to complex, non-linear interactions among different residues; in other words, the fitness landscape contains a large number of local optima. For example, in a four-site mutational fitness landscape for the GB1 protein with \(20^{4}=160,000\) mutations, 30 local maximum fitness peaks were found [111]. Both traditional directed evolution experiments, such as single-mutation walks and recombination, and machine learning models find it difficult to locate the global optimum without becoming trapped at a local one. Second, the protein engineering process usually collects a limited amount of data compared to the huge sequence library. There are an enormous number of ways to mutate any given protein: for a 300-amino-acid protein, there are 5,700 possible single-amino-acid substitutions and 32,381,700 ways to make just two substitutions with the 20 canonical amino acids [12]. Even with high-throughput experiments, only a small fraction of the sequence library can be screened. Moreover, many systems, such as membrane proteins [113], only have low-throughput assays, making the process even more difficult.
With enriched data-driven protein modeling approaches spanning protein sequences and structures, advanced machine learning methods have been widely developed to accelerate protein engineering in silico (Figure 1a) [11, 112, 114, 115]. Utilizing a limited experimental capacity, machine learning models can effectively augment the fitness evaluation process, enabling the exploration of a vast search space \(\mathcal{S}\). This approach facilitates the discovery of optimal solutions within complex design spaces, despite constraints on the number of trials or experiments.
Using a limited number of experimentally labeled sequences, machine learning models can carry out zero-shot or few-shot predictions [11]. The accuracy of these predictions largely depends on the distribution of the training data, which influences the model's ability to generalize to new sequences. Concretely, if the training data is representative or closer to a given sequence, the model is more likely to make
accurate predictions for that specific sequence. Conversely, if the training data is not representative or is distant from the given sequence, the model's predictive accuracy may be compromised, leading to less reliable results. Therefore, MLPE is usually an iterative process between machine learning models and experimental screens. Incorporating the exploration-exploitation trade-off in this context is essential for achieving optimal results. During the iterative process, the model must balance exploration, where it seeks uncertain regions in which machine learning models have low accuracy, with exploitation, where it refines and maximizes fitness based on previously gained knowledge. A proper balance is critical to prevent overemphasis on either exploration or exploitation, which may lead to suboptimal solutions. In particular, the epistatic nature of protein fitness landscapes influences the exploration-exploitation trade-off in the design process.
MLPE methods need to take the experimental capacity into account when attempting to balance exploitation and exploration. In this section, we discuss different strategies depending on the available experimental capacity. First, we discuss the zero-shot strategy used when no labeled experimental data is available. Second, we discuss supervised models for performing greedy search (i.e., exploitation). Last, we discuss uncertainty quantification models that balance the exploration-exploitation trade-off.
### Unsupervised zero-shot strategy
First, we review the zero-shot strategy, which interrogates protein fitness in an unsupervised manner (Figure 1b and Table 3). It is designed for early-stage scenarios in which no experiments have been conducted or the experimentally labeled data is too limited to allow accurate fitness predictions from supervised models [11, 5]. Zero-shot predictors delineate a fitness landscape at the early stage of protein engineering. Essential residues can be identified and prioritized for mutational experiments, allowing for a more targeted approach to protein engineering [22]. Additionally, the initial fitness landscape can be utilized to filter out protein candidates with a low likelihood of exhibiting the desired functionality. By focusing on sequences with higher probabilities, the protein engineering process can be made more efficient and effective [34].
Zero-shot predictions rely on the model's ability to recognize patterns in naturally observed proteins, enabling it to make informed predictions for new sequences without having direct training data for the target protein. As discussed in Section 3, protein language models, particularly generative models, learn the distribution of naturally observed proteins which are usually functional. The learned distribution can be used to assess the likelihood that a newly designed protein lies within the distribution of naturally occurring proteins, thus providing valuable insights into its potential functionality and stability [11].
VAEs, such as DeepSequence [22] and EVE [40], are popular local evolutionary models for zero-shot predictions. In a VAE, the conditional probability distribution \(p(x|z,\theta)\) is the decoder, a neural network with parameters \(\theta\), where \(x\) is the query sequence and \(z\) is its latent variable. Similarly, the encoder \(q(z|x,\phi)\) is modeled by another neural network with parameters \(\phi\) to approximate the true posterior distribution \(p(z|x)\). For a given sequence \(x\), its likelihood under the VAE is \(p(x|\theta)\). Direct computation of this probability, \(p(x|\theta)=\int p(x|z,\theta)p(z)\,dz\), is intractable in the general case. The evidence lower bound (ELBO), which forms the basis of variational inference [54], provides a lower bound on the log-likelihood:
\[\log p(x|\theta)\geq\text{ELBO}(x)=\mathbb{E}_{q}\log p(x|z,\theta)-\text{KL} \left(q(z|x,\phi)|p(z)\right). \tag{2}\]
The ELBO is taken as the scoring function to quantify the mutational likelihood of each query sequence. ELBO-based zero-shot predictions have shown strong performance in multiple studies [35, 5].
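To make Equation 2 concrete, the sketch below scores one-hot-encoded sequences with a miniature VAE in PyTorch using a single-sample ELBO estimate and compares a mutant to the wild type. It is a didactic simplification: DeepSequence and EVE use deeper architectures, sequence reweighting, and averaging over many ELBO samples, and the network here is untrained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySeqVAE(nn.Module):
    """Minimal VAE over one-hot protein sequences of fixed length L with 20 amino acids."""
    def __init__(self, seq_len, latent_dim=8, hidden=64):
        super().__init__()
        self.seq_len = seq_len
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(seq_len * 20, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, seq_len * 20))

    def elbo(self, x_onehot):
        """Single-sample ELBO(x) = E_q[log p(x|z)] - KL(q(z|x) || p(z))."""
        h = self.enc(x_onehot)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        logits = self.dec(z).view(-1, self.seq_len, 20)
        log_px = torch.sum(F.log_softmax(logits, dim=-1) * x_onehot.view(-1, self.seq_len, 20),
                           dim=(1, 2))
        kl = 0.5 * torch.sum(mu**2 + logvar.exp() - logvar - 1.0, dim=1)
        return log_px - kl

# Zero-shot mutation score: ELBO(mutant) - ELBO(wild type); higher suggests more likely functional
L = 30
vae = TinySeqVAE(seq_len=L)                     # in practice, trained on an MSA of the target
wild_type = F.one_hot(torch.randint(0, 20, (1, L)), num_classes=20).float()
mutant = wild_type.clone()
mutant[0, 5] = F.one_hot(torch.tensor(7), num_classes=20).float()   # toy point mutation
with torch.no_grad():
    print((vae.elbo(mutant) - vae.elbo(wild_type)).item())
```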
The Transformer is currently the state-of-the-art model and has been used in many supervised tasks [23]. It learns a global distribution of natural proteins and has also been shown to deliver strong zero-shot predictions [33, 44]. Transformer training uses mask filling, which predicts masked amino acids in a given input sequence by leveraging the contextual information encoded in the Transformer's self-attention mechanism [59, 60]. The mask-filling procedure places a classification layer on top of the Transformer architecture. Given a sequence \(x\), the mask-filling classifier generates probability distributions for amino acids at masked positions. Suppose \(x\) has \(L\) amino acids, \(x=x_{1}x_{2}\cdots x_{L}\); by masking a single amino acid at the \(i\)-th position, the classifier calculates the conditional probability \(p(x_{i}|x^{(-i)})\), where \(x^{(-i)}\) is the remaining sequence excluding the masked \(i\)-th position. To reduce the computational cost, the pseudo-log-likelihood (PLL) is usually used to estimate the log-likelihood of a given sequence [33, 34]:

\[\text{PLL}(x)=\sum_{i=1}^{L}\log p(x_{i}|x^{(-i)}). \tag{3}\]

The PLL assumes independence between amino acids. To account for dependence between amino acids, one can calculate the conditional probability by summing over all possible factorizations [34], but this approach incurs a much higher computational cost.
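Equation 3 can be sketched as follows with a masked protein language model served through the HuggingFace `transformers` API; the ESM-2 checkpoint name is an assumption about what is publicly available, and the official `esm` package offers an equivalent route. Scoring a mutant by PLL(mutant) - PLL(wild type) gives a zero-shot mutational score.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed checkpoint name; any masked protein language model with a compatible
# tokenizer/model interface could be substituted here.
name = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def pseudo_log_likelihood(sequence):
    """PLL(x) = sum_i log p(x_i | x^(-i)), masking one residue at a time."""
    encoded = tokenizer(sequence, return_tensors="pt")
    input_ids = encoded["input_ids"]
    pll = 0.0
    # Positions 1 .. L skip the special start/end tokens added by the tokenizer.
    for i in range(1, input_ids.shape[1] - 1):
        masked = input_ids.clone()
        true_id = masked[0, i].item()
        masked[0, i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked, attention_mask=encoded["attention_mask"]).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        pll += log_probs[true_id].item()
    return pll

wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"      # toy sequence
mutant = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"         # toy C-terminal substitution
print(pseudo_log_likelihood(mutant) - pseudo_log_likelihood(wild_type))
```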
Furthermore, many other strategies have been employed to make zero-shot predictions. Fine-tuned models can improve predictions by combining both local and global evolutionary models [43]. Tranception scores combine global autoregressive inference with a local MSA retrieval inference to make more accurate predictions [15]. In addition to these sequence-based models, structure-based GNN models, including ESM-IF1 [45] and RGC [116], have been proposed that utilize large-scale structural data from AlphaFold2. However, structure-based models are still limited in accuracy compared to sequence-based models.
### Supervised regression models
Supervised regression models are among the most prevalent approaches used to guide protein engineering, as they enable greedy search strategies to maximize protein fitness (Figure 1c). These models, including statistical, machine learning, and deep learning techniques, rely on a set of labeled data as their training set to predict the fitness landscape. By leveraging the information contained within the training data, supervised regression models can effectively estimate the relationship between protein sequences and their fitness, providing valuable insights for protein engineering and optimization [12, 1].
A variety of supervised models have been applied to predict protein fitness. In general, statistical models and machine learning models such as linear regression [117], ridge regression [33], support vector machine (SVM) [118], random forest
[119], and gradient boosting trees [120] perform well on small training sets, while deep learning methods such as deep neural networks [121], convolutional neural networks (CNNs) [17], and attention-based neural networks [122] are more accurate with larger training sets. However, in protein engineering the training set grows sequentially, which makes it difficult for a single supervised model to remain accurate throughout. Alternatively, ensemble regression was proposed to provide robust fitness predictions regardless of training data size [11, 123]. Ensemble regression averages predictions from multiple supervised models and provides more accurate and robust performance than any single model [5]. To remove inaccurate models from the average, cross-validation is usually used to rank the accuracy of each model, and only the top models are averaged. Paired with the zero-shot strategy, ensemble regression trained on an informed training set pre-selected by zero-shot predictions can efficiently identify the globally optimal protein within a few rounds of experiments [34, 124, 125]. Such an approach has been applied to enable resource-efficient engineering of CRISPR-Cas9 genome editor activities [126].
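A minimal sketch of the ensemble regression idea is given below: several base regressors are ranked by cross-validated Spearman correlation on the labeled variants, and only the predictions of the top-ranked models are averaged for the unlabeled candidates. The scikit-learn model choices, the cutoff, and the random feature matrices are illustrative assumptions; in practice the features would come from the embeddings discussed above.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

def ensemble_fitness_predictor(X_train, y_train, X_candidates, top_k=2, cv=5):
    """Rank base regressors by cross-validated Spearman rho; average the top_k models."""
    models = [Ridge(alpha=1.0), SVR(), RandomForestRegressor(n_estimators=200),
              GradientBoostingRegressor()]
    scores = []
    for m in models:
        pred = cross_val_predict(m, X_train, y_train, cv=cv)
        scores.append(spearmanr(pred, y_train)[0])
    best = np.argsort(scores)[::-1][:top_k]          # indices of the most accurate models
    preds = [models[idx].fit(X_train, y_train).predict(X_candidates) for idx in best]
    return np.mean(preds, axis=0)

# Toy data: 96 labeled variants with 64-dimensional embeddings, 1000 unlabeled candidates
rng = np.random.default_rng(5)
X_train, y_train = rng.normal(size=(96, 64)), rng.normal(size=96)
X_candidates = rng.normal(size=(1000, 64))
ranked = np.argsort(ensemble_fitness_predictor(X_train, y_train, X_candidates))[::-1]
print("top candidates to screen next:", ranked[:5])
```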
Beyond the architecture of the supervised model, predictive accuracy relies heavily on the amount of information captured by the featurization process (Table 3). Physicochemical features describe the properties of individual amino acids or atoms [127]. Energy-based scores describe the overall property of the target protein [18]. However, neither successfully accounts for the complex interactions between residues and atoms. To tackle this challenge, recent mathematics-initiated topological and geometric descriptors have achieved great success in predicting protein fitness, including protein-protein interactions [17], protein stability [120], enzyme activity, and antibody selectivity [5]. The aforementioned descriptors (see the section on structure-based TDA models) extract structural information from atoms at different characteristic length scales. Furthermore, sequence-based protein language models provide another featurization strategy: the latent space of deep pre-trained models provides an informative representation of each given sequence, and supervised models built from these deep embeddings exhibit accurate performance [5, 128]. Recent works that combine different types of sequence-based features [129, 33] or combine structure-based and sequence-based features [5] demonstrate the complementary roles of different featurization approaches.
### Active learning models for exploration-exploitation balance
Building on the extensive and accurate protein-to-fitness machine learning models described above, active learning further designs an iterative strategy between models and experiments to sequentially optimize fitness while accounting for the exploitation-exploration trade-off (Figure 1d-e) [115].
To balance the exploitation-exploration trade-off, supervised models are required not only to predict protein fitness but also to quantify the uncertainty of each prediction [130]. The most popular uncertainty quantification approach in protein engineering is the Gaussian process (GP) [131], which automatically calibrates this balance. In particular, GP with the upper confidence bound (UCB) acquisition function has a theoretically efficient convergence rate for solving the black-box optimization problem (Equation 1). A variety of protein engineering studies have employed GPs to accelerate fitness optimization. For example, light-gated channelrhodopsins (ChRs) were engineered to improve photocurrents and light sensitivity [132, 133], green fluorescent protein was engineered to exhibit yellow fluorescence [134], acyl-ACP reductase was engineered to improve fatty alcohol production [135], and a P450 enzyme was engineered to improve thermostability [136].
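A single round of GP-UCB acquisition, sketched with scikit-learn below, illustrates the balance in Figure 1d: the posterior mean drives exploitation, the posterior standard deviation drives exploration, and the candidates with the highest upper confidence bound are queried next. The kernel, batch size, trade-off parameter, and toy embeddings are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def ucb_select(X_labeled, y_labeled, X_pool, batch_size=8, beta=2.0):
    """Fit a GP on labeled variants and pick pool sequences maximizing mean + beta*std."""
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X_labeled, y_labeled)
    mean, std = gp.predict(X_pool, return_std=True)
    ucb = mean + beta * std                     # exploitation + exploration terms
    return np.argsort(ucb)[::-1][:batch_size]   # indices of the next batch to assay

# Toy active-learning round: 24 measured variants, 2000 candidate embeddings
rng = np.random.default_rng(6)
X_labeled, y_labeled = rng.normal(size=(24, 32)), rng.normal(size=24)
X_pool = rng.normal(size=(2000, 32))
next_batch = ucb_select(X_labeled, y_labeled, X_pool)
print("variants to measure in the next experimental round:", next_batch)
```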
Tree-based search strategies are also efficient, building a hierarchical search path; examples include hierarchical optimistic optimization (HOO) [137], deterministic optimistic optimization (DOO), and simultaneous optimistic optimization (SOO) [138]. To handle the discrete mutational space in protein engineering, an unsupervised clustering approach was employed to construct the hierarchical tree structure [124, 125].
Recently, researchers have turned to generative models to quantify uncertainty in protein engineering, employing methods such as variational autoencoders (VAEs) [54, 22, 40], generative adversarial networks (GANs) [139, 140], and autoregressive language models [141, 15]. Generative models are a class of machine learning algorithms that aim to learn the underlying data distribution of a given dataset in order to generate new, previously unseen data points that resemble the training data. These models capture the inherent structure and patterns present in the data, enabling them to create realistic and diverse samples that share the same characteristics as the original data. For example, ProGen [47] is a large language model that generates functional protein sequences across diverse families. Transformer-based antibody language models utilize fine-tuning to assist antibody design [142]. Recently, a novel Transformer-based model called ReLSO was introduced [143]. This approach simultaneously generates protein sequences and predicts their fitness using its latent space representation. The attention-based relationships learned by the jointly trained ReLSO model offer valuable insights into sequence-level fitness attribution, opening up new avenues for optimizing proteins.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \multicolumn{5}{|c|}{**Zero-shot predictors**} \\ \hline \multirow{2}{*}{Model name} & \multicolumn{3}{c|}{training set size} \\ \cline{2-5} & \multicolumn{2}{c|}{0} \\ \hline ESM-1b PLL [23, 33] & \multicolumn{2}{c|}{0.435} \\ \hline eUniRep PLL [127] & \multicolumn{2}{c|}{0.411} \\ \hline EVE [40] & \multicolumn{2}{c|}{0.497} \\ \hline Tranception [15] & \multicolumn{2}{c|}{0.478} \\ \hline DeepSequence [22] & \multicolumn{2}{c|}{**0.504**} \\ \hline \multicolumn{5}{|c|}{**Supervised models**} \\ \hline \multirow{2}{*}{Embedding name} & \multicolumn{3}{c|}{training set size} \\ \cline{2-5} & 24 & 96 & 168 & 240 \\ \hline Persistent Homology [5] & 0.263 & 0.432 & 0.496 & 0.534 \\ \hline Persistent Laplacian [5] & **0.280** & **0.457** & **0.525** & **0.564** \\ \hline ESM-1b [23] & 0.219 & 0.421 & 0.494 & 0.537 \\ \hline eUniRep [43] & 0.259 & 0.432 & 0.485 & 0.515 \\ \hline Georgiev [127] & 0.169 & 0.326 & 0.402 & 0.446 \\ \hline UniRep [21] & 0.183 & 0.347 & 0.420 & 0.462 \\ \hline Onehot & 0.132 & 0.317 & 0.400 & 0.450 \\ \hline Bepler [42] & 0.139 & 0.287 & 0.353 & 0.396 \\ \hline TAPE LSTM [41] & 0.259 & 0.436 & 0.492 & 0.522 \\ \hline TAPE ResNet [41] & 0.080 & 0.216 & 0.305 & 0.358 \\ \hline TAPE Transformer [41] & 0.146 & 0.304 & 0.371 & 0.418 \\ \hline \end{tabular}
\end{table}
Table 3: **Comparisons for fitness predictors.** Results were adopted from TopFit [5]. Performance was reported by average Spearman correlation over 34 DMS datasets and 20 repeats. Supervised model use ensemble regression from 18 regression models [5].
### Conclusions and future directions
In this review, we discuss advanced deep protein language models for protein modeling and provide an introduction to topological data analysis methods and their applications in protein modeling. Relying on both structure-based and sequence-based models, MLPE methods have been widely developed to accelerate protein engineering. In the future, various machine learning and deep learning approaches hold great potential for advancing protein engineering, as outlined below.
### Accurate structure prediction methods enhance structure-based models
Compared to sequence data, three-dimensional protein structural data offer more comprehensive and explicit descriptions of the biophysical properties of a protein and its fitness. As a result, structure-based models usually outperform sequence-based models on supervised tasks with small training sets [5, 120].
As protein sequence databases continue to grow, self-supervised models demonstrate their ability to effectively model proteins using large-scale data. Protein sequence databases provide a vast amount of resources for building sequence-based models; for example, the UniProt database [14] contains hundreds of millions of sequences. In contrast, protein structure databases are comparatively limited in size. The largest among them, the Protein Data Bank (PDB), contains only about 205 thousand protein structures as of 2023 [13]. Owing to this disparity in data resources, sequence-based models typically outperform structure-based models significantly [116].
To address the limited availability of structure data, researchers have focused on developing highly accurate deep learning techniques aimed at enabling large-scale structure predictions. These state-of-the-art methodologies have the potential to significantly expand the database of known protein structures. Two prominent methods are AlphaFold2 [24] and RosettaFold [144], which have demonstrated remarkable capabilities in predicting protein structures with atomic-level accuracy. By harnessing the power of cutting-edge deep learning algorithms, these tools have successfully facilitated the accurate prediction of protein structures, thus contributing to the expansion of the structural database.
Both AlphaFold2 and RosettaFold are alignment-based, relying on MSAs of the target protein for structure prediction. Alignment-based approaches can be highly accurate when there is a sufficient number of homologous sequences (that is, sufficient MSA depth) in the database. Accordingly, these methods may have reduced accuracy when MSA depth is low. In addition, the MSA search is time-consuming, which slows down prediction. Alternatively, alignment-free methods have been proposed to tackle these limitations [145]. An early work, RGN2 [146], exhibits more accurate predictions than AlphaFold2 on orphan proteins, which lack MSAs. Supervised Transformer protein language models have also been used to predict orphan protein structures [147]. With the development of a variety of large-scale protein language models in recent years, alignment-free structural prediction methods incorporate these models to improve accuracy and efficiency. For example, ESMFold [50] and OmegaFold [148] achieve accuracy comparable to AlphaFold2 at faster speed. Moreover, additional language model-based methods have been developed for structural prediction of single-sequence and orphan proteins [149, 150, 151, 152]. Large-scale protein language models will provide a powerful toolkit for protein structure prediction.
In building protein fitness models, the structural TDA-based model has demonstrated that AlphaFold2 structures are as reliable as experimental structures [5]. The zero-shot model ESM-IF1 also shows strong performance when coupled with the large AlphaFold structure database [45]. In light of these revolutionary structure prediction models, structure-based models will open up a new avenue in protein engineering, from directed evolution to de novo design [153, 154]. More sophisticated TDA methods will be needed to handle large-scale datasets. Large-scale deep graph neural networks will also need to be further developed, for example, to consider high-order interactions using simplicial neural networks [110, 155].
### Large high-throughput datasets enable larger-scale models
Current MLPE methods are usually designed for limited training sets. Ensemble regression is an effective approach to accurately learn the fitness landscape from small but growing training sets generated by deep mutational scanning [34].
The breakthrough biotechnology of next-generation sequencing (NGS) [156] has largely enhanced the capacity of DMS for collecting supervised fitness data in various protein systems [112, 111, 157]. The resulting large-scale deep mutational scanning databases expand the exploration range of protein engineering. Deeper machine learning models are emerging to enhance the accuracy and adaptivity of models for protein engineering.
### Competing interests
No competing interest is declared.
### Author contributions statement
Y.Q. and G.W.W. conceived, wrote, and revised the manuscript.
### Acknowledgments
This work was supported in part by NIH grants R01GM126189 and R01AI164266, NSF grants DMS-2052983, DMS-1761320, and IIS-1900473, NASA grant 80NSSC21M0023, Michigan Economic Development Corporation, MSU Foundation, Bristol-Myers Squibb 65109, and Pfizer.
## References
* [1] H. Narayanan, F. Dingfelder, A. Butte, N. Lorenzen, M. Sokolov, and P. Arosio (2021) Machine learning for biologics: opportunities for protein engineering, developability, and formulation. Trends in pharmacological sciences 42 (3), pp. 151-165.
* [2] F. H. Arnold (1998) Design by directed evolution. Accounts of chemical research 31 (3), pp. 125-131.
* [3] M. Karplus and J. Kuriyan (2005) Molecular dynamics and protein function. Proceedings of the National Academy of Sciences 102 (10), pp. 6679-6685.
* [4] S. E. Boyken, Z. Chen, B. Groves, R. A. Langan, G. Oberdorfer, A. Ford, J. M. Gilmore, C. Xu, F. DiMaio, J. H. Pereira, et al. (2016) De novo design of protein homo-oligomers with modular hydrogen-bond network-mediated specificity. Science 352 (6286), pp. 680-687.
* [5] P. A. Romero and F. H. Arnold (2009) Exploring protein fitness landscapes by directed evolution. Nature reviews Molecular cell biology 10 (12), pp. 866-876.
* [6] G. Bhardwaj, V. Khipple Mulligan, C. D. Bahl, J. M. Gilmore, P. J. Harvey, O. Cheneval, G. W. Buchko, S. V. P. P. K. Kaas, A. Eletsky, et al. (2016) Accurate de novo design of hyperstable constrained peptides. Nature 538 (7625), pp. 329-335.
* [7] N. A. Pierce and E. Winfree (2002) Protein design is NP-hard. Protein engineering 15 (10), pp. 779-782.
* [8] N. E. Siedhoff, U. Schwaneberg, and M. D. Davari (2020) Machine learning-assisted enzyme engineering. Methods in enzymology 643, pp. 281-315.
* [9] S. Mazurenko, Z. Prokop, and J. Damborsky (2019) Machine learning in enzyme engineering. ACS Catalysis 10 (2), pp. 1210-1223.
* [10] D. J. Diaz, A. V. Kulikova, A. D. Ellington, and C. O. Wilke (2023) Using machine learning to predict the effects and consequences of mutations in proteins. Current Opinion in Structural Biology 78, pp. 102518.
* [11] B. J. Wittmann, K. E. Johnston, Z. Wu, and F. H. Arnold (2021) Advances in machine learning for directed evolution. Current opinion in structural biology 69, pp. 11-18.
* [12] K. K. Yang, Z. Wu, and F. H. Arnold (2019) Machine-learning-guided directed evolution for protein engineering. Nature methods 16 (8), pp. 687-694.
* [13] H. M. Berman, J. Westbrook, Z. Feng, G. Gilliland, T. N. Bhat, H. Weissig, I. N. Shindyalov, and P. E. Bourne (2000) The protein data bank. Nucleic acids research 28 (1), pp. 235-242.
* [14] The UniProt Consortium (2021) UniProt: the universal protein knowledgebase in 2021. Nucleic acids research 49 (D1), pp. D480-D489.
* [15] P. Notin, M. Dias, J. Frazer, J. Marchena Hurtado, A. N. Gomez, D. Marks, and Y. Gal (2022) Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval. In International Conference on Machine Learning, pp. 16990-17017.
* [16] Z. Cang and G. Wei (2018) Integration of element specific persistent homology and machine learning for protein-ligand binding affinity prediction. International journal for numerical methods in biomedical engineering 34 (2), pp. e2914.
* [17] M. Wang, Z. Cang, and G. Wei (2020) A topology-based network tree for the prediction of protein-protein binding affinity changes following mutation. Nature Machine Intelligence 2 (2), pp. 116-123.
* [18] J. Schymkowitz, J. Borg, F. Stricher, R. Nys, F. Rousseau, and L. Serrano (2005) The foldx web server: an online force field. Nucleic acids research 33 (suppl 2), pp. W382-W388.
* [19] J. Koehler Leman, B. D. Weitzner, S. M. Lewis, J. Adolf-Bryfogle, N. Alam, R. F. Alford, M. Aprahamian, D. Baker, K. A. Barlow, P. Barth, et al. (2020) Macromolecular modeling and design in rosetta: recent methods and frameworks. Nature methods 17 (7), pp. 665-680.
* [20] Y. Qiu and G. Wei (2023) Persistent spectral theory-guided protein engineering. Nature Computational Science, pp. 1-15.
* [21] E. C. Alley, G. Khimulya, S. Biswas, M. AlQuraishi, and G. M. Church (2019) Unified rational protein engineering with sequence-based deep representation learning. Nature Methods 16 (12), pp. 1315-1322.
* [22] A. J. Riesselman, J. B. Ingraham, and D. S. Marks (2018) Deep generative models of genetic variation capture the effects of mutations. Nature Methods 15 (10), pp. 816-822.
* [23] A. Rives, J. Meier, T. Sercu, S. Goyal, Z. Lin, J. Liu, D. Guo, M. Ott, C. L. Zitnick, J. Ma, et al. (2021) Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proceedings of the National Academy of Sciences 118 (15).
* [24] J. J. Wee and K. Xia (2021) Ollivier persistent ricci curvature-based machine learning for the protein-ligand binding affinity prediction. Journal of Chemical Information and Modeling 61 (4), pp. 1617-1626.
* [25] J. W. and G. Wei (2019) DG-gl: differential geometry-based geometric learning of molecular datasets. International journal for numerical methods in biomedical engineering 35 (3), pp. e3179.
* [26] J. W. and G. Wei (2019) Persistent spectral theory-guided protein engineering. Nature Computational Science, pp. 1-15.
* [27] D. D. Nguyen and G. Wei (2019) Agl-score: algebraic graph learning score for protein-ligand binding scoring, ranking, docking, and screening. Journal of chemical information and modeling 59 (7), pp. 3291-3304.
* [32] Jiahui Chen, Weihua Geng, and Guo-Wei Wei. Milmc: Machine learning-based implicit-solvent monte carlo. _Chinese journal of chemical physics_, 34(6):683-694, 2021.
* [33] Chloe Hsu, Hunter Nisonoff, Clara Fannjiang, and Jennifer Listgarten. Learning protein fitness models from evolutionary and assay-labeled data. _Nature biotechnology_, pages 1-9, 2022.
* [34] Bruce J Wittmann, Yisong Yue, and Frances H Arnold. Informed training set design enables efficient machine learning-assisted directed protein evolution. _Cell Systems_, 12(11):1026-1045, 2021.
* [35] Diksha Khurana, Aditya Koli, Kiran Khatter, and Sukhdev Singh. Natural language processing: State of the art, current trends and challenges. _Multimedia tools and applications_, 82(3):3713-3744, 2023.
* [36] Sara El-Gebali, Jaina Mistry, Alex Bateman, Sean R Eddy, Aurelien Luciani, Simon C Potter, Matloob Qureshi, Lorna J Richardson, Gustavo A Salazar, Alfredo Smart, et al. The pfam protein families database in 2019. _Nucleic acids research_, 47(D1):D427-D432, 2019.
* [37] Hashem A Shihab, Julian Gough, David N Cooper, Peter D Stenson, Gary LA Barker, Keith J Edwards, Ian NM Day, and Tom R Gaunt. Predicting the functional, molecular, and phenotypic consequences of amino acid substitutions using hidden markov models. _Human mutation_, 34(1):57-65, 2013.
* [38] Thomas A Hopf, John B Ingraham, Frank J Poelwijk, Charlotte PI Scharfe, Michael Springer, Chris Sander, and Debora S Marks. Mutation effects predicted from sequence co-variation. _Nature biotechnology_, 35(2):128-135, 2017.
* [39] Roshan M Rao, Jason Liu, Robert Verkuil, Joshua Meier, John Canny, Pieter Abbeel, Tom Sercu, and Alexander Rives. Msa transformer. In _International Conference on Machine Learning_, pages 8844-8856. PMLR, 2021.
* [40] Jonathan Frazer, Pascal Notin, Mafalda Dias, Aidan Gomez, Joseph K Min, Kelly Brock, Yarin Gal, and Debora S Marks. Disease variant prediction with deep generative models of evolutionary data. _Nature_, 599(7883):91-95, 2021.
* [41] Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John Canny, Pieter Abbeel, and Yun S Song. Evaluating protein transfer learning with tape. _Advances in Neural Information Processing Systems_, 32:9689, 2019.
* [42] Tristan Bepler and Bonnie Berger. Learning protein sequence embeddings using information from structure. In _International Conference on Learning Representations_, 2018.
* [43] Surojit Biswas, Grigory Khimulya, Ethan C Alley, Kevin M Esvelt, and George M Church. Low-n protein engineering with data-efficient deep learning. _Nature methods_, 18(4):389-396, 2021.
* [44] Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu, and Alex Rives. Language models enable zero-shot prediction of the effects of mutations on protein function. _Advances in Neural Information Processing Systems_, 34, 2021.
* [45] Chloe Hsu, Robert Verkuil, Jason Liu, Zeming Lin, Brian Hie, Tom Sercu, Adam Lerer, and Alexander Rives. Learning inverse folding from millions of predicted structures. In _International Conference on Machine Learning_, pages 8946-8970. PMLR, 2022.
* [46] Christine A Orengo, Alex D Michie, Susan Jones, David T Jones, Mark B Swindells, and Janet M Thornton. Cath-a hierarchic classification of protein domain structures. _Structure_, 5(8):1093-1109, 1997.
* [47] Ali Madani, Ben Krause, Eric R Greene, Subu Subramanian, Benjamin P Mohr, James M Holton, Jose Luis Olmos Jr, Caiming Xiong, Zachary Z Sun, Richard Socher, et al. Large language models generate functional protein sequences across diverse families. _Nature Biotechnology_, pages 1-8, 2023.
* [48] Scott Federhen. The ncbi taxonomy database. _Nucleic acids research_, 40(D1):D136-D143, 2012.
* [49] Nadav Brandes, Dan Ofer, Yam Peleg, Nadav Rappoport, and Michal Linial. Proteinbert: a universal deep-learning model of protein sequence and function. _Bioinformatics_, 38(8):2102-2110, 2022.
* [50] Zeming Lin, Halli Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Nikita Smetanin, Robert Verkuil, Ori Kabeli, Yaniv Shmueli, et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. _Science_, 379(6637):1123-1130, 2023.
* [51] Sean R Eddy. Accelerated profile hmm searches. _PLoS computational biology_, 7(10):e1002195, 2011.
* [52] Thomas A Hopf, Anna G Green, Benjamin Schubert, Sophia Mersmann, Charlotte PI Scharfe, John B Ingraham, Agnes Toth-Petroczy, Kelly Brock, Adam J Riesselman, Perry Palmedo, et al. The evcouplings python framework for coevolutionary sequence analysis. _Bioinformatics_, 35(9):1582-1584, 2019.
* [53] Lawrence R Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. _Proceedings of the IEEE_, 77(2):257-286, 1989.
* [54] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. _arXiv preprint arXiv:1312.6114_, 2013.
* [55] Benjamin J Livesey and Joseph A Marsh. Using deep mutational scanning to benchmark variant effect predictors and identify disease mutations. _Molecular systems biology_, 16(7):e9380, 2020.
* [56] Yoon Kim. Convolutional neural networks for sentence classification. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 1746-1751, Doha, Qatar, October 2014. Association for Computational Linguistics.
* [57] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [58] Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. _Neural computation_, 9(8):1735-1780, 1997.
* [59] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in neural information processing systems_, pages 5998-6008, 2017.
* [60] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [61] Nicki Skafte Detlefsen, Soren Hauberg, and Wouter Boomsma. Learning meaningful representations of protein sequences. _Nature communications_, 13(1):1914, 2022.
* [62] Herbert Edelsbrunner, John Harer, et al. Persistent homology-a survey. _Contemporary mathematics_, 453(26):257-282, 2008.
* [63] Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. In _Proceedings of the twentieth annual symposium on Computational geometry_, pages 347-356, 2004.
* [64] Zixuan Cang and Guo-Wei Wei. Persistent cohomology for data with multicomponent heterogeneous information. _SIAM journal on mathematics of data science_, 2(2):396-418, 2020.
* [65] Samir Chowdhury and Facundo Memoli. Persistent path homology of directed networks. In _Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms_, pages 1152-1169. SIAM, 2018.
* [66] Daniel Lütgehetmann, Dejan Govc, Jason P Smith, and Ran Levi. Computing persistent homology of directed flag complexes. _Algorithms_, 13(1):19, 2020.
* [67] Zixuan Cang, Elizabeth Munch, and Guo-Wei Wei. Evolutionary homology on coupled dynamical systems with applications to protein flexibility analysis. _Journal of applied and computational topology_, 4:481-507, 2020.
* [68] Zhenyu Meng, D Vijay Anand, Yunpeng Lu, Jie Wu, and Kelin Xia. Weighted persistent homology for biomolecular data analysis. _Scientific reports_, 10(1):2079, 2020.
* [69] Rui Wang, Duc Duy Nguyen, and Guo-Wei Wei. Persistent spectral graph. _International journal for numerical methods in biomedical engineering_, 36(9):e3376, 2020.
* [70] Facundo Memoli, Zhengchao Wan, and Yusu Wang. Persistent laplacians: Properties, algorithms and implications. _SIAM Journal on Mathematics of Data Science_, 4(2):858-884, 2022.
* [71] Jiahui Chen, Rundong Zhao, Yiying Tong, and Guo-Wei Wei. Evolutionary de rham-hodge method. _Discrete and continuous dynamical systems. Series B_, 26(7):3785, 2021.
* [72] Xiaoqi Wei and Guo-Wei Wei. Persistent sheaf laplacians. _arXiv preprint arXiv:2112.10906_, 2021.
* [73] Rui Wang and Guo-Wei Wei. Persistent path laplacian. _Foundations of Data Science_, 5:26-55, 2023.
* [74] Xiang Liu, Huitao Feng, Jie Wu, and Kelin Xia. Persistent spectral hypergraph based machine learning (psh-ml) for protein-ligand binding affinity prediction. _Briefings in Bioinformatics_, 22(5):bbab127, 2021.
* [75] Dong Chen, Jian Liu, Jie Wu, and Guo-Wei Wei. Persistent hyperdigraph homology and persistent hyperdigraph laplacians. _arXiv preprint arXiv:2304.00345_, 2023.
* [76] Tomasz Kaczynski, Konstantin Michael Mischaikow, and Marian Mrozek. _Computational homology_, volume 3. Springer, 2004.
* [77] Larry Wasserman. Topological data analysis. _Annual Review of Statistics and Its Application_, 5:501-532, 2018.
* [78] Robert Ghrist. Barcodes: the persistent topology of data. _Bulletin of the American Mathematical Society_, 45(1):61-75, 2008.
* [79] David Cohen-Steiner, Herbert Edelsbrunner, and John Harer. Stability of persistence diagrams. In _Proceedings of the twenty-first annual symposium on Computational geometry_, pages 263-271, 2005.
* [80] Peter Bubenik et al. Statistical topological data analysis using persistence landscapes. _J. Mach. Learn. Res._, 16(1):77-102, 2015.
* [81] Henry Adams, Tegan Emerson, Michael Kirby, Rachel Neville, Chris Peterson, Patrick Shipman, Sofya Chepushtanova, Eric Hanson, Francis Motta, and Lori Ziegelmeier. Persistence images: A stable vector representation of persistent homology. _Journal of Machine Learning Research_, 18, 2017.
* [82] Zixuan Cang, Lin Mu, Kedi Wu, Kristopher Opron, Kelin Xia, and Guo-Wei Wei. A topological approach for protein classification. _Computational and Mathematical Biophysics_, 3(1), 2015.
* [83] James R Clough, Nicholas Byrne, Ilkay Oksuz, Veronika A Zimmer, Julia A Schnabel, and Andrew P King. A topological loss function for deep-learning based image segmentation using persistent homology. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 44(12):8766-8778, 2020.
* [84] Chi Seng Pun, Kelin Xia, and Si Xian Lee. Persistent-homology-based machine learning and its applications-a survey. _arXiv preprint arXiv:1811.00252_, 2018.
* [85] Bernadette J Stolz, Heather A Harrington, and Mason A Porter. Persistent homology of time-dependent functional networks constructed from coupled time series. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 27(4):047410, 2017.
* [86] Guo-Wei Wei. Topological data analysis hearing the shapes of drums and bells. _arXiv preprint arXiv:2301.05025_, 2023.
* [87] Duc Duy Nguyen, Zixuan Cang, Kedi Wu, Menglun Wang, Yin Cao, and Guo-Wei Wei. Mathematical deep learning for pose and binding affinity prediction and ranking in d3r grand challenges. _Journal of computer-aided molecular design_, 33:71-82, 2019.
* [88] Jiahui Chen, Yuchi Qiu, Rui Wang, and Guo-Wei Wei. Persistent laplacian projected omicron ba. 4 and ba. 5 to become new dominating variants. _Computers in Biology and Medicine_, 151:106262, 2022.
* [89] Zhenyu Meng and Kelin Xia. Persistent spectral-based machine learning (perspect ml) for protein-ligand binding affinity prediction. _Science Advances_, 7(19):eabc5329, 2021.
* [90] AA Grigor'yan, Yong Lin, Yu V Muranov, and Shing-Tung Yau. Path complexes and their homologies. _Journal of Mathematical Sciences_, 248:564-599, 2020.
* [91] Jakob Hansen and Robert Ghrist. Toward a spectral theory of cellular sheaves. _Journal of Applied and Computational Topology_, 3:315-358, 2019.
* [92] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_, 2016.
* [93] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. _arXiv preprint arXiv:1710.10903_, 2017.
* [94] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. _Advances in neural information processing systems_, 30, 2017.
* [95] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? _arXiv preprint arXiv:1810.00826_, 2018.
* [96] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. _arXiv preprint arXiv:1511.05493_, 2015.
* [97] Thomas N Kipf and Max Welling. Variational graph auto-encoders. _arXiv preprint arXiv:1611.07308_, 2016.
* [98] Petar Velickovic, William Fedus, William L Hamilton, Pietro Lio, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. _arXiv preprint arXiv:1809.10341_, 2018.
* [99] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. _Advances in neural information processing systems_, 33:5812-5823, 2020.
* [100] Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. _Advances in Neural Information Processing Systems_, 33:12559-12571, 2020.
* [101] Shuangli Li, Jingbo Zhou, Tong Xu, Liang Huang, Fan Wang, Haoyi Xiong, Welli Huang, Dejing Dou, and Hui Xiong. Structure-aware interactive graph neural networks for the prediction of protein-ligand binding affinity. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_, pages 975-985, 2021.
* [102] Vladimir Gligorijevic, P Douglas Renfrew, Tomasz Kosciolek, Julia Koehler Leman, Daniel Berenberg, Tommi Vatanen, Chris Chandler, Bryn C Taylor, Ian M Fisk, Hera Vlamakis, et al. Structure-based protein function prediction using graph convolutional networks. _Nature communications_, 12(1):3168, 2021.
* [103] Xianggen Liu, Yunan Luo, Pengyong Li, Sen Song, and Jian Peng. Deep geometric representations for modeling effects of mutations on protein-protein binding affinity. _PLoS computational biology_, 17(8):e1009284, 2021.
* [104] Sisi Shan, Shitong Luo, Ziqing Yang, Junxian Hong, Yufeng Su, Fan Ding, Lili Fu, Chenyu Li, Peng Chen, Jianzhu Ma, et al. Deep learning guided optimization of human antibody against sars-cov-2 variants with broad neutralization. _Proceedings of the National Academy of Sciences_, 119(11):e212954119, 2022.
* [105] Zuobai Zhang, Minghao Xu, Arian Jamasb, Vijil Chenthamarakshan, Aurelie Lozano, Payel Das, and Jian Tang. Protein representation learning by geometric structure pretraining. _arXiv preprint arXiv:2203.06125_, 2022.
* [106] John Ingraham, Vikas Garg, Regina Barzilay, and Tommi Jaakkola. Generative models for graph-based protein design. _Advances in neural information processing systems_, 32, 2019.
* [107] Jiahan Li, Shitong Luo, Congyue Deng, Chaoran Cheng, Jiaqi Guan, Leonidas Guibas, Jianzhu Ma, and Jian Peng. Orientation-aware graph neural networks for protein structure representation learning. 2022.
* [108] Zixuan Cang and Guo-Wei Wei. Topologynet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions. _PLoS computational biology_, 13(7):e1005690, 2017.
* [109] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In _Proceedings of the AAAI conference on artificial intelligence_, volume 33, pages 4602-4609, 2019.
* [110] Stefania Ebli, Michaël Defferrard, and Gard Spreemann. Simplicial neural networks. _arXiv preprint arXiv:2010.03633_, 2020.
* [111] Nicholas C Wu, Lei Dai, C Anders Olson, James O Lloyd-Smith, and Ren Sun. Adaptation in protein fitness landscapes is facilitated by indirect paths. _Elife_, 5:e16965, 2016.
* [112] Anna I Podgornaia and Michael T Laub. Pervasive degeneracy and epistasis in a protein-protein interface. _Science_, 347(6222):673-677, 2015.
* [113] Yao Zhang, Yuhan Jiang, Kaifu Gao, Dexin Sui, Peixuan Yu, Min Su, Guo-Wei Wei, and Jian Hu. Structural insights into the elevator-type transport mechanism of a bacterial zip metal transporter. _Nature Communications_, 14(1):385, 2023.
* [114] Chase R Freschlin, Sarah A Fahlberg, and Philip A Romero. Machine learning to navigate fitness landscapes for protein engineering. _Current Opinion in Biotechnology_, 75:102713, 2022.
* [115] Brian L Hie and Kevin K Yang. Adaptive machine learning for protein engineering. _Current opinion in structural biology_, 72:145-152, 2022.
* [116] Xiaochen Tian, Ziyin Wang, Kevin K Yang, Jin Su, Hanwen Du, Qinguo Zheng, Guibing Guo, Min Yang, Fei Yang, and Fajie Yuan. Sequence vs. structure: Delving deep into data driven protein function prediction. _bioRxiv_, pages 2023-04, 2023.
* [117] Richard J Fox, S Christopher Davis, Emily C Mundorff, Lisa M Newman, Vesna Gavrilovic, Steven K Ma, Loleta M Chung, Charlene Ching, Sarena Tam, Sheela Muley, et al. Improving catalytic function by prosar-driven enzyme evolution. _Nature biotechnology_, 25(3):338-344, 2007.
* [118] Yanzhi Guo, Lezheng Yu, Zhining Wen, and Menglong Li. Using support vector machine combined with auto covariance to predict protein-protein interactions from protein sequences. _Nucleic acids research_, 36(9):3025-3030, 2008.
* [119] Ning Zhang, Yuting Chen, Haoyu Lu, Feiyang Zhao, Roberto Vera Alvarez, Alexander Goncearenco, Anna R Panchenko, and Minghui Li. Mutabind2: predicting the impacts of single and multiple mutations on protein-protein interactions. _Iscience_, 23(3):100939, 2020.
* [120] Zixuan Cang and Guo-Wei Wei. Analysis and prediction of protein folding energy changes upon mutation by element specific persistent homology. _Bioinformatics_, 33(22):3549-3557, 2017.
* [121] Amirali Aghazadeh, Hunter Nisonoff, Orhan Ocal, David H Brookes, Yijie Huang, O Ozan Koyluoglu, Jennifer Listgarten, and Kannan Ramchandran. Epistatic net allows the sparse spectral regularization of deep neural networks for inferring fitness functions. _Nature communications_, 12(1):5225, 2021.
* [122] Christian Dallago, Jody Mou, Kadina E Johnston, Bruce J Wittmann, Nicholas Bhattacharya, Samuel Goldman, Ali Madani, and Kevin K Yang. Flip: Benchmark tasks in fitness landscape inference for proteins. _bioRxiv_, pages 2021-11, 2021.
* [123] Drew H Bryant, Ali Bashir, Sam Sinai, Nina K Jain, Pierce J Ogden, Patrick F Riley, George M Church, Lucy J Colwell, and Eric D Kelsic. Deep diversification of an aav capsid protein by machine learning. _Nature Biotechnology_, 39(6):691-696, 2021.
* [124] Yuchi Qiu, Jian Hu, and Guo-Wei Wei. Cluster learning-assisted directed evolution. _Nature Computational Science_, 1(12):809-818, 2021.
* [125] Yuchi Qiu and Guo-Wei Wei. Clade 2.0: Evolution-driven cluster learning-assisted directed evolution. _Journal of Chemical Information and Modeling_, 62(19):4629-4641, 2022.
* [126] Dawn GL Thean, Hoi Yee Chu, John HC Fong, Becky KC Chan, Peng Zhou, Cynthia CS Kwok, Yee Man Chan, Silvia YL Mak, Gigi CG Choi, Joshua WK Ho, et al. Machine learning-coupled combinatorial mutagenesis
enables resource-efficient engineering of crispr-cas9 genome editor activities. _Nature Communications_, 13(1):2219, 2022.
* [127] Alexander G Georgiev. Interpretable numerical descriptors of amino acid space. _Journal of Computational Biology_, 16(5):703-723, 2009.
* [128] Li Shen, Hongsong Feng, Yuchi Qiu, and Guo-Wei Wei. Sysbi: Sequence-based virtual screening of biomolecular interactions. _arXiv preprint arXiv:2212.13617_, 2022.
* [129] Yunan Luo, Guangdong Jiang, Tianhao Yu, Yang Liu, Lam Vo, Hantian Ding, Yufeng Su, Wesley Wei Qian, Huimin Zhao, and Jian Peng. Ecnet is an evolutionary context-integrated deep learning framework for protein engineering. _Nature communications_, 12(1):1-14, 2021.
* [130] Kevin P Greenman, Ava Soleimany, and Kevin K Yang. Benchmarking uncertainty quantification for protein engineering. In _ICLR2022 Machine Learning for Drug Discovery_, 2022.
* [131] Carl Edward Rasmussen. Gaussian processes in machine learning. In _Summer school on machine learning_, pages 63-71. Springer, 2003.
* [132] Claire N Bedbrook, Kevin K Yang, J Elliott Robinson, Elisha D Mackey, Viviana Gradinaru, and Frances H Arnold. Machine learning-guided channelrhodopsin engineering enables minimally invasive optogenetics. _Nature methods_, 16(11):1176-1184, 2019.
* [133] Claire N Bedbrook, Kevin K Yang, Austin J Rice, Viviana Gradinaru, and Frances H Arnold. Machine learning to design integral membrane channelrhodopsins for efficient eukaryotic expression and plasma membrane localization. _PLoS computational biology_, 13(10):e1005786, 2017.
* [134] Yutaka Saito, Misaki Oikawa, Hikaru Nakazawa, Teppei Nide, Tomoshi Kameda, Koji Tsuda, and Mitsuo Umetsu. Machine-learning-guided mutagenesis for directed evolution of fluorescent proteins. _ACS synthetic biology_, 7(9):2014-2022, 2018.
* [135] Jonathan C Greenhalgh, Sarah A Fahlberg, Brian F Pfleger, and Philip A Romero. Machine learning-guided acyl-acp reductase engineering for improved in vivo fatty alcohol production. _Nature communications_, 12(1):5825, 2021.
* [136] Philip A Romero, Andreas Krause, and Frances H Arnold. Navigating the protein fitness landscape with gaussian processes. _Proceedings of the National Academy of Sciences_, 110(3):E193-E201, 2013.
* [137] Sebastien Bubeck, Remi Munos, Gilles Stoltz, and Csaba Szepesvari. X-armed bandits. _Journal of Machine Learning Research_, 12(5), 2011.
* [138] Remi Munos. Optimistic optimization of a deterministic function without the knowledge of its smoothness. _Advances in neural information processing systems_, 24:783-791, 2011.
* [139] Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, and Anil A Bharath. Generative adversarial networks: An overview. _IEEE signal processing magazine_, 35(1):53-65, 2018.
* [140] Anvita Gupta and James Zou. Feedback gan for dna optimizes protein functions. _Nature Machine Intelligence_, 1(2):105-111, 2019.
* [141] Jung-Eun Shin, Adam J Riesselman, Aaron W Kollasch, Conor McMahon, Elana Simon, Chris Sander, Aashish Manglik, Andrew C Kruse, and Debora S Marks. Protein design and variant prediction using autoregressive generative models. _Nature communications_, 12(1):2403, 2021.
* [142] Sharrol Bachas, Goran Rakocevic, David Spencer, Anand V Sastry, Robel Haile, John M Sutton, George Kasun, Andrew Stachyra, Jahir M Gutierrez, Edriss Yassine, et al. Antibody optimization enabled by artificial intelligence predictions of binding affinity and naturalness. _bioRxiv_, pages 2022-08, 2022.
* [143] Egbert Castro, Abhinav Godavarthi, Julian Rubinfen, Kevin Givechian, Dhananjay Bhaskar, and Smita Krishnaswamy. Transformer-based protein generation with regularized latent space optimization. _Nature Machine Intelligence_, 4(10):840-851, 2022.
* [144] Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, Gyu Rie Lee, Jue Wang, Qian Cong, Lisa N Kinch, R Dustin Schaeffer, et al. Accurate prediction of protein structures and interactions using a three-track neural network. _Science_, 373(6557):871-876, 2021.
* [145] Shaun M Kandathil, Andy M Lau, and David T Jones. Machine learning methods for predicting protein structure from single sequences. _Current Opinion in Structural Biology_, 81:102627, 2023.
* [146] Ratul Chowdhury, Nazim Bouatta, Surojit Biswas, Christina Floristean, Anantr Kharkar, Koushik Roy, Charlotte Rochereau, Gustaf Abdritz, Joanna Zhang, George M Church, et al. Single-sequence protein structure prediction using a language model and deep learning. _Nature Biotechnology_, 40(11):1617-1623, 2022.
* [147] Wenkai Wang, Zhenling Peng, and Jianyi Yang. Single-sequence protein structure prediction using supervised transformer protein language models. _Nature Computational Science_, 2(12):804-814, 2022.
* [148] Ruidong Wu, Fan Ding, Rui Wang, Rui Shen, Xiwen Zhang, Shitong Luo, Chenpeng Su, Zuofan Wu, Qi Xie, Bonnie Berger, et al. High-resolution de novo structure prediction from primary sequence. _BioRxiv_, pages 2022-07, 2022.
* [149] Xiaomin Fang, Fan Wang, Lihang Liu, Jingzhou He, Dayong Lin, Yingfei Xiang, Xiaonan Zhang, Hua Wu, Hui Li, and Le Song. Helixfold-single: Msa-free protein structure prediction by using protein language model as an alternative. _arXiv preprint arXiv:2207.13921_, 2022.
* [150] Thomas D Barrett, Amelia Villegas-Morcillo, Louis Robinson, Benoit Gaujac, David Admete, Elia Saquand, Karim Beguir, and Arthur Flajolet. So manyfolds, so little time: Efficient protein structure prediction with plms and msas. _bioRxiv_, pages 2022-10, 2022.
* [151] Jiaxiang Wu, Fandi Wu, Biaobin Jiang, Wei Liu, and Peilin Zhao. fold-ab: fast and accurate antibody structure prediction without sequence homologs. _bioRxiv_, pages 2022-11, 2022.
* [152] Konstantin Weissenow, Michael Heinzinger, Martin Steinegger, and Burkhard Rost. Ultra-fast protein structure prediction to capture effects of sequence variation in mutation movies. _bioRxiv_, pages 2022-11, 2022.
* [153] Nicola Bordin, Christian Dallago, Michael Heinzinger, Stephanie Kim, Maria Littmann, Clemens Rauer, Martin Steinegger, Burkhard Rost, and Christine Orengo. Novel machine learning approaches revolutionize protein knowledge. _Trends in Biochemical Sciences_, 2022.
* [154] Tamuka M Chidyausiku, Soria R Mendes, Jason C Klima, Marta Nadal, Ulrich Eckhard, Jorge Roel-Touris, Scott Houliston, Tibsay Guevara, Hugh K Haddox, Adam
Moyer, et al. De novo design of immunoglobulin-like domains. _Nature Communications_, 13(1):5661, 2022.
* [155] Alexandros D Keros, Vidit Nanda, and Kartic Subr. Dist2cycle: A simplicial neural network for homology localization. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, pages 7133-7142, 2022.
* [156] Stephan C Schuster. Next-generation sequencing transforms today's biology. _Nature methods_, 5(1):16-18, 2008.
* [157] Karen S Sarkisyan, Dmitry A Bolotin, Margarita V Meer, Dinara R Usmanova, Alexander S Mishin, George V Sharonov, Dmitry N Ivankov, Nina G Bozhanova, Mikhail S Baranov, Onuralp Soylemez, et al. Local fitness landscape of the green fluorescent protein. _Nature_, 533(7603):397-401, 2016.
## Mathematical theory of topological data analysis (TDA)
### Simplicial complex and chain complex
A graph is a representation of a point cloud consisting of vertices and edges that models pairwise interactions, such as atoms and bonds in molecules. A simplicial complex, the generalization of a graph, builds richer shapes that include higher-dimensional objects. A simplicial complex is composed of simplexes up to a certain dimension. A \(k\)-simplex, \(\sigma^{k}\), is the convex hull of \(k+1\) affinely independent points \(v_{0},\ v_{1},\ v_{2},\ \cdots,\ v_{k}\):
\[\sigma^{k}:=[v_{0},\ v_{1},\ v_{2},\ \cdots,\ v_{k}]=\left\{\sum_{i=0}^{k} \lambda_{i}v_{i}\bigg{|}\sum_{i=0}^{k}\lambda_{i}=1;\lambda_{i}\in[0,1],\ \forall i\right\}. \tag{4}\]
In Euclidean space, a 0-simplex is a point, a 1-simplex is an edge, a 2-simplex is a triangle, and a 3-simplex is a tetrahedron. For \(k>3\), the \(k\)-simplex describes an abstract simplex.
A subset of \(m+1\) of the \(k+1\) vertices of a \(k\)-simplex \(\sigma^{k}\) forms a convex hull in a lower dimension, called an \(m\)-face \(\sigma^{m}\) of the \(k\)-simplex and denoted \(\sigma^{m}\subset\sigma^{k}\). A simplicial complex \(K\) is a finite collection of simplexes satisfying two conditions:
1) Any face of a simplex in \(K\) is also in \(K\).
2) The intersection of any two simplexes in \(K\) is either empty or a shared face.
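The two conditions above can be checked directly on a vertex-labelled (abstract) representation of a complex. The following minimal Python sketch assumes simplexes are given as tuples of vertex labels; the function name and the example complex are illustrative only.

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Check the two defining conditions of a simplicial complex K."""
    K = {frozenset(s) for s in simplices}
    # Condition 1: every face of a simplex in K is also in K.
    for s in K:
        for m in range(1, len(s)):
            for face in combinations(s, m):
                if frozenset(face) not in K:
                    return False
    # Condition 2: the intersection of two simplexes is empty or a shared face.
    for s1 in K:
        for s2 in K:
            inter = s1 & s2
            if inter and inter not in K:
                return False
    return True

# A filled triangle together with all of its faces.
triangle = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
print(is_simplicial_complex(triangle))     # True
print(is_simplicial_complex([(0, 1, 2)]))  # False: the edges and vertices are missing
```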
The interactions between two simplexes can be described by adjacency. For example, in graph theory, two vertices (0-simplexes) are adjacent if they share a common edge (1-simplex). Adjacency for \(k\)-simplexes with \(k>0\) includes both upper and lower adjacency. Two distinct \(k\)-simplexes, \(\sigma_{1}\) and \(\sigma_{2}\), in \(K\) are upper adjacent, denoted \(\sigma_{1}\sim_{U}\sigma_{2}\), if both are faces of a \((k+1)\)-simplex in \(K\), called a common upper simplex. Two distinct \(k\)-simplexes, \(\sigma_{1}\) and \(\sigma_{2}\), in \(K\) are lower adjacent, denoted \(\sigma_{1}\sim_{L}\sigma_{2}\), if they share a common \((k-1)\)-simplex as their face, called a common lower simplex. Either common upper simplex or common lower simplex is unique for two upper or lower adjacent simplexes. The upper degree of a \(k\)-simplex, \(\deg_{U}(\sigma^{k})\), is the number of \((k+1)\)-simplexes in \(K\) of which \(\sigma^{k}\) is a face; the lower degree of a \(k\)-simplex, \(\deg_{L}(\sigma^{k})\), is the number of nonempty \((k-1)\)-simplexes in \(K\) that are faces of \(\sigma^{k}\), which is always \(k+1\). The degree of \(k\)-simplex (\(k>0\)) is defined as the sum of its upper and lower degree
\[\deg(\sigma^{k})=\deg_{U}(\sigma^{k})+\deg_{L}(\sigma^{k})=\deg_{U}(\sigma^{k })+k+1. \tag{5}\]
For \(k=0\), the degree of a vertex is:
\[\deg(\sigma^{0})=\deg_{U}(\sigma^{0}). \tag{6}\]
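As a concrete illustration of the adjacency notions and of Eqs. (5)-(6), the sketch below (Python, with hypothetical helper names) computes upper/lower adjacency and simplex degrees for a small complex made of two triangles glued along an edge.

```python
def upper_degree(sigma, K):
    """Number of (k+1)-simplexes of K having sigma as a face."""
    sigma = frozenset(sigma)
    return sum(1 for tau in K if len(tau) == len(sigma) + 1 and sigma < tau)

def degree(sigma, K):
    """Eq. (5)-(6): deg = deg_U + (k+1) for k > 0, and deg = deg_U for k = 0."""
    k = len(sigma) - 1
    d_up = upper_degree(sigma, K)
    return d_up if k == 0 else d_up + k + 1

def lower_adjacent(s1, s2):
    """Distinct k-simplexes sharing a common (k-1)-face."""
    s1, s2 = frozenset(s1), frozenset(s2)
    return s1 != s2 and len(s1) == len(s2) and len(s1 & s2) == len(s1) - 1

def upper_adjacent(s1, s2, K):
    """Distinct k-simplexes that are both faces of a common (k+1)-simplex of K."""
    s1, s2 = frozenset(s1), frozenset(s2)
    return s1 != s2 and len(s1) == len(s2) and len(s1 | s2) == len(s1) + 1 and (s1 | s2) in K

# Two triangles glued along the edge {1, 2}, with all faces included.
K = {frozenset(s) for s in
     [(0,), (1,), (2,), (3,),
      (0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
      (0, 1, 2), (1, 2, 3)]}

print(degree((1,), K))                    # 3: vertex 1 touches three edges
print(degree((1, 2), K))                  # 4: upper degree 2 (two triangles) plus k+1 = 2
print(upper_adjacent((0, 1), (1, 2), K))  # True: both are faces of (0, 1, 2)
print(lower_adjacent((0, 1), (1, 2)))     # True: they share the vertex {1}
```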
A simplex has an orientation determined by the ordering of its vertices, except for a 0-simplex. For example, clockwise and anticlockwise orderings of three vertices determine the two orientations of a triangle. Two simplexes, \(\sigma_{1}\) and \(\sigma_{2}\), defined on the same vertices are similarly oriented if their orderings of vertices differ by an even number of permutations; otherwise, they are dissimilarly oriented.
Algebraic topology provides tools to compute with simplicial complexes. A \(k\)-chain is a formal sum of oriented \(k\)-simplexes in \(K\) with coefficients in \(\mathbb{Z}\). The set of all \(k\)-chains of the simplicial complex \(K\), together with the addition operation on \(\mathbb{Z}\), forms a free Abelian group \(C_{k}(K)\), called the chain group. To link chain groups from different dimensions, the \(k\)-boundary operator, \(\partial_{k}:C_{k}(K)\to C_{k-1}(K)\), maps a \(k\)-chain in the form of a linear combination of \(k\)-simplexes to the same linear combination of the boundaries of the \(k\)-simplexes. For the simple example where the \(k\)-chain consists of a single oriented \(k\)-simplex spanned by \(k+1\) vertices as defined in Eq. (4), its boundary is the formal alternating sum of all its \((k-1)\)-faces:
\[\partial_{k}\sigma^{k}=\sum_{i=0}^{k}(-1)^{i}\sigma_{i}^{k-1}=\sum_{i=0}^{k}(- 1)^{i}\left[v_{0},\cdots,\hat{v_{i}},\cdots,\ v_{k}\right], \tag{7}\]
where \(\sigma_{i}^{k-1}=[v_{0},\cdots,\hat{v_{i}},\cdots,\ v_{k}]\) is the \((k-1)\)-simplex with its vertex \(v_{i}\) removed. The most important topological property is that a boundary has no boundary: \(\partial_{k-1}\partial_{k}=0\).
A sequence of chain groups connected by boundary operators defines the chain complex:
\[\cdots\xrightarrow{\partial_{n+1}}C_{n}(K)\xrightarrow{\partial_{n}}C_{n-1}(K)\xrightarrow{\partial_{n-1}}\cdots\xrightarrow{\partial_{1}}C_{0}(K)\xrightarrow{\partial_{0}}\emptyset. \tag{8}\]
When \(n\) exceeds the dimension of \(K\), \(C_{n}(K)\) is an empty vector space and the corresponding boundary operator is a zero map.
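Eq. (7) and the chain complex of Eq. (8) can be made concrete by assembling boundary matrices for a small complex; the sketch below (Python with NumPy, illustrative function name) builds \(\partial_1\) and \(\partial_2\) for a filled triangle and verifies numerically that a boundary has no boundary.

```python
import numpy as np

def boundary_matrix(k_simplices, km1_simplices):
    """Matrix of the k-boundary operator of Eq. (7).

    Simplexes are tuples of vertices listed in increasing order, which fixes
    their orientation; column j holds the signed (k-1)-faces of the j-th k-simplex.
    """
    row = {s: i for i, s in enumerate(km1_simplices)}
    B = np.zeros((len(km1_simplices), len(k_simplices)), dtype=int)
    for j, s in enumerate(k_simplices):
        for i in range(len(s)):               # remove vertex v_i with sign (-1)^i
            face = s[:i] + s[i + 1:]
            B[row[face], j] = (-1) ** i
    return B

vertices  = [(0,), (1,), (2,)]
edges     = [(0, 1), (0, 2), (1, 2)]
triangles = [(0, 1, 2)]

B1 = boundary_matrix(edges, vertices)
B2 = boundary_matrix(triangles, edges)
print(B1 @ B2)        # the zero matrix: a boundary has no boundary
```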
### Filtration for multiscale chain complexes
Filtration is a process that constructs a nested sequence of simplicial complexes, allowing a multiscale analysis of the point cloud. It creates a family of simplicial complexes ordered by inclusion (Figure 2c):
\[\emptyset=K^{t_{0}}\subseteq K^{t_{1}}\subseteq\cdots\subseteq K^{t_{n}}=K. \tag{9}\]
where \(K\) is the largest simplicial complex that can be obtained from the point cloud.
The filtration induces a sequence of chain complexes
\[\begin{array}{ccccccccc}\cdots\xrightarrow{\partial_{k+2}^{t_{1}}}&C_{k+1}^{t_{1}}&\xrightarrow{\partial_{k+1}^{t_{1}}}&C_{k}^{t_{1}}&\xrightarrow{\partial_{k}^{t_{1}}}&\cdots&\xrightarrow{\partial_{1}^{t_{1}}}&C_{0}^{t_{1}}&\xrightarrow{\partial_{0}^{t_{1}}}\emptyset\\ &|\cap&&|\cap&&&&|\cap&\\ \cdots\xrightarrow{\partial_{k+2}^{t_{2}}}&C_{k+1}^{t_{2}}&\xrightarrow{\partial_{k+1}^{t_{2}}}&C_{k}^{t_{2}}&\xrightarrow{\partial_{k}^{t_{2}}}&\cdots&\xrightarrow{\partial_{1}^{t_{2}}}&C_{0}^{t_{2}}&\xrightarrow{\partial_{0}^{t_{2}}}\emptyset\\ &|\cap&&|\cap&&&&|\cap&\\ &\vdots&&\vdots&&&&\vdots&\\ &|\cap&&|\cap&&&&|\cap&\\ \cdots\xrightarrow{\partial_{k+2}^{t_{n}}}&C_{k+1}^{t_{n}}&\xrightarrow{\partial_{k+1}^{t_{n}}}&C_{k}^{t_{n}}&\xrightarrow{\partial_{k}^{t_{n}}}&\cdots&\xrightarrow{\partial_{1}^{t_{n}}}&C_{0}^{t_{n}}&\xrightarrow{\partial_{0}^{t_{n}}}\emptyset\end{array} \tag{10}\]
where \(C_{k}^{t}=C_{k}(K^{t})\) is the chain group of the subcomplex \(K^{t}\) and \(\partial_{k}^{t}:C_{k}(K^{t})\to C_{k-1}(K^{t})\) is its \(k\)-boundary operator, and the vertical maps are inclusions. Associated with the \(k\)-boundary operator, its adjoint is the \(k\)-adjoint boundary operator, \(\partial_{k}^{t*}:C_{k-1}(K^{t})\to C_{k}(K^{t})\).
There are various simplicial complexes that can be used to construct the filtration, such as the Rips complex, the Čech complex, and the Alpha complex. For example, the Rips complex of \(K\) with radius \(t\) consists of all simplexes with diameter at most \(2t\):
\[V(t)=\left\{\sigma\subseteq K|\text{diam }(\sigma)\leq 2t\right\}. \tag{11}\]
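A from-scratch construction of Eq. (11) makes the filtration of Eq. (9) tangible: as the radius \(t\) grows, more simplexes enter the complex. The Python sketch below uses only NumPy; the point cloud and function name are illustrative.

```python
import numpy as np
from itertools import combinations

def rips_complex(points, t, max_dim=2):
    """All simplexes (up to max_dim) whose diameter is at most 2*t, as in Eq. (11)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    simplices = []
    for k in range(max_dim + 1):
        for idx in combinations(range(n), k + 1):
            if k == 0 or max(dist[i, j] for i, j in combinations(idx, 2)) <= 2 * t:
                simplices.append(idx)
    return simplices

# Four corners of a unit square; a larger t admits more simplexes (a filtration).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
for t in (0.3, 0.55, 0.8):
    K = rips_complex(square, t)
    counts = [sum(1 for s in K if len(s) == d + 1) for d in range(3)]
    print(t, counts)   # counts of 0-, 1-, 2-simplexes: [4,0,0], [4,4,0], [4,6,4]
```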
### Homology group and persistent homology
With the chain complex defined in Eq. (8), the \(k\)-cycle and \(k\)-boundary groups are defined as:
\[\begin{split} Z_{k}=\text{ker}\ \partial_{k}&=\{c\in C _{k}\mid\partial_{k}c=0\}\\ B_{k}=\text{im}\ \partial_{k+1}&=\{\partial_{k+1}c \mid c\in C_{k+1}\}\end{split} \tag{12}\]
Then the \(k\)-th homology group \(H_{k}\) is defined as
\[H_{k}=Z_{k}/B_{k}. \tag{13}\]
The \(k\)-th Betti number, \(\beta_{k}\), is defined by the rank of \(k\)-th homology group \(H_{k}\) which counts \(k\)-dimensional holes. For example, \(\beta_{0}=\text{rank}(H_{0})\) reflects the number of connected components, \(\beta_{1}=\text{rank}(H_{1})\) reflects the number of loops, and \(\beta_{2}=\text{rank}(H_{2})\) reveals the number of voids or cavities.
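Over the real numbers, the Betti numbers of Eq. (13) can be computed directly from matrix ranks, since \(\beta_{k}=\dim Z_{k}-\dim B_{k}=\dim C_{k}-\mathrm{rank}(\partial_{k})-\mathrm{rank}(\partial_{k+1})\). The sketch below (Python/NumPy, illustrative function name) applies this to a hollow triangle, which has one connected component and one loop.

```python
import numpy as np

def betti_numbers(boundary_mats, dims):
    """beta_k = dim C_k - rank(B_k) - rank(B_{k+1}), computed over the reals.

    boundary_mats[k] is the matrix of the k-boundary operator (B_0 is the zero
    map), and dims[k] is the number of k-simplexes, i.e. dim C_k.
    """
    betti = []
    for k, n_k in enumerate(dims):
        rank_k = np.linalg.matrix_rank(boundary_mats[k]) if boundary_mats[k].size else 0
        next_B = boundary_mats[k + 1] if k + 1 < len(boundary_mats) else np.zeros((n_k, 0))
        rank_k1 = np.linalg.matrix_rank(next_B) if next_B.size else 0
        betti.append(n_k - rank_k - rank_k1)
    return betti

# Hollow triangle: three vertices and three edges, no 2-simplex.
B0 = np.zeros((0, 3))                 # boundary of 0-chains is the zero map
B1 = np.array([[-1, -1,  0],          # columns: boundaries of (0,1), (0,2), (1,2)
               [ 1,  0, -1],
               [ 0,  1,  1]])
print(betti_numbers([B0, B1], dims=[3, 3]))   # [1, 1]: one component, one loop
```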
Persistent homology is devised to track the multiscale topological information along the filtration [1]. The inclusion map \(K_{i}\subseteq K_{j}\) induces a homomorphism \(f_{k}^{i,j}:H_{k}(K_{i})\to H_{k}(K_{j})\) for each dimension \(k\). The \(p\)-persistent \(k\)-th homology group of \(K^{t}\) is defined by
\[H_{k}^{t,p}=Z_{k}^{t}/(B_{k}^{t+p}\cap Z_{k}^{t}), \tag{14}\]
where \(Z_{k}^{t}=\text{ker}\ \partial_{k}^{t}\) and \(B_{k}^{t+p}=\text{im}\ \partial_{k+1}^{t+p}\). Intuitively, this homology group records the \(k\)-dimensional homology classes of \(K_{i}\) that are persistent at least until \(K_{i+p}\). The birth and death of homology classes can be represented by a barcode, a set of intervals (Figure 2d).
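In practice the barcode is rarely computed by hand; a dedicated TDA library is used. The sketch below assumes the GUDHI package (`pip install gudhi`) and its `RipsComplex`/`SimplexTree` interface; the noisy-circle point cloud and the 0.5 persistence threshold are illustrative choices.

```python
import numpy as np
import gudhi   # assumed dependency: the GUDHI topological data analysis library

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 60)
circle = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((60, 2))

rips = gudhi.RipsComplex(points=circle, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
st.persistence()   # computes all (birth, death) pairs of the filtration

# One long interval in dimension 1 signals the loop of the underlying circle.
for birth, death in st.persistence_intervals_in_dimension(1):
    if death - birth > 0.5:
        print(f"persistent 1-dimensional class: born {birth:.2f}, dies {death:.2f}")
```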
### Combinatorial Laplacian
For the \(k\)-boundary operator \(\partial_{k}:C_{k}\to C_{k-1}\) in \(K\), let \(\mathcal{B}_{k}\in\mathbb{Z}^{M\times N}\) be its matrix representation under the standard bases \(\left\{\sigma_{i}^{k}\right\}_{i=1}^{N}\) and \(\left\{\sigma_{j}^{k-1}\right\}_{j=1}^{M}\) of \(C_{k}\) and \(C_{k-1}\). Associated with the boundary operator \(\partial_{k}\), the adjoint boundary operator is \(\partial_{k}^{*}:C_{k-1}\to C_{k}\), whose matrix representation is the transpose \(\mathcal{B}_{k}^{T}\) with respect to the same ordered bases.
The \(k\)-combinatorial Laplacian, a topological Laplacian, is a linear operator \(\Delta_{k}:C_{k}(K)\to C_{k}(K)\)
\[\Delta_{k}:=\partial_{k+1}\partial_{k+1}^{*}+\partial_{k}^{*}\partial_{k}, \tag{15}\]
and its matrix representation, \(L_{k}\), is given by
\[L_{k}=\mathcal{B}_{k+1}\mathcal{B}_{k+1}^{T}+\mathcal{B}_{k}^{T}\mathcal{B}_{ k}. \tag{16}\]
In particular, the \(0\)-combinatorial Laplacian (i.e. the graph Laplacian) is given as follows, since \(\partial_{0}\) is a zero map:
\[L_{0}=\mathcal{B}_{1}\mathcal{B}_{1}^{T}. \tag{17}\]
The elements of the \(k\)-combinatorial Laplacian matrix for \(k>0\) are
\[\left(L_{k}\right)_{i,j}=\begin{cases}\deg\left(\sigma_{i}^{k}\right),&\text{if }i=j\\ 1,&\text{if }i\neq j,\ \sigma_{i}^{k}\not\sim_{U}\sigma_{j}^{k}\text{ and }\sigma_{i}^{k}\sim_{L}\sigma_{j}^{k}\text{ with similar orientation}\\ -1,&\text{if }i\neq j,\ \sigma_{i}^{k}\not\sim_{U}\sigma_{j}^{k}\text{ and }\sigma_{i}^{k}\sim_{L}\sigma_{j}^{k}\text{ with dissimilar orientation}\\ 0,&\text{if }i\neq j\text{ and either }\sigma_{i}^{k}\sim_{U}\sigma_{j}^{k}\text{ or }\sigma_{i}^{k}\not\sim_{L}\sigma_{j}^{k}.\end{cases} \tag{18}\]
For \(k=0\), the graph Laplacian matrix \(L_{0}\) is
\[\left(L_{0}\right)_{i,j}=\begin{cases}\deg\left(\sigma_{i}^{0}\right),\text{ if }i=j\\ -1,\text{ if }i\neq j,\sigma_{i}^{0}\sim_{U}\sigma_{j}^{0}\\ 0,\text{ otherwise.}\end{cases} \tag{19}\]
The multiplicity of zero spectra of \(L_{k}\) gives the Betti-\(k\) number, according to combinatorial Hodge theorem [2]:
\[\beta_{k}=\text{dim}(L_{k})-\text{rank}(L_{k})=\text{null}(L_{k}). \tag{20}\]
The Betti numbers describe topological invariants. Specifically, \(\beta_{0}\), \(\beta_{1}\), and \(\beta_{2}\) may be regarded as the numbers of independent components, rings, and cavities, respectively.
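The matrices of Eqs. (16)-(20) are straightforward to assemble from the boundary matrices of the earlier sketch; the following Python/NumPy snippet computes \(L_{0}\) and \(L_{1}\) for the hollow triangle and recovers its Betti numbers from the multiplicity of zero eigenvalues, as the combinatorial Hodge theorem states.

```python
import numpy as np

def combinatorial_laplacian(Bk, Bk1):
    """L_k = B_{k+1} B_{k+1}^T + B_k^T B_k (Eq. 16); with B_0 the zero map this
    reduces to the graph Laplacian L_0 = B_1 B_1^T (Eq. 17)."""
    return Bk1 @ Bk1.T + Bk.T @ Bk

# Hollow triangle: B_0 is the zero map, B_1 as before, and no 2-simplex exists.
B0 = np.zeros((0, 3))
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])
B2 = np.zeros((3, 0))

L0 = combinatorial_laplacian(B0, B1)
L1 = combinatorial_laplacian(B1, B2)

for k, L in enumerate((L0, L1)):
    eigvals = np.linalg.eigvalsh(L)
    print(f"beta_{k} =", int(np.sum(np.isclose(eigvals, 0.0))))
# beta_0 = 1 (one connected component), beta_1 = 1 (one loop)
```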
### Persistent spectral graph (PSG)
The homotopic shape changes produced by a small increment of the filtration parameter may be subject to noise in the data. Persistence can be introduced to enhance robustness when calculating the Laplacian. First, we define the \(p\)-persistent chain group \(\mathbb{C}_{k}^{t,p}\subseteq C_{k}^{t+p}\) whose boundary is in \(C_{k-1}^{t}\):
\[\mathbb{C}_{k}^{t,p}=\left\{\alpha\in C_{k}^{t+p}\mid\,\partial_{k}^{t+p}(\alpha)\in C_{k-1}^{t}\right\}, \tag{21}\]
where \(\partial_{k}^{t+p}:\ C_{k}^{t+p}\to C_{k-1}^{t+p}\) is the \(k\)-boundary operator for the chain group \(C_{k}^{t+p}\). Then we can define a \(p\)-persistent boundary operator, \(\partial_{k}^{t,p}\), as the restriction of \(\partial_{k}^{t+p}\) to the \(p\)-persistent chain group \(\mathbb{C}_{k}^{t,p}\):
\[\partial_{k}^{t,p}=\partial_{k}^{t+p}|_{\mathbb{C}_{k}^{t,p}}:\mathbb{C}_{k}^{t,p}\to C_{k-1}^{t}. \tag{22}\]
Then PSG introduces a family of \(p\)-persistent \(k\)-combinatorial Laplacian operators \(\Delta_{k}^{t,p}:\ C_{k}(K^{t})\to C_{k}(K^{t})\)[3, 4], defined as
\[\Delta_{k}^{t,p}=\partial_{k+1}^{t,p}\left(\partial_{k+1}^{t,p}\right)^{*}+\left(\partial_{k}^{t}\right)^{*}\partial_{k}^{t}. \tag{23}\]
We denote by \(\mathcal{B}_{k+1}^{t,p}\) and \(\mathcal{B}_{k}^{t}\) the matrix representations of the boundary operators \(\partial_{k+1}^{t,p}\) and \(\partial_{k}^{t}\), respectively. Then the Laplacian matrix for \(\Delta_{k}^{t,p}\) is
\[\mathcal{L}_{k}^{t,p}=\mathcal{B}_{k+1}^{t,p}\left(\mathcal{B}_{k+1}^{t,p}\right)^{T}+\left(\mathcal{B}_{k}^{t}\right)^{T}\mathcal{B}_{k}^{t}. \tag{24}\]
Since the Laplacian matrix, \(\mathcal{L}_{k}^{t,p}\), is positive-semidefinite, its spectra are all real and non-negative
\[S_{k}^{t,p}=\text{Spectra}(\mathcal{L}_{k}^{t,p})=\{(\lambda_{1})_{k}^{t,p},( \lambda_{2})_{k}^{t,p},\cdots,(\lambda_{N})_{k}^{t,p}\}, \tag{25}\]
where \(N\) is the dimension of a standard basis for \(C_{k}^{t}\), and \(\mathcal{L}_{k}^{t,p}\) has dimension \(N\times N\). The \(k\)-persistent Betti number \(\beta_{k}^{t,p}\) can be obtained from the multiplicity of the harmonic spectra of \(\mathcal{L}_{k}^{t,p}\):
\[\beta_{k}^{t,p}=\text{dim}(\mathcal{L}_{k}^{t,p})-\text{rank}(\mathcal{L}_{k}^{t,p})=\text{null}(\mathcal{L}_{k}^{t,p})=\#\{i\mid(\lambda_{i})_{k}^{t,p}\in S_{k}^{t,p}\text{ and }(\lambda_{i})_{k}^{t,p}=0\}. \tag{26}\]
In addition, the rest of the spectra, i.e., the non-harmonic spectra, capture additional geometric information. The family of spectra of the persistent Laplacians thus reveals the homotopic shape evolution of the data along the filtration.
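Given any (persistent) Laplacian matrix, its harmonic and non-harmonic spectra of Eqs. (25)-(26) are obtained from an eigendecomposition; when \(K^{t+p}=K^{t}\) (or \(p=0\)) the persistent Laplacian reduces to the ordinary combinatorial Laplacian, so the hollow-triangle \(\mathcal{L}_{1}\) from the earlier sketch can serve as a test case. The helper name below is illustrative.

```python
import numpy as np

def spectral_summary(L, tol=1e-8):
    """Split the spectrum of a (persistent) Laplacian matrix into the harmonic
    part, whose multiplicity is the (persistent) Betti number of Eq. (26), and
    the non-harmonic part, whose smallest value carries geometric information."""
    eigvals = np.linalg.eigvalsh(L)
    harmonic = eigvals[np.abs(eigvals) < tol]
    non_harmonic = eigvals[np.abs(eigvals) >= tol]
    return {
        "betti": int(harmonic.size),
        "lambda_min_nonzero": float(non_harmonic.min()) if non_harmonic.size else None,
    }

L1 = np.array([[ 2, 1, -1],
               [ 1, 2,  1],
               [-1, 1,  2]])      # L_1 of the hollow triangle
print(spectral_summary(L1))       # betti = 1, smallest non-zero eigenvalue = 3
```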
2303.09747 | Gate-tunable giant superconducting nonreciprocal transport in few-layer
$T_{\rm d}$-MoTe$_2$ | We demonstrate gate-tunable giant field-dependent nonreciprocal transport
(magnetochiral anisotropy) in a noncentrosymmetric superconductor $T_{\rm
d}$-MoTe$_2$ in the thin limit. Giant magnetochiral anisotropy (MCA) with a
rectification coefficient $\gamma$ = $3.1 \times 10^6$ T$^{-1}$ A$^{-1}$, is
observed at 230 mK, below the superconducting transition temperature ($T_c$).
This is one of the largest values reported so far and is likely attributed to
the reduced symmetry of the crystal structure. The temperature dependence of
$\gamma$ indicates that the ratchet-like motion of magnetic vortices is the
origin of the MCA, as supported by our theoretical model. For bilayer $T_{\rm
d}$-MoTe$_2$, we successfully perform gate control of the MCA and realize
threefold modulation of $\gamma$. Our experimental results provide a new route
to realizing electrically controllable superconducting rectification devices in
a single material. | T. Wakamura, M. Hashisaka, S. Hoshino, M. Bard, S. Okazaki, T. Sasagawa, T. Taniguchi, K. Watanabe, K. Muraki, N. Kumada | 2023-03-17T03:00:26Z | http://arxiv.org/abs/2303.09747v2 | # Gate-tunable giant superconducting nonreciprocal transport in few-layer \(T_{d}\)-MoTe\({}_{2}\)
###### Abstract
We demonstrate gate-tunable giant field-dependent nonreciprocal transport (magnetochiral anisotropy) in a noncentrosymmetric superconductor \(T_{d}\)-MoTe\({}_{2}\) in the thin limit. Giant magnetochiral anisotropy (MCA) with a rectification coefficient \(\gamma=3.1\times 10^{6}\) T\({}^{-1}\) A\({}^{-1}\), is observed at 230 mK, below the superconducting transition temperature (\(T_{c}\)). This is one of the largest values reported so far and is likely attributed to the reduced symmetry of the crystal structure. The temperature dependence of \(\gamma\) indicates that the ratchet-like motion of magnetic vortices is the origin of the MCA, as supported by our theoretical model. For bilayer \(T_{d}\)-MoTe\({}_{2}\), we successfully perform gate control of the MCA and realize threefold modulation of \(\gamma\). Our experimental results provide a new route to realizing electrically controllable superconducting rectification devices in a single material.
Recent intensive studies on nonreciprocal transport have revealed the potential of using noncentrosymmetric materials or inversion-symmetry-breaking multilayer structures to develop novel rectification devices [1; 2; 3]. In systems with broken inversion and time-reversal symmetries, Onsager's reciprocal theorem allows the electrical resistance to be different for opposite current directions. This is called magnetochiral anisotropy (MCA), and it leads to the rectification effect [1].
Broken inversion symmetry is even more advantageous in superconductors. Rectification via ratchet-like motion of magnetic vortices was reported more than a decade ago for superconductors with asymmetric artificial magnetic nanostructures or with asymmetric antidots as an asymmetric pinning potential, and it was found that as the asymmetry becomes stronger, rectification is more efficient [4; 5; 6; 7; 8; 9]. Recent studies have pointed out that such ratchet-like motion of magnetic vortices is also possible in noncentrosymmetric superconductors and provides large MCA, as do other mechanisms such as paraconductivity [10; 11; 12; 13; 14]. In such systems, the asymmetry of the crystal structure intrinsically induces an asymmetric pinning potential for magnetic vortices. By analogy with previous findings on superconductors with artificial asymmetric pinning potentials, the symmetry of the crystal should play a crucial role in the efficiency of the rectification. However, previous reports on MCA in noncentrosymmetric superconductors have been limited to those with trigonal symmetry [11; 12; 15; 16; 17]. It is therefore particularly desirable to explore MCA in noncentrosymmetric superconductors with different symmetries, especially with lower symmetry than trigonal, for further enhancement of the efficiency.
In this Letter, we demonstrate gate-tunable giant MCA in a noncentrosymmetric superconductor \(T_{d}\)-MoTe\({}_{2}\) in the thin limit. \(T_{d}\)-MoTe\({}_{2}\) lacks inversion symmetry and, for thin layers, has only one mirror plane normal to the \(b\)-axis as shown in Fig. 1(a). This reduced symmetry of the crystal structure may make the pinning potential for magnetic vortices highly asymmetric, which can generate a large MCA.
We exploit few-layer
\(T_{\rm d}\)-MoTe\({}_{2}\) for our measurements. Below \(T_{c}\), we observe MCA under a perpendicular magnetic field. The rectification coefficient \(\gamma\), i.e., the ratio of the nonreciprocal (second harmonic) resistance to the linear resistance, reaches 3.1 \(\times\) 10\({}^{6}\) T\({}^{-1}\)A\({}^{-1}\) at 230 mK, one of the largest values reported so far. The monotonic increase in \(\gamma\) with decreasing temperature indicates that the giant MCA is due to the ratchet-like motion of magnetic vortices in the mixed state of the type-II superconductor [10]. Interestingly, even though \(T_{\rm d}\)-MoTe\({}_{2}\) is a semimetal, in the bilayer sample we can successfully modulate the MCA via an external gate voltage and demonstrate threefold modulation of \(\gamma\). This ability to produce a large variation in the nonreciprocal resistance by changing the gate voltage may provide key insights into the mechanisms behind the giant MCA by associating it with modulation of the superconducting properties.
Figure 1: (a) Top view of the crystal structure, in which broken inversion symmetry is evident. For thin layers, only one mirror plane is present. (b) Optical microscope image of a 4 ML device. A thin \(T_{d}\)-MoTe\({}_{2}\) flake is deposited on metallic contacts prepared in advance. (c) Temperature dependence of the resistance of 4 ML and bilayer samples. (d) Comparison of \(R_{2\omega}\) when current is parallel to the \(a\)-axis (red) and \(b\)-axis (blue), taken from the 4 ML sample.
Thin semimetallic MoTe\({}_{2}\) samples are prepared from high-quality \(T_{\rm d}\)-MoTe\({}_{2}\) crystals with a residual-resistivity ratio (RRR) \(\sim\) 1000 grown via the flux growth method. Mechanically exfoliated flakes are transferred onto prepatterned electrodes on a Si/SiO\({}_{2}\) chip or hexagonal boron-nitride (h-BN) deposited on a Si/SiO\({}_{2}\) chip by a typical dry transfer technique with polycarbonate (PC) and polydimethylsiloxane (PDMS) [18] in an argon-filled glovebox with a low concentration of O\({}_{2}\) and H\({}_{2}\)O (\(<\) 0.5 ppm). Electrical measurements are performed with a lock-in amplifier and a \({}^{3}\)He low temperature measurement system. More details of the fabrication process and measurement setup are in the Supplemental Material [19].
In materials under broken inversion and time-reversal symmetries, Onsager's reciprocal theorem allows the linear longitudinal resistance to be different for opposite current directions [1]. Rikken _et al_. heuristically found a general formula for the nonreciprocal transport, also called the MCA, expressed as [20]
\[R=R_{0}(1+\gamma BI), \tag{1}\]
where \(\gamma\) is the rectification coefficient, which quantifies the efficiency of generating the nonreciprocal resistance. \(R_{0}\), \(B\) and \(I\) are the linear resistance, magnetic field and excitation current, respectively. Substituting equation (1) into Ohm's law \(V=RI\) leads to
\[V=R_{0}I+\gamma BR_{0}I^{2}. \tag{2}\]
The first term is the typical linear voltage response to the current and the second term is related to the nonreciprocal transport. Thus, the nonreciprocal response is obtained as a second harmonic signal for the ac excitation current \(I_{\omega}\propto\sin(\omega t)\).
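The appearance of the nonreciprocal term as a second harmonic, and the extraction of \(\gamma\) used later in the text (\(\gamma=2R_{2\omega}/(R_{\omega}BI_{\omega})\)), can be checked with a short numerical lock-in sketch. The Python snippet below uses purely illustrative parameter values, not the measured device values.

```python
import numpy as np

# Illustrative parameters only: linear resistance, rectification coefficient,
# magnetic field and ac current amplitude.
R0, gamma, B, I0 = 10.0, 1.0e6, 0.1, 1.0e-6       # ohm, 1/(T*A), T, A
omega = 2 * np.pi * 17.0                          # rad/s
t = np.linspace(0.0, 1.0, 200_000, endpoint=False)

I = I0 * np.sin(omega * t)
V = R0 * I + gamma * B * R0 * I**2                # Eq. (2)

# Lock-in style demodulation: project V onto sin(wt) and cos(2wt).
V_1w = 2 * np.mean(V * np.sin(omega * t))         # first-harmonic amplitude
V_2w = 2 * np.mean(V * np.cos(2 * omega * t))     # second-harmonic amplitude

R_1w = V_1w / I0
R_2w = abs(V_2w) / I0
gamma_est = 2 * R_2w / (R_1w * B * I0)
print(R_1w, R_2w, gamma_est)   # ~10 ohm, ~0.5 ohm, and gamma_est ~ 1e6 1/(T*A)
```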
First, we show the temperature dependence of the resistance for the four monolayer (ML) and bilayer (2 ML) samples in Fig. 1(c). While \(T_{c}\) is low (\(\sim\) 100 mK) for bulk \(T_{\rm d}\)-MoTe\({}_{2}\)[21], that for the 4 ML and bilayer samples is
750 mK and 2.2 K, respectively. This large enhancement in \(T_{c}\) for thin layers is consistent with previous studies [22; 23]. Note that here \(T_{c}\) is defined as the temperature where the resistance becomes half of that in the normal state.
Figure 2: (a) Experimental data from the 4 ML sample. Top: Nonreciprocal resistance (\(R_{2\omega}=V_{2\omega}/I_{\omega}\)) measured at different temperatures. Bottom: \(R_{\omega}\) signals measured simultaneously with \(R_{2\omega}\). (b) Temperature dependence of \(\gamma\) taken from the 4 ML sample. The orange curve shows the fit based on equation (3). Inset: Experimental data of \(\gamma\) as a function of temperature obtained from the bilayer sample with the fit. (c) Left: Schematic illustration of the motion of magnetic vortices with the velocity \(\mathbf{v}\) driven by an external current \(\mathbf{J}\), which generates an electric field \(\mathbf{E}=\mathbf{B}\times\mathbf{v}\). Right: Image of the sawtooth potential assumed as the ratchet pinning potential in (3).
Now let us focus on measuring the nonreciprocal transport in the superconducting state. Figure 1(d) shows the second-harmonic longitudinal resistance \(R_{2\omega}\) for \(I_{\omega}\parallel b\) and for \(I_{\omega}\parallel a\) at 230 mK. \(a\) and \(b\) are the crystal axes as defined in Fig. 1(a). A clear peak and dip are observed in \(R_{2\omega}\) for \(I_{\omega}\parallel b\). The field-asymmetric \(R_{2\omega}\) signals are in agreement with the MCA in (1) and are consistent with previous experimental results [15; 16; 24; 11]. Note that the nonlinearity of the resistance due to the transition between the normal and superconducting state is symmetric in \(B\), so it is excluded as the origin of \(R_{2\omega}\). In contrast to the case for \(I_{\omega}\parallel b\), \(R_{2\omega}\) for \(I_{\omega}\parallel a\) is dramatically suppressed. This is also consistent with the geometry of MCA, where the symmetry plane, the directions of the magnetic field and generated second-harmonic voltage are all perpendicular to each other [1]. Note that the finite signal for \(I_{\omega}\parallel a\) is due to misalignment of the electrodes to the crystal axis (see Fig. 1(b)) [19]. Below we focus on the geometry where \(I_{\omega}\parallel b\).
The top part of Fig. 2(a) displays \(R_{2\omega}\) as a function of perpendicular magnetic field measured at different temperatures. The amplitude of the signals monotonically decreases with increasing temperature. Above \(T_{c}\), \(R_{2\omega}\) is completely suppressed, indicating that the effect is related to superconductivity. The bottom part of Fig. 2(a) displays the \(R_{\omega}\) signals measured simultaneously with the \(R_{2\omega}\) signals.
Now that we have obtained \(R_{2\omega}\) and \(R_{\omega}\), we can estimate the value of the rectification coefficient \(\gamma=2R_{2\omega}/(R_{\omega}BI_{\omega})\)[1; 10; 15]. To obtain \(\gamma\), we use the values of \(R_{\omega}\) and \(R_{2\omega}\) at \(B\) where \(R_{2\omega}\) is at a peak. Figure 2(b) shows that \(\gamma\) continues to increase with decreasing temperature and reaches \(\gamma=3.1\)\(\times\)\(10^{6}\) T\({}^{-1}\) A\({}^{-1}\) at 230 mK, the lowest measurement temperature. This value is two to three orders of magnitude larger than that of other two-dimensional superconductors, such as MoS\({}_{2}\) and NbSe\({}_{2}\) as we will discuss later. In the inset of Fig. 2(b) we also plot the temperature dependence of \(\gamma\) for the bilayer sample, which shows the similar trend with slightly smaller amplitudes than those of the 4 ML sample.
So far, several mechanisms have been proposed to explain the MCA in the superconducting state [10; 14]. The temperature dependence of the signals and the direction of the applied magnetic field are clues with which to determine the mechanism. For example, paraconductivity is one of the mechanisms proposed as an origin of MCA under an in-plane magnetic field. Since it is relevant to thermal fluctuations of the superconducting order parameter, the nonreciprocal signal is slightly enhanced above \(T_{c}\) and suppressed much below \(T_{c}\). On the other hand, the ratchet-like motion of the magnetic vortices enhances the MCA below \(T_{c}\) under a perpendicular magnetic field [10]. In the mixed state of type-II superconductors, magnetic fluxes penetrate the superconductor, and they are usually trapped by pinning potentials induced by disorder. External current can drive the magnetic fluxes through the Lorentz force as schematically shown in the left image of Fig. 2(c), if it is large enough to overcome the pinning potential [25; 26]. In superconductors with broken inversion symmetry, the asymmetry of the crystal structure locally affects the shape of the pinning potentials, making them asymmetric [27; 20]. In this case, the magnetic vortices can exhibit ratchet-like motion, where the leftward and rightward motion of the vortex is not equivalent [4; 5]. This asymmetry provides a source for nonreciprocal transport [4; 6; 7; 8; 10; 11; 12; 16]. Increasing \(\gamma\) with decreasing temperature is consistent with the ratchet-like motion of the magnetic vortices as the origin of the MCA [10]. Such a temperature dependence arises because thermal fluctuations of the magnetic vortices inside the pinning potential, which disturb the ratchet motion, are suppressed with decreasing temperature, and also the coherence length, which determines the diameter of the vortex, becomes smaller, making the vortex more sensitive to the pinning potentials.
In Fig. 2(b), we plot the theoretical fit to the experimental data based on the following theoretical expression assuming the ratchet-like motion of magnetic vortices as the origin of the MCA [10; 17; 19]:
\[\gamma=\frac{\phi_{0}^{*}\beta\ell}{WB}\frac{g_{2}(\beta U)}{g_{1}(\beta U)}, \tag{3}\]
where \(W\) is the width of the sample, \(\phi_{0}^{*}=h/2|e|\) is the flux quantum and \(\beta=1/k_{B}T\) is the inverse temperature. \(\ell\) and \(U\) are the mean periodicity and the height of the pinning potential for a vortex, respectively. We take the simple potential shape shown in the right figure of Fig. 2(c), where the dimensionless parameter \(f\) controls the asymmetry of the potential. \(g_{1}\) and \(g_{2}\) are dimensionless functions determined from the linear- and second-order responses. The ratio is given by \(\frac{g_{2}(\beta U)}{g_{1}(\beta U)}\sim\frac{f(\beta U)}{180}\) for a moving vortex regime with a small ratchet potential. The fits follow the experimental data qualitatively as shown in Fig. 2(b), which supports the ratchet-like motion of magnetic vortices as the dominant mechanism for the giant MCA in this system. There are two fitting parameters, and we emphasize that the fits shown in Fig. 2(b) are obtained by using the same fitting parameters both for the 4 ML and bilayer sample, which also corroborates the validity of our theoretical model (see SM for more details [19]). Note that the vortex picture may be justified specifically in the intermediate temperature range below \(T_{c}\). At higher temperatures, the vortex mechanism should be replaced by superconducting fluctuation and normal contributions [15]. The origin of the deviation at lower temperatures can be explained by quantum effects of the ratchet-like motion of vortices [28], where quantum tunneling through the ratchet potential suppresses the MCA [11]. Nonetheless, the monotonic enhancement of \(\gamma\) with decreasing temperature represents the advantage of \(T_{\text{d}}\)-MoTe\({}_{2}\) compared to other high-\(\gamma\) noncentrosymmetric superconductors with paraconductivity-based nonreciprocal transport, because even larger \(\gamma\) is expected at lower temperatures, and the temperature range for large \(\gamma\) is much broader [24].
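As a sanity check of the temperature dependence implied by Eq. (3), the short Python sketch below evaluates \(\gamma(T)\) in the moving-vortex, small-ratchet-potential limit, reading the quoted ratio as \(g_{2}/g_{1}\approx f\beta U/180\). All parameter values (potential periodicity, barrier, sample width, field, and asymmetry) are hypothetical placeholders, not the fit parameters of the actual devices.

```python
import numpy as np

h, e, kB = 6.626e-34, 1.602e-19, 1.381e-23
phi0 = h / (2 * e)            # flux quantum h/2|e| (Wb)

# Hypothetical parameters chosen only to illustrate the trend of Eq. (3).
ell = 50e-9                   # mean periodicity of the pinning potential (m)
W = 2e-6                      # sample width (m)
B = 0.05                      # magnetic field (T)
U = 1.0 * kB                  # pinning barrier, here U/k_B = 1 K
f = 0.5                       # dimensionless asymmetry of the sawtooth potential

def gamma_model(T):
    """gamma = (phi0* beta ell / (W B)) * g2/g1, with g2/g1 ~ f*beta*U/180."""
    beta = 1.0 / (kB * T)
    return (phi0 * beta * ell / (W * B)) * (f * beta * U / 180.0)

for T in (0.3, 0.5, 1.0, 2.0):
    print(f"T = {T:.1f} K  ->  gamma ~ {gamma_model(T):.1e} 1/(T*A)")
# gamma increases monotonically as T decreases, mirroring the measured trend.
```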
We next discuss the amplitude of \(\gamma\). Superconductors with trigonal symmetry are often used in MCA measurements, and \(\gamma\) = 8.0 \(\times\) 10\({}^{3}\) T\({}^{-1}\) A\({}^{-1}\) and 3.4 \(\times\) 10\({}^{4}\) T\({}^{-1}\) A\({}^{-1}\) have been obtained for MoS\({}_{2}\) and NbSe\({}_{2}\), respectively [15; 16]. These values are two or three orders of magnitude smaller than the value of \(\gamma\) = 3.1 \(\times\) 10\({}^{6}\) T\({}^{-1}\) A\({}^{-1}\) obtained in our study. The only value comparable to ours reported in the previous studies is \(\gamma\) = 3.2 \(\times\) 10\({}^{6}\) T\({}^{-1}\) A\({}^{-1}\) from a SrTiO\({}_{3}\) Rashba superconductor under an in-plane magnetic field [24]. As for nonsuperconducting material with broken inversion symmetry, (Bi\({}_{1-x}\)Sb\({}_{x}\))\({}_{2}\)Te\({}_{3}\) (BST) topological nanowires provide \(\gamma\sim 1.0\times 10^{5}\) T\({}^{-1}\) A\({}^{-1}\)[3]. Therefore, the value obtained in our study is one of the largest reported so far [11; 12; 15; 16; 17]. We attribute this gigantic enhancement in \(\gamma\) to the reduced symmetry of \(T_{\rm d}\)-MoTe\({}_{2}\) compared with other noncentrosymmetric superconductors employed in the previous studies. In comparison with other two-dimensional superconductors with trigonal symmetry, \(T_{\rm d}\)-MoTe\({}_{2}\) has reduced symmetry with only one mirror plane for thin layers. This reduced symmetry affects the asymmetry of the pinning potential. Since the symmetry of the pinning potential is crucial for the vortex dynamics, as reported previously [8; 9; 29; 30], the lower symmetry in the pinning potentials may generate larger nonreciprocal signals.
Finally, we demonstrate the gate modulation of the MCA for the bilayer \(T_{\rm d}\)-MoTe\({}_{2}\). While gate control of the MCA in the normal state has been studied in a BST topological nanowire [3] and at the LaTiO\({}_{3}\)/SrTiO\({}_{3}\) interface [31], it has not been reported yet in superconductors. The primary reason for this is that the concentration of charge carriers in a superconductor is typically high, making it challenging to employ a conventional solid gate to regulate superconducting characteristics due to the electric field screening on the nanometer scale within the material. We can overcome this problem by thinning down \(T_{\rm d}\)-MoTe\({}_{2}\) to a thickness comparable to the screening length [32; 33]. Figure 3(a) displays the gate dependence of \(T_{c}\) obtained from the bilayer sample. Here the gate voltage (\(V_{g}\)) is applied through a h-BN (34 nm in thickness) as a gate insulator. \(T_{c}\) is successfully modulated by \(V_{g}\), and at \(V_{g}\) = 8 V it is larger by around 20 % compared with at \(V_{g}\) = \(-\)8 V. In addition to the variation of \(T_{c}\), the MCA signals are also largely modulated by \(V_{g}\) (Fig. 3(b)). Figure 3(c) plots \(\gamma\) as a function of \(V_{g}\). Here, \(\gamma\) varies with \(V_{g}\), and \(\gamma\) at \(V_{g}\) = \(-\)8 V is almost three times larger than at \(V_{g}\) = 8 V. Note that this large variation is enabled by modulation of not only \(R_{2\omega}\), but also \(R_{\omega}\) and \(B\).
The gate voltage can modulate some parameters relevant to superconductivity, such as \(T_{c}\), \(B_{c2}\), the magnetic penetration length \(\lambda\) and the coherence length \(\xi\). Interestingly, we found that superconductivity becomes more robust as the gate voltage is made more positive. This means that \(B_{c2}\) becomes larger and \(\xi\) smaller for more positive \(V_{g}\). This trend is counterintuitive when we consider the variation in \(R_{2\omega}\) with \(V_{g}\), because a larger \(B_{c2}\) provides larger \(U_{0}\), and a smaller \(\xi\) should be more advantageous for the ratchet-like motion. By contrast, \(\lambda\), which quantifies the scale for the vortex-vortex interaction, increases as \(V_{g}\) decreases, concomitantly with the decline in superconductivity. Since the importance of
the vortex-vortex interaction for the ratchet-like motion has already been discussed in the context of the artificial ratchet potentials [29; 34], it may also play a role in intrinsic ratchet potentials in noncentrosymmetric superconductors. Although further theoretical studies are required to fully understand our experimental data, the demonstration of gate control of nonreciprocal transport illustrates the rich functionality of superconducting nonreciprocal devices for future applications and also provides key insights into exploring detailed mechanisms behind the ratchet-like motion of magnetic vortices.

Figure 3: (a) Gate voltage (\(V_{g}\)) dependence of \(T_{c}\) for the bilayer sample taken at 230 mK. (b) \(R_{2\omega}\) as a function of \(B\) at different \(V_{g}\) at 230 mK. (c) \(\gamma\) as a function of \(V_{g}\). Inset: Gate voltage dependence of \(R_{2\omega}\).
In conclusion, we have shown giant superconducting nonreciprocal transport (MCA) in thin samples of the noncentrosymmetric superconductor \(T_{\text{d}}\)-MoTe\({}_{2}\). We obtain \(\gamma\) = 3.1 \(\times\) 10\({}^{6}\) T\({}^{-1}\) A\({}^{-1}\) at 230 mK, one of the largest values of \(\gamma\) recorded so far. The temperature dependence of \(\gamma\) supports the ratchet-like motion of magnetic vortices as the origin of the nonreciprocal transport. The giant nonreciprocal signal is likely due to the reduced symmetry of the crystal structure of \(T_{\text{d}}\)-MoTe\({}_{2}\). We have also demonstrated gate modulation of the MCA in the superconducting state. In bilayer \(T_{\text{d}}\)-MoTe\({}_{2}\), we obtain a threefold modulation of \(\gamma\) using a conventional solid gate. The simultaneous demonstration of the gigantic MCA and its gate modulation in the superconducting state reveals that \(T_{\text{d}}\)-MoTe\({}_{2}\) is a promising candidate for realizing electrically tunable, efficient superconducting rectification devices.
We gratefully acknowledge M. Imai, S. Sasaki, H. Murofushi and S. Wang for their support in the experiments. This project is financially supported in part by JSPS KAKENHI (grant nos. 21H01022, 21H04652, 21K18181, 21H05236, 20H00354 and 19H05790).
|
2301.11771 | Phase separation of passive particles in active liquids | The transport properties of colloidal particles in active liquids have been
studied extensively. It has led to a deeper understanding of the interactions
between passive and active particles. However, the phase behavior of colloidal
particles in active media has received little attention. Here, we present a
combined experimental and numerical investigation of passive colloids dispersed
in suspensions of active particles. Our study reveals dynamic clustering of
colloids in active media due to an interplay of active noise and an attractive
effective potential between the colloids. The size-ratio of colloidal particles
to the bacteria sets the strength of the interaction. As the relative size of
the colloids increases, the effective potential becomes stronger and the
average size of the clusters grows. The simulations reveal a macroscopic phase
separation of passive colloids at sufficiently large size-ratios. We will
present the role of density fluctuations and hydrodynamic interactions in the
emergence of effective interactions. | Pragya Kushwaha, Vivek Semwal, Sayan Maity, Shraddha Mishra, Vijayakumar Chikkadi | 2023-01-27T15:10:48Z | http://arxiv.org/abs/2301.11771v1 | # Phase separation of passive particles in active liquids
###### Abstract
The transport properties of colloidal particles in active liquids have been studied extensively. It has led to a deeper understanding of the interactions between passive and active particles. However, the phase behavior of colloidal particles in active media has received little attention. Here, we present a combined experimental and numerical investigation of passive colloids dispersed in suspensions of active particles. Our study reveals dynamic clustering of colloids in active media due to an interplay of active noise and an attractive effective potential between the colloids. The size-ratio of colloidal particles to the bacteria sets the strength of the interaction. As the relative size of the colloids increases, the effective potential becomes stronger and the average size of the clusters grows. The simulations reveal a macroscopic phase separation of passive colloids at sufficiently large size-ratios. We will present the role of density fluctuations and hydrodynamic interactions in the emergence of effective interactions.
The Brownian colloids self-assemble to display a wide variety of phases depending on their shapes and interactions [1; 2; 3]. Their equilibrium phase behavior is governed by the principles of equilibrium statistical mechanics [4; 5]. However, our understanding of the collective behavior of colloids far from equilibrium remains a challenge [6; 7]. In recent years, active matter has emerged as a new paradigm for understanding nonequilibrium systems [8; 9; 10; 11]. They are known to display many interesting phenomena such as flocking [12; 13], motility induced phase separation [14; 15; 16], active turbulence [17], superfluidity [18], that are absent in equilibrium systems. Therefore, active matter offers novel approaches to colloidal assembly in systems far from equilibrium. In this letter, we have investigated the phase behavior of colloidal particles dispersed in active liquids.
Wu and Libchaber [19] did seminal experiments on the active transport of colloidal particles in suspensions of bacteria. They discovered anomalous diffusion and a large effective diffusion constant, when compared to diffusion at equilibrium, which inspired a slew of theoretical investigations and detailed experiments [20; 21; 22; 23; 24; 25; 26; 27]. The subsequent efforts have elucidated how enhanced diffusion arises due to an interplay of entrainment of colloids by bacteria, far-field hydrodynamic interactions, direct collisions, and the relative size of bacteria and colloid [23; 24; 25]. Further, the effective interaction between a pair of passive particles in active media has been the focus of several investigations. It has been predicted to be attractive, repulsive, and long-ranged, depending on the geometry of passive particles, the activity of active species, and their density [28; 29; 30; 31; 32; 33; 34; 35; 36]. This understanding has opened new routes to colloidal assembly mediated by active fluids [7; 39]. The phase behavior of active-passive mixtures is a topic of recent interest [37; 38; 39; 40; 41; 42; 43; 45; 47], where experimental investigations are scarce [6; 7]. On the one hand, theory and simulations at high Peclet numbers have shown that homogeneous mixtures of active and passive particles are unstable. The underlying physics is similar to motility induced phase separation (MIPS) [40; 42]. On the other hand, in the diffusive limit, theory and simulations of nonequilibrium binary mixtures with different diffusivities and temperatures reveal phase separation [41; 45; 46] due to spinodal-like instability. Surprisingly, there is little known about mixtures at moderate Peclet numbers. This is the range where most of the active matter experiments involving living matter or synthetic systems, such as diffusio-phoretic colloids, fall. A recent study of colloids in active suspensions of bacteria reports dynamical clustering and absence of phase separation at moderate Peclet numbers [6]. The conclusions were based on the phase diagram obtained from variations of Peclet number and rotation rate of active particles. In contrast, earlier numerical studies have shown a macroscopic phase separation [43]. Therefore, it is not clear whether active-passive mixtures show macroscopic phase separation at moderate Peclet numbers.
This letter presents a combined experimental and numerical study of the phase behavior of colloidal particles in active liquids. The experiments were performed using colloids in bacteria suspensions, and simulations of active-passive mixtures were realized using Brownian dynamics [48; 49; 50]. Earlier simulations of active-passive mixtures, by one of the authors of this letter, had shown a significant influence of the size-ratio of passive to active particles on their phase diagram [43]. Motivated by this study, our experiments were performed over a range of densities and sizes of passive colloidal particles in active suspensions of bacteria. The colloids display dynamic clustering due to an interplay of activity and an attractive effective potential. However, the average size of the clusters increases with the size of colloidal particles, suggesting an enhanced interaction between the particles. Using simulations, we confirm an attractive effective potential between passive particles in an active medium. The strength of the interaction is shown to grow with an increasing size-ratio. When the size-ratio is sufficiently large, the interactions are strong enough to
drive the phase separation of passive colloids. The origin of the effective potential in simulations appears to be related to long-ranged density fluctuations of active particles. In contrast, the correlations of density fluctuations of bacteria decay rapidly in experiments. These results indicate a hydrodynamic origin of effective interactions between the colloids in our experiments. Thus, shedding new light on the phase behavior of passive particles in active media.
The active suspensions were prepared using E. coli cells (U5/41 type strain). The cells were cultured using well-established protocols in the literature [18; 44]. They were suspended in a motility medium to obtain the desired concentrations. Details of the method are given in the supplementary section. The density of bacteria in our experiments was well below the density threshold for the onset of collective motion. The average speed and average size of the bacterial cells were estimated to be \(v=33.84\pm 9.98\;\mu m/s\) [supplementary Fig. S1(a)] and \(l=2.68\pm 0.86\;\mu m\) [supplementary Fig. S1(b)], respectively. Their rotational diffusion time scale was estimated to be \(\tau_{r}=1.67\;s\) [supplementary Fig. S1(c)]. The Peclet number, which is defined as \(Pe=\tau_{r}v/l\), turns out to be \(Pe\sim 21\) for our system. The phase behavior of colloidal particles in suspensions of bacteria was investigated by varying the size and density of the beads at a constant density of bacteria. The diameters of the particles used in the experiments were \(\sigma=7\mu m,\;10\mu m\) and \(15\mu m\), and their density was varied from \(\phi\sim 0.1-0.4\), where \(\phi=N\pi\sigma^{2}/(4A)\) is the area fraction, \(N\) is the number of colloidal particles in the field of view of area \(A\). The size-ratio \(S=\sigma/l\) is defined as the ratio of the diameter of the colloids to the length of the bacteria.
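As a quick consistency check, the quoted Peclet number and the colloid area fraction follow directly from the measured quantities above; the short script below is our own illustration, and the particle count \(N\) and field of view \(A\) in it are hypothetical values.

```python
# Consistency check of the quoted Peclet number and of the colloid area
# fraction; the particle count N and field of view A below are hypothetical.
import math

v, l, tau_r = 33.84, 2.68, 1.67        # um/s, um, s (measured values above)
print(f"Pe = {tau_r * v / l:.0f}")     # ~21, as quoted in the text

N, sigma, A = 200, 7.0, 350.0**2       # [-], um, um^2 (illustrative only)
print(f"phi = {N * math.pi * sigma**2 / (4 * A):.2f}")
```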
The simulations were performed using a binary mixture of active Brownian particles (ABP) with \(N_{a}\) small active particles of radius \(a_{a}\) and \(N_{p}\) big passive particles of radius \(a_{p}\) (\(a_{p}>a_{a}\)) moving on a two-dimensional frictional substrate. The active particles are associated with a self-propulsion speed \(v\) and an orientation unit vector \(\hat{v}_{i}\). The equations of motion and other simulation details are given in the supplementary section. We simulate the system in a square box of size \(l_{box}\times l_{box}\), with periodic boundary conditions. The system is defined by the area fractions \(\phi_{a}=N_{a}\pi a_{a}^{2}/l_{box}^{2}\) and \(\phi_{p}=N_{p}\pi a_{p}^{2}/l_{box}^{2}\) of the active and passive particles respectively, the activity \(v\) of the active particles and the size-ratio (\(S=a_{p}/a_{a}\)) defined as the ratio of the radius of a passive particle to the radius of an active particle. We start with a random homogeneous distribution of active and passive particles in the box and with random directions for the velocity of the active particles. Eqs. (S1-S3) are updated for all particles, and one simulation step corresponds to a single update of all the particles. The simulations do not include the hydrodynamic interactions that are present in the experiments. The effect of hydrodynamic interactions could be included using coarse-grained approaches similar to [51].
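For readers who want to reproduce the qualitative behaviour, the sketch below shows a generic overdamped Euler-Maruyama update for such an active-passive Brownian mixture. It is only a minimal illustration: the authors' actual equations of motion (Eqs. S1-S3), including the torque on the ABPs, size-dependent mobilities and the precise noise strengths, are given in their supplementary material and may differ in detail.

```python
# Generic overdamped Euler-Maruyama step for an active-passive Brownian
# mixture; a minimal sketch only, not the authors' Eqs. (S1-S3).
import numpy as np

rng = np.random.default_rng(0)

def repulsive_forces(pos, radii, box, eps=1.0):
    """Short-range WCA-like repulsion, O(N^2) for brevity."""
    f = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]
        d -= box * np.round(d / box)                 # periodic boundaries
        r = np.linalg.norm(d, axis=1)
        sig = radii + radii[i]
        m = (r > 0) & (r < 2**(1 / 6) * sig)
        rr, ss, dd = r[m], sig[m], d[m]
        mag = 24 * eps * (2 * (ss / rr)**12 - (ss / rr)**6) / rr
        f[i] -= (mag[:, None] * dd / rr[:, None]).sum(axis=0)
    return f

def step(pos, theta, is_active, radii, box, dt=1e-4, v0=1.0, mu=1.0,
         D_t=0.01, D_r=1.0):
    """One update of positions and ABP orientations (passives have no v0)."""
    f = repulsive_forces(pos, radii, box)
    heading = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    pos += (mu * f + v0 * heading * is_active[:, None]) * dt \
           + np.sqrt(2 * D_t * dt) * rng.normal(size=pos.shape)
    theta += np.sqrt(2 * D_r * dt) * rng.normal(size=theta.shape)
    return pos % box, theta
```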
The colloids used in our experiments are bigger than \(5\;\mu m\), so they are non-Brownian particles. However, they diffuse in suspensions of bacteria due to active fluctuations, with a characteristic super-diffusive motion on short time scales and a diffusive motion on long time scales. To investigate their collective behavior in active suspensions, we first analyze their pair correlation function \(g(r)\), which is shown in the main panel of Fig. 1 at an area fraction of \(\phi\sim 0.1\) and size ratios \(S\sim 2.5-5.5\). The normalized \(g(r)\) for different size-ratios is shifted along the y-axis for clarity. What is prominent is the presence of a sharp peak at \(r=\sigma\), and additional peaks develop at \(r=1.7\sigma\) and \(r=2\sigma\) with increasing size ratio. The peak at \(2\sigma\) indicates a second shell of neighbors, and the one at \(1.7\sigma\) is a signature of hexagonal ordering in the cluster. These observations are evident in the bright-field images presented in the insets of Fig. 1. Larger size ratios lead to larger clusters with enhanced order. These images are reminiscent of clustering in systems of purely active particles [15]. However, the clusters of passive particles in our experiments break and form much more rapidly. A real-time video of dynamic cluster formation for \(\phi\sim 0.10\) and \(S\sim 2.5\) is presented in the supplementary Video SV1. Recent simulations have reported similar dynamic clustering and traveling interfaces of active-passive particles that are not observed in our current study [40; 42]. One of the main differences between our experiments and these simulations is the much larger Peclet numbers used in the simulations. Further, as reported in earlier investigations, such clustering of passive particles is a manifestation of an attractive effective potential between them induced by active fluctuations [28].
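A minimal recipe for estimating \(g(r)\) from the tracked colloid centroids is sketched below; it is our own illustration, with our choice of bin width and cutoff, and it ignores edge corrections at the field-of-view boundary rather than reproducing the authors' exact analysis pipeline.

```python
# Sketch of a 2D pair-correlation-function estimate from colloid centroids,
# as used for Fig. 1 (bin width and cutoff are our choices; edge effects at
# the field-of-view boundary are ignored in this simple estimator).
import numpy as np

def pair_correlation(pos, field_of_view_area, dr=0.5, r_max=60.0):
    """pos: (N, 2) particle centres [um]."""
    n = len(pos)
    rho = n / field_of_view_area                  # mean number density
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]                  # each pair counted once
        counts += np.histogram(np.hypot(d[:, 0], d[:, 1]), bins=edges)[0]
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    shell_area = 2.0 * np.pi * r_mid * dr         # ideal-gas expectation
    return r_mid, 2.0 * counts / (n * rho * shell_area)
```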
Figure 1: The structure of passive particles in active suspensions. Main panel: The pair correlation function \(g(r)\) for \(\phi\sim 0.10\) and \(S\sim 2.5,\;3.5,\) and \(5.5\). The \(g(r)\) curves are shifted along the \(y-axis\) for clarity. Insets: The bright field images of particles at \(\phi\sim 0.10\) and size ratios \(S\sim 2.5\) (left) and \(S\sim 5.5\) (right), respectively. The scale bar in the images is \(50\mu m\).

We next turn our attention to cluster size distribution (CSD), \(p(n)\), which is a count of clusters of \(n\) particles [43; 52]. The clusters in our experiments were determined by setting a distance criterion of \(r_{c}\leq 1.1\sigma\) to identify pairs of particles as neighbors. This was set based on the
position of the first peak of \(g(r)\) in Fig.1, and to account for small polydispersity (\(<5\%\)) in the size distribution of colloidal particles. The results of our analysis are presented in Figs. 2a & 2b. The main panel in Fig.2a shows CSD for varying size ratios of \(S\sim 2.5,\ 3.5,\ \text{and}\ 5.5\) at a density of \(\phi\sim 0.1\). For small size ratios \(S<5\), \(p(n)\) has an exponential form \(exp(-n/n_{0})\) as observed in the equilibrium case [50]. The clustering is weak at these size ratios, however, for \(S>5\) the \(p(n)\) displays a power-law decay with an exponential cut-off at large \(n\), i.e., it is best described by \(p(n)/p(1)\sim 1/n^{\alpha}\ exp(-n/n_{0})\). The fits of this form to our data are shown in the figure using dashed lines. These results indicate that the characteristic size of clusters grows with increasing size ratio. The growth of clusters is dramatic at larger area fractions, the inset of Fig.2a shows cluster distribution at \(\phi\sim 0.3\).
We further elucidate the clustering of colloids by computing the average cluster size using the expression \(<n>=\sum\ n\ p(n)\), which is presented in Fig. 2b, where the curves with different symbols correspond to different area fractions ranging from \(\phi\sim 0.1-0.4\). These measurements were made in the steady state, where the mean cluster size fluctuates around a constant value. These data are provided in Fig. S2(a-c) for various size ratios and area fractions, for over 5000 frames or 500 \(s\). What is clear from Fig. 2b is that increasing the size-ratio, or the relative size of the colloids, leads to larger cluster sizes. This suggests that the effective potential between the colloids becomes stronger with an increasing size ratio. One can intuitively understand the underlying physics by considering the interaction between an isolated colloidal particle and a swimmer. When the size of a particle is small, a bacterium entrains the particle to larger distances before changing its direction of motion. However, when the particle is large, the entrainment distance is small, and the scattering angle of the swimmer is large [25]. This indicates that the bacteria can suppress cluster formation when the colloidal particles are smaller. What is not clear from our experiments is whether larger size-ratios lead to a macroscopic phase separation in our system. To understand this aspect, we turn to numerical simulations that allow a detailed exploration of parameter space.
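The cluster analysis described above can be reproduced with a short script of the following form; it is a sketch under the stated distance criterion, and the library choices (scipy) and function names are ours.

```python
# Sketch of the cluster analysis: particles closer than r_c = 1.1*sigma are
# linked, connected components define clusters, and <n> = sum_n n p(n).
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def cluster_sizes(pos, sigma, rc_factor=1.1):
    n = len(pos)
    pairs = np.array(list(cKDTree(pos).query_pairs(rc_factor * sigma)))
    if len(pairs) == 0:
        return np.ones(n, dtype=int)            # all particles are monomers
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels)                  # size of each cluster

def mean_cluster_size(sizes):
    ns, counts = np.unique(sizes, return_counts=True)
    p = counts / counts.sum()                   # cluster-size distribution p(n)
    return float(np.sum(ns * p))
```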
The first quantity we have calculated in the simulations is the effective potential between two passive particles in the medium of ABPs with torque. In order to calculate the effective potential between two passive particles, we choose \(N_{p}=2\) at positions \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\), respectively, in a system of ABPs with \(N_{a}=1800\). We keep \(\mathbf{r}_{1}\) fixed and slowly vary \(\mathbf{r}_{2}\) in small steps of \(\Delta x=0.5a_{a}\) starting from the zero surface to surface distance between two passive particles. The cartoon of the system simulated for the force calculation for a fixed \(r\) is shown in Fig. S3 (**SM**). In the figure, ABPs are shown in red and passive particles in blue for \(S=8\). For resolution, only a part of the system near the two passive particles is shown. The active particles' positions and orientations are updated according to the Eqns. (S1 and S2). For each configuration at a given distance between two passive particles, the system is allowed to reach the steady state. Further, we use the steady state configuration to calculate the force \(\mathcal{F}^{S}(r)\) between two-passive particles at a surface to surface separation \(r\), such that \(\mathcal{F}^{S}(r)=\mathbf{F}_{12}(r)+\sum_{i=1}^{N_{a}}\mathbf{F}_{1i}(r)\). Here \(\mathbf{F}_{12}(r)\) is the force due to passive particle \(2^{nd}\) on \(1^{st}\), and \(\sum_{i=1}^{N_{a}}\mathbf{F}_{1i}(r)\) represents the sum of all the forces due to active particles on \(1^{st}\) passive particle for a given configuration of two passive particles at separation \(r\). The potential is then calculated by integrating the force over the distance \(U(r)=\int_{-\infty}^{r}\mathcal{F}^{S}(r)dr\)[53; 54; 55]. Here we set the lower limit as one-fourth of the box-length. The results are averaged over 30 independent realizations.
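A minimal sketch of the final integration step, assuming the realisation-averaged force \(\mathcal{F}^{S}(r)\) has already been tabulated on a grid of separations, is given below. The sign convention follows the text; the quarter-box lower limit is handled here simply by shifting the potential to zero at the largest separation, which is our own simplification.

```python
# Cumulative trapezoidal integration of the realisation-averaged force to
# obtain U(r); the reference point is fixed by shifting U to zero at the
# largest separation (the paper uses a quarter of the box length instead).
import numpy as np

def potential_from_force(r, F):
    """r: increasing separations; F: averaged force F^S(r) on the same grid."""
    U = np.concatenate(([0.0],
                        np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(r))))
    return U - U[-1]
```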
We calculated the effective potentials \(U(r)\) for \(Pe=25\) (comparable to the experimental system) and four size-ratios \(S=3\), 5, 8 and 10; the corresponding size-ratios in the experimental system are \(S\sim 2.5\) to \(5.5\). In the main panel of Fig. 3(a) we show \(U(r)\) vs. \(r\) for \(S=3\), 5, 8 and 10. The distance is normalised by the radius of the active particles, which is kept fixed at 0.1. Negative values of the potential correspond to attraction and positive values to repulsion. For all parameters the potential approaches zero at large distances and is negative at intermediate distances. The depth of the potential well increases with increasing \(S\). The inset shows the effective potential with the distance \(r\) scaled by the size of the passive particles. Surprisingly, the minima of the potentials for the size ratios \(S=5\), 8 and 10 fall at \(r/a_{p}=1\), which implies that the length scale characterizing the range of the interaction potential is set by the size of the passive particles. We further investigate the origin of the long-range interaction by considering a single passive particle in the center of our system, as shown in Fig. S4. It is evident that the passive particle disturbs the density field of active particles, leading to clustering around it. The main panel of Fig. 3(b) shows the normalised density correlation \(C(r)=(\langle\rho(0)\rho(r)\rangle-\langle\rho(r)\rangle^{2})/(\langle\rho(r)^{2}\rangle-\langle\rho(r)\rangle^{2})\) of
active particles calculated from the surface of the big passive particle for four different size ratios \(S=3,\,5,\,8\) and \(10\). The inset of Fig. 3(b) shows the typical size of clusters \(L(S)\) around a single passive particle in the center of the box for different size ratios \(S\). \(L(S)\) is measured in units of the ABP size and clearly increases with increasing \(S\). The number fluctuations of active particles around an isolated passive particle yield similar conclusions. Fig. S5 (SM) shows the number fluctuations for three different sizes of the passive particle, i.e., for three different size ratios \(S=3,\,5\) and \(8\). The details of the calculations are given in the supplementary material. In all cases the data follow a power law \(\Delta N\simeq N^{\alpha}\), with \(\alpha\simeq 0.7\) for moderate \(N\) for all \(S\), and start to deviate for large \(N\). The deviation appears at relatively larger \(N\) on increasing the size ratio. Hence, increasing the size of the passive particle extends the range of the density fluctuations of the ABPs. These results establish that the density fluctuations play a central role in the emergence of long-range effective attractive interactions between passive particles in our simulations.

Figure 2: Cluster statistics of passive particles. (a) Cluster size distribution in the main panel is shown for different size-ratios \(S\sim 2.5,\ 3.5,\ \text{and}\ 5.5\) at a density of \(\phi\sim 0.10\). The symbols distinguish different size ratios. The inset shows the CSD plot for the same size ratios at \(\phi\sim 0.3\). (b) The average cluster size \(<n>\) for varying \(S\). The curves with different symbols correspond to different particle densities, ranging from \(\phi\sim 0.1-0.4\).
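Following up on the number-fluctuation analysis above, a sketch of how \(\Delta N\) and the exponent \(\alpha\) can be extracted is given below; the square counting windows centred on the passive particle and the fit range are our own assumptions.

```python
# Sketch of the number-fluctuation analysis: count active particles inside
# square windows of growing side L centred on the passive particle, average
# over snapshots, and extract alpha from Delta_N ~ <N>^alpha.
import numpy as np

def number_fluctuations(snapshots, centre, window_sides):
    """snapshots: list of (N_a, 2) active-particle positions."""
    means, stds = [], []
    for L in window_sides:
        counts = [np.sum(np.all(np.abs(pos - centre) < L / 2.0, axis=1))
                  for pos in snapshots]
        means.append(np.mean(counts))
        stds.append(np.std(counts))
    means, stds = np.array(means), np.array(stds)
    ok = (means > 0) & (stds > 0)
    alpha = np.polyfit(np.log(means[ok]), np.log(stds[ok]), 1)[0]
    return means, stds, alpha
```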
To elucidate the effect of this effective potential, full microscopic simulations of mixtures of active and passive particles were performed using Eqs. (S1-S3). We simulated the system for \(Pe=25\) and size ratios \(S=3,5\) and \(8\), which are close to the experimental values. In the bottom panel of Fig. 3 the steady-state snapshots of passive (blue, larger) and active (red, smaller) particles are shown for size ratios \(S=3,\,5\) and \(8\), respectively. Clusters with moderate to strong ordering are found on increasing \(S\). For small \(S=3\), clusters are present but without strong local hexagonal ordering, whereas the ordering and clustering are enhanced as we increase \(S\). We also calculated the percentage of passive particles participating in the largest cluster for different size ratios; it increases from \(35\%\) to \(67\%\) as the size ratio increases from \(3\) to \(8\) (data not shown). Hence, for large size ratios the passive particles show macroscopic phase separation.
A similar examination of correlations of density fluctuations of bacteria in experiments reveals that they are suppressed, which is evident from \(C(r)\) in Fig. S6. The clustering of colloidal particles appears to arise from their hydrodynamic interactions. An earlier numerical study of active-passive matter with pusher-type swimmers at dilute concentrations had shown hydrodynamic interactions to stabilize colloidal clusters [59]. In addition, a recent theoretical model of active gels shows a long-ranged attractive effective potential between colloids due to hydrodynamic effects [58]. Considering these studies, hydrodynamics is likely to promote the formation of colloidal clusters.
Our investigations conclude that the interplay of the effective potential and active noise determines the phase behavior of colloidal particles in active liquids. The strength of the effective potential is set by the size ratio of passive particles to active ones; larger size ratios lead to stronger interactions. The simulations reveal a long-ranged effective potential extending over several active-particle diameters. It appears to emerge from the long-ranged density fluctuations of active particles in the system. When the size-ratio is small, the passive particles display dynamic clusters that form and break rapidly. However, at sufficiently large size-ratios the effective potential is strong enough to lead to phase separation of the passive particles. These are novel features of active-passive mixtures, absent in the equilibrium analog of colloid-polymer mixtures, where the effective potential is short-ranged. The density fluctuations of bacteria are, however, suppressed in the experiments. Further investigation is needed to understand the role of hydrodynamic interactions in the effective potential between colloids in our experiments with active suspensions.
We thank Chaitanya Athale, Apratim Chatterji, Thomas Pucadyil, Sunish Radhakrishnan, Rajesh Singh, and Ganesh Subramanian for helpful discussions and support. We thank Madan Rao for drawing our attention to [58], and Kumar Gourav for assistance in the initial stages of experiments. V.C. acknowledges financial support from IISER Pune and DST/SERB under the project grant CRG/2021/007824. P.K. is supported by CSIR-UGC fellowship 1353. V.S. and S. M. thank I.I.T. (BHU) Varanasi computational facility. V.S. thanks DST INPIRE (INDIA) for the research fellowship. S.M. thanks DST, SERB (INDIA), Project No. ECR/2017/000659 for partial financial support.
Figure 3: Top left figure: The effective potential between a pair,of colloidal particles. The main panel shows the plot of the effective potential \(U(r)\) vs. distance \(r\) for \(Pe=25\) and size-ratios \(S=3,\,5,\,\,8,\,\text{and}\,\,10\). The inset shows the effective potential with the scaled distance \(r/a_{p}\). Top right figure: The main panel shows normalized correlations of density fluctuations \(C(r)\) due to a passive particle. The inset show the length scale extracted from \(C(r)\) as function of the size-ratio. The length scale is expressed in terms of active particle size. Bottom panel: Snapshots of the system obtained from the microscopic simulation: two types of particles for different size ratio \(S=3,5\) and \(8\) (left, central and right columns) at \(Pe=25\). Red particles are ABPs and blue particles are passive particles, for fixed packing fraction \(\phi=0.60\) in a system of size \(l_{box}=140a_{a}\). |
2305.03682 | Reflection of a Diffuser in a Liquid Interface | We present a novel method, based on the Saunderson corrections, to predict
the reflectance between a liquid interface and a dielectric diffuser. In this
method, the diffuse properties of the dielectric are characterized using a
single parameter, the multiple-scattering albedo, which is the same
irrespective of being in contact with air or liquid. We tested this method
using an apparatus based on a total integrating sphere capable of measuring
reflectance in both liquid and gas interfaces across various wavelengths of
light. We observed that the difference in the value of the multiple-scattering
albedo between the sphere full of liquid and empty was less than 0.9$\times
10^{-3}$, with the average difference normalized to the respective uncertainty
of only 0.7. These results confirm the reliability of our method and its
potential for use in a wide range of practical applications. | C. Silva, R. Cabrita, V. N. Solovov, P. Brás, A. Lindote, G. Pereira, M. I. Lopes | 2023-05-05T16:51:04Z | http://arxiv.org/abs/2305.03682v1 | # Reflection of a Diffuser in a Liquid Interface
###### Abstract
We present a novel method, based on the Saunderson corrections, to predict the reflectance between a liquid interface and a dielectric diffuser. In this method, the diffuse properties of the dielectric are characterized using a single parameter, the multiple-scattering albedo, which is the same irrespective of being in contact with air or liquid. We tested this method using an apparatus based on a total integrating sphere capable of measuring reflectance in both liquid and gas interfaces across various wavelengths of light. We observed that the difference in the value of the multiple-scattering albedo between the sphere full of liquid and empty was less than 0.9\(\times 10^{-3}\), with the average difference normalized to the respective uncertainty of only 0.7. These results confirm the reliability of our method and its potential for use in a wide range of practical applications.
## I Introduction
The description of the reflectance within a liquid medium is needed in many physics applications such as liquid scintillators or Cherenkov detectors and computer vision, the computer rendering of realistic images. However, obtaining accurate measurements of the optical properties of surfaces submerged in a liquid presents significant challenges compared to those in air, and as such, measurements in a liquid interface are not typically available, especially for diffuse dielectric reflectors. Diffuse reflection occurs when the light is refracted to the bulk of an inhomogeneous dielectric material. The inhomogeneities act as scatter centers in an otherwise uniform dielectric medium, causing the light to scatter multiple times before returning to the first medium.
Most of the diffuse materials look darker when wet. Two main explanations have been proposed to explain this phenomenon: i) the penetration of the liquid into porous materials reduces the contrast between the refractive index of the pore and the material, increasing the forward scattering and thus increasing the probability of absorption [1]; ii) internal reflection in the liquid layer covering the surface increases the likelihood of absorption by the surface [2]. Nonetheless, observations made directly in the liquid medium show an increase in reflectance. For example, Voss and Zhang observed that the reflectance of a plate made of Spectralon(r) increases by 2% when that plate is submerged in water [3]. Also, in particle physics detectors, which is of particular interest to us, the reflectance of polytetrafluoroethylene (PTFE) to the 178 nm xenon excimer emission light [4] increases from \(\sim\)75% to 95% when immersed in liquid xenon [5; 6]. It should be noted that the temperature of the liquid might also affect the reflectance in the latter case, which is of particular interest in particle physics detectors.
The reflection at a liquid interface is critical in designing particle detectors since many applications in this field use liquids as detection media. Examples include water Cherenkov and scintillator detectors [7; 8], organic scintillators [9], or more recently, liquefied noble gases such as xenon, argon, and helium [10; 11]. In the case of scintillation detectors, the observed optical signal is proportional to the deposited energy, making the internal reflectance an essential parameter for detector performance. For applications like dark matter or coherent elastic neutrino-nucleus scattering (CE\(\nu\)NS) [12], maximizing light collection is crucial to decrease the energy threshold and enable detection. Typically, these detectors use PTFE as an efficient reflector material, and simulating the optical properties of the light collection model requires reflectance properties of surfaces as input. However, standard reflectometers measure reflectance only in air, and measuring reflectance in liquid is complex due to uncertainties arising from light absorption, scattering, and bubble formation. Therefore, predicting reflectance in liquid based on values observed in gas can help overcome the challenges associated with liquid reflectance measurements.
This article is structured as follows: first, we present the method to describe the reflectance of a diffuser, irrespective of the interface. This method is based on the previous work of Lawrence Wolff [13] and uses the Saunderson model [14] in which the internal reflections between the diffuser and the original medium, which might be air or the liquid, are considered directly (sec. II). In this method, the optical properties of the diffuser are described using a single parameter, the multiple-scattering albedo, \(\rho\), that does not depend directly on the first medium. To test these assumptions, we built a setup based on a total integrating sphere that can be filled with different liquids (sec. III). Then, we implemented this model and the geometry of the setup in a Monte Carlo simulation based on the ANTS2 software package [15] (sec. IV) and compared these results with the equation of the sphere derived for our specific geometry. Using these simulations, we obtained the value of the throughput of the sphere for different values of the single scatter albedo. The results are presented in the sec. V. Finally, in sec. VI, we discuss these results and present an analytical method to obtain the multiple-scattering albedo
when the hemispherical reflectance of a surface is known and discuss possible expansions of the current model.
## II Modeling the diffuse reflection
Consider two dielectric media in optical contact. One of these media, further designated as _Medium 1_, is optically transparent with refractive index \(n_{1}\), while another, which we call _diffuser_ is optically inhomogeneous, meaning that the refractive index varies from place to place in the dielectric volume. We also assume that the diffuser is semi-infinite, meaning that it is thick enough that no light is transmitted through the material. The refracted light scatters multiple times in these inhomogeneities before being absorbed or returning to the boundary between the diffuser and the medium 1 (internal boundary). If the light returns to the internal boundary, it can be reflected back to the diffuser (internal reflection) or be refracted to medium 1. If it is refracted, it is part of the diffuse lobe. In the case of internal reflection, the light undergoes a multiple-scatter process in the diffuser again. This process continues until all the light is either absorbed or returns to the medium 1. The internal scattering process depends only on the material's optical characteristics, but the refractions into and from the diffuser and the internal reflections must obey the Fresnel equations [16], adding a dependence on the optical properties of the first medium as well.
In the description of the diffuse reflectance, the most common approach is to model the reflectance of these surfaces using the Lambert law, which states that diffuse materials appear equally bright independently of the viewing angle; therefore, the bidirectional reflectance intensity distribution function, \(f_{r}\)[17], of the surface is given by a constant, (\(f_{r}=\rho_{l}/\pi\)), usually called albedo of the surface. However, as discussed, both refractions add deviations to this law, especially for surfaces illuminated or observed at a large angle. Therefore, L. Wolff modified the Lambert law by introducing two factors accounting for the light that is reflected at both the entrance and exit of the diffuser [13]. In his model, \(f_{r}\) is given by the following sum of two components, specular and diffuse:
\[\begin{split} f_{r}\left(\theta_{i},\theta_{r},\phi_{i},\phi_{r}\right)&=F\left(\theta_{i};\frac{n_{2}}{n_{1}}\right)\delta\left(\theta_{i}-\theta_{r}\right)\delta\left(\phi_{i}+\phi_{r}\right)+\\ &+\frac{1}{\pi}\varrho_{d}\left[1-F\left(\theta_{i};\frac{n_{2}}{n_{1}}\right)\right]\times\\ &\times\left[1-F\left(\arcsin\left(\frac{n_{1}}{n_{2}}\sin\theta_{r}\right);\frac{n_{1}}{n_{2}}\right)\right],\end{split} \tag{1}\]
where the angles \(\theta_{i}\) and \(\theta_{r}\) are the polar angles of the incident direction of the light (subscript \(i\)) and the viewing direction (subscript \(r\)), \(\phi_{i}\) and \(\phi_{r}\) the corresponding azimuthal angles, \(n_{2}\) the average refractive index of the diffuser, and \(\varrho_{d}\) the total diffuse albedo. \(F\left(\theta;n,\alpha\right)\) corresponds to the reflectivity calculated by the Fresnel Equations (see the appx. A, eq. A1). Since the light is partially polarized when refracted to the diffuser but effectively depolarized after the multiple-scattering process, the polarization of the light is not considered in the Fresnel equations.
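To make the model concrete, the snippet below evaluates the unpolarised Fresnel reflectance and the diffuse lobe of eq. 1; the specular delta term is omitted, and the function names are ours, not code from the original work.

```python
# Unpolarised Fresnel reflectance and the diffuse lobe of eq. 1 (a sketch;
# the specular delta-function term is left out).
import numpy as np

def fresnel_unpolarised(theta_i, m):
    """Reflectance for light incident on a medium of relative index m = n_t/n_i."""
    cos_i = np.cos(theta_i)
    sin_t = np.sin(theta_i) / m
    if sin_t >= 1.0:                      # total internal reflection
        return 1.0
    cos_t = np.sqrt(1.0 - sin_t**2)
    r_s = (cos_i - m * cos_t) / (cos_i + m * cos_t)
    r_p = (m * cos_i - cos_t) / (m * cos_i + cos_t)
    return 0.5 * (r_s**2 + r_p**2)

def brdf_diffuse(theta_i, theta_r, n1, n2, rho_d):
    """Diffuse lobe of eq. 1; rho_d is the total diffuse albedo varrho_d."""
    f_in  = fresnel_unpolarised(theta_i, n2 / n1)
    f_out = fresnel_unpolarised(
        np.arcsin(np.clip(n1 / n2 * np.sin(theta_r), -1.0, 1.0)), n1 / n2)
    return rho_d / np.pi * (1.0 - f_in) * (1.0 - f_out)
```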
In our earlier work [18], we demonstrated that our model accurately reproduces the distribution of light reflected in air by a diffuser such as PTFE. However, this model has two main limitations. First, the second Fresnel factor (eq. 1) decreases the reflectance along a particular direction, but this light is not absorbed, instead being reflected in another direction. This increase in the reflectance is accounted for in the total diffuse albedo \(\varrho_{d}\), but for highly reflective surfaces \(\varrho_{d}\) is often larger than 1, which may seem counterintuitive. Second, \(\varrho_{d}\) depends not only on the optical properties of the diffuse medium but also on the refractive index of medium 1, since it includes all the additional reflections at the internal boundary (as shown in eq. 4 in ref. [13]). Therefore, if medium 1 changes, \(\varrho_{d}\) needs to be estimated again.
To address these two issues, we have introduced a new albedo, the multiple-scattering albedo \(\rho\), which replaces the total diffuse albedo in the eq. 1. \(\rho\) represents the probability that the light refracted to the diffuser or reflected in the internal boundary is not absorbed in the multiple scattering and returns to the boundary between the diffuser and medium 1. This approach is similar to Saunderson's method for describing the reflectance of pigment plastics at an air interface in 1942 [14], which utilized the Kubelka and Munk theory [19]. Our model assumes two things: first, that \(\rho\) is independent of the angle of incidence, and second, that the direction of light is random after multiple scattering, which means that light should follow Lambert's law before being refracted or reflected.
Since the light can be reflected back to the diffuser multiple times, the multiple-scattering albedo relates with the total diffuse albedo through the following summation:
\[\begin{split}\varrho_{d}=&\rho+\overline{\mathcal{F}}_{n_{1}/n_{2}}\rho^{2}+\overline{\mathcal{F}}_{n_{1}/n_{2}}^{2}\rho^{3}+\ldots\\ =&\rho\left(1-\rho\overline{\mathcal{F}}_{n_{1}/n_{2}}\right)^{-1},\end{split} \tag{2}\]
where \(\overline{\mathcal{F}}_{n_{1}/n_{2}}\) corresponds to the probability of internal reflection at the boundary between the diffuser, of refractive index \(n_{2}\), and medium 1, of refractive index \(n_{1}\). Assuming that the photons arriving at the boundary follow Lambert's law, it is given by the integral:
\[\overline{\mathcal{F}}_{n_{1}/n_{2}}=\frac{1}{\pi}\int_{2\pi}F\left(\theta; \frac{n_{1}}{n_{2}}\right)\mathrm{d}\Omega, \tag{3}\]
where \(\theta\) is the angle of reflection and \(\mathrm{d}\Omega\) the projected solid-angle element defined as
\[\mathrm{d}\Omega=\cos\theta\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi. \tag{4}\]
This definition solves the two issues mentioned before: \(\rho\leq\)1 since it is directly linked to a probability, and it
is independent of the optical properties of medium 1 since that information is included in the factor \(\overline{\mathcal{F}_{\pi_{\mathrm{1/n_{2}}}}}\). The equation 3 is integrable for an interface between two dielectrics. The result of the integral is presented in appx. A eq. A2.
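Equations 2-4 can be checked numerically as follows. The snippet reuses the fresnel_unpolarised helper from the previous sketch, and the refractive index \(n_{2}=1.35\) is a PTFE-like value chosen by us purely for illustration; in practice the closed form of eq. A2 should be preferred over the quadrature.

```python
# Numerical cross-check of eqs. 2-4 (reuses fresnel_unpolarised() from the
# previous sketch; n2 = 1.35 is an illustrative PTFE-like value).
import numpy as np
from scipy.integrate import quad

def f_bar(n1, n2):
    """Lambertian-weighted internal reflection probability, eq. 3, with
    dOmega = cos(theta) sin(theta) dtheta dphi (eq. 4)."""
    integrand = lambda th: fresnel_unpolarised(th, n1 / n2) * np.cos(th) * np.sin(th)
    val, _ = quad(integrand, 0.0, np.pi / 2.0)
    return 2.0 * val          # (1/pi) * (2*pi from the phi integral) * val

def total_diffuse_albedo(rho, n1, n2):
    """Eq. 2: varrho_d = rho * (1 - rho * F_bar)^-1."""
    return rho / (1.0 - rho * f_bar(n1, n2))

for n1 in (1.00, 1.33):       # the same diffuser in air vs. in water
    print(f"n1 = {n1}: varrho_d = {total_diffuse_albedo(0.95, n1, 1.35):.3f}")
```

The same multiple-scattering albedo \(\rho\) then yields different total diffuse albedos \(\varrho_{d}\) in air and in water, which is the behaviour the model is meant to capture.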
## III Experimental Method
To study the effect of the medium interface on surface reflectance, we build a setup aiming to measure the change in the throughput of a total integrating sphere when a liquid replaces the air volume. This setup, composed of four different experimental configurations, is represented in fig. 1. In each configuration, its main elements are: a) the matrix of LEDs with wavelengths ranging from 255 nm until 490 nm, b) a system of collimation and beam sampling, c) the total integrating sphere (TIS), and d) the acquisition system composed by two photomultipliers (PMT) operating in photon counting mode. Using this setup, we can measure the observed flux, \(\Phi_{R}\), after the light has been reflected in the sphere (fig. 1A), and the incident flux, \(\Phi_{I}\), entering the sphere (fig. 1B). Furthermore, since the light reflected in the PMT can return back and be detected, the reflectance results depend on the reflectance of the PMT photocathode. As such, configurations C and D are used to measure the reflectance of the PMT photocathode.
The throughput of the sphere, \(H\), is defined as the ratio between the observed reflected photon flux, \(\Phi_{R}\), and the observed incident flux, \(\Phi_{I}\):
\[H=\frac{\Phi_{R}}{\Phi_{I}}, \tag{5}\]
with both fluxes measured in terms of the number of detected photons (phd) per unit of time.
### The Experimental Set-Up
#### iii.1.1 The Beam collimation
The light is emitted from a set of 7 LEDs from Roithner Lasertechnik(r), ranging from the UV (\(\lambda\)=255 nm) to the visible green (\(\lambda\)=490 nm). These LEDs exhibit a narrow spectral bandwidth of 10 nm FWHM (full width at half maximum) for the UV LEDs and between 20 and 30 nm FWHM for the visible LEDs. They are soldered in a 1-inch circular PCB and controlled via an electronic board plugged into an Arduino Uno(r) microcontroller board. Further details on the characteristics and positioning of the LEDs can be found in tab. 1.
The light emitted by the LEDs goes through a diffuser (DGUV10-220 from Thorlabs(r)) located 50 mm away from the matrix. The diffuser has a transmittance above 68% for this selection of LEDs. Next, the light is collimated by a \(\diameter\,2\) mm iris diaphragm before reaching the beam-sampler BSF10-UV from Thorlabs(r), placed 209 mm from the matrix and at an angle of 45\({}^{\circ}\) relative to the direction of the incoming light. The beam sampler reflects the light with a probability between 0.5% (\(\lambda\)=255 nm) and 2.8% (\(\lambda\)=490 nm). The reflected light goes through a second \(\diameter\,2\) mm iris diaphragm and is directed towards the reference photomultiplier (Hammamatsu(r)P762-Y001), located 86 mm away from the beam sampler. The transmitted light
is further collimated using a \(\diameter\,1\) mm pin-hole, placed 48 mm from the beam sampler. Then, it enters the total integrating sphere (TIS) through a 6 mm thick optical flat made of fused quartz from Crystran(r).

Figure 1: Experimental optical set-up. Panel A: measurement of the reflectance with the 1.5 in port closed and the main PMT mounted in the north port. Panel B: measurement of the incident beam with the main PMT mounted in the 1.5 in port facing the incident beam at an angle of 0\({}^{\circ}\). Panel C: measurement of the PMT reflectance with the PMT mounted in the 1.5 in port facing the incident beam at an angle of 8\({}^{\circ}\). Panel D: calibration of the PMT reflectance with the aluminium mirror placed in front of the PMT. The coloured lines represent a typical light ray path.
#### ii.1.2 The Total Integrating Sphere
The total integrating sphere is the model 819C-SL-3.3 made of Spectralon(r) (PTFE) from Newport(r), with an internal diameter of 3.3 inches. The sphere has four ports, three of which have an aperture of 1 inch, and one with an aperture of 1.5 inches. Light enters through the 1-inch west port, as shown in fig. 1. To increase the average number of reflections, and thus the sensitivity, the aperture of this port was reduced to an internal diameter of 5 mm with a port reducer made of Spectralon(r). We placed an optical trap, 8 mm thick, made of poly-oxymethylene (POM), between the port reducer and the optical flat, which has internal V-grooves to reduce internal reflections. The volume between the west port reducer and the surface of the quartz window has a purge line connected to the west port adaptor to remove any air bubbles trapped there when the sphere is filled with liquid. This sphere is also equipped with an internal baffle that blocks the first bounce of light from reaching the photomultiplier.
To measure the reflected beam, we use the PMT R762P from Hamamatsu(r), mounted in the vertical position on top of the north port as illustrated in fig. 1A. P762-Y001 and R762P have an external diameter of 20 mm, a synthetic silica glass window, and a bialkali photocathode. The main PMT is mounted in an optical cage system from Thorlabs(r) attached to a port adapter made of aluminum, which guarantees that the PMT is always installed in the same position. This port adapter has a weir above the port reducer to ensure that the PMT window is constantly immersed in the liquid when the sphere is full of liquid. The PMT sits on top of one of two port reducers made of Spectralon(r). The first port reducer has an internal diameter of 16 mm and the second of 12 mm, and both have a thickness of 6 mm. The use of two port reducers allows us to examine and eliminate any systematic errors resulting from photocathode non-uniformity.
When measuring the incident flux (fig. 1B), the cap of the east port is removed and replaced with an adaptor holding the main PMT, R762P, which is mounted in the port. The PMT window faces the beam directly at a \(0^{\circ}\) angle as shown on the right side of fig. 1. However, since the light reflected in the PMT window or photocathode can be reflected back by the sphere, it can increase the value of the incident flux. To minimize this effect, we installed a 3-inch wide and \(\diameter{\diameter{1}}\)-inch tube made of anodized aluminum between the west and east port. In this position, the PMT can be moved forward and backward, with the total distance between the internal surface of the port reducer of the west port and the PMT window ranging between 50 mm and 90 mm. Measuring the light flux at different distances allows for checking the collimation of the incident beam and identifying any systematic associated with the PMT repositioning. This measurement was performed for both air and liquid. All the components of the optical system are placed within a black chamber made of aluminum and stainless steel, with the inside painted with anti-reflective Paint from TS-optics.
#### ii.1.3 Acquisition system
Both PMTs operate in photon-counting mode to minimize the impact of PMT gain variations on the measurement results. The operating voltages for the main PMT and monitor PMT were determined following the procedure described in ref. [25] on page 148, resulting in values of +1200 V and +1400 V, respectively. The signal from both PMTs is fed into similar electronics: first into a fast filter amplifier with a differentiator set to a decay time of 10 ns, then discriminated, producing a NIM digital signal 8 ns wide, which is fed into a digital counter.
To ensure accurate measurement results, we evaluated the time resolution of the acquisition system to determine the probability of pile-up. To conduct this evaluation, we arranged the main PMT in the configuration depicted in Fig.,1B and incrementally increased the light output from the LEDs. We then measured the photon flux ratio between the main PMT and the monitor PMT. The main PMT observed an average flux 20 times larger than the monitor PMT, leading to photon pile-up occurring earlier for the main PMT. Our measurements indicated that this value remained constant up to fluxes of \(2.5\times 10^{5}\) detected photons per second (phd/s) in the main PMT. From these measurements, we estimated a data acquisition dead time of 18 ns after the electronic processing. To ensure the accuracy of the light flux measurements, we corrected for pile-up using the procedure outlined in [25] (p. 131). Our pile-up correction had a maximum of
0.6% for fluxes of 3.5\(\times\)10\({}^{5}\) phd/s, which was the maximum incident flux observed in this experiment.

Table 1: Characteristics of the LEDs used in the experimental set-up and the optical characteristics of fused silica and water at the corresponding LED wavelengths. \(a_{\text{min}}\) and \(a_{\text{max}}\) are the minimum and maximum values of the absorption coefficient of water.

| \(\lambda\) [nm] | FWHM [nm] | \(n_{\text{SiO}_{2}}\) (fused silica) [20] | \(n_{\text{H}_{2}\text{O}}\) [21] | \(a_{\text{min}}\) [km\({}^{-1}\)] [22; 23] | \(a_{\text{max}}\) [km\({}^{-1}\)] [24] |
| --- | --- | --- | --- | --- | --- |
| 255 | 11 | 1.5048 | 1.3751 | 75.1 | 51.50 |
| 275 | 11 | 1.4960 | 1.3668 | 49.7 | 22.30 |
| 285 | 11 | 1.4924 | 1.3634 | 42.6 | 9.39 |
| 310 | 11 | 1.4924 | 1.3568 | 25.8 | 2.36 |
| 356 | 10–20 | 1.4761 | 1.3488 | 8.6 | 0.98 |
| 405 | 19 | 1.4696 | 1.3431 | 6.3 | 2.48 |
| 490 | 30 | 1.4629 | 1.3373 | 18.1 | 14.60 |
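Returning to the pile-up correction described above, the standard first-order non-paralysable dead-time correction with the 18 ns dead time reproduces the order of magnitude of the quoted 0.6% maximum; the exact procedure of ref. [25] may differ from this sketch.

```python
# First-order non-paralysable dead-time (pile-up) correction with the 18 ns
# dead time quoted above; illustrative only.
def deadtime_correct(measured_rate, tau=18e-9):
    """Return the estimated true rate [counts/s] for a non-paralysable counter."""
    return measured_rate / (1.0 - measured_rate * tau)

for rate in (2.5e5, 3.5e5):   # phd/s, values quoted in the text
    corr = deadtime_correct(rate)
    print(f"{rate:.1e} phd/s -> correction {(corr / rate - 1) * 100:.2f} %")
```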
### Measuring the incident and reflected flux
In order to eliminate effects from possible instability of the LED output, the relative flux was calculated as the ratio between the count rates of the main and the monitor PMTs:
\[\Phi=\frac{N-N_{0}}{M-M_{0}}, \tag{6}\]
where \(N\) and \(M\) are the number of observed photons (phd) recorded within 1-minute intervals by the main and the monitor PMTs, respectively, while \(N_{0}\) and \(M_{0}\) are the corresponding dark counts recorded during the same interval with all LEDs turned off. The dark count rate observed was between 30 and 70 phd/s for the air measurements and 180 phd/s for the water measurements. Next, all the mentioned fluxes will be relative as defined by eq. 6.
To ensure the stability of the experimental setup and eliminate possible sources of error, a typical measurement sequence comprises several steps. First, we measure the dark count rates of both PMTs, denoted as \(N_{0}\) and \(M_{0}\), respectively. Next, we measure the count rates \(N\) and \(M\) for each LED in turn. Finally, we perform another measurement of \(N_{0}\) and \(M_{0}\). To ensure the reliability of the results, we repeat the sequence of seven LED measurements once or twice to check for any temporal evolution in the observed flux. Such changes could be due to variations in the PMT gain, fluctuations in the LED output, or slight changes in the system's geometry (as discussed in sec. V.1).
To measure the throughput of the sphere in air, \(H_{\mathrm{air}}\), we follow a two-step process. First, we measure the incident flux using the setup shown in Fig.,1B. Next, we mount the PMT in the north port and close the east port with a cap, ensuring that the sphere remains in the same position throughout the measurement.
The measurements in the liquid interface are performed right after the air measurements to ensure that the system's geometry and the sphere's reflectivity do not change significantly. After the measurement of \(\Phi_{R}^{\mathrm{air}}\), the main PMT is removed without moving the sphere, and the liquid is poured into the sphere using a pipette. The PMT is slowly lowered until the PMT window is at the face of the internal surface of the sphere, and the weir of the east port is filled with liquid. The measurements were taken in sequence with the sphere filled at \(\nicefrac{{1}}{{3}}\) (110 m\(\ell\)), 2/3 (220 m\(\ell\)), and full.
Controlling the presence of bubbles in the sphere is crucial in this experiment, particularly near the ports. As such, after the liquid measurements to check the presence of air bubbles between the surface of the PMT and the liquid interface, we removed the PMT and placed it again in the same position using the same procedure, and the sequence of LED measurements was repeated. This method could identify bubbles in contact with the PMT as a significant shift in the observed flux. Additionally, we visually inspected the east port to exclude any bubbles present by removing the full sphere.
### The Photocathode reflectance
For both \(\Phi_{I}\) and \(\Phi_{R}\) measurements, the light can be reflected in the PMT quartz window, photocathode, or the PMT internals [26]. While this light is mainly absorbed in the \(\Phi_{I}\) measurement, it can be reflected in the sphere in the \(\Phi_{R}\) measurement and still be detected, artificially increasing the measured value of the sphere reflectance. To account for this, we must consider the reflectance of the PMT. The probability of reflection in the PMT quartz window can be estimated accurately for both air and liquid interfaces since the refractive index of quartz is well-known for all the wavelengths used. However, the reflectance of the photocathode and PMT internals (\(R_{ph}\)) is unknown, because the relevant details of the manufacturing process are unavailable. Therefore, we measured \(R_{ph}\) directly using a dedicated setup. In the method used here, the throughput of the sphere is measured with the PMT mounted in the east port and compared with the throughput of an aluminum mirror with a known reflectance mounted in the same position as the PMT. These results are further analyzed in a Monte Carlo simulation described in sec. IV to obtain the PMT reflectance.
To measure the photocathode reflectance, we mount the main PMT in the east port according to fig. 1C, and the monitor PMT is mounted in the north port during the sequence of these measurements to measure the reflected fluxes. The main PMT now faces the incident beam with an angle of 8 degrees to prevent the light reflected in the PMT from escaping through the west port. Since the diameter of the PMT is smaller when compared with the diameter of the east port, we added a ring of Spectralon(r) reflector in front of the PMT (indicated in fig. 1C) to increase the light output of the sphere. This reflector has an external diameter of 1 inch and an internal diameter of 12 mm. Next, the light flux from the PMT in the north port (\(\Phi_{R}^{ph}\)) is acquired for each LED using a similar data analysis process described previously, excluding the reference PMT. To calibrate the setup, we replaced the PMT mounted in the east port with a UV-Enhanced aluminum mirror PFSQ05-03-F01 with a known reflectance from Thorlabs(r), keeping the same geometry with the same Spectralon(r) reflector ring in front of the mirror (fig. 1D) and we measured the reflected flux in the north port (\(\Phi_{R}^{At}\)).The reflectance of the aluminum mirror at an angle of incidence of 12\({}^{\circ}\) was provided by Thorlabs(r), ranging from 87% at 255 nm to 90.3% at 490 nm. By comparing the measured \(\Phi_{R}^{ph}\) and \(\Phi_{R}^{At}\) values, we could determine the reflectance of the PMT photocathode (\(R_{ph}\)).
Each measured flux is divided by the incident flux to obtain the throughput of the sphere in air, \(H_{\rm air}\), and the throughputs of the photocathode-reflectance and aluminum-mirror configurations, \(H_{\rm ph}\) and \(H_{\rm A\ell}\). We compare these results with the simulated values, which are obtained using the method described in the next section, for each configuration shown in Figures 1A, 1C, and 1D.
## IV The equation of the sphere and the Monte Carlo simulations
### The Equation of the Sphere
The throughput of an integrating sphere can be predicted using appropriate equations. For example, when the sphere wall is directly irradiated, the throughput of a sphere with two ports, an entrance, and an observing port, is given by (see eq. 8 in ref. [27]):
\[H=\frac{\eta_{v}R_{1}\left(1-R_{v}\right)}{1-R\left(1-\eta_{v}-\eta_{e}-\eta_{ a}\right)-R_{e}\eta_{e}-R_{v}\eta_{v}}, \tag{7}\]
where \(R_{1}\) and \(R\) correspond to the hemispherical reflectances of the sphere in the first and the subsequent internal reflections. The port fractions of the entrance (west) and viewing (north) ports are denoted by \(\eta_{e}\) and \(\eta_{v}\), respectively, and \(\eta_{a}\) accounts for the losses due to light absorption. \(R_{e}\) and \(R_{v}\) correspond to the average reflectivity of the entrance and observing ports, respectively. Note that the equation presented here has an additional factor \((1-R_{v})\) compared to the equation from ref. [28] to account for the reflectance of the viewing port, which reduces its effective area by that amount. The calculation method for each component in this equation is described in appx. C.
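To make the bookkeeping in eq. 7 explicit, a minimal sketch of its evaluation is given below. It is only an illustration of the formula above: the port fractions, reflectances, and absorption term are passed in as placeholder numbers rather than the values of this setup.

```python
def sphere_throughput(R1, R, eta_v, eta_e, eta_a, R_v, R_e):
    """Throughput H of eq. 7: fraction of the flux entering the sphere that is detected.

    R1    : hemispherical reflectance of the first (near-normal) reflection
    R     : bi-hemispherical reflectance of the subsequent reflections
    eta_v : viewing-port fraction,  eta_e : entrance-port fraction,  eta_a : absorption term
    R_v   : average reflectance of the viewing port (PMT),  R_e : of the entrance port
    """
    loop_gain = R * (1.0 - eta_v - eta_e - eta_a) + R_e * eta_e + R_v * eta_v
    return eta_v * R1 * (1.0 - R_v) / (1.0 - loop_gain)


# Placeholder values chosen only to exercise the function, not the values of this setup.
print(sphere_throughput(R1=0.90, R=0.95, eta_v=0.01, eta_e=0.02, eta_a=0.0, R_v=0.25, R_e=0.0))
```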
### The Monte-Carlo Simulation
Although the equation for the sphere provided above is widely used to predict the response of integrating spheres, it has several limitations. Firstly, it does not account for partial fills with liquid or the roughness of internal surfaces, nor does it include Rayleigh scattering. Additionally, this equation assumes that all surfaces reflect light diffusely according to the Lambertian reflection model, which may not always be accurate. To address these limitations, we employed a Monte Carlo simulation to model light transport through the sphere. This simulation allows us to incorporate these additional details and investigate their impact on the sphere's output. Our simulation was conducted using ANTS2, a simulation and data processing package that specializes in modeling the transport of optical photons in scintillation-based detectors. By comparing the simulation results with the theoretical predictions from the sphere equation, we can better understand the factors that influence the sphere's performance.
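The ANTS2 simulation traces individual optical photons through the detailed geometry; the toy model below is not that, but only a stochastic version of the bounce bookkeeping behind eq. 7, with the wall hit, port hit, or absorption drawn at random for each bounce. It can serve as a sanity check that such a random-walk picture reproduces the analytic throughput; all numbers are placeholders.

```python
import random

def mc_bounce_throughput(R1, R, eta_v, eta_e, eta_a, R_v, R_e, n_photons=200_000, seed=0):
    """Stochastic version of the geometric series behind eq. 7 (not a ray-traced simulation)."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_photons):
        if rng.random() > R1:              # photon lost in the first reflection
            continue
        while True:
            u = rng.random()
            if u < eta_v:                  # photon reaches the viewing port
                if rng.random() < 1.0 - R_v:
                    detected += 1          # transmitted into the PMT
                    break
                # otherwise reflected back into the sphere with probability R_v
            elif u < eta_v + eta_e:        # entrance port: reflected with probability R_e, else lost
                if rng.random() > R_e:
                    break
            elif u < eta_v + eta_e + eta_a:
                break                      # absorbed in the filling medium
            elif rng.random() > R:         # wall hit: reflected with probability R, else absorbed
                break
    return detected / n_photons

# Should agree with the analytic eq. 7 within statistical fluctuations (placeholder values).
print(mc_bounce_throughput(R1=0.90, R=0.95, eta_v=0.01, eta_e=0.02, eta_a=0.0, R_v=0.25, R_e=0.0))
```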
In the Monte Carlo simulation, the internal volume of the integrating sphere is divided into three equally-sized regions to simulate the sequence of measurements at different fill levels: \(\nicefrac{{1}}{{3}}\) capacity, \(\nicefrac{{2}}{{3}}\) capacity, and full. Each region can be filled with a different material to accurately model the effects of different filling levels.
In order to ensure accurate simulation results, it is crucial to provide a detailed description of the optical properties of all materials involved. This includes the optical properties of the Spectralon(r), the PMT's window and photocathode, and the liquid inside the sphere. The refractive indexes of the water and the fused silica glass (or fused quartz) are obtained using the Sellmeier dispersion formulae with the coefficients for the water measured by M. Daimon and A. Masumura [21] (at a temperature of 19\({}^{\circ}\)C), and by I. Malitson [20] for fused silica glass.
The refractive index of the Spectralon(r) was measured by Labsphere(r) to be 1.35 [29] (ASTM D-542), but information on its wavelength dependence is not available. We do have information on the dependence of the refractive index on the wavelength for the Teflon AF(r)[30], a material similar to PTFE. Their results show that the difference in the refractive index between \(\lambda=\)490 nm and \(\lambda=\)255 nm is less than 0.02. Other similar materials also show similar differences in the refractive index at \(\lambda=\)490 nm and \(\lambda=\)255 nm (see ref. [31]). Given this information, for the ultra-violet LEDs, we assumed a range of the refractive index between 1.35 and 1.37 in this analysis.
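For reference, the Sellmeier dispersion form used for these refractive indexes is sketched below. The three-term coefficients shown for fused silica are the commonly quoted Malitson values and should be checked against ref. [20] before use; the wavelength is in micrometres.

```python
import math

def sellmeier(wavelength_um, B, C):
    """Three-term Sellmeier dispersion: n^2 = 1 + sum_i B_i L^2 / (L^2 - C_i), with L in micrometres."""
    L2 = wavelength_um ** 2
    return math.sqrt(1.0 + sum(b * L2 / (L2 - c) for b, c in zip(B, C)))

# Commonly quoted Malitson coefficients for fused silica (verify against ref. [20]).
B_SIO2 = (0.6961663, 0.4079426, 0.8974794)
C_SIO2 = (0.0684043**2, 0.1162414**2, 9.896161**2)

print(sellmeier(0.255, B_SIO2, C_SIO2))   # ~1.505 at 255 nm, consistent with the quartz value quoted in fig. 2
```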
The reflectance model used in our simulations takes into account both specular and diffuse components. Since Spectralon(r) surfaces are known to be rough, we used the reflectance model proposed in our previous work [32], which adapts equation 1 to account for roughness. To characterize the roughness of the Spectralon(r) surface, we employed a gonioreflectometer as described in ref. [18], and found that the surface profile follows the Trowbridge-Reitz distribution [33] with a roughness parameter \(\sigma_{\alpha}\) of 0.17. For all other materials, we assumed perfectly smooth surfaces with only a specular component.
Light absorption in the water might impact the results since, as shown in eq. 7, it cannot be distinguished directly from a reduction in the reflectance of the sphere. A brief discussion of the most important water absorption measurements for the relevant wavelength ranges is described in appx. B. Although the absorbance of the pure water used in this experiment is not well-known, its very low conductivity of 0.163 \(\upmu\)S/cm, which is smaller than the conductivity of the water used in previous studies by Irvin and Quickenden (0.430 \(\upmu\)S/cm) [23] and Buiteveld (1.5 \(\upmu\)S/cm) [22], and comparable to the pure water from the Mason and Fry measurements (0.055 \(\upmu\)S/cm) [24], indicates high purity. We assumed a range for the absorption coefficient, \(a_{\rm abs,min}\) to \(a_{\rm abs,max}\), for each LED and considered the differences in the final results as a systematic effect. The higher values were taken from the works of Irvin and Quickenden (\(\lambda<\) 300 nm) and Buiteveld (\(\lambda>300\) nm), and the lower values were taken from the work of Mason and Fry (see Table 1).
The Rayleigh scattering in water has a negligible impact on the measurements. Our simulations indicate that the output of the sphere is not significantly altered unless the Rayleigh scattering coefficient, \(\sigma_{\mathrm{ray}}\), is greater than 0.2 cm\({}^{-1}\). Calculations by Krockel and Schmidt [34] predict a value of \(\sigma_{\mathrm{ray}}<0.01\) cm\({}^{-1}\) for all the LED wavelengths used in this study, which is well below the sensitivity of the sphere.
### Monte-Carlo results and comparison with the sphere equation
To validate our Monte Carlo model, we conducted simulations of light propagation in the sphere under conditions where eq. 7 is applicable, namely without a baffle and for a perfectly smooth wall surface. In these comparisons, we used a refractive index of 1.35 for the Spectralon(r) and assumed the refractive indices of water and fused silica to be those at \(\lambda\)=255 nm (see tab. 1). We also used the PMT reflectance values obtained from simulations and measurements described in the following sections. The relative difference between the predictions made by eq. 7, \(H^{\mathrm{TIS\,eq}}\), and the simulation output, \(H^{\mathrm{sim}}\), is less than 4% (\(\left|H^{\mathrm{TIS\,eq}}-H^{\mathrm{sim}}\right|/H^{\mathrm{sim}}\)) for \(\rho>0.6\) and for both the liquid and air. Below 0.6, the sphere equation underestimates the sphere's output by 5% at \(\rho\)=0.5 and 10% at \(\rho\)=0.
We next investigated the effect of surface roughness on the sphere's output, taking into account the roughness parameter \(\sigma_{\alpha}\). Surface roughness can impact the sphere's output in two ways: (a) by increasing the shadowing effect and multiple scattering across the surface, which reduces the overall output, and (b) by reducing the probability of light being reflected back to the entrance port after the initial reflections inside the sphere. Our simulations showed that the effect of surface roughness is small for high albedos because (a) and (b) mostly cancel each other out, but for low albedos, surface roughness leads to a small increase in the sphere's output due to effect (b). For instance, in air, the sphere's output for \(\rho=0.95\) and \(\sigma_{\alpha}=0.17\) was largely unaffected by surface roughness, while in liquid, it decreased only by 0.8%.
Adding a baffle to the sphere has a more significant effect than roughness. For example, for \(\rho=0.95\), we observed that introducing a baffle reduces the sphere's output by 4.2% compared to a sphere without a baffle. To account for the effect of the baffle and the roughness in eq. 7, we replaced the value of \(R\) by \(R^{b}\), where \(b\) is an empirical constant, with \(b\)=1.07 for air and \(b=1.12\) in the liquid.
When the sphere is full of water, the light absorption has the largest effect, especially for large values of \(\rho\). For example, assuming the minimum absorption length for the LED of 255 nm, we observed a reduction in the throughput of 7.3% for \(\rho=0.95\). To incorporate this effect into the sphere equation, we introduced a factor \(f_{a}\) in eq. 7, which can be estimated using eq. C8.
After accounting for the effect of the baffle, roughness, and the absorption of water, the relative difference, defined as \(\left|H^{\mathrm{TIS\,eq}}-H^{\mathrm{sim}}\right|/H^{\mathrm{sim}}\), is less than 2% for the relevant range of measurements (\(0.8<\rho<0.99\)).
### Simulation of the PMT reflectance
The Monte-Carlo method described before was applied to the measurement of the photocathode reflectance (fig. 1C and 1D). In the simulation results, we observed that the throughput with the PMT mounted in the east port (\(H_{\mathrm{ph}}\)), and then with the aluminum mirror mounted in the same east port (\(H_{\mathrm{A}\ell}\)), is almost independent of the internal reflectance of the sphere \(\rho\). The difference in the ratio (\(H_{\mathrm{ph}}/H_{\mathrm{A}\ell}\)) is less than 3% between \(\rho\)=0.9 and \(\rho\)=0.99. Considering both measurements simultaneously, we can measure \(R_{ph}\) almost independently of the albedo \(\rho\) of the sphere.
## V Results
### Throughput of the sphere in air and liquid
Figure 2: Comparison between the simulation of the sphere (data points) and the results using equation 7 (full lines): the surface is assumed to be perfectly smooth, the baffle was removed, and no light absorption in the water was considered. The refractive index of the Spectralon® is 1.35 and that of the quartz is 1.5048.

The observed throughput of the sphere corresponds to the ratio \(\Phi_{R}/\Phi_{I}\). However, the measured incident flux, \(\Phi_{I}^{\mathrm{meas}}\), is slightly lower than the actual incident flux \(\Phi_{I}\) due to the reflection of the incident light from the PMT window and photocathode. Taking into account that this reflected light goes back to the entrance window and can be reflected again in the direction of the PMT, one can write:
\[\Phi_{I}^{\text{meas}}=\frac{1-R_{v}}{1-R_{v}R_{e}}\Phi_{I}, \tag{8}\]

where \(R_{v}\) and \(R_{e}\) are the probabilities of reflection at the PMT and the entrance port, respectively. Consequently, the true throughput, \(H\), is obtained by multiplying \(\Phi_{R}/\Phi_{I}^{\text{meas}}\) by the correction factor \(\frac{1-R_{v}R_{e}}{1-R_{v}}\). The expressions for both \(R_{v}\) and \(R_{e}\) are provided in appx. D.
The throughput \(H_{\text{air}}\) is presented in fig. 3 as a function of the LED wavelength for the two diameters of the north port reducer. Consistent with expectations, the throughput decreases at shorter wavelengths, indicating a lower reflectance of the sphere.
The uncertainty of \(H\) in air, \(\sigma_{H_{\text{air}}}\), is estimated by propagating the uncertainties in the measurements of both fluxes, and is dominated by systematic uncertainties since, for most of the LEDs, the statistical uncertainties associated with Poisson fluctuations are less than 0.1%. The primary source of uncertainty affecting \(\Phi_{R}^{\text{air}}\) and \(\Phi_{I}^{\text{air}}\) is the temporal drift in the response of the PMTs and LEDs. To estimate \(\sigma_{H_{\text{air}}}\), we repeated the sequence of the measurements three times with a 15-minute delay between each measurement, as described in sec. III. The relative uncertainty was similar across all LEDs, so we took the average standard deviation across all LEDs. We found that the uncertainty in \(\Phi_{R}^{\text{air}}\) was 0.6%, while that in \(\Phi_{I}^{\text{air}}\) was 0.9%.
The measurement of \(H_{\text{liq}}\) differs from that in air because the sphere must be emptied and dried to remove any liquid present in the purge lines of the west port between the measurement of \(\Phi_{R}^{\text{liq}}\) and \(\Phi_{I}^{\text{liq}}\). This process adds new uncertainties, which are difficult to estimate. To minimize these uncertainties, we adopted a method in which we first obtained the ratio \(\Phi_{R}^{\text{liq}}/\Phi_{I}^{\text{air}}\) and then corrected it using the factor \(\mathcal{M}_{\text{inc}}\) to account for the different probability of the light being refracted through the entrance window into the sphere. As such, \(H_{\text{liq}}\) is given by:
\[H_{\text{liq}}=\frac{\Phi_{R}^{\text{liq}}}{\Phi_{I}^{\text{liq}}}=\left(\frac {\Phi_{R}^{\text{liq}}}{\Phi_{I}^{\text{air}}}\right)\mathcal{M}_{\text{inc}} \left(n_{\text{SiO}_{2}},n_{\text{liq}}\right), \tag{9}\]
where the reflected flux in the liquid, \(\Phi_{R}^{\text{liq}}\), is measured with the sphere filled at \(\nicefrac{{1}}{{3}}\), \(\nicefrac{{2}}{{3}}\), or to the top. The ratio \(\mathcal{M}_{\text{inc}}\left(n_{\text{SiO}_{2}},n_{\text{liq}}\right)\) corresponds to the ratio between the incident flux in air and in the liquid. It was calculated considering the refractive indexes of the entrance window and the liquid. It is 1 when the sphere is filled at \(\nicefrac{{1}}{{3}}\), and it ranges from 0.968 (\(\lambda=\)255 nm) to 0.962 (\(\lambda=\)490 nm) when the sphere is at \(\nicefrac{{2}}{{3}}\) capacity and full. To account for possible systematic uncertainties, we also measured \(\mathcal{M}_{\text{inc}}\) experimentally by directly measuring the incident flux in the liquid, \(\Phi_{I}^{\text{liq}}\), right after the measurement of the incident beam \(\Phi_{I}^{\text{air}}\), with the PMT reflectance corrections applied (see eq. 8). The average difference between the observed and calculated value of the ratio, \(\left(\Phi_{I}^{\text{air}}/\Phi_{I}^{\text{liq}}\right)\), was (-0.18\(\pm\)0.22). As such, since the calculated value is not affected by experimental uncertainties, we used the calculated value of \(\mathcal{M}_{\text{inc}}\) in our analysis.
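One simple way to estimate \(\mathcal{M}_{\text{inc}}\) is from the normal-incidence Fresnel transmission of the inner face of the entrance window, which is the only interface that changes between the gas- and liquid-filled cases; the outer (air-quartz) face cancels in the ratio. The sketch below neglects multiple reflections inside the window and uses placeholder refractive indices, so it only illustrates the idea rather than reproducing the exact values quoted above.

```python
def fresnel_normal(n1, n2):
    """Normal-incidence Fresnel reflectance between media with refractive indices n1 and n2."""
    return ((n2 - n1) / (n2 + n1)) ** 2

def m_inc(n_sio2, n_liq, n_gas=1.0):
    """Approximate M_inc: ratio of the flux transmitted into the sphere, gas fill over liquid fill."""
    t_gas = 1.0 - fresnel_normal(n_sio2, n_gas)   # quartz -> gas-filled sphere
    t_liq = 1.0 - fresnel_normal(n_sio2, n_liq)   # quartz -> liquid-filled sphere
    return t_gas / t_liq

# Placeholder indices roughly appropriate for 255 nm (fused silica ~1.505, water ~1.37).
print(m_inc(n_sio2=1.505, n_liq=1.37))
```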
Figure 4 shows the ratio \(H_{\text{liq}}/H_{\text{air}}\) for all the LEDs. For a full sphere, the ratio is consistently larger than 1.5 across all wavelengths. However, for partial fills, the ratio drops below 1 due to additional reflections at the interface between the liquid and air, which increases the effective path length of light in the sphere. These reflections lead to a reduction in the throughput of light, which is reflected in the smaller values of the ratio for partial fills.
Figure 3: Throughput of the sphere in the air, \(H_{\text{air}}\): the ratio between the reflected flux \(\Phi_{R}^{\text{air}}\) and the incident flux \(\Phi_{I}^{\text{air}}\) is shown as a function of the LED wavelength for the two port reducers used in the north port.

To ensure the reliability of our measurements in liquid, we repeated the sequence of LED measurements three times for each volume of liquid, looking for any possible temporal changes in the observed fluxes. The liquid measurements have two additional sources of systematic uncertainties that might cause such dependence: a) the dissolution of impurities present in the sphere into the liquid (see appx. B for a discussion on the cleaning procedure), and b) the presence or formation of air bubbles in the east port or the north port. Their impact on the measurements is limited since the throughput decreased by only an average of 0.6% between the first and last measurement, despite a 30-minute time difference.
### Reflectance of the photocathode
The reflectance of the photocathode, \(R_{ph}\), is a necessary input both in the equation of the sphere (eq. 7) and in eq. 8 for correcting the incident flux. For each LED, \(R_{ph}\) is obtained using the set of observed quantities \(\mathcal{H}=[H_{\mathrm{air}},H_{\mathrm{ph}},H_{\mathrm{A}\ell}]\) mentioned in sec. III.3, with \(H_{\mathrm{air}}\) being an independent measurement from the one mentioned in sec. V.1. These three quantities are compared in a \(\chi^{2}\) minimization with the results obtained with the simulations, \(\mathcal{S}\), for the respective geometry in order to obtain both the photocathode reflectance, \(R_{ph}\), and the multiple-scattering albedo of the sphere, \(\rho\), for each LED wavelength:
\[\chi^{2}(\rho,R_{ph})=\sum_{i}\frac{\left(\mathcal{S}_{i}-\mathcal{H}_{i} \right)^{2}}{\sigma_{i}^{2}}, \tag{10}\]
where \(i\) runs over the three measurements of the throughput, and \(\sigma_{i}\) is the estimated uncertainty for each measurement. For \(\sigma_{i}\), we made the same assumptions as in sec. V.1.
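A minimal sketch of the fit in eq. 10 as a brute-force grid search over \((\rho,R_{ph})\); `simulate_throughputs` is a hypothetical stand-in for the ANTS2 simulation that returns \([H_{\mathrm{air}},H_{\mathrm{ph}},H_{\mathrm{A}\ell}]\) for a given parameter pair, and the toy callable used in the demonstration is not the real response of the sphere.

```python
import numpy as np

def fit_rho_rph(observed, sigma, simulate_throughputs,
                rho_grid=np.linspace(0.80, 0.99, 96), rph_grid=np.linspace(0.0, 0.5, 101)):
    """Brute-force chi^2 minimisation of eq. 10 over the albedo rho and the photocathode reflectance R_ph."""
    observed, sigma = np.asarray(observed, float), np.asarray(sigma, float)
    best = (np.inf, None, None)
    for rho in rho_grid:
        for rph in rph_grid:
            sim = np.asarray(simulate_throughputs(rho, rph), float)   # [H_air, H_ph, H_Al]
            chi2 = float(np.sum(((sim - observed) / sigma) ** 2))
            if chi2 < best[0]:
                best = (chi2, rho, rph)
    return best   # (chi2_min, rho_hat, Rph_hat)

# Toy linear stand-in, for demonstration only: the fit recovers the parameters used to generate 'observed'.
toy = lambda rho, rph: [0.5 * rho, 0.3 * rho + 0.2 * rph, 0.6 * rho + 0.1 * rph]
print(fit_rho_rph(observed=toy(0.93, 0.25), sigma=[0.01, 0.01, 0.01], simulate_throughputs=toy))
```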
The result obtained in this minimization for the reflectance of the photocathode, \(R_{ph}\), is shown in fig. 5. Additionally, we show the total reflectance of the PMT in an air interface, which includes the reflection in the PMT window and the multiple possible reflections between the photocathode and the surface of the PMT. Finally, we compared these results with those obtained by Motta and Schönert [26] for two bi-alkaline PMTs with a glass window. As shown, the reflectance of the photocathode strongly depends on the wavelength of the light and compares well with the results obtained by the different authors. Moreover, the absolute difference in the albedo value, \(\rho\), obtained with this method and the method described next is on average only 4\(\times\)10\({}^{-3}\). Therefore, the experimental values obtained here are used next in the data analysis.
### Multiple-scattering albedo
To obtain \(\rho\), its value used in the Monte Carlo simulations (sec. IV.2) is adjusted to obtain the best match between \(H^{\mathrm{sim}}\) and \(H^{\mathrm{obs}}\). For each measurement, we adjust \(\rho\) in such a way that \(|H^{\mathrm{sim}}-H^{\mathrm{obs}}|<\)5\(\times\)10\({}^{-6}\). For the partial fills (1/3 and 2/3 volumes), \(\rho\) can have different values in the liquid and gaseous phases, \(\rho_{\mathrm{liq}}\) and \(\rho_{\mathrm{air}}\). Since \(\rho_{\mathrm{air}}\) is less affected by systematic uncertainties, we adjusted \(\rho_{\mathrm{liq}}\) for these geometries while fixing \(\rho_{\mathrm{air}}\) to the value obtained with the empty sphere.
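A sketch of this adjustment as a bisection on \(\rho\); `h_sim` is a hypothetical stand-in for the simulated throughput as a function of \(\rho\) (assumed to increase monotonically with \(\rho\)), and the tolerance mirrors the \(5\times10^{-6}\) criterion quoted above.

```python
def adjust_rho(h_obs, h_sim, lo=0.5, hi=0.999, tol=5e-6, max_iter=200):
    """Find rho such that |h_sim(rho) - h_obs| < tol, assuming h_sim is increasing in rho."""
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        h = h_sim(mid)
        if abs(h - h_obs) < tol:
            return mid
        if h < h_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy monotonic stand-in for demonstration only.
print(adjust_rho(h_obs=0.42, h_sim=lambda rho: 0.5 * rho ** 2))
```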
The uncertainty in \(\rho\) was obtained by error propagation of the sphere equation (see eq. 7). The contributions assumed are from \(\sigma_{H}\), calculated in the previous section, the uncertainty in the refractive index of the PTFE, \(\sigma_{n_{\mathrm{PTFE}}}\), the uncertainty in the absorption length, \(\sigma_{a_{\mathrm{abs}}}\), and the uncertainty in the reflectance of the photocathode. To determine \(\sigma_{n_{\mathrm{PTFE}}}\), we considered the range between 1.35 and 1.37 for the UV LEDs (\(\lambda<400\,\mathrm{nm}\)). For the absorption length in water, \(a_{\mathrm{abs}}\pm\sigma_{a_{\mathrm{abs}}}\) is given by the range of absorption lengths (\(a_{\mathrm{abs,min}}\) to \(a_{\mathrm{abs,max}}\)) shown in tab. 1.
The results for \(\rho\) are shown in fig. 6 for the two port reducers in both liquid and air interfaces. For the full sphere, the average difference between the albedo in air and water, \(<|\rho_{\mathrm{air}}-\rho_{\mathrm{liq}}|>\), is 0.9\(\times\)10\({}^{-3}\) with the maximum difference, \(\max(|\rho_{\mathrm{air}}-\rho_{\mathrm{liq}}|)\), being 2.5\(\times\)10\({}^{-3}\). These values increase to 1\(\times\)10\({}^{-3}\) and 4\(\times\)10\({}^{-3}\) for the 1/3 volume.
We do not show the results for the volume 2/3 because these measurements were affected by air bubbles formed in the east port. These bubbles could not be appropriately removed because the liquid level was below the level of the purging lines of the east port. The presence of these bubbles reduces the observed flux by almost the same fraction across all the LEDs.
Figure 5: Measurement of the photocathode reflectance: (blue) observed photo-cathode reflectance as a function of the wavelength; (red) contribution from the PMT quartz window included; (gray and black curves) observations from Motta & Schönert of the dependence of the PMT reflectance with the wavelength [26]. The error bars are five times larger for easier visualization.
### Bi-hemispherical reflectance
The hemispherical reflectance of a diffuser can be characterized with a \(6^{\circ}\) directional-hemispherical factor \(R\left(\theta_{i},\phi_{i};2\pi\right)\) which can be directly measured using an integrating sphere as exemplified in the Weidner and Hsia work [35]. \(R\left(\theta_{i},\phi_{i};2\pi\right)\) is defined as the ratio between the reflected flux measured over the whole hemisphere above the surface and the incident flux assuming a direction of incidence defined by the angles (\(\theta_{i}\), \(\phi_{i}\)) [36, 17]:
\[R\left(6^{\circ},0;2\pi\right)=\frac{1}{\pi}\int_{2\pi}f_{r}\left(\theta_{i}=6 ^{\circ},\phi_{i}=0;\theta_{r},\phi_{r}\right)\mathrm{d}\Omega_{r}, \tag{11}\]
where \(\mathrm{d}\Omega_{r}\) is given by equation 4. Assuming that the surface is smooth, this integral is given by:
\[\begin{split} R\left(6^{\circ},0;2\pi\right)=& F\left(6^{\circ};\frac{n_{2}}{n_{1}}\right)+\frac{\rho}{1-\rho\overline{ \mathcal{F}}_{n_{1}/n_{2}}}\times\\ &\times\left[1-F\left(6^{\circ};\frac{n_{2}}{n_{1}}\right)\right] \left[1-\overline{\mathcal{F}}_{n_{1}/n_{2}}\right].\end{split} \tag{12}\]
This is the well-known Saunderson correction (eq. 6 in [14] and eq. 2 in ref. [37]) with the factor \(k_{1}=F\left(6^{\circ};\nicefrac{{n_{2}}}{{n_{1}}}\right)\) and \(k_{2}=\overline{\mathcal{F}}_{n_{1}/n_{2}}\).
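Eq. 12 is straightforward to evaluate for a smooth surface. The helpers below use the unpolarized Fresnel reflectance and a numerical cosine-weighted hemispherical average as stand-ins for the \(F\) and \(\overline{\mathcal{F}}\) terms (assuming this is what eqs. 1-3 of the main text define); all index values in the example call are placeholders.

```python
import math

def fresnel_unpolarized(theta_i, n):
    """Unpolarized Fresnel reflectance for relative refractive index n (far medium over near medium)."""
    s = math.sin(theta_i) / n
    if s >= 1.0:
        return 1.0                                   # total internal reflection
    if theta_i < 1e-9:
        return ((n - 1.0) / (n + 1.0)) ** 2
    theta_t = math.asin(s)
    rs = math.sin(theta_i - theta_t) ** 2 / math.sin(theta_i + theta_t) ** 2
    rp = math.tan(theta_i - theta_t) ** 2 / math.tan(theta_i + theta_t) ** 2
    return 0.5 * (rs + rp)

def diffuse_fresnel(n, steps=20000):
    """Cosine-weighted hemispherical average of the Fresnel reflectance."""
    h = 0.5 * math.pi / steps
    return sum(fresnel_unpolarized((k + 0.5) * h, n)
               * 2.0 * math.cos((k + 0.5) * h) * math.sin((k + 0.5) * h)
               for k in range(steps)) * h

def saunderson_reflectance(rho, theta_i, n_diffuser, n_medium):
    """Directional-hemispherical reflectance of eq. 12 for a smooth surface."""
    F_ext = fresnel_unpolarized(theta_i, n_diffuser / n_medium)   # medium -> diffuser, F(theta_i; n2/n1)
    Fbar_int = diffuse_fresnel(n_medium / n_diffuser)             # diffuser -> medium, F-bar_{n1/n2}
    return F_ext + rho * (1.0 - F_ext) * (1.0 - Fbar_int) / (1.0 - rho * Fbar_int)

# Placeholder example: a diffuser with n ~ 1.35 in air, albedo 0.95, at 6 degrees of incidence.
print(saunderson_reflectance(rho=0.95, theta_i=math.radians(6), n_diffuser=1.35, n_medium=1.0))
```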
Fig. 7 presents the results for \(R\left(\theta_{i}=6^{\circ};2\pi\right)\) in liquid and gaseous interfaces. To account for surface roughness, \(R\) is obtained using the Monte Carlo simulation based on ANTS (see sec. IV). As shown, the reflectance in liquid water is larger for all LEDs and geometries. The difference in \(R\) between the liquid and air is about 6% for \(\lambda<300\) nm, and it decreases to 0.84% for \(\lambda\)=490 nm. The diffuse reflectance causes this increase, since the specular reflectance decreases in the liquid due to the smaller difference in the refractive index between the liquid and the PTFE/quartz. These results are aligned with the observations from ref. [3], which showed an increase of 2% using a He-Ne laser (\(\lambda\)=632.8 nm) as a light source.
When the roughness parameter, \(\sigma_{\alpha}\), is reduced from 0.17 to 0, the results from the Monte Carlo agree well with eq. 12. The average difference between the reflectance of the smooth surface (\(\sigma_{\alpha}=0\)) and the rough surface (\(\sigma_{\alpha}=0.17\)) was -0.14% for the air and +0.08% for the liquid. These findings are consistent with the predictions made by ref. [38], which suggests that surface roughness has minimal effect on integrated reflectances, although it does affect the angular distribution of reflected light.

Figure 6: Results for the reflectance with the internal Lambertian model: the multiple-scattering albedo dependence on the LED wavelength is shown for the different geometries and the two port reducers. The different geometries have a small shift (<2 nm) in the horizontal axis to improve visualization.

Figure 7: Dependence of the directional-hemispherical reflectance \(R\left(\theta_{i}=6^{\circ};2\pi\right)\), calculated with the ANTS simulation, on the wavelength.
## VI Discussion
The reflectance values in air obtained here are lower than the results reported by Weidner et al. [35, 39], which show reflectance values above 97% for \(\lambda>\)250 nm. As reported by some authors [40, 41], the Spectralon reflectance degrades over time, even when kept under clean-room conditions. This decrease, stronger for newer samples, is caused by the absorption of impurities, especially aromatic hydrocarbons. We observed this effect as the throughput of the sphere decreased roughly by 50% at 255 nm compared with the initial tests in early 2020. Nonetheless, this decrease in reflectance is advantageous for this work since it increases the range of multiple-scatter albedos that could be assessed with this setup.
The value of the multiple-scattering albedo, \(\rho\), can be obtained analytically, assuming that the roughness has no significant impact on the value of the hemispherical reflectances, which, as discussed earlier, is a valid assumption even for moderately rough surfaces. If the value of the directional-hemispherical factor \(R\left(\theta_{i};2\pi\right)\) has been measured for a specific angle of incidence, the value of \(\rho\) is obtained by replacing the value of \(f_{r}\) defined in eq. 1 in the directional-hemispherical integral (eq. 11). Solving for \(\rho\) results in:
\[\rho=\frac{R\left(\theta_{i};2\pi\right)-F\left(\theta_{i};n\right)}{1-F\left( \theta_{i};n\right)-\overline{\mathcal{F}}_{\nicefrac{{1}}{{n}}}\big{[}1-R \left(\theta_{i};2\pi\right)\big{]}}, \tag{13}\]
where \(n=n_{2}/n_{1}\) corresponds to the relative refractive index.
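Given a measured \(R\left(\theta_{i};2\pi\right)\), eq. 13 is a one-line computation once the two Fresnel terms are known; the numbers below are placeholders of the right order of magnitude for a PTFE-like diffuser in air (\(F(6^{\circ})\approx0.02\) and \(\overline{\mathcal{F}}_{1/n}\approx0.49\) for \(n\approx1.35\)).

```python
def albedo_from_hemispherical(R_meas, F_theta, Fbar_internal):
    """Invert eq. 13: multiple-scattering albedo rho from the measured R(theta_i; 2pi).

    F_theta       : Fresnel reflectance F(theta_i; n) at the angle of incidence used
    Fbar_internal : hemispherically averaged internal Fresnel reflectance, F-bar_{1/n}
    """
    return (R_meas - F_theta) / (1.0 - F_theta - Fbar_internal * (1.0 - R_meas))

# Placeholder inputs for illustration only.
print(albedo_from_hemispherical(R_meas=0.909, F_theta=0.022, Fbar_internal=0.49))
```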
We assumed that the multiple-scattering albedo value, \(\rho\), does not depend on the angle at which the light refracts into the diffuser. However, Chandrasekhar predicted the dependence of the diffuse reflection on the angle of incidence in his work on Radiative Transfer [42, 13]. In semi-infinite diffusers, the diffuse properties depend on the single-scattering albedo, corresponding to the probability that the light is not absorbed between two consecutive scatters, and on the angular distribution of scattered light. We implemented the Chandrasekhar model for isotropic scattering and tested it with our data. This was achieved by replacing the Lambertian model with eq. 123 in chapter 3 of ref. [42]. The performance of this model is similar to the model presented before, and since it is more complex and computationally more expensive, it has no clear advantage in predicting the sphere reflectance in the liquid compared to the Lambertian model. Nonetheless, the distinction between these two models is more substantial when the surface is illuminated or observed at a larger angle. An integrating sphere is not sensitive to this because the angle of incidence closely follows the Lambertian law, disfavouring large angles of incidence; a different geometry would be required to assess the performance of these models at large angles of incidence.
The results from fig. 7 contradict the common knowledge that surfaces look darker when wet. However, this occurs when a layer of liquid covers the diffuser, adding a new optical interface where the light can be reflected back to the diffuser [2]. The results presented here also show this effect, since the ratio (\(\Phi_{R}^{\mathrm{liq}}/\Phi_{R}^{\mathrm{air}}\)) was smaller than 1 when the sphere was filled at \(\nicefrac{{1}}{{3}}\) and \(\nicefrac{{2}}{{3}}\).
Another cause of the decrease in reflectance in a liquid interface is when the material is porous and can absorb the liquid, which alters the multiple-scatter albedo of the diffuser. As reported in ref. [1], such absorption changes the refractive index of the diffuser's bulk, bringing it closer to the refractive index of the liquid and, consequently, increasing the forward scattering. As such, the light penetrates farther, increasing the absorption probability. Spectralon(r) is a well-known porous material with a density between 1.25 and 1.5 g/cm\({}^{3}\), smaller when compared with crystalline PTFE (2.2 g/cm\({}^{3}\)). However, in the case of measurements with water, this porosity is irrelevant, as it is also hydrophobic, with a water permeability of only 0.001% (ASTM D570 test made by Labsphere(r)[29]). However, when we performed these measurements with cyclohexane C\({}_{6}\)H\({}_{12}\), \(\rho\) decreased by as much as 10% for 255 nm and 1% for 490 nm. Since C\({}_{6}\)H\({}_{12}\) is an apolar liquid, it soaks into the Spectralon(r), entering the air voids in the material and changing its optical properties. We tested this hypothesis by soaking a 2 g piece of Spectralon(r) in C\({}_{6}\)H\({}_{12}\) for 24 h, during which we observed a total mass increase of 12%. Further studies are necessary to fully describe the reflectance when the liquid is absorbed by the diffuser.
## VII Conclusion
This work has demonstrated that a single parameter, the multiple-scatter albedo \(\rho\), can predict the diffuse reflectance in both air and liquid water. To show this, we built a set-up composed of a total integrating sphere capable of measuring the reflectance in both air and liquid. Then, we developed a detailed Monte-Carlo model to predict the sphere's throughput for a specific value of \(\rho\), and we compared it with the sphere equation adapted to the measurement in liquid. Finally, the Monte-Carlo simulation was compared with the obtained data to get \(\rho\) for a specific configuration, and we calculated the difference between the albedo in the air and water, \(\Delta\rho=\rho_{\mathrm{air}}-\rho_{\mathrm{liq}}\). For the full sphere, the average difference between the albedo in air and water, \(<|\Delta\rho|>\), is 0.9\(\times\)10\({}^{-3}\), with the maximum difference, \(\max(|\Delta\rho|)\), being 2.5\(\times\)10\({}^{-3}\). This difference, when normalized to the respective uncertainty (\(<|\Delta\rho/\sigma_{\Delta\rho}|>\)), was 0.7, indicating good agreement between the values of \(\rho\) in liquid water and air.
We also showed that the parameter \(\rho\) can be obtained using eq. 13 with the hemispherical reflectance of the surface measured in air at a specific angle of incidence. This allows us to predict the reflectance in the liquid without measuring it precisely for that interface, thereby avoiding complications related to liquid purity, absorption, and the adaptation of the setup. Overall, these findings provide valuable insights for the design and optimization of particle detectors that rely on liquid interfaces.
## Appendix A The Fresnel equations
The Fresnel equations determine the intensities of both the reflected and refracted waves. Using these equations, we can obtain the reflectivity \(F\) given by (eqs. 32 and 33 of ref. [16]):
\[F\left(\theta_{i};n,\alpha\right)=\frac{\tan^{2}\left(\theta_{i}-\theta_{t} \right)}{\tan^{2}\left(\theta_{i}+\theta_{t}\right)}\cos^{2}\alpha+\frac{ \sin^{2}\left(\theta_{i}-\theta_{t}\right)}{\sin^{2}\left(\theta_{i}+\theta_{ t}\right)}\sin^{2}\alpha, \tag{10}\]
where \(\theta_{t}=\arcsin\left(\sin\theta_{i}/n\right)\) and \(\alpha\) is the angle that the electric field vector of the incident wave makes with the plane of incidence.
When the radiation propagates uniformly in all directions with random polarization (isotropic irradiation), the average reflectance is given by \(\overline{\mathcal{F}}_{n}\) defined previously in the eq. 3. Stern [43] obtained the solution for this integral. For \(n>1\), it is given by:
\[\overline{\mathcal{F}}_{n}= n^{2}\bigg{[}\frac{3n^{2}+2n+1}{3n^{2}\left(n+1\right)^{2}}+ \frac{n^{2}+2n-1}{\left(n^{2}+1\right)^{2}\left(n^{2}-1\right)} \tag{11}\] \[+\frac{n^{2}+1}{\left(n^{2}-1\right)^{2}}\log n-\frac{\left(n^{2 }-1\right)^{2}}{\left(n^{2}+1\right)^{3}}\log\frac{n^{2}+n}{n-1}\bigg{]}.\]
To get the values for \(n<1\), we make use of the relation [2; 43]:
\[\overline{\mathcal{F}}_{n}=1-n^{2}\left(1-\overline{\mathcal{F}}_{1/n}\right). \tag{12}\]
Most dielectric materials have a refractive index smaller than 2.0. In that case, eq. 11 can be approximated by:
\[\overline{\mathcal{F}}_{n}\simeq 1-n^{2}\left(a+\frac{1-a}{2b^{n-1}-1}\right), \quad 1<n<2, \tag{13}\]
with \(a=0.0364\) and \(b=3.280\), with the same relation (eq. 12) for \(n<1\). The relative error of this approximation is less than 6\(\times 10^{-4}\) for \(0.5<n<2.0\).
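The approximation above can be checked numerically against a direct cosine-weighted hemispherical average of the unpolarized Fresnel reflectance (assuming this is the average denoted \(\overline{\mathcal{F}}_{n}\) in eq. 3), with the reciprocal relation of eq. 12 used for \(n<1\). This is only a sketch for such a cross-check; the exact average used in the reference may differ in convention.

```python
import math

def fresnel_unpolarized(theta_i, n):
    """Unpolarized Fresnel reflectance (the polarization-averaged form of the expression above)."""
    s = math.sin(theta_i) / n
    if s >= 1.0:
        return 1.0                                   # total internal reflection
    if theta_i < 1e-9:
        return ((n - 1.0) / (n + 1.0)) ** 2
    theta_t = math.asin(s)
    rs = math.sin(theta_i - theta_t) ** 2 / math.sin(theta_i + theta_t) ** 2
    rp = math.tan(theta_i - theta_t) ** 2 / math.tan(theta_i + theta_t) ** 2
    return 0.5 * (rs + rp)

def fbar_numeric(n, steps=20000):
    """Cosine-weighted hemispherical average of the unpolarized Fresnel reflectance."""
    h = 0.5 * math.pi / steps
    return sum(fresnel_unpolarized((k + 0.5) * h, n)
               * 2.0 * math.cos((k + 0.5) * h) * math.sin((k + 0.5) * h)
               for k in range(steps)) * h

def fbar_approx(n, a=0.0364, b=3.280):
    """The approximation above, with the reciprocal relation applied for n < 1."""
    if n >= 1.0:
        return 1.0 - n ** 2 * (a + (1.0 - a) / (2.0 * b ** (n - 1.0) - 1.0))
    return 1.0 - n ** 2 * (1.0 - fbar_approx(1.0 / n, a, b))

for n in (1.35, 1.5048, 1.0 / 1.5048):
    print(f"n = {n:.4f}:  numeric = {fbar_numeric(n):.4f},  approx = {fbar_approx(n):.4f}")
```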
## Appendix B Measurements with Water
In this section, we provide some details related to the water measurement.
The water used in this study is ThermoFisher spectroscopy-grade ACS water, identified by catalog number 43338 and lot number A0429271. The electrical conductivity of the liquid is measured to be 16.3 \(\mu\)S/m.
#### Cleaning procedure
The sphere is thoroughly cleaned before each measurement using the following procedure: (i) nitrogen is sprayed over the surface to remove any particulates, (ii) the sphere is rinsed with ultrapure water, (iii) it is cleaned with propanol, (iv) it is heated in an oven at a temperature of 45 \({}^{\circ}\)C for 2 hours, (v) vacuum is applied to the sphere at 1 mbar for 3 hours, and (vi) the sphere is finally bathed in an argon atmosphere for at least one hour.
#### Water absorption coefficient
The value of the water absorption coefficient in the range of 255-490 nm, as reported by different authors, can differ by up to two orders of magnitude [22; 23; 24]. This inconsistency arises because water exhibits minimal absorption above 250 nm, so the measurements are prone to systematic uncertainties that are difficult to control. The differences have been attributed to factors such as varying water purity levels, differences in Rayleigh scattering predictions, and variations in measurement methodologies [44]. One of the most cited studies on the absorbance of liquid water is from Irvin and Quickenden for \(\lambda<\)320 nm [23]. They used a differential path length method, in which the absorbance was measured using two cells with different sizes, and then the contribution from Rayleigh scattering was removed. Buiteveld [22] made another significant measurement above 300 nm using a submersible absorption meter [45] and reported a minimum absorption of \(a\)=5.3 km\({}^{-1}\) at 386 nm. However, these measurements are dependent on the considered value of the Rayleigh length, which is not well established. In 2016, Mason and Fry [24] measured the absorption coefficients above 250 nm with an integrating cavity, a new technique independent of the scattering effects [46]. They observed a minimum absorbance of 0.81 km\({}^{-1}\) at 344 nm.
## Appendix C Equation of the sphere
As discussed in sec. IV.1, the output of the sphere can be predicted with:
\[H=\frac{\eta_{v}R_{1}\left(1-R_{v}\right)}{1-R^{b}\left(1-\eta_{v}-\eta_{e}- \eta_{a}\right)-R_{v}\eta_{v}}. \tag{14}\]
Here, we describe how to obtain each quantity in this equation.
### The port fractions
The port fraction, \(\eta_{i}\), for each port \(i=(e,v)\) can be obtained with:
\[\eta_{i}=\frac{1}{2}\left[1-\sqrt{1-2\frac{r_{i}^{2}}{r^{2}}}\right], \tag{15}\]
where \(r\) corresponds to the sphere's radius and \(r_{i}\) to the radius of the entrance or viewing port. \(\eta_{a}\) describes the effect of the water absorption and is discussed next.
### The reflectance of entrance port
Photons can return to the entrance port (the west port) during the multiple reflections inside the sphere. Upon reaching the quartz window, these photons may undergo reflection or refraction. Based on Monte Carlo simulations (sec. IV), we estimated a maximum probability of 4% for photons to return to the entrance port. However, due to the window's geometry, the photons' incident direction is nearly perpendicular to the window, resulting in refraction and exit from the sphere. Furthermore, in the event of reflection, approximately 50% of the photons are absorbed by the light trap. As a result, we assume the reflectance of the entrance window, \(R_{e}\), to be zero.
### The PMT reflectance
As discussed earlier, the average reflectance of the PMT, \(R_{v}\), is not zero due to the reflection in both the PMT window and the photocathode. The reflectance of the PMT window is determined only by its refractive index, and as such it can be predicted using the Fresnel equations for dielectrics. As for the reflectance of the photocathode, we assumed it to be a constant \(R_{ph}\) with no dependence on the angle of incidence. The average reflectance of the quartz window is \(R_{q}=\overline{\mathcal{F}}_{n_{\text{SiO}_{2}}/n_{1}}\) (see eq. 3), where \(n_{\text{SiO}_{2}}\) is the refractive index of the fused quartz. By considering all the possible multiple reflections, we arrive at:
\[R_{v}=R_{q}+\frac{R_{ph}\left(1-R_{q}\right)^{2}}{1-R_{ph}R_{q}}. \tag{10}\]
### The hemispherical reflectances
The first reflection inside the sphere occurs at the normal direction and corresponds to the hemispherical reflectance, denoted as \(R_{1}\), while the bi-hemispherical reflectance, denoted as \(R\), gives the probability of reflection in the subsequent reflections. Both factors are defined with the following integrals:
\[R_{1} =\int_{2\pi}f_{r}\left(0,0;\theta_{r},\phi_{r}\right)\mathrm{d} \Omega_{r}, \tag{11}\] \[R =\frac{1}{\pi}\int_{2\pi}\int_{2\pi}f_{r}\left(\theta_{i},\phi_{i };\theta_{r},\phi_{r}\right)\mathrm{d}\Omega_{r}\mathrm{d}\Omega_{i}. \tag{12}\]
where \(f_{r}\) corresponds to the BRIDF given by eq. 1 and the differentials are defined by eq. 4.
The result of both integrals is:
\[R_{1}= F\left(0;\frac{n_{2}}{n_{1}}\right)\cdot E\left(\sigma_{\alpha} \right)+ \tag{13}\] \[+\frac{\rho}{1-\rho\overline{\mathcal{F}}_{n_{1}/n_{2}}}\left[1- F\left(0;\frac{n_{2}}{n_{1}}\right)\right]\cdot\left[1-\overline{\mathcal{F}}_{n_{ 1}/n_{2}}\right],\]
\[R= \overline{\mathcal{F}}_{n_{2}/n_{1}}+ \tag{14}\] \[+\frac{\rho}{1-\rho\overline{\mathcal{F}}_{n_{1}/n_{2}}}\left[1 -\overline{\mathcal{F}}_{n_{2}/n_{1}}\right]\cdot\left[1-\overline{\mathcal{F }}_{n_{1}/n_{2}}\right],\]
where the factor \(E\) is the fraction of the specularly reflected light from the first reflection on the east port that remains inside the sphere instead of escaping back through the entrance port. This factor is zero for smooth surfaces (\(\sigma_{\alpha}=0\)) and increases with the roughness of the surface, reaching 0.6 for \(\sigma_{\alpha}=0.17\).
### The light absorption
The factor \(\eta_{a}\) quantifies the reduction in the throughput of the sphere due to light absorption. It is given by the product of the absorption coefficient \(a_{\text{abs}}\) and the average path length \(d_{\text{mean}}\) between two consecutive reflections inside the sphere, as follows:
\[\eta_{a}=d_{\text{mean}}\cdot a_{\text{abs}}, \tag{15}\]
To estimate \(d_{\text{mean}}\), we assume that the reflected photons follow a Lambertian distribution. For a photon leaving a point on the surface of a sphere of radius \(r\), the distance to the point where it next hits the surface is \(2r\cos\theta\), where \(\theta\) is the angle between its direction and the inward normal at the starting point. Therefore, the average distance between two consecutive reflections inside the sphere is:
\[d_{\text{mean}}=\frac{4r}{3}. \tag{16}\]
Here, \(r\) is the radius of the sphere. Note that the factor \(\nicefrac{{4}}{{3}}\) follows from averaging \(2r\cos\theta\) over the Lambertian (cosine-weighted) angular distribution, for which \(\left\langle\cos\theta\right\rangle=\nicefrac{{2}}{{3}}\).
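The value \(4r/3\) is easy to verify numerically by sampling Lambertian (cosine-weighted) emission directions from a point on the wall and averaging the chord length \(2r\cos\theta\); a minimal check:

```python
import math
import random

def mean_chord(radius=1.0, n_samples=500_000, seed=0):
    """Average distance between consecutive diffuse (Lambertian) reflections inside a sphere."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        cos_theta = math.sqrt(rng.random())    # cosine-weighted sampling: cos(theta) = sqrt(u)
        total += 2.0 * radius * cos_theta      # chord length from a point on the wall
    return total / n_samples

print(mean_chord(), 4.0 / 3.0)   # both close to 1.333 for a unit-radius sphere
```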
### The effect of the baffle and roughness
To account for the effect of the baffle and the surface's roughness, we replaced the sphere's reflectance, \(R\), in eq. 7 with \(R^{b}\). The exponent \(b\) was obtained by adjusting this factor to reproduce the simulation output in the region \(\rho\in[0.8,0.99]\). It was determined to be 1.07 in the air and 1.12 in the liquid.
## Appendix D Correction to the Incident Flux
The incident flux, \(\Phi_{I}\), is given by the equation 8, which depends on the reflectance of the PMT window, \(R_{v}\), and the reflectance of the entrance port, \(R_{e}\). Since both the PMT window and the entrance window of the west port are formed by two interfaces, the reflectance accounts for multiple reflections resulting in a geometric series:
\[R_{v}=Q+\frac{(1-Q)^{2}}{1-Q\cdot R_{ph}}R_{ph},\quad\text{and} \tag{21}\] \[R_{e}=Q+\frac{(1-Q)^{2}}{1-Q\cdot V}V, \tag{22}\]
where \(Q=F\left(0;\frac{n_{\text{SiO}_{2}}}{n_{1}}\right)\) is the reflection probability between the air or liquid inside the sphere and the fused quartz window of the PMT or the entrance port. \(V=F\left(0;n_{\text{SiO}_{2}}\right)\) is the reflection probability between the fused quartz window of the entrance port and the outside of the sphere, which is air.
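Both expressions are the same two-interface geometric series; a small sketch, with placeholder refractive indices and a placeholder photocathode reflectance (at normal incidence, \(F(0;n)=\left((n-1)/(n+1)\right)^{2}\)):

```python
def fresnel_normal(n_rel):
    """Normal-incidence Fresnel reflectance for relative refractive index n_rel."""
    return ((n_rel - 1.0) / (n_rel + 1.0)) ** 2

def two_interface_reflectance(first, second):
    """Combined reflectance of two interfaces including multiple internal reflections (eqs. D1/D2)."""
    return first + (1.0 - first) ** 2 * second / (1.0 - first * second)

# Placeholder values for illustration: fused quartz ~1.50 against air, photocathode reflectance ~0.20.
Q = fresnel_normal(1.50 / 1.00)        # sphere medium | quartz face
V = fresnel_normal(1.50)               # quartz | outside-air face
R_ph = 0.20

R_v = two_interface_reflectance(Q, R_ph)   # PMT window followed by the photocathode (eq. D1)
R_e = two_interface_reflectance(Q, V)      # the two faces of the entrance window (eq. D2)
print(R_v, R_e)
```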
## Acknowledgements
This work was supported by the Portuguese Foundation for Science and Technology (FCT) under the award numbers PTDC/FIS-PAR/2831/2020, and POCI/01-0145-FEDER-029147, PTDC/FIS-PAR/29147/2017, funded by OE/FCT, Lisboa2020, Compete2020, Portugal2020, FEDER. Claudio Silva was supported by the IF/00877/2015 funded by FCT.
|
2310.07652 | LLM4Vis: Explainable Visualization Recommendation using ChatGPT | Data visualization is a powerful tool for exploring and communicating
insights in various domains. To automate visualization choice for datasets, a
task known as visualization recommendation has been proposed. Various
machine-learning-based approaches have been developed for this purpose, but
they often require a large corpus of dataset-visualization pairs for training
and lack natural explanations for their results. To address this research gap,
we propose LLM4Vis, a novel ChatGPT-based prompting approach to perform
visualization recommendation and return human-like explanations using very few
demonstration examples. Our approach involves feature description,
demonstration example selection, explanation generation, demonstration example
construction, and inference steps. To obtain demonstration examples with
high-quality explanations, we propose a new explanation generation
bootstrapping to iteratively refine generated explanations by considering the
previous generation and template-based hint. Evaluations on the VizML dataset
show that LLM4Vis outperforms or performs similarly to supervised learning
models like Random Forest, Decision Tree, and MLP in both few-shot and
zero-shot settings. The qualitative evaluation also shows the effectiveness of
explanations generated by LLM4Vis. We make our code publicly available at
\href{https://github.com/demoleiwang/LLM4Vis}{https://github.com/demoleiwang/LLM4Vis}. | Lei Wang, Songheng Zhang, Yun Wang, Ee-Peng Lim, Yong Wang | 2023-10-11T16:51:46Z | http://arxiv.org/abs/2310.07652v2 | # LLM4Vis: Explainable Visualization Recommendation using ChatGPT
###### Abstract
Data visualization is a powerful tool for exploring and communicating insights in various domains. To automate visualization choice for datasets, a task known as visualization recommendation has been proposed. Various machine-learning-based approaches have been developed for this purpose, but they often require a large corpus of dataset-visualization pairs for training and lack natural explanations for their results. To address this research gap, we propose LLM4Vis, a novel ChatGPT-based prompting approach to perform visualization recommendation and return human-like explanations using very few demonstration examples. Our approach involves feature description, demonstration example selection, explanation generation, demonstration example construction, and inference steps. To obtain demonstration examples with high-quality explanations, we propose a new explanation generation bootstrapping to iteratively refine generated explanations by considering the previous generation and template-based hint. Evaluations on the VizML dataset show that LLM4Vis outperforms or performs similarly to supervised learning models like Random Forest, Decision Tree, and MLP in both few-shot and zero-shot settings. The qualitative evaluation also shows the effectiveness of explanations generated by LLM4Vis. We make our code publicly available at [https://github.com/demoleiwang/LLM4Vis](https://github.com/demoleiwang/LLM4Vis).
## 1 Introduction
Data visualization is a powerful tool for exploring data, communicating insights, and making informed decisions across various domains, such as business, scientific research, social media and journalism Munzner (2014); Ward et al. (2010). However, creating effective visualizations requires familiarity with data and visualization tools, which can take much time and effort Dibia and Demiralp (2019). A task that automates the choice of visualization for an input dataset, also known as _visualization recommendation_, has been proposed.
So far, visualization recommendation works can be categorized into rule-based and machine learning-based approaches Hu et al. (2019); Li et al. (2021); Zhang et al. (2023). Rule-based approach Mackinlay (1986); Vartak et al. (2015); Demiralp et al. (2017) leverages data characteristics and visualization principles to predict visualizations, but suffers from the limited expressibility and generalizability of rules. Machine learning-based approach Hu et al. (2019); Wongsuphasawat et al. (2015); Zhou et al. (2021) learns machine learning (ML) or deep learning (DL) models from dataset-visualization pairs and these models can offer greater recommendation accuracy and scalability. Existing ML/DL models, however, often need a large corpus of dataset-visualization pairs in their training and they could not provide explanations for the recommendation results. Recently, a machine learning-based work, KG4Vis Li et al. (2021), leverages knowledge graphs to achieve explainable visualization recommendation. Nevertheless, KG4Vis still requires supervised learning using a large data corpus and its explanations are generated based on predefined templates, which constrain the naturalness and flexibility of explanations.
Recently, large language models (LLMs) such as ChatGPT OpenAI (2022) and GPT-4 OpenAI (2023) have demonstrated strong reasoning abilities using in-context learning Brown et al. (2020); Zhang et al. (2022); Chowdhery et al. (2022). The key idea behind this is to use analogical exemplars for learning Dong et al. (2022). Through in-context learning, LLMs can effectively perform complex tasks, including but not limited to mathematical reasoning Wei et al. (2022), visual question answering Yang et al. (2022), and tabular classification (Hegselmann et al., 2023) without supervised learning. By prompting the pretrained LLM to perform tasks using in-context learning, we avoid the overheads of parameter updates when adapting the LLM to a new task.
Inspired by the excellent performance of ChatGPT on natural language tasks (Qin et al., 2023; Li et al.; Sun et al., 2023; Gilardi et al., 2023), we explore the possibility of leveraging ChatGPT for explainable visualization recommendation. Specifically, we propose _LLM4Vis_, a novel ChatGPT-based in-context learning approach for visualization recommendation with natural human-like explanations by learning from very few dataset-visualization pairs. LLM4Vis consists of several key steps: feature description, demonstration example selection, explanation generation bootstrapping, prompt construction, and inference for explainable visualization recommendation. Firstly, feature description is used to quantitatively represent the characteristics of tabular datasets, which makes it easier to analyze and comprehend tabular datasets using ChatGPT. Demonstration example selection is then employed to prevent the input length from exceeding the maximum length of ChatGPT by retrieving the \(K\) nearest labeled data examples. Next, we propose a new iterative refinement strategy that uses the previous generation and a template-based hint to obtain a higher-quality recommendation explanation and a score for each visualization type before prompt construction. Finally, the constructed prompt is used to guide ChatGPT to recommend visualization types for a test tabular dataset while providing recommendation scores and human-like explanations.
We evaluate the visualization recommendations of LLM4Vis by comparing their accuracy with strong machine-learning-based baselines from VizML (Hu et al., 2019), such as Decision Trees, Random Forests, and MLP. The visualization recommendation results demonstrate that LLM4Vis outperforms all the baselines in few-shot and full-sample training settings. Furthermore, the evaluations conducted by an LLM and humans show that the generated explanation of the test data example matches the predicted score. Our contributions are summarized below:
* We present LLM4Vis, a novel ChatGPT-based prompting approach for visualization recommendation, which can achieve accurate visualization recommendations with human-like explanations.
* We propose a new explanation generation bootstrapping method to generate high-quality recommendation explanations and scores for prompt construction.
* Experiment results show the usefulness and effectiveness of LLM4Vis, encouraging further exploration of LLMs for visualization recommendations.
## 2 Related Work
Prior studies on automatic visualization recommendation approaches can be categorized into two groups: unexplainable visualization recommendation approaches and explainable visualization approaches (Wang et al., 2021). Unexplainable visualization recommendation approaches, including Data2vis (Dibia and Demiralp, 2019), VizML (Hu et al., 2019), and Table2Chart (Zhou et al., 2021), can recommend suitable visualizations for an input dataset, but cannot provide the reasoning behind the recommendation to users, making them black box methods. Explainable visualization recommendation approaches provide explanations for their recommendation results, enhancing transparency and user confidence in the recommendations. Most rely on human-defined rules, such as Show Me (Mackinlay et al., 2007) and Voyager (Wongsuphasawat et al., 2015). But rule-based approaches are often time-consuming and resource-intensive, and require visualization experts' manual specifications. To address such limitations, Li et al. (2021) proposed a knowledge graph-based recommendation method (KG4Vis) that learns the rules from existing visualization instances. To provide human-like explanations, this paper proposes to leverage ChatGPT to recommend appropriate visualizations.
## 3 LLM4Vis Method
### Overview
In this section, we present the proposed approach LLM4Vis. As shown in Figure 1, LLM4Vis consists of several key steps: feature description, demonstration example selection, explanation generation bootstrapping, prompt construction, and inference. To save space, we show the exact wording of all prompts we employ in LLM4Vis in the Appendix.
### Feature Description
Most large language models, such as ChatGPT [14], are trained based on text corpora. To allow ChatGPT to take a tabular dataset as input, we can first use predefined rules to transform it into sets of data features that quantitatively represent its characteristics. Subsequently, these features can be serialized into a text description.
Following VizML [13] and KG4Vis [11], we extract 80 _cross-column_ data features that capture the relationships between columns and 120 _single-column_ data features that quantify the properties of each column. We categorize the data features related to columns into _Types_, _Values_, and _Names_. Types correspond to the columns' data types, Values capture statistical features such as distribution and outliers, and Names are related to columns' names.
Previous works [10, 15] perform serialization mainly through the use of rules, templates, or language models. In this paper, to ensure grammatical correctness, flexibility, and richness, we follow the LLM serialization method proposed by TabLLM [10]. Specifically, our approach involves providing a prompt that instructs ChatGPT to generate for each tabular dataset a comprehensive text description that analyzes the feature values from both single-column and cross-column perspectives. The feature description is then used to construct concise but informative demonstration examples.
### Demonstration Example Selection
Due to the maximum input length restriction, a ChatGPT prompt could only accommodate a small number of demonstration examples. The selection of good demonstration samples from a large set of labeled data is therefore crucial. Instead of randomly selecting examples that may not be relevant to the target test tabular dataset [11], we first represent each tabular dataset by converting its features to a vector. Then, we use a clustering algorithm to select a representative subset of examples from the labeled set. The clustering algorithm creates \(C\) clusters, and we choose \(R\) representative examples from each cluster, resulting in a subset of size \(M=C\times R\) as the retrieval set. Finally, we retrieve \(K\) training data examples with the highest similarity scores with a target data example based on the cosine similarity scores of their vector representations from the retrieval set.
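A sketch of the two-stage selection described above, using scikit-learn KMeans for the clustering stage and cosine similarity for the retrieval stage. How the feature vectors are built, and how the \(R\) representatives per cluster are chosen, are assumptions of this sketch rather than details specified here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def build_retrieval_set(train_vectors, n_clusters=4, per_cluster=15, seed=0):
    """Pick C x R representative labeled examples: the R members closest to each of C cluster centers."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(train_vectors)
    selected = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(train_vectors[members] - km.cluster_centers_[c], axis=1)
        selected.extend(members[np.argsort(dists)[:per_cluster]].tolist())
    return selected

def retrieve_demonstrations(test_vector, retrieval_vectors, k=8):
    """Return indices of the K retrieval-set examples most similar to the test example (cosine similarity)."""
    sims = cosine_similarity(test_vector.reshape(1, -1), retrieval_vectors)[0]
    return np.argsort(-sims)[:k]

# Usage with random placeholder feature vectors.
X = np.random.default_rng(0).normal(size=(500, 120))
retrieval_idx = build_retrieval_set(X)
demo_idx = retrieve_demonstrations(X[0], X[retrieval_idx])
```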
### Explanation Generation Bootstrapping
Figure 1: A detailed illustration of LLM4Vis. (a) The process for converting a labeled tabular dataset to a demonstration example of the final prompt, including feature extraction, feature description, and explanation generation bootstrapping. (b) The process for visualization type recommendation of a test tabular dataset, involving demonstration example selection, prompt construction, and inference.

Each labeled data example \(X_{i}\) comes with only one ground truth label \(Y_{i}\), but not the explanation required to be used in a demonstration example. We therefore propose a prompt to leverage the built-in knowledge of ChatGPT to recommend the appropriate visualization and the corresponding explanation for each labeled dataset. Our strategy involves instructing ChatGPT to generate a response in a JSON format, where the keys correspond to four possible visualization types \(\{Y_{LC},Y_{SP},Y_{BC},Y_{BP}\}\) (\(LC\): line chart, \(SP\): scatterplot, \(BC\): bar chart, \(BP\): box plot) and the values are recommendation scores \(\{S_{LC},S_{SP},S_{BC},S_{BP}\}\). Furthermore, we prompt ChatGPT to generate explanations \(\{Ex_{LC},Ex_{SP},Ex_{BC},Ex_{BP}\}\) for its prediction of each visualization type in an iterative process.
Specifically, we employ zero-shot prompting with the feature description of a tabular dataset to ask ChatGPT to generate scores \(\{S^{1}_{LC},S^{1}_{SP},S^{1}_{BC},S^{1}_{BP}\}\) for all visualization types and provide explanations \(\{Ex^{1}_{LC},Ex^{1}_{SP},Ex^{1}_{BC},Ex^{1}_{BP}\}\) supporting these scores' assignment to each visualization type. The sum of these scores is required to be 1. Subsequently, these scores and explanations are revised by an iterative refinement process that terminates when the ground truth visualization type \(Y_{i}\) receives the highest score which also exceeds the second-highest score by at least a margin of 0.1. The final explanations and scores are denoted by \(\{Ex^{f}_{LC},Ex^{f}_{SP},Ex^{f}_{BC},Ex^{f}_{BP}\}\) and scores \(\{S^{f}_{LC},S^{f}_{SP},S^{f}_{BC},S^{f}_{BP}\}\). However, if the ground truth visualization type does not meet the aforementioned conditions, we develop a hint and append it to the initial zero-shot prompting to instruct ChatGPT to produce a more accurate output. An example hint template is as follows: _"[a] may be more suitable than [b]. However, the previous scores were [c]"_. The _[a]_ slot is for the ground truth label, the _[b]_ slot is for the incorrect label with the highest score, and the _[c]_ slot is for the previously predicted score for each visualization type. In the Experiment section, we compare two hint strategies, including using ground truth (GT-As) and random labels (Rand-As) as hints. The results can be found in Figure 2.
Through this iterative refinement, we can obtain higher-quality visualization type prediction with scores and corresponding explanations. Note that if the labeled dataset fails to meet the stopping condition within the maximum iteration steps, we will delete this data example from the retrieval set.
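A schematic of the refinement loop described above. `ask_chatgpt` is a hypothetical wrapper around the chat API that returns a dict of scores (summing to 1) and a dict of explanations for a given feature description and hint; the hint string follows the template quoted above, while the maximum number of refinement steps is an assumption of this sketch.

```python
VIS_TYPES = ["line chart", "scatterplot", "bar chart", "box plot"]

def bootstrap_explanation(feature_description, ground_truth, ask_chatgpt, margin=0.1, max_steps=5):
    """Iteratively refine scores and explanations until the ground-truth type wins by the required margin."""
    assert ground_truth in VIS_TYPES
    hint = ""
    for _ in range(max_steps):
        scores, explanations = ask_chatgpt(feature_description, hint)   # scores: dict summing to 1
        ranked = sorted(scores, key=scores.get, reverse=True)
        best, second = ranked[0], ranked[1]
        if best == ground_truth and scores[best] - scores[second] >= margin:
            return scores, explanations          # accept as a demonstration example
        hint = (f"{ground_truth} may be more suitable than {best}. "
                f"However, the previous scores were {scores}.")
    return None                                  # drop the example from the retrieval set
```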
### Prompt Construction and Inference
After retrieving \(K\) nearest labeled samples from the retrieval set for a test data sample, along with their feature descriptions, refined explanations, and refined scores, each demonstration example is constructed with the feature description, task instruction, recommended visualization types with scores, and explanations. Then, we incorporate the feature description of a test data example into a pre-defined template. Next, the constructed demonstration examples and the completed template for the test data example are concatenated and fed into ChatGPT to perform visualization type recommendations. Finally, we extract the recommended visualizations and explanations from the ChatGPT output.
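A sketch of how the final prompt might be assembled from the retrieved demonstrations and the test description. The template strings are placeholders (the exact prompt wording is given in the paper's Appendix), and `chat_completion` stands in for the API call and response parsing.

```python
def build_prompt(demonstrations, test_description):
    """Concatenate retrieved demonstration examples with the test feature description."""
    blocks = []
    for demo in demonstrations:   # each demo: dict with 'description', 'scores', 'explanations'
        blocks.append(
            "Task: recommend a visualization type for the table described below.\n"
            f"Feature description: {demo['description']}\n"
            f"Recommendation scores: {demo['scores']}\n"
            f"Explanations: {demo['explanations']}\n"
        )
    blocks.append(
        "Task: recommend a visualization type for the table described below.\n"
        f"Feature description: {test_description}\n"
        "Return a JSON object with a recommendation score and an explanation for each visualization type."
    )
    return "\n".join(blocks)

def recommend(demonstrations, test_description, chat_completion):
    """Run inference and return the parsed recommendation scores and explanations."""
    return chat_completion(build_prompt(demonstrations, test_description))
```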
## 4 Evaluation
### Evaluation Setup
**Dataset.** We utilize the VizML corpus Hu et al. (2019) to construct our training, validation, and test sets. For testing, we select a subset of 100 data-visualization pairs from the corpus, comprising 25 line charts, 25 scatter plots, 25 bar charts, and 25 box plots. We employ two different training settings for our experiments. In the first setting, we use a set of 5000 data-visualization pairs from the corpus to train all baseline models. In the second, few-shot setting, we employ clustering techniques Pedregosa et al. (2011) to extract \(4\times 15\) data-visualization pairs from the 5000 pairs to build the retrieval set of size \(M=60\).
**Large Language Model Setup.** We conduct experiments using the gpt-3.5-turbo-16k version of GPT-3.5, widely known as ChatGPT. We have chosen ChatGPT because it is a publicly available model commonly used to evaluate the performance of large language models in downstream tasks Sun et al. (2023); Qin et al. (2023); Li et al. (2014). To conduct our experiments, we utilize the OpenAI API, which provides access to ChatGPT. Our experiments were done between June 2023 and July 2023, and the maximum number of tokens allowed for
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline
\multirow{2}{*}{**Setting**} & \multirow{2}{*}{**Methods**} & \multicolumn{5}{c}{**Hits@2**} \\
 & & **Line** & **Scatter** & **Bar** & **Box** & **Overall** \\ \hline
**Full Samples** & Decision Tree & 57.3 & 60.0 & **100** & 56.0 & 68.3 \\
 & Random Forest & 92.0 & **100** & 90.7 & 32.0 & 78.7 \\
 & MLP & **97.3** & **100** & 93.3 & 24.0 & 78.7 \\ \hline
**Few-Shot (4)** & Decision Tree & 42.7 & 12.0 & 100 & 41.3 & 49.0 \\
 & Random Forest & 66.7 & 78.7 & 38.7 & 65.3 & 62.0 \\
 & MLP & 70.7 & 85.3 & 44.0 & 45.3 & 61.0 \\
 & **LLM4Vis** & 53.3 & 80.0 & 84.0 & 93.3 & 77.7 \\ \hline
**Few-Shot (Dynamic)** & LLM-SP-Random & 36.0 & 86.0 & 96.0 & 46.0 & 66.0 \\
 & LLM-SP-Retrieval & 68.0 & 94.0 & 90.0 & 32.0 & 71.0 \\
 & **LLM4Vis-Random** & 46.7 & 69.3 & 84.0 & 90.7 & 72.7 \\
 & **LLM4Vis-Retrieval** & 62.4 & 69.0 & 86.8 & **97.2** & **85.7** \\ \hline
**Zero-Shot** & LLM-SP & 64.0 & 84.0 & 86.0 & 64.0 & 65.0 \\
 & **LLM4Vis** & 64.0 & 88.0 & 76.0 & 89.3 & 79.3 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Results of our quantitative evaluation, with the best results highlighted in bold. LLM4Vis-Random refers to randomly selecting demonstration examples from the retrieval set, whereas LLM4Vis-Retrieval retrieves the \(K\) nearest labeled data examples from the retrieval set. Note that LLM4Vis using 5 demonstrations performs better than machine learning baselines trained with the full 5000 samples, and provides human-like explanations that are unattainable with these baselines.
generation is set to 1024. To enhance the determinism of the generated output, we set the temperature to 0. Due to the input length restriction of ChatGPT (i.e., 16,384 tokens), we limit the number of in-context demonstrations \(K\) to 8.
**Baselines.** We compare with strong visualization type recommendation baselines from VizML [14]. Specifically, we compare our method with Decision Tree, Random Forest, and MLP baselines, which are implemented using scikit-learn with default settings [12]. With full data training, these strong baselines are expected to outperform few-shot methods. We also compare our method to a simple prompting technique named LLM-SP. In the zero-shot setting, the instruction asks ChatGPT to recommend a visualization type based on the extracted features of the given tabular dataset. In the few-shot setting, each demonstration example in the prompt is composed of an instruction, the extracted features of a given tabular dataset, and the corresponding labeled visualization type.
**Metrics.** Our proposed method makes two visualization design choices based on the large language models directly. Referring to KG4Vis [11], we employ a commonly used metric to assess the effectiveness of our approach: _Hits@2_, which indicates the proportion of correct visualization design choices among the top two options.
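For reference, a minimal sketch of the Hits@2 computation from per-type recommendation scores:

```python
def hits_at_2(predicted_scores, ground_truths):
    """predicted_scores: list of dicts mapping visualization type -> score;
    ground_truths: list of ground-truth visualization types.
    Returns the percentage of examples whose ground truth is in the top two."""
    hits = 0
    for scores, truth in zip(predicted_scores, ground_truths):
        top2 = sorted(scores, key=scores.get, reverse=True)[:2]
        hits += truth in top2
    return 100.0 * hits / len(ground_truths)
```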
### Main Results
Table 1 shows that our few-shot LLM4Vis outperforms all baselines, including Decision Tree, Random Forest, and MLP, in the full sample training setting, which indicates that LLMs can effectively recommend appropriate visualization types by learning from limited demonstration examples and capitalizing on built-in background knowledge of visualization. Note that even zero-shot LLM4Vis can outperform these strong baselines. We consider two few-shot settings: _fixed_ and _dynamic_. In the fixed setting, the same demonstration examples are chosen for all test examples, and LLM4Vis outperforms all baselines. In the dynamic setting, we select relevant demonstration examples for each test example; LLM4Vis with retrieved demonstrations outperforms the variant with randomly selected ones. This indicates that relevant demonstration examples can provide useful information to guide the LLM in recommending a suitable visualization type for the test tabular dataset.
### In-depth Analysis
**Effect of each Component of LLM4Vis.** Figure 2 presents the comparison results of variants of LLM4Vis, wherein one component is either removed or replaced. The findings reveal that the absence of explanations, feature descriptions, and recommendation scores in the prompt consistently leads to reduced performance in both zero-shot and few-shot settings. With more iterations of explanation refinement, the performance improves. Replacing the proposed hint with the ground truth label or a random label results in a substantial drop in performance. Similarly, using the prediction of the nearest demonstration example as the test example's prediction also leads to significant performance degradation, which indicates that the LLM effectively learns from the given demonstration examples rather than merely copying them. Overall, all components of the proposed LLM4Vis contribute to recommendation accuracy.
**Effect of the Number of In-context Examples.** We assess the effect of the number of demonstration examples on LLM4Vis's performance. Specifically, we examine LLM4Vis using different numbers of nearest demonstration examples, ranging from 1 to 7. The results, depicted in Figure 3(a), show that more demonstration examples lead to better performance, despite a drop when the number of demonstration examples goes from 3 to 4.
**Effect of the Size of Retrieval Set.** We quantify the impact of the size of the retrieval set. We test
Figure 2: Effect of each component of LLM4Vis. All methods are evaluated on the same test dataset. **All**: keeping all modules unchanged. **Random**: randomly choosing one visualization type as the recommendation. **-Ex**: removing explanation in the prompt. **-Des**: removing feature description in the prompt. **-Rank**: predicting visualization type directly. **Nearest**: predicting using the nearest example. **Iter-1**: using explanation without refinement in the prompt. **Iter-2**: using explanation with one step refinement in the prompt. **GT-As**: generating the explanation in the prompt using the ground truth label as the hint. **Rand-As**: generating the explanation in the prompt using the random label as the hint.
LLM4Vis on retrieval sets of varying sizes, ranging from 10 to 60 examples. Figure 3(b) shows that the performance of LLM4Vis improves as the size of the retrieval set increases. This is likely because a larger retrieval set provides more relevant nearest neighbors, indicating that LLM4Vis can achieve better results by scaling up the retrieval set. As the retrieval set size increases from 50 to 60, however, we observe a smaller performance improvement, suggesting that the amount of information in the \(K\) nearest demonstration examples that is relevant to the test data does not grow proportionally with the retrieval set size.
**Effect of Base Large Language Models.** We also evaluate LLM4Vis using various LLMs, including different versions of GPT-3.5. According to official guidelines, ChatGPT has the highest capability, and text-davinci-002 is the least capable model among the three LLMs. As expected, Figure 3(c) illustrates that model performance improves as the model capability increases from text-davinci-002 to ChatGPT. Overall, these results indicate that LLMs with stronger capabilities usually deliver much better recommendation accuracy.
**Effect of In-context Example Order.** We compare three demonstration orders: random (shuffling the \(K\) nearest neighbors), furthest (the least similar samples are placed first), and nearest (the most similar samples are placed first). The results in Figure 3(d) show that LLM4Vis is sensitive to the order of the \(K\) selected demonstrations. Specifically, employing the “furthest” ordering within the framework of LLM4Vis yields the lowest results, whereas the “nearest” ordering yields the strongest performance. This indicates that relevant demonstrations can stabilize the in-context learning of LLMs.
**Explanation Evaluation.** In this section, we assess the consistency between generated explanations and predicted scores of visualization type recommendations on the test tabular datasets. Two evaluation approaches are employed: LLM-based evaluation and human evaluation.
The LLM-based evaluation measures the Pearson correlation between the predicted scores generated by LLM4Vis and scores predicted by ChatGPT based on the explanations generated by LLM4Vis. A higher Pearson correlation signifies stronger consistency between the predicted scores and explanations. We obtain a Pearson correlation of 0.78 for zero-shot LLM4Vis and 0.92 for few-shot LLM4Vis. These findings indicate that the few-shot LLM4Vis exhibits greater consistency between its predicted scores and generated explanations than the zero-shot LLM4Vis.
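A minimal sketch of this consistency check, assuming the scores of all test examples and visualization types are flattened before correlating (SciPy's `pearsonr`):

```python
import numpy as np
from scipy.stats import pearsonr

def consistency(scores_by_llm4vis, scores_from_explanations):
    """Both arguments: arrays of shape (num_examples, 4), one column per
    visualization type. Scores are flattened before computing the correlation."""
    a = np.asarray(scores_by_llm4vis).ravel()
    b = np.asarray(scores_from_explanations).ravel()
    r, _ = pearsonr(a, b)
    return r
```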
Besides the LLM-based evaluation, we manually inspect ten correct recommendations to further validate the consistency of the generated explanations and predicted scores. Our examination shows that nine out of the ten examples demonstrate consistent alignment between their explanations and predicted scores. The generated explanation and predicted score of the remaining instance are inconsistent, likely because the predicted score of the ground truth label is low and only ranks second highest.
## 5 Conclusion
In this paper, we propose LLM4Vis, a novel ChatGPT-based in-context learning approach for visualization recommendation, which enables the generation of accurate visualization recommendations with human-like explanations by learning from only a few dataset-visualization pairs. Our approach consists of several key steps, including feature extraction, feature description, explanation generation, demonstration example selection, prompt construction, and inference. Our evaluation of the recommendation results and explanations demonstrates the effectiveness and explainability of LLM4Vis, which encourages further exploration of large language models for this task.
LLM-based visualization recommendations can empower many startups and LLM-based applications to advance data analysis, enhance insight communication, and help decision-making. In future
Figure 3: Effect of the number of in-context examples (a), the number of examples in the retrieval set (b), different base large language model (c), and the ordering of K nearest examples as in-context examples (d).
work, we plan to explore the possibility of deploying LLM4Vis in real-world data analysis and visualization applications, and to further demonstrate its effectiveness and usability with data analysts and common visualization users. It would also be interesting to investigate the use of other large language models with multimodal capabilities, such as GPT-4, for visualization recommendation.
## 6 Acknowledgments
This project is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 (Proposal ID: T2EP20222-0049). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
|
2303.08459 | Forecasting Intraday Power Output by a Set of PV Systems using Recurrent
Neural Networks and Physical Covariates | Accurate intraday forecasts of the power output by PhotoVoltaic (PV) systems
are critical to improve the operation of energy distribution grids. We describe
a neural autoregressive model that aims to perform such intraday forecasts. We
build upon a physical, deterministic PV performance model, the output of which
is used as covariates in the context of the neural model. In addition, our
application data relates to a geographically distributed set of PV systems. We
address all PV sites with a single neural model, which embeds the information
about the PV site in specific covariates. We use a scale-free approach which
relies on the explicit modeling of seasonal effects. Our proposal repurposes a
model initially used in the retail sector and discloses a novel truncated
Gaussian output distribution. An ablation study and a comparison to alternative
architectures from the literature shows that the components in the best
performing proposed model variant work synergistically to reach a skill score
of 15.72% with respect to the physical model, used as a baseline. | Pierrick Bruneau, David Fiorelli, Christian Braun, Daniel Koster | 2023-03-15T09:03:58Z | http://arxiv.org/abs/2303.08459v3 | Hybrid-Physical Probabilistic Forecasting for a Set of Photovoltaic Systems using Recurrent Neural Networks
###### Abstract
Accurate intra-day forecasts of the power output by PhotoVoltaic (PV) systems are critical to improve the operation of energy distribution grids. We describe a hybrid-physical model, which aims at improving deterministic intra-day forecasts, issued by a PV performance model fed by Numerical Weather Predictions (NWP), by using them as covariates in the context of an autoregressive recurrent neural model. Our proposal repurposes a neural model initially used in the retail sector, and discloses a novel truncated Gaussian output distribution. We experimentally compare many model variants to alternatives from the literature, and an ablation study shows that the components in the best performing variant work synergistically to reach a skill score of 7.54% with respect to the NWP-driven PV performance model baseline.
## 1 Introduction
Grids of PV systems have become an essential component in modern and future energy distribution systems. However, due to weather conditions, the magnitude of PV power production is fluctuating, while the supply to consumers needs to be adapted to the demand at each point in time. Distribution system operators (DSOs) have increasing and specific requirements for PV power forecasts. Indeed, fluctuating renewables could cause operational issues (e.g. grid congestion), which call for active grid operation. In this context, _intra-day_ forecasts of PV power (i.e., forecasts for the whole day to come, issued at a fixed time of day) are critical to facilitate operations. Also, many forecasting models issue point forecasts but hardly characterize the uncertainty attached to them, whereas such information can be critical for a DSO in order to quantify and mitigate risks in an optimal way.
In [10], PV power production is forecasted using a deterministic PV performance model, which involves regional solar irradiance forecasts issued by a Numerical Weather Prediction (NWP) service as inputs. The underlying hypothesis is that solar irradiance is fairly smooth over limited regional areas, and the production curve specific to a PV system will be mainly influenced by how it converts this solar energy to PV power according to its specifications. In [11], authors of the present paper introduced a model which performs intra-day probabilistic forecasts of PV power production. It combines the PV
performance model above with a model based on Long-Short-Term Memory (LSTM) cells [14]. This kind of combination of a model based on a set of physical equations with a statistical model is referred to as a _hybrid-physical_ approach [1]. For training and evaluation, it uses real data provided by Electris, a DSO in Luxembourg. Results show that this new model improves the baseline performance, while coping with local effects such as PV system shading. The former paper rather targets solar energy specialists, with few details unveiled about how the Machine Learning model acting as a cornerstone of the approach has been designed and trained. The present paper aims at filling this gap, by providing entirely new material focusing on this complementary view. Specifically, the purpose of the present paper is to focus on neural time series forecasting aspects in the context of this application.
The specific contributions of the present work mainly focus on the design of a model architecture and a training procedure which meets the operational needs of a DSO. Our proposal is based on an existing LSTM-based model [15], which we present in a concise and effective way in order to make this work self-contained. In addition, we design a novel truncated Gaussian output component, which we plug in to the LSTM-based model. In Section 2, we give a structured survey of the related work which positions the problem faced by a DSO, and motivates which existing work could be reused or repurposed to suit our needs. After describing our model proposal in Section 3, we provide a thorough experimental evaluation in Section 4. Several variants of our model proposal are compared to alternative models from the literature, and an ablation study allows to emphasize the specific contribution of each of its components. Finally we recall some qualitative results to underline how local effects, tainting the PV performance model, are mitigated using our approach.
## 2 Related Work
In Section 2.1, we review seminal PV power forecasting methods. Then in Section 2.2, we survey time series forecasting as addressed in the literature on Machine Learning (ML), from the perspective of their repurposing potential to the PV power forecasting application. Section 2.3 reviews existing hybrid-physical models which relate the most closely to our proposal. Finally, Section 2.4 focuses on the peculiarities in terms of forecasting structure and validation which come with ML approaches to time series forecasting.
### PV Power Forecasting
Most approaches in PV power forecasting model the conversion chain of solar irradiance to electrical power in a PV system. Thus they follow a two-step approach: first, forecasting the solar irradiance, then converting this irradiance to PV power forecasts [1, 1]. The most common way to forecast solar irradiance relies on NWP systems such as the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System [2]. Every day, it issues hourly regional forecasts for a range of meteorological variables (including solar irradiance) for the 10 days to come, thus including the intra-day range. Intra-day may be defined as forecast horizons of up to 6h in the literature [1]. In this paper, we deviate from this
definition by considering intra-day as the next 24h, starting at midnight of the same day. To improve NWP forecasts, or to avoid having to rely on such systems, solar irradiance can also be forecasted using statistical methods and ML models. The simplest include persistence models, which are often adjusted using clear sky models [11]. [20] also review various ML techniques which have been employed for this purpose, e.g., AutoRegressive (AR) models, Feed-Forward Networks (FFN) and k-Nearest Neighbors. In this range of contributions, [12] address the intra-day hourly prediction of solar irradiance using an ensemble of FFN models. Specifically, they implement rolling forecasts by specializing each model to a current time and a prediction horizon.
PV power forecasting is reviewed in detail by [1]. In this landscape, several approaches aim at directly modelling the series of PV power values, without having to rely on solar irradiance forecasts. [10] propose short-term forecasts (\(<\)30 min) which exploit the cross-correlation of PV measurements in a grid of PV systems. They hypothesize that clouds casting over a given PV system have a lagged influence on other systems downwind. They optimize the associated time lag and cloud motion vector. [1] also consider a spatially distributed set of PV panels. They directly use PV power values, without converting proxy information such as solar irradiance or cloud density. Similarly to [10], they focus on correlations among stations to help account for intermittency due to clouds. They report that intra-day forecasts are useful for an energy trading strategy, while hour-ahead and intra-hour forecasts serve for managing demand response.
[14] present AR approaches to PV power forecasting. They focus on forecasting one and two hours ahead, where NWP models tend to under-perform. Several models are compared, among which are persistence, linear models such as AutoRegressive Integrated Moving Average (ARIMA) [15], and FFN. They find that FFNs perform best, with improvements brought by the optimization of FFN parameters, input selection and structure using a Genetic Algorithm. They conjecture that binning data according to the associated season and learning per-bin models should improve forecasting ability overall, even though this approach has recently been criticized [16].
### ML approaches for time series forecasting
Among other related work, Section 2.1 surveyed some contributions which involved ML methods to forecast solar irradiance and PV power production. In this section, we generalize this view by surveying recent work in time series forecasting at large. Methods in this section were generally not applied to the application context considered in the present paper, but could _a priori_ be repurposed. Besides neural and ARIMA models, seminal ways to forecast time series include the Croston method, which is an exponential smoothing model dedicated to intermittent demand forecasting [17]. It is notably used as a baseline method in [16], along with the Innovation State-Space Model (ISSM) [18].
Modern, so-called _deep_ neural network architectures exploit the structure of data: sequential, in the case of the Long-Short-Term Memory (LSTM) [19] and the Gated Recurrent Unit (GRU) [12]; 2D or 3D, in the case of convolutional networks. Even though recurrent models such as the LSTM might appear outdated, they are still popular in the recent literature thanks to improvements to training and inference procedures carried by modern toolboxes
such as Tensorflow [1] or MXNet [15], as well as the encoder-decoder mechanism, in which a context sequence is encoded and conditions the prediction of the target sequence. It has been initially codified for the GRU, and transferred to the LSTM, leading to continued usage in recent contributions [16, 17]. These models contrast with the seminal Feed-Forward Network (FFN), in which all layers are fully connected to previous and next layers, up to the activation layer [11].
Salinas et al. propose DeepAR, which implements flexible forecasting for univariate time series [17]. Formally, it defines a _context_ interval (a chunk of past values) and a _prediction_ interval (the set of values to be forecasted), and the model is optimized end-to-end w.r.t. prediction interval forecasts. This contrasts with linear models such as ARIMA, which are optimized for one time step ahead forecasts. Also, instead of point forecasts, DeepAR predicts model parameters, which can be used to compute sample paths, and empirical quantiles, which can be highly valuable in our context. In this case, a family of probability distributions has to be chosen so as to fit the time series at hand. The model was initially aimed at retail business applications, but it can be adapted to other types of data just by changing the family of the output probability distribution. It is based on the LSTM model architecture. The model supports the adjunction of covariates, i.e., time series which are available for both context and prediction interval at forecast time. By repeating the same value for the whole intervals, static covariates are also supported. Such flexibility meets all the requirements of our hybrid-physical approach (i.e., NWP covariates and PV system descriptive features).
In retail applications, input data may have highly variable magnitude (e.g., depending on item popularity or time in the year). The authors observe an approximate power law between item magnitude and frequency (i.e., values are less likely as they get large, and reciprocally). They claim that grouping items to learn group-specific models or performing group-specific normalizations, as previously done in the solar and PV power forecasting literature [14, 15], are not good strategies in such a case. They propose a simple alternative scheme, where samples are scaled by an item-dependent factor computed using the context interval.
[18] propose a time series classification model inspired by Inception-v4 [16]. Basically, they transfer the multi-scale pattern extraction capability of convolutional neural networks to 1D data such as time series. A convolutional encoder is tested by Wen et al. in the context of their multi-horizon quantile forecaster [21]. Instead of forecasting probabilistic model parameters, this model directly forecasts quantiles in a non-parametric fashion. However, it suffers from the quantile crossing problem: forecasted values may have ranks inconsistent with the quantile they are attached to.
[11] is another alternative to DeepAR. Similarly to [21], it does not rely on probability distribution outputs, and implements conditional quantile functions using regression splines instead. Spline parameters are fit using a neural network directly minimizing the Continuous Ranked Probability Score (CRPS), which is then used as a loss function. This results in a more flexible output distribution, and an alternative to other flexible schemes (e.g. mixture of distributions in the context of [17]). However, it currently
lacks a publicly available implementation.
Multivariate forecasting consists in modelling and forecasting multiple time series simultaneously, by contrast to univariate forecasting. The seminal way to achieve this is with the Vector AutoRegression (VAR) model, which is an extension of the linear autoregressive model to multiple variables [14]. As this model has a hard time dealing with many variables (e.g., items in the retail domain), neural network-based models such as DeepVAR were designed as an alternative [10]. It can be thought of as a multivariate extension to [11]. DeepVAR models interactions between time series, e.g., as resulting from causality or combined effects. It uses a Copula model, which models interactions between time series, and elegantly copes with time series of varying magnitude, alleviating the need for an explicit scaling mechanism. A single multivariate state variable underlying an LSTM model is used for all time series. Empirical cumulative distributions serve to compute sample paths and quantiles. Only static covariates (i.e., constant in the context and prediction intervals) were considered in this paper. They define a low-rank parametrization, which opens the possibility to deal with a very large number of time series.
Some prior work involved deep learning in the context of PV power forecasting. For example, [1] consider one hour ahead forecasts (instead of intra-day as aimed at in this paper). They used a single LSTM layer without the encoder-decoder mechanism. Also, they consider point forecasts. Data for two PV sites is used for the experiments, with roughly the same power magnitude for both sites (approx. 3.5kW). Models are trained for each site separately. Alternatively, in this paper we address all sites with a single model in a scale-free approach, handling an arbitrary number of sites with little to no model size overhead. Finally, the locations associated with these datasets are subject to a dry climate, which is simpler to predict [2]. Our application testbed is a temperate area, subject to frequent and abrupt changes on a daily basis, therefore much more challenging to predict.
[14] also address PV power forecasting, decorrelating scale-free forecasts produced with an LSTM from seasonal effects modelled separately using time correlation features and partial daily pattern prediction. However, they focus on forecasts aggregated at a daily scale, whereas we consider hourly data in this paper. In addition, our approach is end-to-end, with no independent modelling of seasonal effects.
### Hybrid-physical approaches
[1] focus on wind speed and power forecasting. They consider hourly forecasts for 72 hours ahead, using Numerical Weather Prediction (NWP) forecasts as additional inputs, thus proposing an early combination of observation data with NWP covariates. The recurrent model used then (diagonal recurrent neural networks [12]) was superseded by alternative models such as LSTM [10] and GRU [15] in the recent literature as seen in the previous section. Another early work is proposed by [13], who combine NWP covariates and neural networks for hourly and daily solar irradiance forecasting.
[17] present a regional PV power forecasting system, with hourly resolution up to 72h ahead. Their approach combines clustering and numerical optimization, and it is compared to regression methods such as ElasticNet [18], SARIMAX [19], or Random Forests [19]. The CRPS metric is used
for evaluation. Their approach is not autoregressive; rather, they directly predict future PV power from solar irradiance and temperature forecasts obtained from a proprietary system which refines NWP forecasts according to local conditions. Alternatively, our approach tries to combine the benefits of using NWP forecasts with an autoregressive model of the PV power observations.
### Forecast structure and validation
Figure 1 distinguishes _regular_ forecasts from _rolling_ forecasts, which are the two main strategies to consider for extracting fixed sized blocks from time series. For simplicity, the figure considers hourly forecasts and 24-hour context and prediction intervals, but the definition is straightforward to generalize. In brief, two consecutive regular forecasts are offset by the size of the prediction interval, when rolling forecasts are offset by the frequency of the time series (hourly, in Figure 1). In other words, with 24-hour prediction and context intervals, regular forecasts happen on the same time every day, whereas rolling forecasts are issued every hour for the whole prediction interval. In this process, forecasts beyond the next hour are refreshed every hour.
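The two strategies only differ in the offset between consecutive (context, prediction) windows; a minimal sketch with hourly data and the 24-hour intervals of Figure 1:

```python
import numpy as np

def cut_windows(series, context=24, prediction=24, rolling=False):
    """Return (context, prediction) pairs cut from an hourly series.

    Regular forecasts shift the window by `prediction` hours (one forecast
    per day at a fixed time); rolling forecasts shift it by one hour."""
    step = 1 if rolling else prediction
    windows = []
    for t0 in range(context, len(series) - prediction + 1, step):
        windows.append((series[t0 - context:t0], series[t0:t0 + prediction]))
    return windows

hourly = np.arange(24 * 10)                   # ten days of dummy hourly values
regular = cut_windows(hourly)                 # offset of 24h between forecasts
rolling = cut_windows(hourly, rolling=True)   # refreshed every hour
```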
Works such as [11] consider regular forecasts, as the forecast time is tied to availability of NWP data. As we use similar NWP covariates, regular forecast are a requirement in our work too. Alternatively, [12] address rolling forecasts by having a distinct model for each possible starting time in the day. Let us note that some models (e.g., [13]) allow to encode seasonal information on the predicted time steps (e.g. hour in day, day in week) as covariates. Therefore, they can be used indistinctively with regular and rolling forecasts, provided an adapted training set is available.
Time series forecasting models are typically evaluated using a variant of the Root-Mean-Square Error (RMSE) metric. When quantiles can be computed, the Continuous Ranked Probability Score (CRPS) rates the compatibility of an observation with a set of quantiles [10]. This metric is generally used for evaluating models which output forecasting quantiles [10, 11, 12].
[12] discuss the problem of cross-validation, and more generally validation, in the context of time series forecasting. Original formulations of cross-validation methods often assume that data items are independent. They cannot be used out of the box with time series, as the sequential structure of the latter invalidates the underlying hypotheses. They recognize that the
Figure 1: Distinction between regular and rolling forecasts. \(t_{0}\) denotes the present time step in the context of a given data sample.
seminal way to validate models with time series is to train a model using the first \(t_{0}\) values, and validate using the last \(T-t_{0}\) values. However, this way is hardly compatible with cross-validation schemes, and yields weak test error estimations. By virtue of the bias-variance tradeoff [1], this issue has moderate impact on models with strong bias such as linear models. However, over-parametrized models, such as most neural models presented in Section 2.2 (e.g. [12, 12, 13, 14]) can be significantly affected, and exhibit a strong tendency to overfit (even if recent theory shows that with careful consideration this problem can be correctly addressed [15]). For mitigation, [1] recommend blocked cross-validation, in which time series segments of size \(\tau\ll T\) are used for independent training and validation for model selection and test error computation. As we also use deep learning as a building block in our approach, we carefully consider these recommendations in our experimental design (see Section 4).
## 3 Model Description
The survey of related works in Section 2 led us to choose DeepAR [12] as a framework to develop our hybrid-physical implementation. We adapted the official implementation of the model [1] to suit our needs. Another model which offers the relevant flexibility as well as a public implementation is the model by [17]. We will compare to this model in our experimental section.
Alternatively, PV sites could have been considered as dimensions in a multivariate forecasting problem, thus possibly forecasting all sites at a given time at once using DeepVAR [12]. However, its limitation to static covariates prevents us from implementing the projected hybrid-physical approach. Also, it is unclear how such multivariate modelling could handle new PV sites, as well as missing values, which are fairly common since PV sites may experience independent breakdowns and interruptions of measurements or data communication. Instead, we choose to model the PV site using covariates.
### DeepAR model
In the remainder of the paper, for clarity of the derivations, scalar variables are represented in normal font, vector variables in bold font, and matrix variables in capital bold font. This section 3.1 essentially paraphrases [12], but ensures the present paper is self-contained, while introducing the necessary formalism.
Let us assume we have a data set of \(N\) univariate time series, each with fixed size \(T\). Each observed time series in \(\{\mathbf{z}_{n}\}_{n\in 1,\ldots,N}\) may relate to an item in a store in the retail context, or to a distinct PV system in the context addressed in this paper. \(t_{0}\in[1\ldots T]\) denotes the present time, i.e. the latest time point for which we assume \(z_{n,t}\) is known when issuing the forecast. \([1\ldots t_{0}]\) is then the _context_ interval, and \([t_{0}+1\ldots T]\) is the _prediction_ interval. The goal of the model is to forecast \(\mathbf{z}_{n,t_{0}+1:T}=[z_{n,t_{0}+1},\ldots,z_{n,T}]\) with the knowledge of \(\mathbf{z}_{n,1:t_{0}}=[z_{n,1},\ldots,z_{n,t_{0}}]\). We also consider a set of covariates \(\mathbf{X}_{n,1:T}=[\mathbf{x}_{n,1},\ldots,\mathbf{x}_{n,T}]\) which are known for \(t\in 1,\ldots,T\) at time \(t_{0}\).
In this context, the model is defined by the following product of likelihood factors, also summarized in Figure 2:
\[Q_{\Theta}=\prod_{n=1}^{N}\prod_{t=t_{0}+1}^{T}q_{\Theta}(z_{n,t}|\mathbf{z}_{n,1:t-1},\mathbf{X}_{n,1:T})=\prod_{n=1}^{N}\prod_{t=t_{0}+1}^{T}p(z_{n,t}|\theta(\mathbf{h}_{n,t},\Theta)) \tag{1}\]
The model is both autoregressive and recurrent, as the state variable
\[\mathbf{h}_{n,t}=\Theta(\mathbf{h}_{n,t-1},z_{n,t-1},\mathbf{x}_{n,t}) \tag{2}\]
is obtained from LSTM model \(\Theta\) in which the state variable and observation of the previous time step are both reinjected. The model also depends on parametrized function \(\theta\), which learns the mapping between the state variable \(\mathbf{h}\) and parameters of the probability distribution \(p\).
In effect, as seen in Figure 2, during training time observations are injected as \(z_{n,t-1}\) in Equation (2). However at test time actual observations are not available for the prediction interval, so we sample \(\tilde{z}_{n,t}\sim p\), and inject them as proxy observations. Doing so yields sample paths, which can be repeated and serve to compute empirical quantiles in the prediction interval, instead of simple point estimates. In this paper, when point estimates \(\hat{z}_{n,t}\) are needed, we take them as the empirical median of a set of sample paths. In Figure 2, we can see that the LSTM _encodes_ the context interval into \(\mathbf{h}\), which is then _decoded_ for the prediction interval. The same LSTM model \(\Theta\) is used for encoding and decoding.
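The sketch below makes this decoding mechanism explicit; the single-layer `step` transition with random weights is a toy stand-in for the trained two-layer LSTM \(\Theta\), and the Gaussian output head follows the structure of Equation (3).

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8                                   # toy state size (placeholder)
W_h, W_z, W_x = (rng.normal(scale=0.1, size=s) for s in [(H, H), (H,), (H,)])
w_mu, w_sigma = rng.normal(scale=0.1, size=(2, H))

def step(h, z_prev, x):                 # dummy stand-in for the LSTM cell
    return np.tanh(W_h @ h + W_z * z_prev + W_x * x)

def theta(h):                           # maps state to Gaussian parameters
    mu = w_mu @ h
    sigma = np.log1p(np.exp(w_sigma @ h))   # softplus keeps sigma positive
    return mu, sigma

def sample_path(z_context, x_all, t0, T):
    h = np.zeros(H)
    for t in range(1, t0 + 1):          # encode the context interval
        h = step(h, z_context[t - 1], x_all[t])
    z_prev, path = z_context[t0 - 1], []
    for t in range(t0 + 1, T + 1):      # decode: reinject sampled values
        h = step(h, z_prev, x_all[t])
        mu, sigma = theta(h)
        z_prev = rng.normal(mu, sigma)
        path.append(z_prev)
    return np.array(path)

# 100 sample paths over a 24-step prediction interval, as in our experiments
paths = np.stack([sample_path(np.ones(48), np.zeros(73), 48, 72)
                  for _ in range(100)])
quantiles = np.quantile(paths, [0.1, 0.5, 0.9], axis=0)  # empirical quantiles
```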
The negative log of expression (1) is used as the loss function for training all parameters in the model in an end-to-end fashion. The form of function \(\theta\) depends on the probabilistic model in expression (1): for example, if \(p\) is chosen as a Gaussian, appropriate functions would be:
Figure 2: Illustration of the DeepAR model. Observed variables are represented as shaded boxes, and latent variables as blank boxes. For the context interval, \(z\) variables are always known. For the prediction interval, the model behaves differently at training and test time. At test time, \(\tilde{z}\) variables are sampled according to \(p\), forming sample paths. Plain lines represent dependencies between random variables, and the dashed line highlights the reinjected sample.
\[\theta_{\mu}(\mathbf{h}_{n,t}) =\mathbf{w}_{\mu}\mathbf{h}_{n,t}+b_{\mu}\] \[\theta_{\sigma}(\mathbf{h}_{n,t}) =\log(1+\exp(\mathbf{w}_{\sigma}\mathbf{h}_{n,t}+b_{\sigma})) \tag{3}\]
We note that the softplus function in (3) ensures \(\sigma\) is mapped as a positive real number. Among possible probabilistic models and mapping functions, the official DeepAR implementation [1], used in the experiments for this paper, features Gaussian, Student, negative binomial, and mixture distributions. The mixture distribution composes several distributions from the same nature using mixture weights, which have their dedicated \(\theta\) function.
### Positive Gaussian likelihood model
As PV power measurements are bound to be non-negative real numbers, a contribution of this paper is to allow for the Gaussian distribution to be truncated from below at 0, referred to as the _positive Gaussian_ distribution in the remainder of this paper. Formally this yields:
\[p(z_{n,t}|\theta_{\mu},\theta_{\sigma})=\frac{1}{\theta_{\sigma}\sqrt{2\pi}}\frac{\exp\left(-\frac{1}{2}\frac{(z_{n,t}-\theta_{\mu})^{2}}{\theta_{\sigma}^{2}}\right)}{1-\Phi(-\frac{\theta_{\mu}}{\theta_{\sigma}})} \tag{4}\]
With \(\Phi\) the cumulative distribution function of the standard Gaussian (i.e., with mean 0 and standard deviation 1). Besides adapting the loss function (see Equation (1)) to this new probability distribution function, the same \(\theta_{\sigma}\) function as the Gaussian distribution can be used. To make sure the range of \(\theta_{\mu}\) is also positive, for the positive Gaussian we use:
\[\theta_{\mu}(\mathbf{h}_{n,t})=\log(1+\exp(\mathbf{w}_{\mu}\mathbf{h}_{n,t}+b_ {\mu}))\]
From an application of the Smirnov transformation [1] to the case at hand, samples from a positive Gaussian distribution can be obtained as:
\[\tilde{z}=\Phi^{-1}\left(\Phi\left(-\frac{\theta_{\mu}}{\theta_{\sigma}}\right)+\tilde{u}\left(1-\Phi\left(-\frac{\theta_{\mu}}{\theta_{\sigma}}\right)\right)\right)\theta_{\sigma}+\theta_{\mu} \tag{5}\]
where \(\tilde{u}\) is a uniform sample in \([0,1]\).
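A small sketch of Equations (4) and (5) with SciPy, written for scalar parameters:

```python
import numpy as np
from scipy.stats import norm

def positive_gaussian_logpdf(z, mu, sigma):
    """Log-density of a Gaussian truncated from below at 0 (Equation 4)."""
    log_num = norm.logpdf(z, loc=mu, scale=sigma)
    log_den = np.log1p(-norm.cdf(-mu / sigma))  # log(1 - Phi(-mu/sigma))
    return log_num - log_den

def positive_gaussian_sample(mu, sigma, size, rng=None):
    """Inverse-transform (Smirnov) sampling, Equation (5)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    lower = norm.cdf(-mu / sigma)
    return norm.ppf(lower + u * (1.0 - lower)) * sigma + mu

samples = positive_gaussian_sample(mu=0.5, sigma=1.0, size=10_000)
print(samples.min())  # non-negative by construction
```

In a training setting, the negative of `positive_gaussian_logpdf` would play the role of the per-observation loss term discussed below.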
## 4 Experiments
### Data
Section 3 presented the forecasting model underlying our experiments in general terms, but here we recall that we focus on a specific application and its peculiarities: forecasting the power output of a set of PV systems.
The variable to forecast (\(z\) in Section 3) is the average power output of a PV system during the last hour, in Watts. As anticipated in Section 2.4, it is thus an hourly time series. For our experiments, we used data recorded by 119 PV systems located in Luxembourg between 01/01/2020 and 31/12/2021.
They are distributed over a relatively small (4 \(\times\) 4 km) area. These PV systems are managed by Electris, a DSO in Luxembourg which collaborated with the authors of this paper in the context of a funded research project. Besides PV power measurements, each time step is associated with intra-day, day-ahead, and 2-day-ahead predictions by the physical PV performance model described in [13], which uses ECMWF NWP solar irradiance forecasts as input (referred to as _24h_, _48h_, and _72h_ NWP forecasts, respectively, in the remainder of the paper). We use these three forecasts as covariates (**X** in Section 3), as they are available beforehand for the prediction interval.
The model also supports the adjunction of _static_ covariates, which are constant for a given time series. Relating to Section 3, we note that this simply amounts to setting the associated \(\mathbf{x}_{n,1:T}\) to a constant. In the context of the present work, we consider a _system ID_ categorical feature, which is simply the system ID converted to a categorical feature with 119 modalities. We also consider _system description_ continuous features. Among the set of descriptors provided by system vendors and characteristics of their setup, we retain the following features, as they are expected to influence PV power curves and magnitude: the _exposition_ of the system (in degrees), its _inclination_ (in degrees), its nominal _power_ (in Watts) and its _calibration factor_ (unitless, tied to the system on-site setup). As the DeepAR implementation expects normally distributed features, we standardize these features so that they have zero mean and unit standard deviation.
As the nominal power of our PV systems varies over a large range (from 1.4kW to 247kW), a scaling scheme is necessary to properly handle measured values. As mentioned in Section 2, we address this as implemented in DeepAR, by dividing all measurements in a given sample by \(\frac{1}{t_{0}}\sum_{1}^{t_{0}}|z_{t}|\). Also, as NWP covariates are expected to be distributed similarly to their associated measurements, they are normalized likewise.
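A sketch of this scaling step, applied per sample to the measurements and the NWP covariates; the small constant guarding against all-zero context windows is our own addition:

```python
import numpy as np

def scale_sample(z_context, z_future, nwp_covariates):
    """Divide a sample by the mean absolute PV power over its context interval."""
    nu = np.mean(np.abs(z_context)) + 1e-6   # guard against all-zero contexts
    scaled = (z_context / nu, z_future / nu, nwp_covariates / nu)
    return scaled, nu                         # nu is needed to undo the scaling

# forecasts produced in the scaled space are mapped back by multiplying by nu
```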
### Loss function and metrics
The sum of negative log-likelihoods of observations \(z_{n,t}\) (on top of Figure 2) is used as a loss function to fit all model parameters \(\{\mathbf{w}_{\mu},\mathbf{w}_{\sigma},\Theta\}\) in an end-to-end fashion. As commonly done in the literature (see Section 2.2), we use a fixed size for the context and prediction intervals in our experiments. As we are interested in intra-day regular forecasts (see Section 2.4), with hourly data this means that the prediction interval has size 24. The midnight run of the ECMWF NWP forecasts for the 3 days to come is broadcast each day early in the morning, before sunrise, while PV power is still obviously zero. We also assume the associated covariates are instantly available then. For simplicity, we thus choose midnight as the reference time in day for the regular forecasts (i.e., \(t_{0}\) in Section 3). In this context, the 24h, 48h and 72h NWP covariates associated with predicted time step \(t_{0}+h\) will have been issued at time steps \(t_{0}\), \(t_{0}-24\) and \(t_{0}-48\), respectively.
As the maximal horizon of the collected NWP forecasts is 72h, and we previously set the intra-day prediction interval as 24h, as a rule of thumb we used 48h as the context interval so that a training sample covers 72h. In practice, preliminary tests showed that using a larger context interval would not bring visible improvements, and using a multiple of the prediction interval size facilitates the creation of train and test data sets.
To measure model performance, RMSE-based metrics are common in energy utility companies, notably as they penalize large errors [16]. We used the normalized Root-Mean-Square Error (nRMSE) defined as:
\[\text{nRMSE}(\hat{\mathbf{Z}},\mathbf{Z})=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\frac{\frac{1}{T-t_{0}}\sum_{t=t_{0}+1}^{T}(\hat{z}_{nt}-z_{nt})^{2}}{P_{n}^{2}}} \tag{6}\]
with \(P_{n}\) the nominal power of PV system \(n\), \(\hat{\mathbf{Z}}=\{\hat{z}_{nt}\}\) the estimated point forecast, and \(\mathbf{Z}=\{z_{nt}\}\) the observed power. This nRMSE allows to measure the performance of a point estimate forecast in such way that PV systems with larger nominal power do not dominate the error metric. This is a field requirement, as PV systems have private owners, who have to be treated equally, irrespective of the nominal power of their system. In practice, nRMSE can be interpreted as a percentage of the PV system nominal power. To evaluate the performance of a proposed system w.r.t. a reference, the _skill score_ is derived from RMSE metrics as:
\[\text{Skill score}=1-\frac{\text{nRMSE}(\hat{\mathbf{Z}},\mathbf{Z})}{\text {nRMSE}(\hat{\mathbf{Z}}_{\text{ref}},\mathbf{Z})} \tag{7}\]
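Both metrics are straightforward to compute from point forecasts restricted to daylight time steps; a minimal sketch with arrays of shape \((N, T-t_{0})\):

```python
import numpy as np

def nrmse(z_hat, z, nominal_power):
    """Equation (6): z_hat, z have shape (N, H); nominal_power has shape (N,)."""
    per_system_mse = np.mean((z_hat - z) ** 2, axis=1)            # (N,)
    return np.sqrt(np.mean(per_system_mse / nominal_power ** 2))

def skill_score(z_hat, z_ref, z, nominal_power):
    """Equation (7), with z_ref the 24h NWP-driven PV performance model."""
    return 1.0 - nrmse(z_hat, z, nominal_power) / nrmse(z_ref, z, nominal_power)
```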
As presented in Section 3, the models trained in the context of this work output prediction quantiles. We use the median as the point estimate forecast; in addition, we compute the CRPS metric, commonly used in related work [17, 18, 19], which rates the quality of prediction quantiles as a whole:
\[\text{CRPS}(F^{-1},\mathbf{Z}) =\frac{1}{N(T-t_{0})}\sum_{n=1}^{N}\sum_{t=t_{0}+1}^{T}\int_{0}^{ 1}2\Lambda_{\alpha}(F^{-1}(\alpha),z_{nt})d\alpha \tag{8}\] \[\Lambda_{\alpha}(F^{-1}(\alpha),z_{nt}) =(\alpha-\mathcal{I}_{[z_{nt}<F^{-1}(\alpha)]})(z_{nt}-F^{-1}( \alpha))\]
with \(F^{-1}\) the quantile function of the predictor (which returns the quantile level in Watts associated with a probability \(\alpha\in]0,1[\)), \(\Lambda_{\alpha}\) the _quantile loss_, and \(\mathcal{I}_{[c]}\) the indicator function associated with logical clause \(c\). As discussed in Section 3, the quantile function is estimated empirically using a set of sample paths \(\{\hat{\mathbf{z}}_{n}\}\). Intuitively, the quantile loss gets larger when observations are far from the median as measured by distribution quantiles. This also allows penalizing models which are excessively confident, and rewarding models which are able to better estimate the expected accuracy of their point forecast. In our experiments, we use 100 sample paths per sample, from which empirical quantiles are computed.
PV power is naturally zero at night time. Therefore, including these time steps in the metric computation is likely to bias nRMSE and CRPS towards 0. To prevent this, we exclude nightly time steps by limiting the terms accounted for in Equations (6) and (8) to time steps where the PV power averaged over the whole data set is significantly different from zero.
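In practice, the CRPS can be estimated from the 100 sample paths by averaging the quantile loss over a grid of levels and masking night time steps; the 99-level grid in the sketch below is an assumption:

```python
import numpy as np

def crps_from_paths(paths, z, daylight_mask, levels=np.arange(0.01, 1.0, 0.01)):
    """paths: (num_paths, N, H) sample paths; z: (N, H) observations;
    daylight_mask: (H,) boolean mask of non-night time steps."""
    q = np.quantile(paths, levels, axis=0)            # (L, N, H) empirical quantiles
    alpha = levels[:, None, None]
    indicator = (z[None] < q).astype(float)
    pinball = (alpha - indicator) * (z[None] - q)     # quantile loss per level
    crps = 2.0 * pinball.mean(axis=0)                 # approximate integral over levels
    return crps[:, daylight_mask].mean()              # average over daylight steps
```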
### Validation scheme
Following recommendations by [17], we create training samples by cutting the data set in fixed size 72h segments, with \(t_{0}\) in each segment being midnight
24h before the end of the segment. Assuming we extracted the segment at the beginning of the time series, we then shift the offset 24h forward, so that the next segment includes the previous prediction interval in its context interval (see, e.g., top of Figure 1). As we treat all PV systems as independent time series, this results in \(O(CD)\) series, with \(C\) the number of PV systems and \(D\) the number of values \(t_{0}\) can take in the original time series.
Our number of PV systems and temporal collection bounds would yield 86989 samples. However, PV systems may exhibit missing or erroneous measurements for several reasons (e.g., power outages, mishandling, faulty sensors). Figure 3 summarizes how missing values are distributed in the data set. The l.h.s. of Figure 3 shows that missing values are not uniformly distributed across PV systems. Approximately one third has no missing value, another third has a bit less than 20% of missing values, and the last third between 25% and 50%. The under-representation of this last third can be problematic. The r.h.s. of Figure 3 shows that these missing values are not evenly distributed in time: this indicates that a group of systems may have been offline for a contiguous time frame during late winter and spring. Actually, most missing values are linked to systems that started operating later than the others in 2020. The associated periods are therefore under-represented, but we note that any month has at most 30% missing data. In the remainder, we consider that this bias remains in a range which makes uniform sampling w.r.t. time acceptable for building training batches. In order to facilitate processing, and as sample cuts are aligned with day frames, we detect and exclude days matching one of the following criteria: more than two consecutive missing values, measurements stuck at a constant value, or aberrant values identified by visual inspection. This results in 67666 valid day frames.
The PV systems are distributed in a relatively small area in Luxembourg: therefore, it is expected that prediction intervals for different systems but the same absolute time \(t_{0}\) will be highly correlated. In order to validate this intuition, we computed all \(D\) intra-day correlation matrices between systems in our data set. Specifically, we defined intra-day time steps (excluding nightly time steps) as observations and PV systems as variables, resulting in \(D\frac{C(C-1)}{2}\) distinct correlation values. We observe that the median of the distribution of these correlation values is 0.95, which confirms a very high correlation between systems for a given day. As a consequence, uniformly sampling training, validation and test sets from the \(O(CD)\) series would result in _data leakage_, i.e., the model would be able to overfit without harming the test error, as identical samples (up to scale) would be scattered across the training, validation and test sets. Let
Figure 3: _l.h.s._: Proportion of missing values per system. _r.h.s._: Proportion of missing data per associated month.
us note an unexpected positive benefit of this strong intra-day correlation: the under-representation of some systems is then much less problematic. The only remaining issue would pertain to estimating the parameters associated with the static categorical modalities of these systems, if using the system ID static covariates. We hypothesize that having at least 50% of the day frames represented is sufficient to perform this estimation.
To prevent the data leakage problem, we first group the time series by the absolute time attached to their respective \(t_{0}\) (hence grouping samples for all PV systems for a given \(t_{0}\)), and sample 60% of the \(D\) time steps as the training set. We use a static temporal pattern, in order to ensure that each month and season is represented fairly evenly. Validation and test sets are uniformly sampled as half of the remaining 40% irrespective of the PV system, which is not an issue as the goal of validation error is to be an estimate of the test error, provided parameters are not explicitly fitted on the former. The validation set is used to implement early stopping, and select the model before it starts to overfit. The test set serves to compute the metrics described in Section 4.2. To choose the cut between validation and test, and ensure validation error is a fair proxy of the test error, we resample cuts until the RMSE and nRMSE between ground truth and intra-day NWP forecasts (which are considered as constant and known in advance, and are the most relevant baseline to compare to) are equal up to a small threshold. Using the cutting procedure defined so far, we obtain 40670 training, 13498 validation and 13498 test samples.
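A sketch of such a leakage-free split, grouping samples by the calendar day of \(t_{0}\); the random day assignment used here is a simplification of the static temporal pattern described above:

```python
import numpy as np

def split_by_day(sample_days, train_ratio=0.6, seed=0):
    """sample_days: array giving, for each sample, the calendar day of its t0.

    All PV systems sharing the same day land in the same subset, so highly
    correlated intra-day samples cannot leak across train/validation/test."""
    rng = np.random.default_rng(seed)
    days = np.unique(sample_days)
    rng.shuffle(days)                        # simplification: random day pattern
    n_train = int(train_ratio * len(days))
    n_val = (len(days) - n_train) // 2
    train_d, val_d, test_d = np.split(days, [n_train, n_train + n_val])
    return (np.isin(sample_days, train_d),
            np.isin(sample_days, val_d),
            np.isin(sample_days, test_d))
```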
### Hyper-parameters
The models were trained using the Adam optimizer with learning rate \(10^{-3}\), batch size 64, for 200 epochs. Samples are reshuffled at the beginning of each epoch. In the end, we implement a form of early stopping, by selecting the model with best validation error. DeepAR uses 2 LSTM layers by default, we stick to this parametrization. Two free parameters remain then: the LSTM hidden layer size, and the number of components when a mixture distribution output is used. Figure 4 shows the results of a hyper-parameter search of these parameters using the Gaussian distribution as mixture components. Median results from 6 independently trained models are displayed in the graphs, along with \(\pm 1\) standard deviation error bars. First we determine the optimal LSTM hidden layer size using a single component mixture on the l.h.s. of Figure 4. We see that beyond 100 units, the validation performance is only marginally improved: we therefore retain 100 as hidden layer size for all our experiments. The r.h.s. of Figure 4 shows the validation performance w.r.t. the number of mixture components using this layer size. Using 2 components yields the best results, so we retain this mixture size for the remainder of the experiments.
### Model comparison strategy
Besides these hyper-parameters, a large search space of models results from the choice of distribution output (Gaussian, Student, positive Gaussian), the use of one or two mixture components, and whether or not NWP, system ID, or system description covariates are used. Instead of comparing all possible combinations, we opted for a stepwise approach. We first compared 2-component mixtures of Gaussian, Student, and positive Gaussian components, with NWP covariates,
but without system static covariates (models 3 to 5 in Table 1). We chose this configuration as a middle ground, as using the NWP features implements the hybrid-physical approach advocated in this paper. This first comparison highlighted that the Student distribution output yields significantly inferior results in terms of nRMSE. We thus exclude the Student distribution from further tested combinations. This middle ground can be compared to the alternative quantile regression architecture proposed by [23], with access to equivalent input information (model 13).
Then we performed an ablation study of our middle ground. First we tested the consequence of using a single component instead of a mixture (models 1 and 2). We also evaluated performance without NWP covariates, i.e. implementing purely autoregressive models (models 10 and 11). The latter are compared to an FFN model (model 12), which forecasts \(\mathbf{z}_{t_{0}+1:T}\) as a function of \(\mathbf{z}_{1:t_{0}}\) without any form of autoregression or covariate support, so that the value of using LSTM cells is evaluated _per se_.
Finally, the improvement brought by using the system ID (models 6 and 8) and system description features (models 7 and 9) to the middle ground configuration is evaluated. System ID is also used for model 11, as a way to estimate the best performance that can be reached (i.e. using a mixture of the best distribution output _a posteriori_) without NWP features. This can be of practical interest, as access to ECMWF solar irradiance forecasts is free for research purpose, but requires a subscription for industrial applications.
### Results and interpretation
Results are given in Table 1. In [13], skill scores are computed using a 24h persistence model, adjusted according to the clear sky PV power for the day under consideration. This is a common baseline in the solar energy domain [10]. In this paper, we rather consider that the NWP covariates are the baseline against which our results have to be evaluated. So we use 24h NWP covariates as \(\hat{\mathbf{Z}}_{\text{ref}}\) in Equation (7) for computing skill scores presented in Table 1. This is a stronger baseline for skill score computation, as it was shown to significantly outperform persistence forecasts [15]. In this experimental section, this means that a model with a skill score lower than 0 is not able to beat the covariates it is given among its inputs.
First focusing on the nRMSE metric through skill scores, we see that middle ground models with Gaussian and positive Gaussian yield some improvement,
Figure 4: Validation (orange) and test (blue) nRMSE curves for variable LSTM layer sizes (_l.h.s._) and numbers of mixture components (_r.h.s._).
with skill scores of 1.04% and 2.23%, respectively. On the other hand, using the Student distribution yields a skill score of -0.53%. The Student distribution is therefore not even able to do better than just copying part of its inputs. This motivated pruning this option beyond the middle ground. We also note that the quantile regression method of [17] (model 13) obtains a skill score of -7.17%, which justifies our choice of DeepAR as the framework for our proposal.
Using a single Gaussian also does not match the baseline. On the other hand, a single positive Gaussian component yields a skill score of 0.50%. Using a mixture of distributions therefore contributes to improving the performance. Then, combining the system ID covariate with the mixture of Gaussian and positive Gaussian components yields skill scores of 2.16% and 7.54%, respectively. We see that an important performance gap results from the joint usage of the system ID covariate and the positive Gaussian component. Using the system description covariates instead, these scores drop to 1.89% and 5.12%. Using the system ID is therefore the best option, but the system description features have the advantage of generalizing to new systems without having to fully retrain the model, which can be useful in production conditions. We note that the improvement brought by using the system ID as a covariate provides anecdotal evidence supporting the hypothesis formulated in Section 4.3 regarding the imbalance of systems representation in the dataset. Using the system description covariates is beneficial (e.g., a skill score increase of 2.89 points with the mixture of positive Gaussian components), but to a lesser extent than using the system ID. We can relate this to the fact that they do not fully reflect some local effects which already taint the PV performance model (see Section 1).
When NWP covariates are ignored, the performance of the models is very significantly degraded. The mixtures of Gaussian and positive Gaussian components obtain skill scores of -26.3% and -23.0%, respectively. Both models still do better than the alternative FFN architecture (-28.4%), but the fallback consisting of not using NWP features comes at a high cost in terms of performance. We note that the gap between models 10 and 11 (2.7%) is not as large as the gap between their counterparts using NWP features (models 3 and 8, 6.5%). This confirms a synergy between the elements composing the best-performing model (i.e. mixture of positive Gaussian components, NWP covariates, and system ID).
| **ID** | **Output** | **NWP** | **Static** | **nRMSE (%)** | **Skill (%)** | **CRPS (-)** |
|---|---|---|---|---|---|---|
| _Baseline_ | | | | | | |
| - | PV perf. model | Yes | - | 9.651 | - | - |
| _Single-component DeepAR_ | | | | | | |
| 1 | Gaussian | Yes | - | 9.697 (\(\pm\)0.035) | -0.48 | 0.639 (\(\pm\)0.004) |
| 2 | Positive | Yes | - | 9.603 (\(\pm\)0.041) | 0.50 | 0.650 (\(\pm\)0.015) |
| _Mixture DeepAR_ | | | | | | |
| 3 | Gaussian | Yes | - | 9.551 (\(\pm\)0.089) | 1.04 | 0.626 (\(\pm\)0.002) |
| 4 | Student | Yes | - | 9.702 (\(\pm\)0.023) | -0.53 | 0.629 (\(\pm\)0.004) |
| 5 | Positive | Yes | - | 9.436 (\(\pm\)0.049) | 2.23 | 0.618 (\(\pm\)0.005) |
| 6 | Gaussian | Yes | System ID | 9.443 (\(\pm\)0.090) | 2.16 | 0.605 (\(\pm\)0.004) |
| 7 | Gaussian | Yes | System descr. | 9.469 (\(\pm\)0.070) | 1.89 | 0.620 (\(\pm\)0.004) |
| 8 | Positive | Yes | System ID | **8.923 (\(\pm\)0.044)** | **7.54** | **0.579 (\(\pm\)0.007)** |
| 9 | Positive | Yes | System descr. | 9.157 (\(\pm\)0.041) | 5.12 | 0.596 (\(\pm\)0.004) |
| 10 | Gaussian | No | - | 12.185 (\(\pm\)0.055) | -26.3 | 0.831 (\(\pm\)0.008) |
| 11 | Positive | No | System ID | 11.873 (\(\pm\)0.036) | -23.0 | 0.811 (\(\pm\)0.006) |
| _Alternative models_ | | | | | | |
| 12 | FFN | No | - | 12.392 (\(\pm\)0.089) | -28.4 | 0.879 (\(\pm\)0.017) |
| 13 | Wen et al. | Yes | - | 10.343 (\(\pm\)0.070) | -7.17 | 0.686 (\(\pm\)0.009) |

Table 1: nRMSE and CRPS test metrics for the range of compared models. The results of the best performing model are bold-faced. Median results and standard deviations are estimated from 6 models trained independently.
Figure 5 illustrates the relationship between nRMSE and CRPS metrics. We see that they are strongly correlated, so in the context of our experiments, models with the best skill scores generally yield the best prediction intervals. Models significantly below the linear regression fit will tend to provide better prediction intervals. This is the case with model 4 (mixture of Student components). However, its CRPS is still not as good as that of the other middle ground models (3 and 5). On the other hand, model 2 (single positive Gaussian component) has significantly degraded CRPS. All other models are quite close to the regression line. This includes models based on the mixture of positive Gaussian components: using a mixture seems to fix the discrepancy observed with model 2.
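The linear fit of Figure 5 is an ordinary least-squares regression of CRPS on nRMSE; a sketch of how the fit and the residuals (used to spot models with better or worse prediction intervals than their point accuracy suggests) can be computed is given below, assuming numpy and that the per-model median metrics are gathered into arrays.

```python
import numpy as np

def fit_crps_vs_nrmse(nrmse, crps):
    """Least-squares line CRPS ~ a * nRMSE + b. Models with negative residuals
    lie below the line, i.e. their prediction intervals are better than their
    nRMSE alone would suggest."""
    nrmse, crps = np.asarray(nrmse), np.asarray(crps)
    a, b = np.polyfit(nrmse, crps, deg=1)
    residuals = crps - (a * nrmse + b)
    return a, b, residuals
```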
As alternative designs, we considered using unscaled NWP features (i.e. not scaling them along with \(\mathbf{z}\) values as described in Section 4.1), and using the weighted sampling scheme described in [13], which samples training examples with frequency proportional to the magnitude of \(\mathbf{z}\) values. We did not report results with these alternative designs as they brought a systematic degradation of performance. We note that the poor performance of weighted sampling w.r.t. the nRMSE metric is expected, as the latter balances the leverage of systems with large nominal power. We also tried to use both static covariates (i.e., system ID and description) simultaneously, but this led to a slight degradation compared to the respective model using only the system ID covariate. This is expected, as the system ID alone already encodes system diversity.
Figure 5: Illustration of the relationship between nRMSE and CRPS metrics. The line is the result of a linear regression of the points in the graph. Glyph shapes and colors recall characteristics of the respective models. For better legibility, outlying models (i.e. those not using ECMWF covariates) are excluded, even though they were also used for fitting the linear regression.
In addition, as we already saw above, system description features may reflect the system setup in a biased way, even though using them is better than using no covariates at all.
### Discussion and qualitative examples
In the previous section we evaluated our proposed models using global metrics. In this section, we aim at providing more detailed insight into our results by analyzing model performance at the system level. The displayed examples were obtained using the best performing model identified in the previous section (i.e. model 8). First we compute per-sample nRMSE metrics for the test set, group them according to their associated system ID, and rank the systems according to the difference between model 8 and baseline nRMSE, as sketched below. In other words, the higher a system is in this ranking, the more DeepAR outperforms the PV performance model for this system. For all but 7 of the 118 systems, model 8 performs better than the baseline. As a worst case example, we first consider system 115, which comes last in this ranking.
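A sketch of this ranking procedure is given below, assuming pandas; the DataFrame layout and column names are hypothetical.

```python
import pandas as pd

def rank_systems(samples: pd.DataFrame) -> pd.DataFrame:
    """Rank systems by how much model 8 improves on the PV performance model.
    `samples` is assumed to hold one row per test sample with columns
    'system_id', 'nrmse_model8', and 'nrmse_baseline'."""
    per_system = samples.groupby("system_id")[["nrmse_model8", "nrmse_baseline"]].mean()
    per_system["delta"] = per_system["nrmse_model8"] - per_system["nrmse_baseline"]
    # Most negative delta first: systems where model 8 outperforms the baseline most.
    return per_system.sort_values("delta")
```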
Figure 6: Two test samples associated with system 115. Confidence bounds are displayed for the prediction interval using green shades. The observations and NWP forecasts of the context interval are prepended.
Figure 7: Two test samples associated with system 44. Confidence bounds are displayed for the prediction interval using green shades. The observations and NWP forecasts of the context interval are prepended.
On the l.h.s. of Figure 6, we display the sample of this system associated with the lowest nRMSE under the DeepAR model. This is a typical clear-day example, where the prediction is fairly easy for the neural model. We note that for this instance, the forecasts stick more closely to the observation curve than the baseline. The confidence intervals are naturally tight, reflecting the high confidence of the neural model in this case. On the r.h.s. of Figure 6, for the same system, we display a sample for which the difference between the two models is among the largest. In this case, DeepAR is not able to keep up with the sudden peak of PV power. The 24h NWP covariates did provide some indication of this peak, but this information was not used by model 8, which acted conservatively given the observations in the context interval.
In Figure 7, we consider samples from system 44. This system is the second best of our ranking, and has been identified as problematic for the PV performance model because of a double-pitched roof, not reflected by the system description features [13]. On both sides of Figure 7, the systematic shift of the PV performance model is clearly visible. We also see that model 8 is able to completely ignore this shift, and come up with sensible forecasts. The figure also shows how the confidence interval is tighter when the PV production curve was straightforward to forecast (l.h.s.), and broader when the day ahead is harder to forecast (r.h.s.).
## 5 Conclusion
Ultimately, we are able to improve power forecasts obtained from an already strong PV performance model. By comparing many model variants, our experiments highlight the best working configuration, which uses the PV performance model forecasts as covariates, a mixture of positive Gaussians as the output distribution, and a static categorical covariate reflecting the associated system ID. The positive Gaussian output allows us to deal effectively with the _bell-shaped_ data profile typical of solar energy applications, and the system ID feature allows us to model local effects which previously went unnoticed with the PV performance model alone.
In future work, we plan to refine and explore novel neural model designs. For example, quantile regression methods more recent than [14] will be explored. Also, we will further investigate how to deal with novel systems being added to the grid without having to retrain the full model. We saw that using system description features is an effective fallback, but these features do not account for local effects such as a double-pitched roof, so they remain suboptimal. We will also consider longer prediction intervals (e.g. day-ahead and 2 days ahead).
Despite visible success, the models trained for this work tended to overfit and relied critically on early stopping. This is mostly a consequence of the measures taken to prevent data leakage: when segmenting two years of data at the day scale, the training and test sets are unlikely to be identically distributed. We addressed this problem in the most straightforward and conservative way, but it appears related to the domain shift problem characterized by the domain adaptation literature [15]. Adapting contributions from this area to the peculiarities of our application is left for future work.
Acknowledgements
This work was supported by the Luxembourg National Research Fund (FNR) in the framework of the FNR BRIDGES Project _CombiCast_
(BRIDGES18/IS/12705349/Combi-Cast). Furthermore, the authors would like to thank our partner Electirs (a brand of Hoffmann Freres Energie et Bois s.a r.l.), for their trust, the very supportive partnership throughout the whole project duration, and their contribution to the common project, financially as well as in terms of manpower and data.
|
2310.08561 | An extensively validated C/H/O/N chemical network for hot exoplanet
disequilibrium chemistry | We aimed to build a new and updated C0-C2 chemical network to study the CHON
disequilibrium chemistry of warm and hot exoplanet atmospheres that relies on
extensively validated and recent state-of-the-art combustion networks. The
reliability range of this network was aimed for conditions between 500 - 2500 K
and 100 - 10^-6 bar. We compared the predictions of seven networks over a large
set of experiments, covering a wide range of conditions (pressures,
temperatures, and initial compositions). To examine the consequences of this
new chemical network on exoplanets atmospheric studies, we generated abundances
profiles for GJ 436 b, GJ 1214 b, HD 189733 b, and HD 209458 b, using the 1D
kinetic model FRECKLL and calculated the corresponding transmission spectra
using TauREx 3.1. These spectra and abundance profiles have been compared with
results obtained with our previous chemical network. Our new kinetic network is
composed of 174 species and 1293 reactions mostly reversible. This network
proves to be more accurate than our previous one for the tested experimental
conditions. The nitrogen chemistry update is found to be impactful on the
abundance profiles, particularly for HCN, with differences up to four orders of
magnitude. The CO2 profiles are also significantly affected, with important
repercussions on the transmission spectrum of GJ 436 b. These effects highlight
the importance of using extensively validated chemical networks to gain
confidence in our models predictions. As shown with CH2NH, the coupling between
carbon and nitrogen chemistry combined with radicals produced by photolysis can
have huge effects impacting the transmission spectra. | R. Veillet, O. Venot, B. Sirjean, R. Bounaceur, P-A. Glaude, A. Al-Refaie, E. Hébrard | 2023-10-12T17:51:59Z | http://arxiv.org/abs/2310.08561v1 | # An extensively validated C/H/O/N chemical network for hot exoplanet disequilibrium chemistry
###### Abstract
Context: The reliability of one-dimensional disequilibrium chemistry models in hot exoplanet atmospheres depends on the chemical network used. To develop robust networks, we can rely on combustion studies that provide C/H/O/N chemical networks validated by a vast amount of experimental data, generated by the extensive research on hydrocarbon combustion and NO\({}_{\mathrm{x}}\) formation over the last decades.
Aims: We aimed to build a new and updated C\({}_{0}\)-C\({}_{2}\) chemical network to study the C/H/O/N disequilibrium chemistry of warm and hot exoplanet atmospheres that relies on extensively validated and recent state-of-the-art combustion networks. The targeted reliability range of this network is 500 - 2500 K and 100 - 10\({}^{-6}\) bar, with cautious extrapolation at lower temperatures.
Methods: We compared the predictions of seven networks over a large set of experiments, covering a wide range of conditions (pressures, temperatures, and initial compositions). To examine the consequences of this new chemical network on exoplanet atmospheric studies, we generated abundance profiles for GJ 436 b, GJ 1214 b, HD 189733 b, and HD 209458 b, using the 1D kinetic model FRECKLL, and calculated the corresponding transmission spectra using TauREx 3.1. These spectra and abundance profiles have been compared with results obtained with our previous chemical network.
Results: Our new kinetic network is composed of 174 species and 1293 reactions, mostly reversible. This network proves to be more accurate than our previous one for the tested experimental conditions. The nitrogen chemistry update is found to be very impactful on the abundance profiles, particularly for HCN, with differences up to four orders of magnitude. The CO\({}_{2}\) profiles are also significantly affected, with important repercussions on the transmission spectrum of GJ 436 b.
Conclusions: These effects highlight the importance of using extensively validated chemical networks to gain confidence in our model predictions. As shown with CH\({}_{2}\)NH, the coupling between carbon and nitrogen chemistry, combined with radicals produced by photolysis, can have huge effects impacting the transmission spectra. This should be kept in mind when adding new elements like sulfur, as only adding a sub-mechanism neglects these coupling effects.
## 1 Introduction
Over recent decades, and still today, the characterization of the atmospheric composition of exoplanets has only been possible for massive hydrogen-dominated exoplanets close to their star. Because of the detection biases of the transit method and its technical difficulty for exoplanets with a shallow transit depth, the range of masses and semi-major axes that can be probed by spectrometric means remains blind to colder, Earth-like exoplanets. The proximity of these observable exoplanets to their star results in highly irradiated atmospheres (Linsky et al., 2013), which implies both a high temperature profile that activates endothermic reactions and an intense UV flux that photodissociates the majority of species in the upper atmosphere, resulting in the creation of a high quantity of radicals (Heays et al., 2017). This proximity also causes huge tidal forces that probably result in these exoplanets being tidally locked, which further intensifies the horizontal and vertical temperature gradients in the atmosphere, causing intense advection and strong steady winds (Menou, 2022; Charnay et al., 2015). We also know that this advection, coupled to photolysis in the upper atmosphere, should maintain the chemical species abundance profiles in a steady state out of equilibrium (Moses et al., 2011; Roudier et al., 2021; Stevenson et al., 2010). To take into account the dynamical timescale, it is therefore necessary to accurately describe both the atmospheric advection and the chemical kinetics of the reactions taking place in the atmosphere (Drummond et al., 2020; Zamyatina et al., 2023). Accurately reproducing the chemistry in these conditions requires a detailed kinetic network, which describes chemistry as sets of elementary reversible reactions that form, consume, and propagate radicals. These reactions then form a parameterized chemical network that can be used to model the chemical kinetics in exoplanet atmospheres. However, the parameters that characterize the kinetic properties of each reaction can be difficult to estimate, and their determination is the subject of an entire field of research in combustion kinetics (Wang and Sheen, 2015; Curran, 2019). In the combustion domain, detailed kinetic networks are validated against experimental data measured in 0D or 1D reactors that are close to ideal reactors and designed to characterize only the chemical kinetics.
Such data can include the evolution of combustion products and intermediates as a function of time or temperature, auto-ignition delay times, or laminar flame studies (Battin-Leclerc et al., 2011).
For atmospheric studies of exoplanets, various detailed kinetic networks have already been developed (Moses et al., 2011; Tsai et al., 2017, 2021; Venot et al., 2012; Venot et al., 2015; Venot et al., 2020; Rimmer & Helling, 2016). Most of these chemical networks were built by grouping reactions with parameters available from databases and/or computed with quantum mechanics calculations. The Venot et al. (2012); Venot et al. (2015); Venot et al. (2020) networks are the only ones based on networks validated by experiments. Venot et al. (2012) was the first one to be developed, and it was extended to species bearing up to six carbon atoms in Venot et al. (2015). Additional corrections to the methanol chemistry were later introduced (Venot et al., 2020; hereafter Venot 2020). These networks usually describe only the kinetics of carbon-, hydrogen-, oxygen-, and nitrogen-bearing species, and are thus labeled C/H/O/N chemical networks. In the present work, we aim to develop a new C/H/O/N network for exoplanet atmospheric chemistry based on extensive validations against experimental data, totally revisiting the C/H/O/N chemistry and basing it on two new state-of-the-art combustion networks for C/H/O and N chemistry, respectively, from Burke et al. (2016) and Glarborg et al. (2018).
To accurately reproduce very different conditions, from warm sub-Neptunes to very hot Jupiters, with potential applications to warm super-Earths, the new chemical network is a detailed network suitable for a wide range of pressures and temperatures. The validity domain of the network must therefore be, in principle, from 500 to 2500 K and from 100 to \(10^{-6}\) bar. The network is also required to accurately describe the kinetics of all C/H/O/N species with up to two carbon atoms in order to correctly model the overall chemistry of every major species observed and potentially visible in exoplanet spectra (H\({}_{2}\)O, CH\({}_{4}\), NH\({}_{3}\), CO, CO\({}_{2}\), HCN, C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{4}\), C\({}_{2}\)H\({}_{6}\)...). Although the chemical network is aimed at studying hydrogen-dominated atmospheres, it should remain valid even at very high metallicity and for every possible C/H/O/N atomic abundance. This implies that it should accurately describe all the reaction kinetics, ranging from oxygen-poor, carbon- and hydrogen-dominated atmospheres for pyrolysis, up to oxygen-rich environments more favorable to oxidation reactions. Due to limitations in the available computational resources, it is mainly intended for 1D simulations.
Section 2 discusses how we selected the combustion networks from which we developed our new chemical network, the extensive validation that came along with it, and the additions and modifications made to the original networks. Then, in Sect. 3, we apply this network to the study of exoplanet atmospheres. We studied four planets: GJ 436 b, GJ 1214 b, HD 189733 b, and HD 209458 b, and we compared our results with those obtained with the chemical network Venot 2020. We also investigated the differences between the two networks to highlight new chemical pathways, in addition to discussing potential repercussions on the transmission spectrum and their implications for the observability and reliability of current models used to interpret JWST observations. Finally, we conclude in Sect. 4 and discuss potential future improvements on this work.
## 2 Detailed combustion network selection
### Considered combustion networks
Seven networks validated on combustion experiments have been compared: NUIGMech1.1, AramcoMech3.0, Burke 2016, Exgas 2014, Konnov 2005, Glarborg 2018, and Venot 2020. The first three networks, NUIGMech1.1, AramcoMech3.0, and Burke 2016, have been developed by Curran and co-authors at the National University of Ireland in Galway, a group that has led improvements in combustion kinetics in recent years.
**NUIGMech1.1:** Currently, NUIGMech1.1 (Wu et al., 2021) is the state-of-the-art kinetic network for C/H/O combustion. This network, which has been extensively validated against experimental data, describes the combustion kinetics of species up to molecules containing seven carbon atoms (C\({}_{7}\)). It also contains nitrogen reactions for the chemistry of NO\({}_{\rm x}\), which are regulated pollutants in combustion processes. This level of detail, capturing the chemistry of C\({}_{0}\)-C\({}_{7}\) species, is achieved at the cost of a very large network size (2746 species and 11279 reactions).
Because of the large size of NUIGMech1.1, which makes it impractical for 1D calculations, two smaller C/H/O networks from the same team were also considered: AramcoMech3.0 and Burke 2016.
**AramcoMech3.0:** AramcoMech3.0 (Zhou et al., 2018) is a C/H/O C\({}_{4}\) network of 581 species and 3037 reactions that focuses on improving the simulation of polycyclic aromatic hydrocarbon formation.
**Burke 2016:** Burke 2016 is a C/H/O C\({}_{3}\) network of 173 species and 1011 reactions (Burke et al., 2016), which aimed to better reproduce the combustion of methanol, a species involved in the combustion of biofuels.
To verify the performance of these networks, we included another C/H/O network from the literature in our comparisons: Exgas 2014.
**Exgas 2014:** Exgas 2014 (Bounaceur et al., 2015) is a C/H/O C\({}_{3}\) network of 209 species and 1472 reactions generated with Exgas (Warth et al., 2000), a software tool that automatically generates detailed combustion kinetic networks. It was used to predict auto-ignition temperatures and delays for gas turbine applications.
Because all these networks besides NUIGMech1.1 lacked nitrogen chemistry, we included three other C/H/O/N networks from the literature: Konnov 2005, Venot 2020, and Glarborg 2018.
**Konnov 2005:** Konnov 2005 is a C/H/O/N C\({}_{2}\) network of 127 species and 1213 reactions (Konnov et al., 2005) designed to study the oxidation of NO into NO\({}_{2}\) in a medium containing ethane; it was part of the research effort to reduce NO\({}_{x}\) emissions from car engines driven by toxicity and pollution concerns.
**Venot 2020:** Venot 2020 is the Venot et al. (2020) chemical network, which is an updated version of the Venot et al. (2012) network in which the methanol chemistry was reevaluated. It was especially designed for the study of exoplanet disequilibrium chemistry. It is a C/H/O/N C\({}_{2}\) network of 112 species and 944 reactions, also derived from four experimentally validated combustion networks (Bounaceur et al., 2010; Konnov, 2009; Dagaut et al., 2008a; Burke et al., 2016).
**Glarborg 2018:** Glarborg 2018 is a C/H/O/N C\({}_{3}\) network (Glarborg et al. 2018) that aimed at improving the precision of nitrogen chemistry, especially NO\({}_{\rm x}\) formation. It is a very comprehensive and widely used network for the modelling of nitrogen chemistry in combustion.
For clarity, all the networks used for comparison are listed in Table 1.
### Experimental data
In order to select the best chemical network for our requirements, we gathered 1618 combustion experimental data points, tested the seven different networks over the conditions detailed in Table A.1 in the appendix using the Ansys software Chemkin-Pro (Kee et al. 2006), and, finally, compared the predictions to the experimental data. For a large majority, these data consisted of molar fraction measurements of different species (reactants and products, 1558 measurements out of 1618), but also of measurements of auto-ignition delay times (IDT, 60 measurements out of 1618). This delay corresponds to the time it takes for a fuel mixture to spontaneously ignite at a given temperature and pressure. In experiments, it is measured as the time between the moment when the gas is brought to the target temperature and pressure conditions and the moment when ignition is detected, most often by a pressure peak or by a concentration peak of excited OH or CH radicals. In our simulations, the ignition delay time is taken as the time of maximum OH radical concentration. These experimental data were collected from 21 different publications in total, covering a wide range of conditions fully described in the appendix. The first eight experimental conditions considered were taken from those used in Venot et al. (2020), to determine how the other chemical networks compare to it on the original data used for its validation. The data collected for the first six experimental conditions consisted of the temporal evolution of the abundances of the major species at play at the start and end of the reaction (CH\({}_{3}\)OH, O\({}_{2}\), CO, CO\({}_{2}\), H\({}_{2}\)O, HCHO, H\({}_{2}\)...) in three different reactor types (closed reactor, plug flow reactor, shock tube). The seventh consisted of auto-ignition delay time measurements in a shock tube at initial pressures of 10 and 50 bar, at 10 and 5 different initial temperatures, respectively, over a range of 1000 to 1300 K. The eighth consisted of the evolution with temperature of the abundances of major species at the exit of a perfectly stirred reactor. The remaining experimental conditions (9 to 21 in Table A.1) focused on exploring a wider range of initial species and conditions by varying the equivalence ratio, from very oxygen-rich combustion to pyrolysis, and by varying the fuel type: combustion of H\({}_{2}\) and HCN, pyrolysis of CH\({}_{4}\) and C\({}_{2}\)H\({}_{5}\)OH, as well as reactions of nitrogen species like N\({}_{2}\)O, NO, or NH\({}_{3}\). Like the first eight, these data consisted of auto-ignition delay times, abundances over time, or abundances at steady state as a function of temperature, sometimes with a parameter study on equivalence ratio, pressure, or different initial species. The species concerned by these abundance data can be reactants (H\({}_{2}\), CH\({}_{4}\), HCN, C\({}_{2}\)H\({}_{5}\)OH, O\({}_{2}\)...), products (H\({}_{2}\)O, CO\({}_{2}\), CO, C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{4}\), C\({}_{2}\)H\({}_{6}\), CH\({}_{3}\)CHO, HCHO...), or appear as both in the data set depending on the conditions (CH\({}_{4}\), H\({}_{2}\)).
In combustion conditions, the parameter describing the relative abundance of fuel and oxidizer is the equivalence ratio:
\[\phi=\frac{n_{fuel}/n_{ox}}{[n_{fuel}/n_{ox}]_{sto}},\]
with \(n_{fuel}\) as the fuel quantity, \(n_{ox}\) as the oxidizer quantity, and \([n_{fuel}/n_{ox}]_{sto}\) as the ratio of these quantities in a stoichiometric mixture. As the equivalence ratio grows larger, the fuel proportion gets higher and the combustion conditions get closer to pyrolysis conditions. Pyrolysis corresponds to high temperature conditions in a reducing medium with no oxygen, while combustion refers to high temperature conditions in an oxidizing medium, usually oxygen. Covering a wide range of equivalence ratios in our dataset allows us to test the kinetic networks on different compositions, to ensure their ability to accurately model the chemistry occurring on exoplanets with very different elemental abundances. Pyrolysis and high equivalence ratio combustion correspond to low metallicities with high C/O, N/O, and H/O ratios, while low equivalence ratio combustion corresponds to high metallicities with low C/O, N/O, and H/O ratios. In total, the full experimental data set spanned equivalence ratios from 0.05 to 5 plus pyrolysis conditions, pressures from 0.2 to 50 bar, and temperatures from 800 to 2400 K.
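As a minimal illustration of this definition, the equivalence ratio can be computed as follows; the stoichiometric fuel-to-oxidizer ratio must be supplied for the fuel considered (for methanol in oxygen, CH\({}_{3}\)OH + 3/2 O\({}_{2}\)\(\longrightarrow\) CO\({}_{2}\) + 2 H\({}_{2}\)O, this ratio is 2/3).

```python
def equivalence_ratio(n_fuel, n_ox, fuel_to_ox_stoichiometric):
    """phi = (n_fuel / n_ox) / (n_fuel / n_ox)_sto.
    phi < 1: oxidizer-rich (lean) mixture, phi > 1: fuel-rich mixture,
    and pyrolysis corresponds to the limit of no oxidizer at all."""
    return (n_fuel / n_ox) / fuel_to_ox_stoichiometric

# Example: 1 mol CH3OH in 3 mol O2 (stoichiometric fuel/O2 ratio = 2/3)
print(equivalence_ratio(1.0, 3.0, 2.0 / 3.0))  # -> 0.5, a lean mixture
```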
### Error calculations
To test the agreement of the different chemical networks with the experimental data, network predictions were plotted against experimental points and compared. This resulted in over 500 plots, which is too many to show here. Therefore, we will focus on the distribution of errors for each chemical network, compiled in the histograms of Fig. 1, and discuss the main tendencies visible in the overall dataset.
To sum up these numerous plots into a statistical distribution of errors shown in Fig. 1, we chose to compute these errors using the following formula:
\[y_{error}=\frac{y_{mod}-y_{exp}}{y_{max}},\]
with \(y_{exp}\) being the values of each experimental point in our dataset, \(y_{mod}\) being the network prediction at that point and \(y_{max}\) being the value of the highest experimental point over the experimental range. Each experimental point corresponds to a measurement of the molar fraction of a species (either product or reactant, for 1558 measurements out of 1618), but also of the IDT of a mixture (60 measurements out of 1618). Depending on the reactor type, for a given experimental range of measurements, pressure, temperature, or reaction time can change. This range depends on the type of data, and corresponds to the temperature range of the original measurements for temperature studies, and to the time range of the original measurements for mole fraction over time studies. This choice is done to give a relative error that can be compared between different experiments, while avoiding non-representative errors due to data points close to zero and experimental and pointing noise causing diverging relative errors.
| **Name** | **Species** | **Reactions** | **Size** | **Atoms** |
|---|---|---|---|---|
| NUIGMech1.1 | 2746 | 11279 | C\({}_{7}\) | C/H/O/N |
| AramcoMech3.0 | 581 | 3037 | C\({}_{4}\) | C/H/O |
| Exgas 2014 | 209 | 1472 | C\({}_{3}\) | C/H/O |
| Burke 2016 | 173 | 1011 | C\({}_{3}\) | C/H/O |
| Glarborg 2018 | 151 | 1397 | C\({}_{3}\) | C/H/O/N |
| Konnov 2005 | 127 | 1213 | C\({}_{2}\) | C/H/O/N |
| Venot 2020 | 112 | 944 | C\({}_{2}\) | C/H/O/N |

Table 1: Characteristics of the chemical networks considered and compared in this study. Size corresponds to the heaviest reactant included in the network.
The assumed network prediction corresponds to the linear interpolation between the two closest computed points. For temperature studies, the computed points were evenly distributed over the experimental range, compromising between the density of the distribution and the computational time. For time studies, the computed points were determined by the software used for the calculations.
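Combining the error definition above with this interpolation, the per-point errors of one network on one experimental profile can be computed as in the following sketch (assuming numpy; variable names are illustrative).

```python
import numpy as np

def relative_errors(x_exp, y_exp, x_mod, y_mod):
    """y_error = (y_mod - y_exp) / y_max for each experimental point, where
    y_max is the largest experimental value over the measured range and the
    model prediction is linearly interpolated onto the experimental abscissa
    (time or temperature). x_mod must be sorted in increasing order."""
    y_mod_at_exp = np.interp(x_exp, x_mod, y_mod)
    return (y_mod_at_exp - np.asarray(y_exp)) / np.max(y_exp)
```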
In these histograms, NUIGMech1.1, Burke 2016, and Glarborg 2018 display the best prediction accuracy over the dataset. In the following, we focus on a more in-depth description of the causes underlying these results. In total, over the 1618 experimental points, about 50 were beyond 100% calculated error for all models, with maximum values reaching around 2500%. These high deviations are found with all the chemical networks and were not plotted in the error distributions.
Figure 1: Statistical distribution of the relative error over every experimental point in the 1618-point data set for each studied chemical network. Points are grouped in colors, each corresponding to a different type of initial conditions. AramcoMech3.0 is not shown here as it is almost identical to Burke 2016.
They mainly come from conditions 16, 17, and 18 in Table 1 for ethanol and methane pyrolysis, the important discrepancies observed between experiments and simulations appearing for plug flow experiments. For these specific experiments, important shifts in time (or temperature) are observed, which dramatically affect the calculated \(y_{error}\). There is also an abnormal distribution of errors around -70% seen in Fig. 1 for all models, mainly coming from condition 19 of Table 1 and concerning C\({}_{2}\)H\({}_{4}\) and H\({}_{2}\)O. These errors could be due to issues in the experimental points, for example, ethanol reacting before entering the reactor.
### Auto ignition delay time
When first comparing the different plots for each network on methanol combustion (conditions 1-9 in Table 1), the first thing that stood out was that the chemical networks based on the work of Curran and co-authors (AramcoMech3.0, NUIGMech1.1, and Burke 2016) were all markedly better at describing auto-ignition delay times. They agree with the auto-ignition data of methanol shown in Fig. 2(a) and 2(b) with a mean error of 5% at 10 bar and within 10% at 50 bar, with almost no visible difference between them. On the contrary, the Exgas network severely overestimates this delay, with almost an order of magnitude difference. The Konnov network does not reproduce the temperature dependence: delays are underestimated at low temperatures (under 1050 K) and overestimated at high temperatures (over 1050 K) by around 150%. For the Glarborg network, the delay is overestimated at both pressures by around 30%, and for Venot 2020, this delay is too short, by around -40%. The ability of each network to accurately describe the auto-ignition delay time in given initial conditions has major consequences on kinetic simulations of mole fraction over time. When a network underestimates the IDT, fuel consumption will tend to be overestimated. This correlation is clearly visible in our dataset. Figures 2(a) and 2(b) show that Venot 2020 underestimates the IDT while Glarborg 2018 overestimates it. This is related to Fig. 2(c), where CH\({}_{3}\)OH consumption is overestimated for Venot 2020 and underestimated for Glarborg 2018. This impact is clearly seen in combustion conditions, as shown in Fig. 1, leading the CH\({}_{3}\)OH error distribution of Venot 2020 to lie mostly between -100% and 0% error, and between 0 and 100% error for Glarborg 2018. Figure 2(d) shows that, in consequence, products like CO tend to be overestimated around ignition time for networks with underestimated IDT, as in Venot 2020, and, conversely, for networks with overestimated IDT like Glarborg 2018. As the oxidation of intermediate species like CO is not directly linked to IDT, other parameters may control CO consumption, as is visible with Konnov 2005.
Figure 2: Ignition delay time of methanol at 10 bar (a) and 50 bar (b) in condition 1 and mole fraction of CH\({}_{3}\)OH (c) and CO (d) over time in condition 2 of Table 1 for all tested chemical networks. AramcoMech3.0 is not shown here as it is almost identical to Burke 2016.
### Combustion and pyrolysis of methanol, ethanol, formaldehyde, and acetaldehyde
Focusing on data related to methanol thermal decomposition, we can see in Fig. 3 that the Exgas, Glarborg, and Konnov networks overestimate CH\({}_{3}\)OH abundance profiles, while Venot 2020 tends to underestimate methanol abundances. In the end, for these networks, their best performances on methanol points are obtained for data in perfectly stirred reactors or plug flow reactors.
In addition, ethanol results are also displayed in Fig. 3. The experimental conditions concerning this species include only ethanol pyrolysis, with temperature- and pressure-dependent species profiles. CH\({}_{3}\)CHO data come exclusively from these conditions (18 and 19 of Table 1), whereas CH\({}_{2}\)O errors also include methanol combustion experiments (2, 6, 7, and 8 of Table 1). The Curran-based networks give quite similar results for these species, except for NUIGMech1.1, which is significantly better on methanol. This is probably due to a better representation of the growth mechanism towards heavier molecules occurring under pyrolysis conditions. Overall, these networks perform similarly for these species. The Glarborg network is accurate for methanol pyrolysis, but is less effective for methanol combustion. For C\({}_{2}\)H\({}_{5}\)OH, its performance is less accurate than that of the previous networks, and for CH\({}_{3}\)CHO, the experimental abundance is underestimated, by about -75%. Venot 2020 also reproduces these experimental points quite poorly, especially for CH\({}_{2}\)O and CH\({}_{3}\)CHO.
### Main products and reactants in combustion and pyrolysis
Main species are shown in Fig. 4. This histogram gathers all computed errors of model predictions on experimental measurements (mole fraction and IDT) of H\({}_{2}\), CH\({}_{4}\), H\({}_{2}\)O, O\({}_{2}\), CO, and CO\({}_{2}\), regardless of their role in the experiment (either product or reactant). In H\({}_{2}\) combustion conditions, all the networks were within 5% or 10% error. For H\({}_{2}\) mole fraction measurements coming from methane pyrolysis experiments, however, almost every network underestimated its production compared to the experimental points, although the Curran-based networks were the closest to experiments. In ethanol pyrolysis, on the contrary, the H\({}_{2}\) production was severely overestimated by all the networks, especially at high temperatures, with errors up to 200% at 1300 K even with usually reliable networks like NUIGMech1.1.
For methane pyrolysis, the Curran-based networks were under 5% error, while other networks like Exgas or Venot 2020 overestimated CH\({}_{4}\) abundances by around 20%.
Figure 3: Statistical distribution of the relative error over the 449 experimental data points for intermediate products (CH\({}_{3}\)CHO, CH\({}_{2}\)O) in combustion and pyrolysis of C\({}_{2}\)H\({}_{5}\)OH and CH\({}_{3}\)OH. Each color gathers all molar fraction measurements of the corresponding species. Contributions from combustion and pyrolysis data are shown separately in Figs. 1 and 2.
For methane combustion, almost all networks were under the 5% error range, except for the Konnov network, whose temperature dependence was totally off. In ethanol pyrolysis conditions, methane production was underestimated by all networks.
For water, results were good for all the networks, except in the ethanol pyrolysis conditions, where H\({}_{2}\)O production was severely underestimated by all the networks by around 75%, which is shown as red bars in Fig. 2.
For O\({}_{2}\) consumption in methanol or hydrogen burning conditions, the best networks were the Curran-based networks, with the same problems as noted previously for others, which were due to bad methanol ignition delay time predictions.
One species that displayed significant gains in accuracy with the Curran-based networks is CO, which has a wide range of errors with such networks as Exgas 2014, Glarborg 2018, Konnov 2005, and Venot 2020. However, for CO\({}_{2}\), we do not see a significant improvement over our dataset in relation to these experimental data.
### Network base choice and C\({}_{2}\) reduction
To derive our C/H/O/N chemical network from these combustion networks, multiple options were considered. The first one was to simply take the Glarborg 2018 network, as it is already a C/H/O/N network. However, as seen in the corresponding error distributions in Figs. 1 and 3, its performance, although better than that of older networks like Venot 2020 or Konnov 2005, is surpassed on oxygenated species and alcohol combustion conditions by recent methanol-focused networks, such as Burke 2016, or by the generic state-of-the-art network NUIGMech1.1. With respect to nitrogen chemistry, however, it is the most state-of-the-art network, although the difference with NUIGMech1.1 and Konnov 2005 was not shown very clearly by our dataset. In the end, as both nitrogen and C/H/O chemistry are equally important for exoplanets, we decided to fuse the Glarborg 2018 network with the Burke 2016 network, combining the most state-of-the-art nitrogen chemistry with the best-performing reasonably sized C/H/O network on our data, while making sure that the methanol chemistry is accurate. To further reduce the network size, we removed 81 out of the 91 species of the C\({}_{3}\) sub-mechanism of Burke 2016 and their reactions, but kept the last 10 that were necessary to preserve the accuracy on some C\({}_{2}\) species. This reduction was made because, for exoplanets, C\({}_{3}\) species abundances are usually low and their interest is limited in comparison to the increase in computation time they require due to the higher number of possible isomers. In addition, limiting calculation times allows for future additions of other species such as sulfur, and for the use of the network in retrievals using TauREx with the FRECKLL plugin.
Figure 4: Statistical distribution of the relative error over the 823 experimental data points for the main pyrolysis products (H\({}_{2}\), CH\({}_{4}\)), combustion products (H\({}_{2}\)O, CO, CO\({}_{2}\)), and reactants (O\({}_{2}\)) in combustion and pyrolysis of CH\({}_{3}\)OH, CH\({}_{4}\), H\({}_{2}\), HCN, and C\({}_{2}\)H\({}_{5}\)OH. For some points, CO and H\({}_{2}\)O are reactants (condition 13 of Table 1). Each color gathers all molar fraction measurements of the corresponding species. Contributions from combustion and pyrolysis data are shown separately in Figs. 3 and 4.
### Additional modifications to the network
While applying our chemical network to exoplanet studies, we noted that NH\({}_{3}\) formation at high altitudes was primarily driven by a reversed globalized reaction: CH + NH\({}_{3}\)\(\longrightarrow\) H\({}_{2}\)CN + H + H. This reaction was assumed to be the combination of two reactions: CH + NH\({}_{3}\)\(\longrightarrow\) CH\({}_{2}\)NH + H and CH\({}_{2}\)NH \(\longrightarrow\) H\({}_{2}\)CN + H, but was written in this compact form in the original network of Glarborg 2018, implicitly assuming that the latter reaction would always happen only after the former. This simplification (while certainly reasonable in nitrogen combustion chemistry) is not suited to exoplanetary conditions, especially in the upper atmosphere, where photolysis combined with low density conditions maintains a very high concentration of hydrogen radicals that heavily favors the reverse reaction, resulting in an unphysical NH\({}_{3}\) production pathway. Hence, we decided to rewrite the reversible reaction CH + NH\({}_{3}\)\(\longrightarrow\) H\({}_{2}\)CN + H + H into two others: CH + NH\({}_{3}\)\(\longrightarrow\) CH\({}_{2}\)NH + H, for which we kept the parameters of the CH + NH\({}_{3}\)\(\longrightarrow\) H\({}_{2}\)CN + H + H reaction, and H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH. For this second reaction, the choice of parameters was based on the reaction NH\({}_{2}\) + H \(\longrightarrow\) NH\({}_{3}\) by analogy. Both reactions are indeed the recombination of a nitrogen radical with a hydrogen atom, and therefore occur with no activation energy. They also should share a similar pre-exponential factor, with no temperature dependence. This factor was hence estimated at \(1.6\times 10^{14}\) cm\({}^{3}\) mol\({}^{-1}\) s\({}^{-1}\). However, this value is only accurate in the high pressure limit because, at low pressures, this reaction needs a third body to stabilize the product, causing a strong pressure dependence of the rate constant. Further work is needed to correctly take into account this pressure dependence, using advanced Variational Reaction Coordinate-VTST and Master Equation methods (Klippenstein 1992; Georgievskii & Klippenstein 2003a,b). We discuss the impact of this approximation in Section 3.2.2.
In addition, we disabled the reversibility of 24 other globalized reactions yielding three products, to prevent similar unexpected chemical pathways from occurring. However, we did not find any conditions in which the reverse direction of these reactions was favored.
To further improve the reliability of the chemical network in the upper atmosphere, we searched for possibly missing radical reactions that could significantly impact the chemistry in these specific conditions. We listed all the major species and radicals typically encountered in exoplanets or produced by photolysis and checked for their reactions with the N and NH radicals. Many reactions of N and NH are negligible in usual combustion conditions; hence, the coupling between all radicals is not systematic, especially for radical compounds such as NH. In exoplanets, the photochemistry of NH\({}_{3}\) produces a lot of N and NH radicals, in a medium where radicals are especially abundant, which causes them to react mainly with each other through pathways that are usually neglected. While searching for these kinds of reactions, we identified six potentially missing reactions and determined their parameters by analogy with other reactions. These added reactions were also checked to ensure that they do not exceed the theoretical collision limit (see Table 2). We compared the chemical network results on our combustion data set before and after these modifications, confirming that they do not affect the network performance. This scheme can be downloaded from the KInetic Database for Astrochemistry (Wakelam et al. 2012)1 and also from the ANR EXACT website2.
Footnote 1: [https://kida.astrochem-tools.org/](https://kida.astrochem-tools.org/)
Footnote 2: [https://www.anr-exact.cnrs.fr/fr/chemical-schemes/](https://www.anr-exact.cnrs.fr/fr/chemical-schemes/)
## 3 Application to exoplanetary atmospheres
### Models and data sources
Our prime motivation for this extensive work on combustion networks was to develop a very robust scheme for the study of exoplanetary atmospheres. Thus, in this section, we now apply this new scheme to model the atmospheric chemical composition of various exoplanets. Our new C/H/O/N scheme was tested against multiple exoplanet cases and compared to the one published in Venot et al. (2020). In the following, we refer to the Venot 2020 chemical network as V20 and to our update as V23.
In order to span different types of hydrogen-dominated atmospheres, we chose to model GJ 436 b and GJ 1214 b (warm Neptunes) as well as HD 209458 b and HD 189733 b (hot Jupiters), using the same thermal profiles, initial conditions, and parameters as given in Venot et al. (2020) (Table 3). For each planet, we compared the abundances obtained with both V23 and V20. We calculated the chemical abundance profiles using FRECKLL (Al-Refaie et al. 2022), which is the Python version of the code used in Venot et al. (2020). The results obtained with this code are identical, but the computational time has been greatly improved.
| **Reaction** | **A** | **n** | **E** | **Analogy for A and n** | **Source for E** |
|---|---|---|---|---|---|
| NH\({}_{2}\) +M \(\longrightarrow\) NH + H | \(5.6\times 10^{15}\) | 0 | 96600 | CH\({}_{2}\) +M \(\longrightarrow\) CH + H (Bauerle et al. 1995) | \(\Delta_{r}H\) |
| C + NH \(\longrightarrow\) CN + H | \(5.0\times 10^{13}\) | 0 | 0 | C + OH \(\longrightarrow\) CO + H (Glarborg et al. 1986) | N/A |
| N + H +M \(\longrightarrow\) NH | \(4.7\times 10^{18}\) | -1 | 0 | O + H +M \(\longrightarrow\) OH (Tsang & Hampson 1986) | N/A |
| CN +M \(\longrightarrow\) C + N | \(1.5\times 10^{16}\) | 0 | 180260 | C\({}_{2}\) +M \(\longrightarrow\) C + C (Kruse & Roth 1997) | \(\Delta_{r}H\) |
| NO +M \(\longrightarrow\) N + O | \(1.5\times 10^{16}\) | 0 | 150920 | C\({}_{2}\) +M \(\longrightarrow\) C + C (Kruse & Roth 1997) | \(\Delta_{r}H\) |
| N\({}_{2}\) +M \(\longrightarrow\) N + N | \(1.5\times 10^{16}\) | 0 | 225940 | C\({}_{2}\) +M \(\longrightarrow\) C + C (Kruse & Roth 1997) | \(\Delta_{r}H\) |

Table 2: Reactions added to the network and their parameters. Values are in mol, cm\({}^{3}\), cal, and s units. **A**, **n**, and **E** are the parameters of the modified Arrhenius equation, while \(\Delta_{r}H\) is the reaction enthalpy, whose values are taken from NIST. As C + NH and N + H are radical-radical combinations, they are barrierless reactions (**E** = 0). +M indicates low pressure limit reactions.
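For illustration, the rate constants of these added reactions follow the modified Arrhenius form \(k(T)=A\,T^{n}\exp(-E/RT)\); a minimal sketch of its evaluation in the unit system of Table 2 (mol, cm\({}^{3}\), cal, s) is given below.

```python
import math

R_CAL = 1.987  # gas constant in cal mol^-1 K^-1

def modified_arrhenius(T, A, n, E):
    """k(T) = A * T**n * exp(-E / (R * T)), with A in the mol/cm3/s unit
    system and E in cal/mol, as in Table 2."""
    return A * T**n * math.exp(-E / (R_CAL * T))

# Example: CN +M -> C + N at 2000 K, using the parameters of Table 2
print(modified_arrhenius(2000.0, A=1.5e16, n=0.0, E=180260.0))
```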
The thermal profiles are discretized on a 130-layer grid, evenly distributed in log pressure. We assumed a solar metallicity for HD 189733 b and HD 209458 b, a 100x solar metallicity for GJ 1214 b, and both metallicities for GJ 436 b. Elemental abundances were based on Lodders (2010), with 20% less oxygen to account for sequestration in refractory elements.
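As a minimal sketch of this discretization (assuming numpy, and taking 100 bar to 10\({}^{-7}\) bar as the pressure bounds of the plotted profiles), the grid can be built as follows.

```python
import numpy as np

# 130 layers evenly distributed in log10(pressure), from 100 bar to 1e-7 bar
# (the bounds are assumed here from the plotted abundance profiles).
pressure_bar = np.logspace(2.0, -7.0, 130)
```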
We updated the photodissociation data (cross-sections and branching ratios, Table 1), compared to that used in V20. To discriminate the changes due to this update and to chemistry, we first compared the abundance profiles of each exoplanet model for some of the major species (H\({}_{2}\), H\({}_{2}\)O, CH\({}_{4}\), CO, N\({}_{2}\), NH\({}_{3}\), CO\({}_{2}\), HCN, and H) between the old photolysis and the new photolysis data for the V20 chemical network with FRECKLL. This update turned out to have little impact on photochemistry for most species on hot Jupiters (HD 189733 b and HD 209458 b) and for all species on warm Neptunes (GJ 436 b and GJ 1214 b). However, for HCN, the addition of two new photodissociation pathways of NH\({}_{3}\) into NH (NH\({}_{3}\)\(\longrightarrow\) NH + H + H and NH\({}_{3}\)\(\longrightarrow\) NH + H\({}_{2}\)) creates differences of up to one order of magnitude in the upper atmosphere of HD 189733 b and HD 209458 b between 10\({}^{-6}\) and 10\({}^{-7}\) bar. The consequences of this photolysis update are summarized in Fig. 1.
In the following, we compare the chemical abundances obtained with V20 and V23 for each planet case, using only this most recent UV cross-section data and branching ratios. We also investigate the reasons explaining the observed differences and identify the main chemical pathways at play in each network. To evaluate the impact on observables, we generated the transmission spectrum of every planet with TauREx 3.1 (Al-Refaie et al. 2021), using a spectral resolution of 50 and opacity data from ExoMol (Tennyson & Yurchenko 2012) for HCN and from Al-Refaie et al. (2022) for CH\({}_{4}\), CO, CO\({}_{2}\), H\({}_{2}\)O, and NH\({}_{3}\). Rayleigh diffusion for CH\({}_{4}\), CO, CO\({}_{2}\), H\({}_{2}\), H\({}_{2}\)O, He, N\({}_{2}\), and NH\({}_{3}\), as well as collision-induced absorption from HITRAN (Gordon et al. 2022) for H\({}_{2}\)-H\({}_{2}\) and H\({}_{2}\)-He, were also included.
### Results for GJ 436 b
For the warm Neptune GJ 436 b, we first simulated the 1D chemical abundance profiles assuming a solar metallicity and a constant eddy diffusion coefficient of 10\({}^{9}\) cm\({}^{2}\) s\({}^{-1}\). While some of the main species (H\({}_{2}\)O, CH\({}_{4}\), NH\({}_{3}\), N\({}_{2}\), CO, and H) are found to have similar abundance profiles (Fig. 5) with both networks, we observe that two species differ by various orders of magnitude: CO\({}_{2}\) below 0.1 bar, and HCN on the whole pressure profile (100 to 10\({}^{-7}\) bar). In the upper atmosphere, the molar fraction of CO\({}_{2}\) is higher with V23 than in V20, with a difference up to three orders of magnitude around 10\({}^{-6}\) bar. For HCN, its molar fraction is lower in V23 than in V20 for pressures under 10\({}^{-2}\) bar, with a difference of up to four orders of magnitude around 10\({}^{-6}\) bar. For pressures higher than 10\({}^{-2}\) bar, HCN molar fraction is higher in V23 than in V20, with a difference of up to two orders of magnitude around 10 bar. In the following, we discuss the origin of the differences for these two species.
#### 3.2.1 CO\({}_{2}\) differences
Upon investigating the reasons for this discrepancy, we found that the total CO\({}_{2}\) reaction rate profile differs between V23 and V20. Figure 6 shows the total rate of CO\({}_{2}\) formation and destruction in each layer, which are equal when including vertical mixing because the profiles are at steady state. We see that these total reaction rates are larger below 1 bar with V23 than with V20.
Figure 7 presents the main contributions of each reaction to this total rate, the sum of the positive contributions and the sum of the negative contributions both being equal to the total reaction rate when accounting for vertical mixing, because of the steady state. For V23, Fig. 7 shows that the reaction CO + OH \(\longrightarrow\) CO\({}_{2}\) + H is always the main CO\({}_{2}\) production reaction above 1 bar and the main CO\({}_{2}\) destruction reaction below 1 bar. Vertical mixing mainly transports the CO\({}_{2}\) produced in the middle atmosphere (1 - 10\({}^{-3}\) bar) towards the lower atmosphere (below 1 bar), where it is destroyed into CO and OH through the CO\({}_{2}\) + H \(\longrightarrow\) CO + OH reaction. For V20, this reaction is not the main destruction pathway of CO\({}_{2}\) in the lower atmosphere, and vertical mixing is needed to compensate for the destruction of CO\({}_{2}\) through the N(\({}^{4}\)S) + CO\({}_{2}\)\(\longrightarrow\) NO + CO reaction, although it also remains the main CO\({}_{2}\) production reaction for pressures lower than 1 bar. At the peak of CO\({}_{2}\) abundance around 10\({}^{-6}\) bar, the main production reaction is CO + OH \(\longrightarrow\) CO\({}_{2}\) + H and the main loss mechanism is photodissociation into CO through CO\({}_{2}\)\(\longrightarrow\) CO + O(\({}^{1}\)D), for both V20 and V23.
Figure 5: Abundance profiles of GJ 436 b for solar metallicity and a constant eddy diffusion coefficient of 10\({}^{9}\) cm\({}^{2}\) s\({}^{-1}\). Dashed lines are for V20, while solid lines are for V23. H\({}_{2}\) is not shown to focus on other species, but its abundance profile in V23 is almost identical to V20.
| **Planet name** | **Planet type** | **Star type** | **D (AU)** | **R (\(R_{J}\))** | **T (K)** | \(K_{zz}\) (cm\({}^{2}\)/s) | **M (solar)** |
|---|---|---|---|---|---|---|---|
| GJ 436 b | Warm Neptune | M3V | 0.029 | 0.38 | 1094 | \(10^{9}\) | 1 |
| GJ 436 b | Warm Neptune | M3V | 0.029 | 0.38 | 1094 | \(10^{9}\) | 100 |
| GJ 1214 b | Warm Neptune | M4.5V | 0.014 | 0.24 | 1054 | \(3\times 10^{7}\times P^{-0.4}\) | 100 |
| HD 189733 b | Hot Jupiter | K2V | 0.031 | 1.14 | 1470 | profile | 1 |
| HD 209458 b | Hot Jupiter | F9V | 0.047 | 1.38 | 1671 | profile | 1 |

Table 3: Exoplanets simulated with V20 and V23 and the input parameters used. Here, **D** is the distance to the host star, **R** the planet radius, **T** the temperature at 1 bar, \(K_{zz}\) the eddy diffusion coefficient, and **M** the metallicity relative to solar abundances.
Since the photodissociation rate is proportional to the CO\({}_{2}\) concentration, its increase is directly linked to the higher CO\({}_{2}\) levels in V23. The UV cross-sections used in V23 and V20 being the same in these simulations, there is no difference in the main loss reaction parameters at this pressure between the two chemical networks; thus, this difference must come from the production reaction CO + OH \(\longrightarrow\) CO\({}_{2}\) + H. In V20, this production reaction is taken from Baulch et al. (1994) with the pre-exponential factor divided by 6, resulting in a production rate of around 5 molecule cm\({}^{-3}\) s\({}^{-1}\) around \(10^{-6}\) bar. In V23, these reaction parameters are taken from Joshi and Wang (2006), where the rate is treated as a sum of two modified Arrhenius equations with opposite temperature dependences. This results in a total production rate of around 2500 molecule cm\({}^{-3}\) s\({}^{-1}\), which is three orders of magnitude greater than in V20. This difference is directly observed in the rate constant of this reaction for the two networks and is fully attributable to the very different parameters used for modeling this reaction, as shown in Fig. 8.
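To make this kind of comparison concrete, the sketch below (assuming numpy) shows how a rate constant expressed as a sum of modified Arrhenius terms can be evaluated and compared to another parameterization of the same reaction; the actual coefficients of Joshi and Wang (2006) and Baulch et al. (1994) are not reproduced here and would have to be taken from those references.

```python
import numpy as np

R_CAL = 1.987  # gas constant in cal mol^-1 K^-1

def k_arrhenius(T, A, n, E):
    """Single modified Arrhenius term k(T) = A * T**n * exp(-E / (R*T))."""
    return A * T**n * np.exp(-E / (R_CAL * T))

def k_sum(T, terms):
    """Rate constant written as a sum of modified Arrhenius terms, as done in
    V23 for CO + OH -> CO2 + H; terms is a list of (A, n, E) tuples, here with
    opposite temperature dependences."""
    return sum(k_arrhenius(T, A, n, E) for A, n, E in terms)

def rate_ratio(T, terms_a, terms_b):
    """Ratio k_a(T)/k_b(T) between two parameterizations of the same reaction,
    as compared across networks in Fig. 8."""
    return k_sum(T, terms_a) / k_sum(T, terms_b)
```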
This difference has already been pointed out by Tsai et al. (2021) for temperatures in the 250-2000 K range (their Figure 41). Other combustion networks, such as Konnov 2005 or NUIGMech1.1, give rate constants of a similar order of magnitude to V23, also about three orders of magnitude greater than V20, even though they use different sources for the reaction parameters. NUIGMech1.1 is particularly close to V23 for pressures around 100 bar and temperatures above 500 K, despite using data from Senosiain et al. (2005) and a single reaction with pressure-dependent parameters. For temperatures below this value, some care should be taken, as reaction rate differences between chemical networks rise above one order of magnitude around 250 K, where the validity of these networks is no longer ensured.
#### 3.2.2 HCN differences
In this part, we focus on the differences observed between V23 and V20 for HCN. The total reaction rate profile for HCN production and consumption is shown in Fig. 9, and the respective contributions of the major reactions are shown in Fig. 10.
As expected, the HCN total reaction rate profile roughly matches the differences in HCN abundance, with the total reaction rate in V20 being up to six orders of magnitude higher than in V23 around \(10^{-7}\) bar, and up to one order of magnitude lower between 100 and \(10^{-3}\) bar. In both networks, the main HCN consumption reaction between \(10^{-2}\) and \(10^{-6}\) bar is the reaction HCN +
Figure 8: Reaction rates of the reaction CO + OH \(\longrightarrow\) CO\({}_{2}\) + H in V20 compared to V23, NUIGMech1.1, and Konnov 2005. In V23, this reaction rate is expressed as the sum of two modified Arrhenius equations (Eqs. 1 and 2). In NUIGMech1.1, the rate constant is pressure-dependent.
Figure 6: Total reaction rate profile for CO2 in GJ 436 b with a solar metallicity with V20 (dashed lines) and V23 (solid lines).
Figure 7: Contribution profile of the major production and loss reactions for CO\({}_{2}\) in GJ 436 b. Positive values are production contributions and negative values are loss contributions. Black lines correspond to the vertical mixing compensation, such that the sum in each layer is always zero due to the steady state. Dashed lines are for V20, while solid lines are for V23. The contributions of the photodissociation pathways to O(\({}^{3}\)P) and O(\({}^{1}\)D) are combined, O(\({}^{3}\)P) being favored above \(10^{-5}\) bar and O(\({}^{1}\)D) at lower pressures. The third column in the legend indicates the reaction type. “Photo” corresponds to photodissociation and “no M” corresponds to reactions without pressure dependence. The last column indicates which model includes each reaction.
H \(\longrightarrow\) H\({}_{2}\)CN. This reaction is implemented in V23 with a pressure-dependent rate, with parameters interpolated in log space (PLOG) between three pressure values at 10, 1, and 0.1 bar. In V20, this reaction is described as two separate reactions: one for the high-pressure limit, and one for the low-pressure limit, requiring a third body. The parameters of the high-pressure-limit reaction are close to the values of the V23 PLOG reaction at 0.1 bar, and the low-pressure-limit contribution decreases with altitude, becoming negligible for pressures below 0.1 bar. Therefore, these differences cannot explain those between the two HCN abundance profiles. It is also important to note that this description likely overestimates the HCN + H \(\longrightarrow\) H\({}_{2}\)CN reaction rate for pressures below 0.1 bar, for both networks. A PLOG implementation over the full pressure range (1000 - 10\({}^{-8}\) bar) would be necessary to describe the full pressure dependence.

In both networks, the main HCN production reaction is H\({}_{2}\)CN + H \(\longrightarrow\) HCN + H\({}_{2}\) between 0.1 and 10\({}^{-6}\) bar. The parameters of this reaction are identical in the two networks, so it is not responsible for the differences between the HCN abundance profiles. The other major contributing reaction, HCN + H \(\longrightarrow\) HCNH, is similar to the reaction HCN + H \(\longrightarrow\) H\({}_{2}\)CN, but results in the formation of HCNH, an isomer of H\({}_{2}\)CN. V23 uses values similar to V20 for this reaction, although the pressure dependence is described between 0.1 and 10 bar with the PLOG formalism. Another difference between V20 and V23 is the inclusion of HNC in V23 and its isomerization reaction HNC \(\longrightarrow\) HCN. However, this reaction does not impact the HCN profiles, which we confirmed by running the simulation with V23 without this reaction.

The combination of the reactions HCN + H \(\longrightarrow\) H\({}_{2}\)CN and H\({}_{2}\)CN + H \(\longrightarrow\) HCN + H\({}_{2}\) results in an equilibrium between HCN and its radical H\({}_{2}\)CN. Hence, the HCN differences between V23 and V20 are directly driven by differences in the production and consumption rates of H\({}_{2}\)CN, which are shown in Fig. 11. As expected, we observe a reaction rate profile similar to that of HCN, with a total reaction rate in V23 up to six orders of magnitude above that of V20 at 10\({}^{-7}\) bar, and differing by up to two orders of magnitude between 10 and 10\({}^{-3}\) bar. This indicates that the HCN abundance is mainly controlled by the H\({}_{2}\)CN abundance and its associated consumption and production reactions.

Figure 12 shows a few differences in the major reactions of H\({}_{2}\)CN. Firstly, in V20, the main consumption reaction over the whole pressure range is the reaction H\({}_{2}\)CN + H \(\longrightarrow\) HCN + H\({}_{2}\), previously mentioned as the main HCN production reaction in both networks, which controls the HCN/H\({}_{2}\)CN equilibrium. For pressures above 10\({}^{-1}\) bar, H\({}_{2}\)CN consumption is no longer local, and vertical mixing advects H\({}_{2}\)CN to be consumed in the upper layers by the reaction H\({}_{2}\)CN + H \(\longrightarrow\) HCN + H\({}_{2}\). Conversely, in V23, this reaction is negligible, and H\({}_{2}\)CN consumption is entirely driven by the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH, discussed in Sect. 2.8.
This difference is crucial, because the CH\({}_{2}\)NH species and its linked reactions are absent from V20. Secondly, in V20, in the pressure range 10\({}^{-4}\) to 10\({}^{-6}\) bar, the main H\({}_{2}\)CN production reaction is HCN + H \(\longrightarrow\) H\({}_{2}\)CN, which is the second reaction mentioned as controlling the HCN/H\({}_{2}\)CN equilibrium. In V23, the main reaction in this range is the reaction CH\({}_{3}\) + N(\({}^{4}\)S) \(\longrightarrow\) H\({}_{2}\)CN + H, which is included in both networks with the same parameters. Finally, in V23 at around 1 bar, the reaction CH\({}_{2}\)NH + H \(\longrightarrow\)
Figure 10: Contribution profile of the major production and loss reactions for HCN in GJ 436 b. Positive values are production contributions and negative values are loss contributions. Black lines correspond to the vertical mixing compensation, such that the sum in each layer is always zero due to the steady state. Dashed lines are for V20, while solid lines are for V23. The third column in the legend indicates the reaction type. “no M” corresponds to reactions without pressure dependence, “PLOG” to a full pressure dependence and fall-off description with the PLOG formalism, “M only” to pressure dependence without fall-off or high-pressure limit, and “decay” to reversible, pressure-dependent unimolecular reactions such as isomerization or electronic decay. The last column indicates which model includes each reaction.
Figure 9: Total reaction rate profile for HCN in GJ 436 b with a solar metallicity with V20 (dashed lines) and V23 (solid lines).
Figure 13 shows that disabling the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH results in an HCN profile almost identical to V20 (dashed and dash-dotted lines) for pressures lower than 10\({}^{-1}\) bar. For pressures above this value, however, no visible difference with V23 was observed. We then disabled the reaction CH\({}_{2}\)NH + H \(\longrightarrow\) H\({}_{2}\)CN + H\({}_{2}\), but other reactions such as CH\({}_{2}\)NH + CH\({}_{3}\) \(\longrightarrow\) H\({}_{2}\)CN + CH\({}_{4}\) and CH\({}_{2}\)NH + NH\({}_{2}\)\(\longrightarrow\) H\({}_{2}\)CN + NH\({}_{3}\) take over its role in the HCN formation pathway, leading to smaller but still very significant differences. Thus, we disabled all the H-abstraction reactions of CH\({}_{2}\)NH in addition to the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH, and found almost the same profile as V20. As disabling these H-abstraction reactions without disabling the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH barely alters the V23 abundance profile, both of these reactions appear to be required to explain the differences between V23 and V20. The remaining difference was mainly located under 100 bar, where the HCN and H\({}_{2}\)CN abundances approach chemical equilibrium. Because the thermochemical data of V23 differ from those of V20 (Fig. 10), we ran V23 with the V20 thermochemical data for H\({}_{2}\)CN, HCNH and HCN, and found a perfect match in this pressure range. Thus, we conclude that the differences between V23 and V20 observed in the HCN abundance profiles for GJ 436 b at 1x solar metallicity are caused by the addition of the species CH\({}_{2}\)NH to the network. The upper-atmosphere differences are caused by the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH, and the lower-atmosphere differences are caused mainly by CH\({}_{2}\)NH + H \(\longrightarrow\) H\({}_{2}\)CN + H\({}_{2}\), but also, to a lesser extent, by the other H-abstraction reactions of CH\({}_{2}\)NH with CH\({}_{3}\), NH\({}_{2}\), or OH.
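A minimal sketch of the "disable a reaction and re-run" tests described above. The reaction-list representation and the (commented-out) solver call are hypothetical placeholders and do not reflect the actual FRECKLL interface:

```python
# Hypothetical scheme representation; only a few of the relevant reactions are listed.
network = [
    {"id": "R1", "equation": "H2CN + H => CH2NH"},
    {"id": "R2", "equation": "CH2NH + H => H2CN + H2"},
    {"id": "R3", "equation": "CH2NH + CH3 => H2CN + CH4"},
    {"id": "R4", "equation": "CH2NH + NH2 => H2CN + NH3"},
    # ... remaining reactions of the scheme
]

def without(reactions, disabled_ids):
    """Return a copy of the scheme with the listed reactions removed."""
    return [r for r in reactions if r["id"] not in disabled_ids]

cases = {
    "full V23": network,
    "no H2CN + H => CH2NH": without(network, {"R1"}),
    "no CH2NH H-abstractions + no R1": without(network, {"R1", "R2", "R3", "R4"}),
}

for label, scheme in cases.items():
    # abundances = run_kinetic_model(scheme)  # placeholder for the actual solver call
    print(f"{label}: {len(scheme)} reactions kept")
```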
Because of the importance of the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH in these simulations and the large uncertainty in its reaction parameters (discussed in Sect. 2.8), we investigated its impact with a sensitivity analysis. Fig. 14 shows the sensitivity of the results to the pre-exponential factor \(A\) of
Figure 11: Total reaction rate profiles for H\({}_{2}\)CN in GJ 436 b with a solar metallicity with V20 (dashed lines) and V23 (solid lines).
Figure 12: Contribution profile of the major production and loss reactions for H\({}_{2}\)CN in GJ 436 b. Positive values are production contributions and negative values are loss contributions. Black lines correspond to the vertical mixing compensation, such that the sum in each layer is always zero due to the steady state. Dashed lines are for V20, while solid lines are for V23. The third column in the legend indicates the reaction type. “no M” corresponds to reactions without pressure dependence, “PLOG” to a full pressure dependence and fall-off description with the PLOG formalism, and “M only” to pressure dependence without fall-off or high-pressure limit. The last column indicates which model includes each reaction.
Figure 13: Abundance profiles of N\({}_{2}\), HCN and H\({}_{2}\)CN with V23 (solid lines), V20 (dashed lines), V23 without the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH (dash-dotted lines), and V23 without H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH, CH\({}_{2}\)NH + H \(\longrightarrow\) H\({}_{2}\)CN + H\({}_{2}\), CH\({}_{2}\)NH + CH\({}_{3}\) \(\longrightarrow\) H\({}_{2}\)CN + CH\({}_{4}\), CH\({}_{2}\)NH + NH\({}_{2}\) \(\longrightarrow\) H\({}_{2}\)CN + NH\({}_{3}\), CH\({}_{2}\)NH + OH \(\longrightarrow\) H\({}_{2}\)CN + H\({}_{2}\)O, and with V20 thermochemical data for H\({}_{2}\)CN, HCNH and HCN (dotted lines).
this reaction.
Multiple simulations were run with the pre-exponential factor of this reaction divided by factors of up to 100. Significant differences in HCN abundance of up to two orders of magnitude are found between the runs with \(A\) and with \(A/100\), especially around 10\({}^{-5}\) bar. At lower pressures, however, the abundance is quite insensitive to changes in \(A\), showing that the presence of the reaction remains impactful even with low estimates of this pre-exponential factor.
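As a sketch of the scaling explored in this sensitivity test, the snippet below divides a hypothetical pre-exponential factor by 1, 10, and 100 and converts the resulting rate constant into a pseudo-first-order H\({}_{2}\)CN loss timescale for an assumed atomic-H density; both numbers are illustrative placeholders, not values from the network.

```python
import numpy as np

# All numbers below are illustrative placeholders, not values from the network.
A0 = 1.0e13            # hypothetical baseline pre-exponential factor, cm^3 mol^-1 s^-1
n_H = 1.0e10           # hypothetical atomic H number density, cm^-3
N_A = 6.022e23         # Avogadro's number, mol^-1

k0 = A0 / N_A          # cm^3 molecule^-1 s^-1 (zero activation energy assumed)
for factor in (1, 10, 100):
    k = k0 / factor
    tau = 1.0 / (k * n_H)   # pseudo-first-order H2CN loss timescale against H
    print(f"A/{factor:>3}: k = {k:.2e} cm^3 s^-1, H2CN loss timescale = {tau:.1e} s")
```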
Given this complete shift in the major N-bearing species above 10\({}^{-5}\) bar, we could expect the CH\({}_{2}\)NH abundance to be quite high. However, as shown in Fig. 15, the CH\({}_{2}\)NH abundance profile remains two to four orders of magnitude lower than the HCN abundance for pressures higher than 10\({}^{-4}\) bar, and only becomes comparable to it around 10\({}^{-5}\) bar.
The N\({}_{2}\) abundance increases with V23 in comparison to V20 around 10\({}^{-5}\) bar, as this species becomes the main N-bearing species instead of HCN. The species CH\({}_{3}\)NH\({}_{2}\) is also plotted, as we could expect the CH\({}_{2}\)NH double bond to be saturated by H\({}_{2}\) and H atoms, but the main reaction producing CH\({}_{3}\)NH\({}_{2}\) is the reaction CH\({}_{3}\) + NH\({}_{2}\)\(\longrightarrow\) CH\({}_{3}\)NH\({}_{2}\). This reaction uses a PLOG description between 0.1 and 10 bar, hence its contribution at very low pressures is likely to be heavily overestimated. However, a detailed treatment of its pressure dependence raises the same problems as for the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH, because it is a barrierless, pressure-dependent reaction. Therefore, we conclude that CH\({}_{2}\)NH is an intermediate species that links the HCN and N\({}_{2}\) abundances. The full mechanism linking these two species around 10\({}^{-5}\) bar is detailed in Fig. 16.
CH\({}_{2}\)NH is mainly hydrogenated through the reaction CH\({}_{2}\)NH + H \(\longrightarrow\) CH\({}_{2}\)NH\({}_{2}\). The CH\({}_{2}\)NH\({}_{2}\) radical then gets its C-N bond broken by the addition of another H atom through the reaction CH\({}_{2}\)NH\({}_{2}\) + H \(\longrightarrow\) CH\({}_{3}\) + NH\({}_{2}\). These two reactions and the species CH\({}_{2}\)NH and CH\({}_{2}\)NH\({}_{2}\) are only included in V23 and not in V20. The NH\({}_{2}\) radical formed is then destroyed by N(\({}^{4}\)S) atoms through the reaction NH\({}_{2}\) + N(\({}^{4}\)S) \(\longrightarrow\) N\({}_{2}\) + H + H. This reaction has the same parameters in V23 and V20, although it is reversible in V20 and not in V23. Even though it is reversible in V20, the enthalpy difference between the reactants and the products is too high for it to be significantly reversed. This is important because this reaction is an implicit combination of two other reactions, NH\({}_{2}\) + N(\({}^{4}\)S) \(\longrightarrow\) NNH + H and NNH \(\longrightarrow\) N\({}_{2}\) + H, and reversing the resulting combination NH\({}_{2}\) + N(\({}^{4}\)S) \(\longrightarrow\) N\({}_{2}\) + H + H would be unphysical. In addition, N(\({}^{4}\)S) atoms are produced from NH\({}_{2}\) through the reactions NH\({}_{2}\) + H \(\longrightarrow\) NH + H\({}_{2}\) and NH + H \(\longrightarrow\) N(\({}^{4}\)S) + H\({}_{2}\). While the parameters of this second reaction are very similar between V20 and V23, those of the first one are very different. Both consider this reac
Figure 16: HCN formation mechanism in V23 around 10\({}^{-5}\) bar. The blue path is exclusive to V23 and absent from V20. The red reactions are dominant in V20 for the production of the species and minor but included in V23. The green reactions are dominant in V23 and minor but included in V20. The black reactions are dominant in both networks.
Figure 14: Sensitivity analysis of the pre-exponential factor \(A\) of the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH. Abundance profiles are calculated with the full V23 network (solid lines), with \(A\) divided by 10 (dash-dotted lines), with \(A\) divided by 100 (dotted lines), and without this reaction (dashed lines).
Figure 15: Abundance profiles of CH\({}_{2}\)NH and CH\({}_{3}\)NH\({}_{2}\) compared with major N-bearing species for GJ 436 b with solar metallicity in V20 (dashed lines) and V23 (solid lines). CH\({}_{2}\)NH and CH\({}_{3}\)NH\({}_{2}\) are not included in V20.
tion as NH + H\({}_{2}\) \(\longrightarrow\) NH\({}_{2}\) + H and use a temperature exponent of zero, but in V20 the activation energy is close to 20 kcal/mol while it is close to 15 kcal/mol in V23. The pre-exponential factor is also different, with a value of 10\({}^{14}\) cm\({}^{3}\) mol\({}^{-1}\) s\({}^{-1}\) in V20 and 2.1 \(\times\) 10\({}^{13}\) cm\({}^{3}\) mol\({}^{-1}\) s\({}^{-1}\) in V23, about five times lower. The reaction H\({}_{2}\)CN + N(\({}^{4}\)S) \(\longrightarrow\) N\({}_{2}\) + \({}^{3}\)CH\({}_{2}\), which is the main formation reaction of N\({}_{2}\) in V20, has similar parameters in V20 and V23, although they differ in the pre-exponential factor by a factor of 3.
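A minimal sketch comparing the two rate constants for NH + H\({}_{2}\) \(\longrightarrow\) NH\({}_{2}\) + H, assuming the parameter values quoted above (temperature exponent of zero, pre-exponential factors of 10\({}^{14}\) and 2.1 \(\times\) 10\({}^{13}\) cm\({}^{3}\) mol\({}^{-1}\) s\({}^{-1}\)); the activation energies are only given as approximate values in the text, so the printed numbers are indicative.

```python
import numpy as np

R_CAL = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def arrhenius(T, A, Ea):
    """k(T) = A * exp(-Ea / (R * T)), temperature exponent of zero."""
    return A * np.exp(-Ea / (R_CAL * T))

params = {
    "V20": dict(A=1.0e14, Ea=20.0),   # cm^3 mol^-1 s^-1, kcal/mol (approximate Ea)
    "V23": dict(A=2.1e13, Ea=15.0),
}

for T in (500.0, 1000.0, 1500.0):
    k20 = arrhenius(T, **params["V20"])
    k23 = arrhenius(T, **params["V23"])
    print(f"T = {T:6.0f} K: k_V20 = {k20:.2e}, k_V23 = {k23:.2e}, "
          f"ratio V23/V20 = {k23 / k20:.1f}")
```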
#### 3.2.3 Consequences on transmission spectra
These differences in the abundance profiles are also expressed in the synthetic transmission spectra shown in Fig. 17.
The higher CO\({}_{2}\) abundance obtained with V23 in the upper atmosphere increases the apparent radius of GJ 436 b around 4.2 \(\upmu\)m, leading to the appearance of a new CO\({}_{2}\) feature with an amplitude of about 100 ppm. This happens because the transmission spectrum contribution of CO\({}_{2}\) approaches that of CH\({}_{4}\) and NH\({}_{3}\), which dominate at this wavelength for the abundances predicted with V20 (Fig. 18). The other major change in the spectrum is the disappearance of the HCN feature around 13 \(\upmu\)m, which had an amplitude of about 200 ppm with V20, due to the drop in its abundance.
#### 3.2.4 Case of 100 times solar metallicity
We also simulated the atmosphere of GJ 436 b assuming a higher metallicity (100x solar), keeping the same eddy diffusion coefficient (10\({}^{9}\) cm\({}^{2}\) s\({}^{-1}\)) and the same P-T profile. In this case, variations between V20 and V23 are also observed, but to a lesser extent (Fig. 19). Compared to the previous case with solar metallicity, the amplitude of the HCN differences is lower because the profile is strongly quenched, owing to the higher abundance of related species, which causes a higher flux of these species. However, for pressures from 10 to 10\({}^{-5}\) bar, the HCN abundance in V23 is still almost two orders of magnitude above that of V20. For pressures around 10\({}^{-6}\) bar, the HCN abundance in V20 is almost four orders of magnitude greater than in V23. These changes directly relate to the network differences discussed for the solar metallicity case in Sect. 3.2.2, particularly the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH for the upper-atmosphere differences and the reaction CH\({}_{2}\)NH + H \(\longrightarrow\) H\({}_{2}\)CN + H\({}_{2}\) for the lower atmosphere. We also observe differences in the thermochemical equilibrium region, which are due to the previously discussed differences in the thermochemical data. This was verified with the same method as before, by disabling these specific reactions to see how they impact the HCN abundance profile. In addition, CO\({}_{2}\) is more abundant in V23 than in V20 around 10\({}^{-6}\) bar by almost one order of magnitude, due to differences in the CO + OH \(\longrightarrow\) CO\({}_{2}\) + H reaction rates (Sect. 3.2.1). The NH\({}_{3}\) abundance profile is also slightly higher in V23 than in V20, because the quench point occurs at slightly higher pressures. For this metallicity case, these variations in abundances have very little impact on the transmission spectrum (Fig. 20). The slight change in the NH\({}_{3}\) abundance profile only marginally changes the amplitude of some features, and HCN clearly has no impact on the spectrum, because its contribution is well below those of CH\({}_{4}\) and NH\({}_{3}\) (Fig. E.1).
Figure 19: Abundance profiles of GJ 436 b for 100 times solar metallicity and a constant eddy diffusion coefficient of 10\({}^{9}\) cm\({}^{2}\) s\({}^{-1}\). Dashed lines are for V20, while solid lines are for V23.
Figure 17: Synthetic transmission spectra of GJ 436 b with a solar metallicity, at a resolution of 50, corresponding to the atmospheric compositions calculated with V23 (in red) and V20 (in blue).
Figure 18: Contributions of major species to the total synthetic transmission spectra of GJ 436 b with a solar metallicity and \(K_{zz}=10^{9}\) cm\({}^{2}\) s\({}^{-1}\). Dashed lines are for V20, while solid lines are for V23. In the middle, we see the CO\({}_{2}\) contribution that leads to a new feature in the spectrum with V23.
### Case of GJ 1214 b
For network comparisons with the warm Neptune GJ 1214 b, we used a pressure-dependent eddy diffusion coefficient profile calculated with the formula \(K_{zz}=3\times 10^{7}\times P^{-0.4}\) cm\({}^{2}\) s\({}^{-1}\) given in Charnay et al. (2015). As this planet is expected to have a high metallicity (Desert et al. 2011; Bean et al. 2011; Gao et al. 2023; Kempton et al. 2023), we chose to model it with 100 times solar metallicity. The P-T profile was taken from Venot et al. (2020) and the UV flux used was that of GJ 436. The resulting abundance profiles (Fig. 21) show little difference between the two chemical networks. The HCN abundance profile is still above the value predicted by V20, by up to a factor of 100 around 1 bar, while other species are only mildly affected, except in the very low pressure region around \(10^{-6}\) bar, where CO and CO\({}_{2}\) are one order of magnitude more abundant. Contrary to the previous cases of GJ 436 b at 1x and 100x solar metallicity, the HCN abundance profile of V23 is never lower than that of V20, except at the very limit of the P-T profile, around \(10^{-7}\) bar. This means that the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH has very little impact on HCN in the upper atmosphere of this planet, which we verified by disabling the reaction. Similarly to the previous cases, and with the same method, we identified the reactions responsible for the remaining differences in the lower atmosphere to be H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH, CH\({}_{2}\)NH + H \(\longrightarrow\) H\({}_{2}\)CN + H\({}_{2}\) and the related reactions previously discussed in Sect. 3.2.2. For the CO\({}_{2}\) abundance profile, the differences stem from the reaction CO + OH \(\longrightarrow\) CO\({}_{2}\) + H, as discussed in Sect. 3.2.1. The CO differences come from the reaction C + OH \(\longrightarrow\) CO + H, which is the only major production reaction at those pressures in V23; this reaction is also included in V20 with the same parameters. Finally, CH\({}_{4}\) and H\({}_{2}\)O follow the opposite trend, resulting respectively from a higher loss contribution of the reaction CH\({}_{4}\) + CH \(\longrightarrow\) C\({}_{2}\)H\({}_{4}\) + H and of the reaction H\({}_{2}\)O + CH \(\longrightarrow\) H\({}_{2}\)CO + H, which is exclusive to V23.
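A minimal sketch evaluating the eddy diffusion profile quoted above; the pressure unit (bar) is an assumption, as it is not stated explicitly in the text.

```python
import numpy as np

pressure = np.logspace(2, -6, 9)           # bar (assumed unit), 100 bar down to 1e-6 bar
kzz = 3.0e7 * pressure ** (-0.4)           # cm^2 s^-1, formula quoted in the text

for p, k in zip(pressure, kzz):
    print(f"P = {p:9.2e} bar  ->  Kzz = {k:9.2e} cm^2 s^-1")
```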
Due to the low amplitude of these changes, the corresponding synthetic spectra (Fig. 22) show no new features, with only a few minor variations in the amplitude of existing features, well below observable levels.
### Case of HD 189733 b and HD 209458 b
While we observe that, for warm Neptunes, the most significant impacts of our new chemical scheme are found for low-metallicity atmospheres, we also examined its effect on two hot Jupiters: HD 189733 b and HD 209458 b. We used the same P-T profiles, UV fluxes, eddy diffusion coefficients, and metallicities as in Venot et al. (2020). The abundance profiles obtained with the two chemical schemes V20 and V23 are shown in Figs. 23 and 24.
For HD 189733 b, the only differences concern species with abundances lower than \(10^{-5}\). CO\({}_{2}\) is more abundant in V23 than in V20 around \(10^{-6}\) bar, due to the CO + OH \(\longrightarrow\) CO\({}_{2}\) + H reaction (Sect. 3.2.1). The abundances of NH\({}_{3}\) and HCN are slightly less than one order of magnitude higher in V23 than in V20 between 10 and \(10^{-3}\) bar. The larger NH\({}_{3}\) difference between \(10^{-4}\) and \(10^{-6}\) bar is due to the reaction CH\({}_{2}\)NH\({}_{2}\) + H \(\longrightarrow\) \({}^{3}\)CH\({}_{2}\) + NH\({}_{3}\), which is analogous to the reaction CH\({}_{2}\)NH\({}_{2}\) + H \(\longrightarrow\) CH\({}_{3}\) + NH\({}_{2}\) discussed in Sect. 3.2.2. The differences between 10 and \(10^{-3}\) bar are explained by two reactions: NH\({}_{3}\) + NH\({}_{2}\) \(\longrightarrow\) N\({}_{2}\)H\({}_{3}\) + H\({}_{2}\) and NH\({}_{2}\) + NH\({}_{2}\) \(\longrightarrow\) N\({}_{2}\)H\({}_{2}\) + H\({}_{2}\). The first one is not included in V23, while in V20 it contributes to NH\({}_{3}\) con
Figure 21: Abundance profiles of GJ 1214 b for 100 times solar metallicity and a pressure-dependent eddy diffusion coefficient. Dashed lines are for V20, while solid lines are for V23.
Figure 20: Synthetic transmission spectra of GJ 436 b with 100 times solar metallicity, at a resolution of 50, corresponding to the atmospheric compositions calculated with V23 (in red) and V20 (in blue).
sumption. The second one is included in both networks, but the parameters used are different, especially the activation energy, which is 10 kcal/mol higher in V23 than in V20. Combined with the changes in the thermochemical data, the differences in these rate constants explain the differences between the HCN abundance profiles for pressures higher than 10\({}^{-5}\) bar. For lower pressures, the reaction H\({}_{2}\)CN + H \(\longrightarrow\) CH\({}_{2}\)NH increases HCN consumption in V23 in comparison to V20, as previously discussed, leading to over one order of magnitude less HCN in V23 than in V20. A very detailed comparison between the chemical schemes of Venot et al. (2012) and Moses et al. (2011) (named V12 and M11 in the following, respectively) has been performed by Moses (2014), taking HD 189733 b as a case study with the parameters used in Venot et al. (2012). It is therefore interesting to evaluate how the results obtained with our new scheme compare with these two schemes. The highlighted differences concerned the species NH\({}_{3}\), CH\({}_{4}\), and HCN.
With M11, NH\({}_{3}\) was one order of magnitude above V12 in the range 10 to 10\({}^{-3}\) bar and one order of magnitude under V12 in the range 10\({}^{-5}\) to 10\({}^{-7}\) bar. The V23 NH\({}_{3}\) abundance profile halves this gap between M11 and V12 in the range 10 to 10\({}^{-3}\) bar, being nearly half an order of magnitude above V12 and half an order of magnitude under M11. However, in the pressure range 10\({}^{-5}\) to 10\({}^{-7}\) bar, the V23 NH\({}_{3}\) abundance is much higher than in both chemical networks, reaching a difference of seven orders of magnitude around 10\({}^{-5}\) bar.
For CH\({}_{4}\) the differences mainly concerned the pressure range 1 to 10\({}^{-3}\) bar, with the M11 profiles being around half an order of magnitude above V12. The corresponding V23 profile slightly approaches the M11 profiles, also halving the gap between M11 and V12 for CH\({}_{4}\).
For HCN, the differences concerned the range 100 to 10\({}^{-7}\) bar, the M11 profile being an order of magnitude above V12. The V23 profile comes closer to the M11 profile in comparison to both V12 and V20 for the range 100 to 10\({}^{-5}\) bar, but the HCN abundance drops for V23 around 10\({}^{-6}\) bar, resulting in a difference of three orders of magnitude with M11 at this pressure.
Overall, these results bring the abundances a bit closer to M11, but enhance the differences in the upper atmosphere, where the new CH\({}_{2}\)NH chemical pathways begin to take effect.
We calculated the synthetic transmission spectrum corresponding to the abundances obtained with V23 and V20 and observed some differences (see Fig. 25). The main effect is an increase of the transit depth between 10 and 20 \(\upmu\)m, with an amplitude of up to 50 ppm. No new feature is created in the spectrum, but the differences generated are above the instrumental precision. Thus, the change of chemical scheme could impact the interpretations of the transmission spectrum and the retrieval of NH\({}_{3}\) abundance.
For the hotter HD 209458 b, the differences are even smaller than those observed for HD 189733 b, with an amplitude lower than one order of magnitude. The NH\({}_{3}\) and HCN abundances are still slightly higher in V23, as for all previous exoplanets except GJ 436 b at solar metallicity. The reactions causing these differences are the same as for HD 189733 b. Another minor difference is that CO\({}_{2}\), H\({}_{2}\)O and H\({}_{2}\) are more abundant in V23 for pressures lower than 10\({}^{-6}\) bar, unlike H atoms. As the formation pathways of these species at these pressures are the same in V20 and V23, the differences must come from slight differences in the reaction parameters.
Figure 24: Abundance profiles of HD 209458 b for solar metallicity and a pressure-dependent eddy diffusion coefficient. Dashed lines are for V20, while solid lines are for V23.
Figure 25: Synthetic transmission spectra of HD 189733 b, at a resolution of 50, corresponding to the atmospheric compositions calculated with V23 (in red) and V20 (in blue).
Figure 23: Abundance profiles of HD 189733 b for solar metallicity and a pressure-dependent eddy diffusion coefficient. Dashed lines are for V20, while solid lines are for V23.
Overall, the variations of abundances observed with V23 for this planet in comparison to V20 are very small, and concern mainly species with low abundances (\(<\)10\({}^{-6}\)). As a consequence, the synthetic transmission spectra calculated for both networks (Fig. 26) are very similar.
## 4 Conclusion
In this work, we developed a new C\({}_{2}\) C/H/O/N detailed chemical scheme to model exoplanet disequilibrium chemistry. It was derived from experimentally tested combustion networks, extensively validated against 1618 experimental measurements over a wide range of conditions, and compared to the performance of other chemical networks, such as V20, through a statistical study. Verification, addition, and detailing of possibly missing radical reactions were also performed, resulting in a much more reliable network than V20. This network was then used to model two warm Neptunes and two hot Jupiters with the kinetic model FRECKLL, and the results were compared to those obtained with our previous chemical scheme, V20. The chemistry differences were analyzed and new chemical pathways were found, such as the importance of CH\({}_{2}\)NH and its formation through H\({}_{2}\)CN in coupling the nitrogen chemistry to CH\({}_{3}\) radicals. This effect has been highlighted for a solar-metallicity warm Neptune, and is thus expected to mainly impact warm exoplanets with a low metallicity. Transmission spectra were also simulated using TauREx 3.1, and the resulting changes in abundances were found to significantly impact the spectrum of GJ 436 b, with differences of around 100 ppm for CO\({}_{2}\) at 4.2 \(\upmu\)m and 200 ppm for HCN at 13 \(\upmu\)m. The amplitude of these features is within the detection capabilities of JWST, which confirms that the accuracy of disequilibrium chemistry models is crucial to draw correct conclusions from observations. In the context of ongoing and future missions (e.g., JWST and Ariel), disequilibrium chemistry modelling will become increasingly important as we gain access to higher precision observations. Improvements are still awaited in the development of an experimentally validated sulfur chemical scheme: a compound such as SO\({}_{2}\) has recently been detected with JWST and is typically a product of photochemistry (Tsai et al. 2022). Expanding our network to sulfur species and their coupling to carbon and nitrogen species will be the next step toward a more complete chemical scheme addressing current questions in exoplanet chemistry. More in-depth insights into the critical reactions for each species, through sensitivity analyses or other methods over a wide range of exoplanetary conditions, could also help to further improve the reliability of these networks by identifying the key reactions in the mechanism; this could focus the community's efforts and reduce the associated uncertainties through more accurate but computationally intensive ab initio calculations, such as VRC-TST for barrierless reactions and RRKM/ME for pressure dependence. As in situ experimentation with probes is impossible in the field of exoplanet chemistry, the use of chemical networks validated on combustion experiments remains the only way to validate our kinetic models to this day.
###### Acknowledgements.
This project is funded by the ANR project 'EXACT' (ANR-21-CE49-0008-01). In addition, O.V. acknowledges funding from the Centre National d'Etudes Spatiales (CNES), and from the CNRS/INSU Programme National de Planetologie (PNP).
|
2305.13491 | Nonparanormal Graph Quilting with Applications to Calcium Imaging | Probabilistic graphical models have become an important unsupervised learning
tool for detecting network structures for a variety of problems, including the
estimation of functional neuronal connectivity from two-photon calcium imaging
data. However, in the context of calcium imaging, technological limitations
only allow for partially overlapping layers of neurons in a brain region of
interest to be jointly recorded. In this case, graph estimation for the full
data requires inference for edge selection when many pairs of neurons have no
simultaneous observations. This leads to the Graph Quilting problem, which
seeks to estimate a graph in the presence of block-missingness in the empirical
covariance matrix. Solutions for the Graph Quilting problem have previously
been studied for Gaussian graphical models; however, neural activity data from
calcium imaging are often non-Gaussian, thereby requiring a more flexible
modeling approach. Thus, in our work, we study two approaches for nonparanormal
Graph Quilting based on the Gaussian copula graphical model, namely a maximum
likelihood procedure and a low-rank based framework. We provide theoretical
guarantees on edge recovery for the former approach under similar conditions to
those previously developed for the Gaussian setting, and we investigate the
empirical performance of both methods using simulations as well as real
calcium imaging data. Our approaches yield more scientifically meaningful
functional connectivity estimates compared to existing Gaussian graph quilting
methods for this calcium imaging data set. | Andersen Chang, Lili Zheng, Gautam Dasarthy, Genevera I. Allen | 2023-05-22T21:16:01Z | http://arxiv.org/abs/2305.13491v1 | # Nonparanormal Graph Quilting with Applications to Calcium Imaging
###### Abstract
Probabilistic graphical models have become an important unsupervised learning tool for detecting network structures for a variety of problems, including the estimation of functional neuronal connectivity from two-photon calcium imaging data. However, in the context of calcium imaging, technological limitations only allow for partially overlapping layers of neurons in a brain region of interest to be jointly recorded. In this case, graph estimation for the full data requires inference for edge selection when many pairs of neurons have no simultaneous observations. This leads to the Graph Quilting problem, which seeks to estimate a graph in the presence of block-missingness in the empirical covariance matrix. Solutions for the Graph Quilting problem have previously been studied for Gaussian graphical models; however, neural activity data from calcium imaging are often non-Gaussian, thereby requiring a more flexible modeling approach. Thus, in our work, we study two approaches for nonparanormal Graph Quilting based on the Gaussian copula graphical model, namely a maximum likelihood procedure and a low-rank based framework. We provide theoretical guarantees on edge recovery for the former approach under similar conditions to those previously developed for the Gaussian setting, and we investigate the empirical performance of both methods using simulations as well as real calcium imaging data. Our approaches yield more scientifically meaningful functional connectivity estimates compared to existing Gaussian graph quilting methods for this calcium imaging data set.
_Index terms--_ Graphical models; Graph quilting; Nonparanormal graphical models; Rank-based correlation; Functional connectivity; Covariance completion
## 1 Introduction
Probabilistic graphical models are a popular unsupervised learning technique for inference and sparse edge selection in network estimation, and are an important tool for understanding dependency structures in high-dimensional data. Graphical modeling approaches have been employed for data analysis in a wide variety of areas, including neuroscience (Yatsenko et al., 2015; Carrillo-Reid et al., 2021; Subramaniyan et al., 2018), genomics (Allen and Liu, 2013; Hartemink et al., 2000), and sensor networks (Chen et al., 2016; Dasarathy et al., 2016). One particular research problem where graphical models are applied is in the study of functional neuronal connectivity, defined as statistical relationships between the activities of neurons in the brain, from two-photon calcium imaging data (Horwitz, 2003; Fingelkurts et al., 2005). Functional neuronal connectivity is of particular interest in the realm of neuroscience as a mechanism for describing how neuronal circuits in the brain are organized and for finding patterns in neuronal activity that underlies how information is passed between different regions of the brain (Feldt et al., 2011); it may also serve well as a proxy for deriving the structural synaptic connectivity between individual neurons in the brain, as well as provide insight into the relationship between the two (Deco et al., 2014).
Due to technological limitations, in many calcium imaging experiments, the full set of neurons in a brain region of interest is never simultaneously observed. Instead, scans of functional activity are recorded in sequential, partially overlapping layers, each containing only a subset of the population of neurons (Grienberger and Konnerth, 2012; Berens et al., 2017; Pnevmatikakis et al., 2016). This data collection scheme leads to block-missingness in the ensuing computation of the empirical covariance matrix for the functional recording data on all neurons, as the joint activity of many pairs is never observed; we demonstrate this visually in Figures 1a and 1b. Therefore, in order to estimate a graphical model for functional neuronal connectivity for the full set of observed neurons, the edge structure in the missing portion must be inferred using the information from the existing contemporaneous joint observations. The estimation of a graph in the presence of block-missing entries in the covariance matrix is known as the Graph Quilting problem (Vinci et al., 2019).
Previously, several approaches have been developed in order to address the Graph Quilting problem. (Vinci et al., 2019) originally proposed the Maximum Determinant (MAD\({}_{\text{GQ}}\)) algorithm, which first finds an \(\ell_{1}\)-regularized maximum likelihood estimate of the graph on the observed portion of the covariance matrix with the constraint that no edges are affiliated with unobserved elements, then applies thresholding and Schur complements on the result in order to identify graph edges and a minimal superset of edges. Later, (Chang et al., 2022) introduced the Low Rank Graph Quilting (LRGQ) approach, which utilizes a two-step procedure of covariance imputation under the assumption of the existence of a low-rank representation of the underlying covariance matrix, followed by the application of the graphical Lasso. Both of the aforementioned Graph Quilting procedures have shown positive results for graph imputation and edge recovery in the presence of block-wise missingness in the entries of the full covariance matrices. However, their scope is currently restricted to the case where the full underlying data follow a multivariate Gaussian distribution. In particular, as the sample covariance is the sufficient statistic for Gaussian graphical models, these methods and their theory are both developed with a focus on accurately estimating or imputing the covariance in the Gaussian setting. The empirical studies in these papers for validating their methods also focused only on data fit to standard Gaussian graphical models. Another related work, (Massa and Chiogna, 2013), also proposes and empirically studies an approach of combining multiple graphical models of subsets of variables to a full graph, which are in similar spirit to the MAD\({}_{\text{GQ}}\) algorithm in (Vinci et al., 2019), with a focus on the Gaussian setting.
Functional activity data from calcium imaging, on the other hand, tends to be highly non-Gaussian. In particular, as shown in Figures 1c and 1d, the distribution of the fluorescence traces, which represent the firing activity of each individual neuron from calcium imaging data, is heavily right skewed with extreme positive outliers. To address this problem in the realm of graphical models, approaches which assume that the data follow a parametric distribution other than the Gaussian have been developed. For example, many have explored elliptical distributions (Vogel and Fried, 2011; Finegold and Drton, 2011) and general exponential family distributions (Yang et al., 2015, 2018) when the Gaussian assumption may not be appropriate. For the specific case of estimating functional neuronal connectivity, the most common alternative to a Gaussian-based method is the Poisson graphical model (Yang et al., 2013), which assumes that the number of spikes of each neuron across time bins follows a Poisson distribution (Xue et al., 2014; Vinci et al., 2018). The Poisson distribution is also the basis for a variety of other approaches for estimating functional connectivity outside of the graphical modeling paradigm, such as the linear-nonlinear model (Pillow et al., 2008; Stevenson et al., 2008) and time series modeling with inter-spike intervals (Masud and Borisyuk, 2011). More recently, (Chang and Allen, 2021) proposed a different class of graphical models specifically for functional neuronal connectivity based on the Subbotin distribution in order to capture conditional dependencies between extreme values in the activity traces, which directly represent neuronal activity. Nonlinear correlation methods, which employ information theory metrics such as mutual information and joint entropy, have also been used to derive functional neuronal connectivity from calcium imaging data (Garofalo et al., 2009; Stetter et al., 2012).
While the aforementioned non-Gaussian functional neuron connectivity models have shown encouraging results for the calcium imaging application, they may not be naturally suitable for the Graph Quilting problem. In order to leverage existing ideas (Vinci et al., 2019; Chang et al., 2022) for solving this problem, we still hope to operate on some pairwise similarity matrix like the empirical covariance calculated from the observations rather than the raw data itself. Therefore, we consider one particular alternative to the Gaussian graphical model that can be applied to the Graph Quilting problem in the non-Gaussian setting and which has been used for the analysis of functional neuronal connectivity from calcium imaging data:
the nonparanormal graphical model (Liu et al., 2009; Dobra and Lenkoski, 2011). This model assumes a Gaussian copula for the joint distribution of all features while allowing arbitrary marginal distributions for each feature. Although some prior methods (Liu et al., 2009) estimate a univariate transform for each feature before graph estimation (Zhao et al., 2012), it has also been shown that nonparanormal graphical models can be estimated by applying the graphical Lasso (Yuan and Lin, 2007) on a transformed rank-based correlation matrix (Liu et al., 2012; Harris and Drton, 2013; He et al., 2017; Xue and Zou, 2012); thus, this particular procedure could be utilized in the Graph Quilting setting.
In this paper, we study two potential Graph Quilting techniques for nonparanormal graphical models for graph estimation with non-Gaussian data under block-missingness in the pairwise observation set. Our methods utilize the MAD\({}_{\text{GQ}}\) and LRGQ approaches of (Vinci et al., 2019) and (Chang et al., 2022), as applied to rank-based correlation matrices; we call the nonparanormal adaptations MAD\({}_{\text{GQ}}\)-NPN and LRGQ-NPN, respectively. While these previous works have studied the nonparanormal graphical model using rank-based correlation matrices, such as those mentioned above, ours is the first to consider the performance of these methods in the presence of block-missingness in the empirical covariance matrix. In particular, because the missingness pattern is highly structured rather than random and the number of observed entries is relatively sparse, the performance of nonparanormal Graph Quilting will not follow directly from prior results on nonparanormal graphical models. Therefore, we explore below the nonparanormal Graph Quilting approaches both from a theoretical and empirical perspective in order to ascertain whether they are appropriate for the Graph Quilting setting.
The rest of this paper is structured as follows. In Section 2, we describe the MAD\({}_{\text{GQ}}\)-NPN and LRGQ-NPN procedures for nonparanormal Graph Quilting. We then show in Section 3 the conditions under which the MAD\({}_{\text{GQ}}\)-NPN achieves exact graph recovery for observed node pairs and minimal superset recovery for unobserved node pairs. In Section 4, we present simulation studies which compare the performances of the MAD\({}_{\text{GQ}}\)-NPN procedure and one of the LRGQ-NPN algorithms on data from non-Gaussian parametric dis
Figure 1: **(a)**: An example of a typical schema which requires Graph Quilting. Here, we have four subsets of features in a full feature set, observed across different sessions; each rectangle represents the features and observations in a particular block. **(b)**: The corresponding incomplete empirical covariance matrix for the same four patches of nodes from (a). The parts of the covariance matrix not covered by any block are never jointly observed; graph edges in this part of the covariance must be inferred from existing entries. **(c)**: Example of raw fluorescence trace for one neuron recorded in a calcium imaging experiment, showing neuronal activity over time. **(d)**: Resulting empirical density of distribution of trace values.
tributions. Lastly, in Section 5, we investigate the efficacy of the nonparanormal Graph Quilting methods for estimating functional neuronal connectivity networks on real-world calcium imaging data sets in comparison to each other as well as to the Graph Quilting methods with Gaussian assumptions.
## 2 Nonparanormal Graph Quilting
### Problem Set-up
We consider the nonparanormal graphical model (Liu et al., 2009), where each sample vector \(X_{i}\in\mathbb{R}^{p}\) follows a nonparanormal distribution \(\mathrm{NPN}_{p}(f,\Sigma)\), formally defined in Definition 1.
**Definition 1**.: _Let \(f=(f_{1},\ldots,f_{p})\) be an ordered list of monotone univariate functions and let \(\Sigma\in\mathbb{R}^{p\times p}\) be a positive-definite correlation matrix. Then a random vector \(X=(X_{1},\ldots,X_{p})^{\top}\) follows the nonparanormal distribution \(\mathrm{NPN}_{p}(f,\Sigma)\) if \((f_{1}(X_{1}),\ldots,f_{p}(X_{p}))^{\top}\sim\mathcal{N}(0,\Sigma)\)._
Define the precision matrix of the latent Gaussian vector \((f_{1}(X_{1}),\ldots,f_{p}(X_{p}))\) as \(\Theta=\Sigma^{-1}\); from this, we consider the following graph structure that encodes the conditional dependence relationship among \(X\): \(\mathcal{G}=(V,E)\), \(V=[p]\), \(E=\{(j,k):j,k\in[p],\Theta_{j,k}\neq 0\}\). The primary interest is to recover the unknown graph structure, or equivalently, the non-zero patterns in the off-diagonal elements of \(\Theta\). When one observes i.i.d. samples \(X_{1},\ldots,X_{n}\) from this nonparanormal model, rank-based methods (Liu et al., 2012; Harris and Drton, 2013) have been proposed to learn the graph structure with selection consistency guarantees.
In the Graph Quilting setting, however, we do not have access to all features for each sample. Instead, we observe \(K\) partially overlapping blocks \(V_{1},\ldots,V_{K}\), each of size \(p_{k}=|V_{k}|<p\). The corresponding observed data matrix is denoted by \(X^{(k)}\in\mathbb{R}^{n_{k}\times p_{k}}\) where \(n_{k}\) is the sample size for block \(k\), and our goal is to learn the graph structure of all \(p\) features from \(\{X^{(k)}\}_{k=1}^{K}\). We define the jointly observed feature pairs as \(O=\{(i,j):\exists 1\leq k\leq K,i,j\in V_{k}\}\subset[p]\times[p]\), and let \(O^{c}=[p]\times[p]\backslash O\) denote the feature pairs that has no joint measurement. Thus, we need to infer the graph structure of the missing portion \(O^{c}\) of the corresponding covariance matrix from the known pairwise observations in \(O\).
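As a small illustration of this observation pattern, the snippet below builds the set \(O\) of jointly observed pairs and its complement \(O^{c}\) from a few arbitrary example blocks; the block contents are ours and only meant to mimic partially overlapping recordings.

```python
import numpy as np

p = 8
blocks = [np.array([0, 1, 2, 3]), np.array([3, 4, 5]), np.array([5, 6, 7, 0])]

observed = np.zeros((p, p), dtype=bool)
for V_k in blocks:
    observed[np.ix_(V_k, V_k)] = True       # every pair within a block is jointly observed

O = [(i, j) for i in range(p) for j in range(p) if observed[i, j]]
O_c = [(i, j) for i in range(p) for j in range(p) if not observed[i, j]]
print(f"{len(O)} pairs in O, {len(O_c)} pairs in O^c (never jointly observed)")
```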
### Nonparanormal Graph Quilting Methods
In this section, we extend two prior graph quilting approaches, the MAD\({}_{\mathrm{GQ}}\) (MAximum Determinant graph-quilting) approach (Vinci et al., 2019) and the LRGQ (Low-rank Graph Quilting) approach (Chang et al., 2022), from the Gaussian graphical model setting to the nonparanormal setting. We first note that both approaches take an estimate of the covariance/correlation \(\Sigma_{i,j}\) of the Gaussian variable pairs \((i,j)\) in \(O\subset[p]\times[p]\) as the input. In the nonparanormal setting, however, we need the covariance of the latent Gaussian variables \(f_{1}(X_{1}),\ldots,f_{p}(X_{p})\), which can be estimated using rank-based correlations. For our work, we consider two rank-based correlations: Spearman's rho and Kendall's tau. For each block \(1\leq k\leq K\), let \(r_{i,j}^{(k)}\) be the rank of \(X_{i,j}^{(k)}\) among \(X_{1,j}^{(k)},\ldots,X_{n_{k},j}^{(k)}\). Also, let \(\bar{r}_{j}^{(k)}=\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}r_{i,j}^{(k)}=\frac{n_{k}+1 }{2}\). From these, we compute Spearman's rho \(\widehat{\rho}^{(k)}\in\mathbb{R}^{p_{k}\times p_{k}}\) and Kendall's tau \(\widehat{\tau}^{(k)}\in\mathbb{R}^{p_{k}\times p_{k}}\) correlations as follows:
\[\begin{split}\widehat{\rho}^{(k)}_{j,l}&=\frac{\sum _{i=1}^{n_{k}}(r_{i,j}^{(k)}-\bar{r}_{j}^{(k)})(r_{i,l}^{(k)}-\bar{r}_{l}^{(k) })}{\sqrt{\sum_{i=1}^{n_{k}}(r_{i,j}^{(k)}-\bar{r}_{j}^{(k)})^{2}\sum_{i=1}^{n_ {k}}(r_{i,l}^{(k)}-\bar{r}_{l}^{(k)})^{2}}},\\ \widehat{\tau}^{(k)}_{j,l}&=\frac{2}{n_{k}(n_{k}-1)} \sum_{1\leq i<i^{\prime}\leq n_{k}}\mathrm{sign}((r_{i,j}^{(k)}-r_{i^{\prime},j }^{(k)})(r_{i,l}^{(k)}-r_{i^{\prime},l}^{(k)})).\end{split} \tag{1}\]
To obtain a \(p\times p\) correlation matrix, we combine the \(K\) rank correlations \(\widehat{\rho}^{(k)}\), \(\widehat{\tau}^{(k)}\) together. Specifically, for any index \(j\in[p]\), if \(j\in V_{k}\), let \(j_{k}\) be its corresponding index in \(V_{k}\). That is, \((V_{k})_{j_{k}}=j\). Then we formally define \(\widehat{\rho},\widehat{\tau}\in\mathbb{R}^{p\times p}\) as follow: for \((j,l)\in O\),
\[\widehat{\rho}_{j,l}=\frac{\sum_{k=1}^{K}\mathbb{1}_{\{j,l\in V_{k}\}}\widehat{ \rho}^{(k)}_{j_{k},l_{k}}}{\sum_{k=1}^{K}\mathbb{1}_{\{j,l\in V_{k}\}}},\quad \widehat{\tau}_{j,l}=\frac{\sum_{k=1}^{K}\mathbb{1}_{\{j,l\in V_{k}\}}\widehat{ \tau}^{(k)}_{j_{k},l_{k}}}{\sum_{k=1}^{K}\mathbb{1}_{\{j,l\in V_{k}\}}}; \tag{2}\]
otherwise, \(\widehat{\rho}_{j,l}=\widehat{\tau}_{j,l}=0\). As in (Liu et al., 2012), since both Spearman's rho and Kendall's tau rank correlations are biased for the population correlation, we apply elementwise sine function transformations to \(\widehat{\rho}\) and \(\widehat{\tau}\): for any \((j,l)\in[p]\times[p]\),
\[\widehat{\Sigma}_{j,l}^{(\rho)}=2\sin\left(\frac{\pi}{6}\widehat{\rho}_{j,l} \right),\quad\widehat{\Sigma}_{j,l}^{(\tau)}=\sin\left(\frac{\pi}{2}\widehat{ \tau}_{j,l}\right). \tag{3}\]
Both initial observed correlation matrices take the value of zero in \(O^{c}\), and \(\widehat{\Sigma}_{O}^{(\rho)},\widehat{\Sigma}_{O}^{(\tau)}\) serve as estimates for the population correlation \(\Sigma_{O}\). In the following, we describe the MAD\({}_{\rm{GQ}}\)-NPN and LRGQ-NPN approaches based on \(\widehat{\Sigma}_{O}^{(\rho)}\) or \(\widehat{\Sigma}_{O}^{(\tau)}\).
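A minimal sketch of Eqs. (1)-(3): block-wise rank correlations computed with SciPy, averaged over the blocks that jointly observe each pair, then sine-transformed. The function name and the toy data at the end are ours, not part of the original method description.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def npn_correlation(X_blocks, blocks, p, method="spearman"):
    """Block-wise rank correlations, block-averaged per pair, then sine-transformed."""
    num = np.zeros((p, p))
    count = np.zeros((p, p))
    for X_k, V_k in zip(X_blocks, blocks):
        if method == "spearman":
            R = spearmanr(X_k).correlation                  # p_k x p_k Spearman matrix
        else:
            p_k = X_k.shape[1]
            R = np.array([[kendalltau(X_k[:, i], X_k[:, j])[0] for j in range(p_k)]
                          for i in range(p_k)])
        num[np.ix_(V_k, V_k)] += R
        count[np.ix_(V_k, V_k)] += 1
    avg = np.divide(num, count, out=np.zeros((p, p)), where=count > 0)  # zero on O^c
    if method == "spearman":
        return 2.0 * np.sin(np.pi / 6.0 * avg)              # Eq. (3), Spearman version
    return np.sin(np.pi / 2.0 * avg)                        # Eq. (3), Kendall version

# toy usage with two overlapping blocks of an 8-dimensional vector
rng = np.random.default_rng(0)
blocks = [np.arange(0, 5), np.arange(3, 8)]
X_blocks = [rng.standard_normal((200, len(V))) for V in blocks]
Sigma_hat = npn_correlation(X_blocks, blocks, p=8)
```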
**MAD\({}_{\text{GQ}}\)-NPN approach.** This approach is based on a partially observed likelihood, following the MAD\({}_{\text{GQ}}\) method in (Vinci et al., 2019). In a nutshell, the algorithm consists of two steps: (i) first estimating the edge set within \(O\) by minimizing an \(\ell_{1}\)-regularized log-likelihood loss under the constraint of having no edges in \(O^{c}\), followed by a thresholding step that eliminates the bias effect of this artificial constraint; (ii) then using the Schur complement to estimate a superset of the edges in \(O^{c}\), based on potential graph distortions in \(O\) caused by edges in \(O^{c}\). The full procedure is summarized in Algorithm 1. Similar to the original MAD\({}_{\text{GQ}}\) method, Algorithm 1 involves three tuning parameters: the \(\ell_{1}\) regularization parameter \(\Lambda\in\mathbb{R}^{p\times p}\) in (4), and two thresholding parameters \(\tau_{1}\) and \(\tau_{2}\) for obtaining the edge sets in \(O\) and \(O^{c}\). For \(\Lambda\), one can either simply let \(\Lambda_{j,l}=\lambda\) for all \(1\leq j,\,l\leq p\), or choose \(\Lambda_{j,l}=C_{0}\sqrt{\frac{\log p}{n_{j,l}}}\), where \(n_{j,l}\) is the joint sample size of nodes \(j\) and \(l\). The specific value of \(\lambda\) or of the scaling factor \(C_{0}\) can then be chosen based on the extended Bayesian information criterion (Foygel and Drton, 2010; Gao et al., 2012), which is commonly used in the graphical model literature and computationally efficient. For the first thresholding parameter \(\tau_{1}\), which defines the edge set \(\widehat{E}_{O}\subset O\), we can use the stability selection approach (Liu et al., 2010) by examining the stability of \(\widehat{E}_{O}\) when running steps 1 and 2 on randomly subsampled data. For the second thresholding parameter \(\tau_{2}\), motivated by our theoretical results presented in Section 3, we can let \(\tau_{2}=c\sqrt{\frac{\log p}{n_{k}}}\) for a small constant \(c>0\). Throughout our empirical studies in this paper, we use \(c=0.05\), which turns out to work reasonably well in different settings.
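A possible sketch of the constrained \(\ell_{1}\)-penalized fit in step (i), written as a convex program with the precision matrix constrained to zero on \(O^{c}\). A single scalar regularization parameter is used for simplicity, the thresholding and Schur-complement steps of Algorithm 1 are omitted, and the function name and the choice of cvxpy/SCS are ours, not necessarily how the original implementation proceeds.

```python
import cvxpy as cp
import numpy as np

def madgq_npn_step1(Sigma_hat, observed, lam):
    """l1-penalized log-likelihood fit with the precision constrained to zero on O^c."""
    p = Sigma_hat.shape[0]
    Theta = cp.Variable((p, p), symmetric=True)
    S = np.where(observed, Sigma_hat, 0.0)             # trace term only involves pairs in O
    off_diag_mask = 1.0 - np.eye(p)
    penalty = cp.sum(cp.abs(cp.multiply(off_diag_mask, Theta)))
    objective = cp.Maximize(cp.log_det(Theta) - cp.trace(S @ Theta) - lam * penalty)
    constraints = [Theta[i, j] == 0
                   for i in range(p) for j in range(i + 1, p) if not observed[i, j]]
    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    return Theta.value
```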
Under certain assumptions on the graph structure and edge strength, (Vinci et al., 2019) shows that, in the Gaussian setting, the original MAD\({}_{\rm{GQ}}\) approach with sample covariance as the input is guaranteed to recover the graph structure in \(O\) and find a minimum edge superset in \(O^{c}\) (see Definition 2). As we will show in Section 3, such results can also be extended to our Algorithm 1 in the nonparanormal setting.
**LRGQ-NPN approach.** Another approach we consider is to extend the LRGQ method proposed by (Chang et al., 2022), which makes use of the potential low-rankness of covariance matrices. This approach is motivated by the approximate low-rankness commonly seen in many neuroscience data sets and exhibits promising performance in these applications. The LRGQ approach is a two-step procedure that first completes the covariance estimate from the partially observed sample covariance in \(O\) using appropriate low-rank matrix completion methods, and then uses the completed covariance matrix as input to the graphical Lasso algorithm to produce a graph estimate. For the imputation step, (Chang et al., 2022) introduces three different imputation methods, including a block-wise SVD method (BSVDgq), a nuclear norm minimization method (NNMgq), and a non-convex gradient descent method for low-rank factorization (LRFgq). In the nonparanormal setting, we propose to apply the aforementioned two-step approach to a rank-based correlation matrix instead of the sample covariance; we summarize the full procedure in Algorithm 2. Throughout our empirical investigation of the LRGQ-NPN approach below, we focus on the BSVDgq approach in (Chang et al., 2022) for the imputation step. More details on its implementation are included in the Appendix. Algorithm 2 also involves two tuning parameters: the rank \(r\) and the regularization parameter \(\Lambda\). Similar to (Chang et al., 2022) and other prior works on low-rank matrix completion, one can choose the appropriate rank \(r\) using the Bayesian information criterion (BIC) (Burnham and Anderson, 2004). For \(\Lambda\), as in the MAD\({}_{\text{GQ}}\) procedure, we can let each entry \(\Lambda_{j,k}\) be the same or scale proportionally to \(\sqrt{\frac{\log p}{n_{j,k}}}\). The specific values of \(\Lambda\) can then be chosen based on the stability criterion (Liu et al., 2010).
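A rough sketch of the two-step LRGQ-NPN idea, with a generic iterative rank-\(r\) completion standing in for the BSVDgq routine (whose details are in the Appendix), followed by scikit-learn's graphical lasso. The PSD projection and renormalization at the end are practical safeguards we add for the sketch, not part of the original algorithm.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def lrgq_npn(Sigma_hat, observed, rank, alpha, n_iter=200):
    """Impute the rank correlation with a rank-r completion, then run the graphical lasso."""
    Z = np.where(observed, Sigma_hat, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation of Z
        Z = np.where(observed, Sigma_hat, low_rank)       # keep the observed entries fixed
    # project onto the PSD cone and restore a unit diagonal (practical safeguards)
    w, V = np.linalg.eigh((low_rank + low_rank.T) / 2.0)
    Sigma_tilde = (V * np.clip(w, 1e-3, None)) @ V.T
    d = np.sqrt(np.diag(Sigma_tilde))
    Sigma_tilde = Sigma_tilde / np.outer(d, d)
    _, Theta_hat = graphical_lasso(Sigma_tilde, alpha=alpha)
    return Theta_hat
```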
As has been pointed out in many prior works on graphical models (Ravikumar et al., 2011; Liu et al., 2012), the consistency of graphical Lasso hinges on sufficiently accurate covariance estimation in terms of each
entry, which implies that the two-step procedure is guaranteed to give a consistent graph selection provided that the imputation step leads to a small \(\|\widetilde{\Sigma}^{\text{LR}}-\Sigma\|_{\infty}\). The BSVDgq method in particular has been shown in (Chang et al., 2022) to achieve sufficient imputation accuracy that eventually leads to graph selection consistency in the Gaussian setting. However, such theoretical results are based on delicate spectral analysis of sample covariance matrices, which is extremely challenging to extend to non-linear rank-based correlations. We leave the theoretical investigation of the LRGQ-NPN approach as future work and focus on empirical validation here.
## 3 Edge Recovery of MAD\({}_{\text{GQ}}\)-NPN
Although the idea of using rank-based correlations to substitute the sample covariance / Pearson correlation matrix is straightforward, it is not clear if this idea would succeed in the nonparanormal graph quilting setting. In this section, we examine the theoretical properties of the MAD\({}_{\text{GQ}}\)-NPN approach (Algorithm 1), giving an affirmative answer to this question by showing similar theoretical guarantees as those in (Vinci et al., 2019) established for Gaussian data. Specifically, we will show that under similar assumptions, Algorithm 1 exactly recovers the edge set in \(O\) and a minimal superset of edges in \(O^{c}\). Recall the MAD\({}_{\text{GQlasso}}\)-NPN
solution and its Schur complements defined in Algorithm 1; we first define their population versions as follows:
\[\widetilde{\Theta}=\operatorname*{arg\,max}_{\Theta\succ 0,\Theta_{O^{c}}=0} \log\det\Theta-\sum_{(i,j)\in O}\Theta_{i,j}\Sigma_{i,j}, \tag{7}\]
\(\widetilde{\Sigma}=\widetilde{\Theta}^{-1}\), and for \(k=1,\ldots,K\),
\[\widetilde{\Theta}^{(k)}=\widetilde{\Theta}_{V_{k},V_{k}}-\widetilde{\Theta}_{V_{k},V_{k}^{c}}\widetilde{\Theta}_{V_{k}^{c},V_{k}^{c}}^{-1}\widetilde{\Theta}_{V_{k}^{c},V_{k}}. \tag{8}\]
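For concreteness, the Schur complement in (8) can be computed directly from a precision matrix; a small NumPy sketch (the function name is ours) follows.

```python
import numpy as np

def schur_complement(Theta, block):
    """Schur complement of Theta onto the index set `block` (V_k), as in (8):
    Theta_{V_k,V_k} - Theta_{V_k,V_k^c} Theta_{V_k^c,V_k^c}^{-1} Theta_{V_k^c,V_k}."""
    p = Theta.shape[0]
    comp = np.setdiff1d(np.arange(p), block)
    A = Theta[np.ix_(block, block)]
    B = Theta[np.ix_(block, comp)]
    D = Theta[np.ix_(comp, comp)]
    return A - B @ np.linalg.solve(D, B.T)
```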
(Vinci et al., 2019) has established useful theoretical properties for graph selection if one has access to \(\widetilde{\Theta}\) and \(\widetilde{\Theta}^{(k)}\). These results form the basis of our theory, and much of our analysis is devoted to showing the proximity of our finite sample solution \(\widehat{\widetilde{\Theta}}\) in (4) to its population counterpart \(\widetilde{\Theta}\), and of \(\widehat{\widetilde{\Theta}}^{(k)}\) in (5) to \(\widetilde{\Theta}^{(k)}\). In the following, we adopt the terminology and assumptions developed for the population theory in (Vinci et al., 2019).
As has been shown in (Vinci et al., 2019), hard thresholding \(\widetilde{\Theta}_{O}\) can lead to graph recovery in \(O\) under certain assumptions on the graph signals. For the edges in \(O^{c}\), on the other hand, the exact edge set \(E_{O^{c}}\) is not identifiable in the graph quilting setting. However, it is possible to recover a superset of \(E_{O^{c}}\) based on the distortion in the Schur complements of \(\widetilde{\Theta}_{V_{k},V_{k}}\) for each block \(k\), created by the out-of-block edges. Let the distortion created by the out-of-block edges in block \(k\) be \(\delta_{i,j}^{(k)}=\Theta_{i,j}-\widetilde{\Theta}_{i,j}^{(k)}\). Given the observed covariance \(\Sigma_{O}\) and this distortion information, we define the following minimal superset of \(E_{O^{c}}\) as the smallest possible set that includes all edges which cannot be ruled out without further information.
**Definition 2** (Minimal Superset of \(E_{o^{c}}\)).: _Let_
\[\mathcal{D}_{\mathrm{off}}(\Sigma,O)=\left\{(i,j,k):\delta_{ij}^{(k)}\neq 0,i \neq j\right\} \tag{9}\]
_be the set of known distortions over the off-diagonal elements of the Schur complements \(\tilde{\Theta}^{(1)},...,\tilde{\Theta}^{(K)}\), and let_
\[\mathcal{A}_{\mathrm{off}}(\Sigma,O):=\{\Sigma^{\prime}\succ 0:\;\Sigma^{ \prime}_{O}=\Sigma_{O},\;\mathcal{D}_{\mathrm{off}}(\Sigma^{\prime},O)= \mathcal{D}_{\mathrm{off}}(\Sigma,O)\} \tag{10}\]
_be the set of all positive definite covariance matrices that agree with the observed \(\Sigma_{O}\) and distortions \(\mathcal{D}_{\mathrm{off}}(\Sigma,O)\). A set \(\mathcal{S}_{\mathrm{off}}\subseteq O^{c}\) is the minimal superset of \(E_{O^{c}}\) with respect to \(\Sigma_{O}\) and \(\mathcal{D}_{\mathrm{off}}(\Sigma,O)\) if it satisfies the following properties:_
1. \(\forall\Sigma^{\prime}\in\mathcal{A}_{\mathrm{off}}(\Sigma,O)\) _we have_ \(E^{\prime}_{O^{c}}\subseteq\mathcal{S}_{\mathrm{off}}\)_;_
2. \(\forall\mathcal{S}^{\prime}\subsetneq\mathcal{S}_{\mathrm{off}}\)_,_ \(\exists\Sigma^{\prime}\in\mathcal{A}_{\mathrm{off}}(\Sigma,O)\) _such that_ \(E^{\prime}_{O^{c}}\cap(\mathcal{S}_{\mathrm{off}}\setminus\mathcal{S}^{\prime})\neq\emptyset\)_._
We also define the following quantities that will be useful in our theory. Let \(\nu:=\min_{(i,j)\in E_{O}}|\Theta_{i,j}|\) be the minimum signal in \(E_{O}\), and \(\delta:=\max_{(i,j)\in O,i\neq j}|\Theta_{i,j}-\widetilde{\Theta}_{i,j}|\) be the maximum distortion induced by constraining \(\widetilde{\Theta}_{O^{c}}=0\). Also let \(\psi:=\min_{(i,j,k):0<|\tilde{\Theta}_{i,j}^{(k)}|<\delta}\min\{|\tilde{\Theta}_{i,j}^{(k)}|,\delta-|\tilde{\Theta}_{i,j}^{(k)}|\}\), \(d=\max_{i}\|\Theta_{i,:}\|_{0}\), \(\widetilde{d}=\max_{i}\|\widetilde{\Theta}_{i,:}\|_{0}\), and \(\widetilde{s}=\|\widetilde{\Theta}\|_{0,\mathrm{off}}\) be the number of non-zero off-diagonal elements of \(\widetilde{\Theta}\). Some other technical quantities include \(\widetilde{\kappa}=\frac{\lambda_{\max}(\widetilde{\Theta})}{\lambda_{\min}(\widetilde{\Theta})}\), \(\kappa_{\widetilde{\Sigma}}=\left\|\widetilde{\Sigma}\right\|_{\infty}=\max_{j}\sum_{k=1}^{p}|\widetilde{\Sigma}_{j,k}|\), and \(\kappa_{\widetilde{\Gamma}}=\left\|\left(\widetilde{\Gamma}_{S,S}\right)^{-1}\right\|_{\infty}\), where \(S\) is the support set of \(\widetilde{\Theta}\). Let \(H_{i}=\{j:(i,j)\in O^{c}\}\) and \(N_{H_{i}}(i)=N(i)\cap H_{i}\) be the neighborhood of \(i\) in \(H_{i}\). We say that two nodes \(i\) and \(j\) are \(V\)-connected if there is a path in \(V\) connecting \(i\) and \(j\). We also require the following assumptions:
**Assumption 1** (Weak distortion compared to signal).: _We assume that the maximum off-diagonal distortion of the MAD\({}_{\mathrm{GQ}}\) solution is smaller than half the signal strength in the original precision matrix: \(\delta<\frac{\nu}{2}\)._
As has been proven in Theorem 3.1 in (Vinci et al., 2019), Assumption 1 can be satisfied as long as \(\|\Theta_{O^{c}}\|_{\infty}\) is bounded above by a function of the edge weights within \(\Theta_{O}\); this assumption is likely to hold as long as the pairwise observation set includes all edges with strong signal strength.
**Assumption 2**.: _For every node \(i\in V\) with \(N_{H_{i}}(i)\neq\emptyset\), we have that for every \(k\) such that \(i\in V_{k}\), there exists at least one node \(j\in V_{k}\setminus\{i\}\) that is \((H_{i}\cup\{j\})\)-connected to some node in \(N_{H_{i}}(i)\)._
Assumption 2 requires that if any node \(i\) has an edge in \(O^{c}\), then for any block it belongs to, there exists a path that starts from \(i\), passes through an edge in \(O^{c}\), and eventually returns to this block. This assumption ensures that any edge \((i,j)\) in \(O^{c}\) would cause certain distortions in the blocks that \(i\) and \(j\) belong to. Therefore, given the distortion set \(\mathcal{D}_{\text{off}}(\Sigma,O)\), it is possible to identify the node set with edges in \(O^{c}\). However, we cannot directly compute the distortion \(\delta^{(k)}_{i,j}\), a discrepancy between the unknown \(\Theta_{i,j}\) and \(\widetilde{\Theta}^{(k)}_{i,j}\). Instead, we can only assume that such a discrepancy leads to a small non-zero \(\widetilde{\Theta}^{(k)}_{i,j}\). This motivates the following assumption:
**Assumption 3**.: _If \(\delta^{(k)}_{i,\setminus i}\neq 0\), then there exists \(j\neq i\) such that \(0<|\tilde{\Theta}^{(k)}_{ij}|<\delta\)._
**Assumption 4** (Incoherence condition).: _Let \(\Gamma=\widetilde{\Sigma}\otimes\widetilde{\Sigma}\), \(S=\{(j,l):\widetilde{\Theta}_{j,l}\neq 0\}\). We assume \(\max_{e\in O\cap S^{c}}\|\Gamma_{e,S}\Gamma_{S,S}^{-1}\|_{1}\leq 1-\alpha\) for some \(0<\alpha\leq 1\)._
Similar incoherence conditions are commonly assumed in the literature of graphical models (Ravikumar et al., 2011).
**Assumption 5** (Sufficient block measurements).: _The \(K\) blocks cover all nodes: \(\cup_{k=1}^{K}V_{k}=[p]\), and at least one off-diagonal element: \(|O|>p\)._
This is a mild assumption on the block measurements, which is typically satisfied by our motivating neuroscience applications.
**Assumption 6** (Regularization parameter).: \(\Lambda_{j,l}=\frac{C_{0}}{\alpha}\sqrt{\frac{\log p}{\min_{k}n_{k}}}\) _for all \((j,l)\in O\) and some universal constant \(C_{0}>0\)._
Assumptions 1-6 also appear in (Vinci et al., 2019) in the Gaussian graphical model setting. Note that we do not require any additional assumptions when extending the theoretical guarantees to the nonparanormal setting, while accommodating strictly weaker assumptions on the joint distribution.
**Theorem 1**.: _Suppose that Assumptions 1-6 hold, and there exists at least one edge in the graphs encoded by \(\Theta\) and \(\widetilde{\Theta}\): \(d,\widetilde{d}>2\). Then we have the following guarantees for Algorithm 1 with probability at least \(1-\sum_{k}p_{k}^{-10}\):_
* **Exact recovery in \(O\)**_. If_ \[n_{k}>\left[\frac{C_{0}}{4}\kappa_{\widetilde{\Gamma}}\left(1+\frac{8}{ \alpha}\right)\left(\left(\frac{\nu}{2}-\delta\right)^{-1}+3\left(1+\frac{8}{ \alpha}\right)(\kappa_{\widetilde{\Sigma}}+\kappa_{\widetilde{\Sigma}}^{3} \kappa_{\widetilde{\Gamma}})\widetilde{d}\right)\right]^{2}\log p_{k},\] (11) \[\delta+\varepsilon_{1}\leq\tau_{1}<\nu-\delta-\varepsilon_{1}\text{, where }\varepsilon_{1}=\tfrac{C_{0}}{4}\kappa_{\widetilde{\Gamma}}(1+\tfrac{8}{\alpha})\max_{ k}\sqrt{\tfrac{\log p_{k}}{n_{k}}}\text{, then }\widehat{E}_{O}=E_{O}\text{.}\]
* **Minimal superset recovery in \(O^{c}\)**_. If_ \[n_{k}>C_{0}\kappa_{\widetilde{\Gamma}}^{2}\left(1+\frac{8}{\alpha}\right)^{2} \left[\frac{9\widetilde{\kappa}^{4}}{4\psi^{2}}+\frac{1}{4\lambda_{\min}( \widetilde{\Theta})^{2}}\right]\min\{p+\widetilde{s},\widetilde{d}^{2}\}\log p _{k},\] (12) \[\varepsilon_{2}\leq\tau_{2}<\psi-\varepsilon_{2}\text{, }\delta- \varepsilon_{2}<\tau_{1}\leq\nu-\varepsilon_{2}\text{, where }\varepsilon_{2}=\tfrac{3C_{0}}{4}\kappa_{\widetilde{\Gamma}}(1+\tfrac{8}{\alpha}) \widetilde{\kappa}^{2}\min\{\sqrt{p+\widetilde{s}},\widetilde{d}\}\max_{k} \sqrt{\tfrac{\log p_{k}}{n_{k}}}\text{, then }\widehat{E}_{O^{c}}=\mathcal{S}_{\text{off}}\text{.}\]
The proof of Theorem 1 can be found in the Appendix. Theorem 1 suggests that under the same population assumptions as the Gaussian graph quilting setting in (Vinci et al., 2019), as long as the sample size for each block is sufficiently large: \(n_{k}=\Omega(\widetilde{d}^{2}\log p_{k})\), and the thresholding parameters are appropriately chosen: \(\tau_{1}\in[\delta+C\max_{k}\sqrt{\tfrac{\log p_{k}}{n_{k}}},\nu-\delta-C\max_{k}\sqrt{\tfrac{\log p_{k}}{n_{k}}})\), \(\tau_{2}\asymp\widetilde{d}\max_{k}\sqrt{\tfrac{\log p_{k}}{n_{k}}}\), we can achieve exact recovery for edges in \(O\) and construct a minimal superset for edges in \(O^{c}\). This result is comparable to the main theory in (Vinci et al., 2019) for the Gaussian graphical model, although we are considering a strictly broader distribution family. To prove Theorem 1, we first extend the existing error bounds for rank-based correlation matrices (Liu et al., 2012) in the full data setting to our modified correlation estimates defined in Section 2.2, which are computed from \(K\) semi-overlapping blocks of measurements. We then utilize this error bound to show the proximity of \(\widehat{\widetilde{\Theta}}\) and its Schur complements \(\widehat{\widetilde{\Theta}}^{(k)}\) to their population counterparts, which eventually leads to Theorem 1 when combined with the population theory developed in (Vinci et al., 2019).
## 4 Simulation Studies
We now investigate the empirical performance of the nonparanormal Graph Quilting procedures outlined in Section 2 on simulated data. For each of the simulation trials, we create synthetic block observation patterns by randomly ordering the features and assigning them to \(K\) partially overlapping blocks of size \(o\). Data are generated for each block from a multivariate Gaussian distribution with a sparse inverse covariance matrix with small-world structure, and a copula transform is then applied to each column to obtain a non-Gaussian distribution. Rank-based correlations are calculated entry-wise for each pair of features in the observation set, with correlations averaged for pairs observed in multiple blocks. Our goal is to recover the nonzero entries in the population sparse inverse covariance matrix of the latent Gaussian variables. Below, we compare the MAD\({}_{\text{GQ}}\)-NPN approach and one of the LRGQ-NPN procedures, namely the block SVD (BSVDgq-NPN) approach, along with a basic zero imputation procedure in which the missing entries of the correlation matrix are imputed as 0 before applying the graphical Lasso algorithm. The methods are evaluated using the true positive rate (TPR) and false discovery proportion (FDP) of their respective resulting graph estimates as compared to the true underlying graph; in particular, we run 50 replications on each set of simulation parameters, with a new random block assignment in each replication, and show the average and standard deviation of the TPR and FDP of edge selection for each method. Hyperparameter selection is performed via optimal F-1 score tuning with respect to the true graph.
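The following sketch reproduces the main steps of this data-generating process: a small-world precision matrix, a Gamma copula transform of the latent Gaussian sample, and block-wise Spearman correlations averaged over overlapping blocks, followed by the standard Spearman-to-Pearson correction of (Liu et al., 2012). The edge weights, small-world parameters, and block layout below are illustrative assumptions, not the exact settings of our experiments.

```python
import numpy as np
import networkx as nx
from scipy import stats

rng = np.random.default_rng(0)
p, n_per_block, K, o = 100, 2000, 2, 60

# sparse precision matrix with small-world structure (made diagonally dominant)
G = nx.watts_strogatz_graph(p, 4, 0.1, seed=0)
Theta = 0.2 * nx.to_numpy_array(G)
np.fill_diagonal(Theta, np.abs(Theta).sum(axis=1) + 0.1)
Sigma = np.linalg.inv(Theta)

# K partially overlapping blocks of size o over a random feature ordering
order = rng.permutation(p)
step = (p - o) // (K - 1) if K > 1 else 0
blocks = [order[k * step: k * step + o] for k in range(K)]

S_sum, S_cnt = np.zeros((p, p)), np.zeros((p, p))
for Vk in blocks:
    # latent Gaussian block sample, then a Gamma(shape=5, scale=1) copula transform
    Z = rng.multivariate_normal(np.zeros(o), Sigma[np.ix_(Vk, Vk)], size=n_per_block)
    U = stats.norm.cdf(Z / np.sqrt(np.diag(Sigma)[Vk]))
    X = stats.gamma.ppf(U, a=5, scale=1)
    S_sum[np.ix_(Vk, Vk)] += stats.spearmanr(X)[0]   # o x o Spearman matrix
    S_cnt[np.ix_(Vk, Vk)] += 1

observed = S_cnt > 0
S_obs = np.divide(S_sum, S_cnt, out=np.zeros_like(S_sum), where=observed)
R_obs = 2 * np.sin(np.pi / 6 * S_obs)   # map Spearman rho to the latent Pearson scale
```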
We first test the nonparanormal Graph Quilting methods with Spearman correlation matrices calculated on data containing 100 features generated from a Gamma distribution with shape parameter 5 and scale parameter 1. For each block, 2000 observations are generated per feature. Figures 2(a) and 2(b) show the TPR and FDP of each method compared to the true underlying graph when the block size \(o\) is 52, 56, 60, 64, and 68 while the number of blocks is held constant at 2, and Figures 2(c) and 2(d) show the TPR and FDP for \(K=\) 2, 3, 4, 5 and 6 blocks while keeping the total number of observed node pairs across all blocks (\(K\times o\)) constant at 120. From these results, we see that both methods achieve a consistently high true positive rate for edge recovery, and generally outperform the zero imputation method. Notably, even though the low-rank assumption inherent to the BSVDgq-NPN method is not met here, the method is still able to recover the true edges of the graph at a decently high rate. Comparing the two nonparanormal Graph Quilting methods, we see that the MAD\({}_{\text{GQ}}\)-NPN approach consistently has a higher TPR and FDP compared to BSVDgq-NPN;
Figure 2: Performance of MAD\({}_{\text{GQ}}\)-NPN, BSVDgq-NPN, and zero imputation on simulated data from a Gamma distribution. **(a)** TPR, changing block size. **(b)** FDP, changing block size. **(c)** TPR, changing number of blocks. **(d)** FDP, changing number of blocks.
this result follows what we expect, as the former approach is designed to construct a superset of the true edges for the unobserved entries in the covariance matrix and thus will return an estimate with both more true positives and false positives compared to BSVDgq-NPN. Across different simulation parameters, we generally observe that TPR is higher and FDP is lower for the estimates from both methods when there are fewer blocks and when each block is larger, which we would expect to see as these conditions effectively provide more samples for estimation. Additionally, the difference in performance between the two nonparanormal Graph Quilting methods appears to be fairly consistent across the different block sizes and number of blocks.
We also evaluate the nonparanormal Graph Quilting methods on data simulated from a Cauchy distribution with location parameter 0 and scale parameter 3 with 2000 observations and 100 features, from which a Kendall correlation matrix is calculated and used as the input to the graphical Lasso. For this particular experiment, we consider the case where the underlying covariance matrix is approximately low-rank; this is generated via the spiked covariance model (Johnstone, 2001), and we also enforce a small-world graph structure in its inverse. Figures 3a and 3b compare the MAD\({}_{\text{GQ}}\)-NPN and BSVDgq-NPN methods in terms of TPR and FDP for edge selection for \(K=2\) blocks and varying block size \(o\) of 52, 56, 60, 64, and 68, and Figures 3c and 3d compare the same methods for \(K=2\), 3, 4, 5 and 6 blocks with a constant total number of observations (\(K\times o\)) of 120. Both nonparanormal Graph Quilting methods perform well in terms of true positive rate for edge recovery here as well, and both considerably outperform the zero imputation approach. As opposed to the previous simulation study, in which the MAD\({}_{\text{GQ}}\)-NPN method outperformed the BSVDgq-NPN method in terms of selecting true edges in the graph, we see in this case that the latter outperforms the former for both the TPR and FDP metrics. The relative performance of the two nonparanormal Graph Quilting methods matches what we would expect from this particular simulation setting, as the structure of the full true underlying covariance matrix more closely matches the model set-up of the BSVDgq-NPN method, and also aligns with comparative results from (Chang et al., 2022) comparing Graph Quilting methods under a Gaussian assumption. Even in this case, though, the MAD\({}_{\text{GQ}}\)-NPN method still recovers the true edges of the underlying graph reasonably well. These results show that the choice of which nonparanormal Graph Quilting approach to apply to a problem will depend on whether a low-rank assumption makes sense in the particular scientific context. Additionally, as above, we observe that performance generally improves with larger blocks and with a fewer total number of blocks.
Figure 3: Performance of MAD\({}_{\text{GQ}}\)-NPN, BSVDgq-NPN, and zero imputation on simulated data from a Cauchy distribution with low-rank covariance. **(a)** TPR, changing block size. **(b)** FDP, changing block size. **(c)** TPR, changing number of blocks. **(d)** FDP, changing number of blocks.
## 5 Calcium Imaging Example
We now study the nonparanormal Graph Quilting procedures on a real-world calcium imaging data set in order to assess the applicability of these methods for estimating functional neuronal connectivity. The data come from the Allen Institute (de Vries et al., 2016) and contain functional activity recordings for 227 neurons in mouse V1 cortex during spontaneous activity across approximately 9000 time points. For this analysis, we compare the performances of the MAD\({}_{\text{GQ}}\)-NPN and BSVDgq-NPN nonparanormal Graph Quilting methods to each other, as well as to a zero imputation approach, in a similar fashion to the methodology in Section 4. We also compare the two nonparanormal Graph Quilting methods to their analogous Gaussian-based procedures.
To do the former, we measure the performances of the MAD\({}_{\text{GQ}}\)-NPN, BSVDgq-NPN, and zero imputation procedures by how well the graph estimates from each nonparanormal Graph Quilting method on rank-based correlation matrices with synthetic block-missingness recover the edge structure of the graph estimated using the nonparanormal graphical model on the same rank-based correlation matrix with all pairwise entries observed. Specifically, we calculate the Spearman correlation matrix for all observed neurons using binned spike counts and apply the graphical Lasso to the fully observed covariance, the result of which we consider as the true underlying graph structure. We create artificial block-missingness in the empirical covariance matrix by randomly assigning features to \(K\) partially overlapping blocks of size \(o\) and masking all pairwise entries in the covariance which are not contained in any block. The masked Spearman correlation matrix is then used as input for the nonparanormal Graph Quilting methods. We show the average and standard deviation of the TPR and FDP for recovering the graph estimate on the full Spearman covariance matrix for each method across 50 replications on each set of parameters, with new random block assignments in each replication. Hyperparameter selection is performed using optimal F-1 score with respect to the graph estimated from the graphical Lasso fit on the fully observed data with the rank-based correlation matrix.
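A small sketch of the masking and evaluation steps used here (helper names are ours): build the observed pairwise set \(O\) from the synthetic blocks, mask the fully observed Spearman matrix, and score an estimated edge set against the graph fitted on the full matrix.

```python
import numpy as np

def observed_mask(p, blocks):
    """Pairs (j, l) jointly observed in at least one block."""
    O = np.zeros((p, p), dtype=bool)
    for Vk in blocks:
        O[np.ix_(Vk, Vk)] = True
    return O

def mask_correlation(S_full, O):
    """Keep entries in O; masked entries are set to NaN for the GQ methods."""
    return np.where(O, S_full, np.nan)

def tpr_fdp(est_adj, true_adj):
    """Edge-selection TPR and FDP over off-diagonal entries."""
    iu = np.triu_indices_from(true_adj, k=1)
    est, true = est_adj[iu].astype(bool), true_adj[iu].astype(bool)
    tpr = np.sum(est & true) / max(true.sum(), 1)
    fdp = np.sum(est & ~true) / max(est.sum(), 1)
    return tpr, fdp
```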
Figures 4(a) and 4(b) show the TPR and FDP of each method compared to the true underlying graph when the block size \(o\) is 130, 140, 150, 160, and 170 while the number of blocks is held constant at 2, and Figures 4(c) and 4(d) show the TPR and FDP for \(K=2\), 3, 4, 5 and 6 blocks while keeping the total number of observations across all blocks (\(K\times o\)) constant at 300. In general, both nonparanormal Graph Quilting methods are able to recover most of the edges of the graph estimated with the nonparanormal graphical
Figure 4: Performance of MAD\({}_{\text{GQ}}\)-NPN, BSVDgq-NPN, and zero imputation on Allen Institute data. **(a)** TPR, changing block size. **(b)** FDP, changing block size. **(c)** TPR, changing number of blocks. **(d)** FDP, changing number of blocks.
model when all features are observed simultaneously, which shows that both methods can reliably recover the edges in functional neuronal connectivity networks derived from calcium imaging data that would be present if all neurons were observed simultaneously. However, we also observe a relatively high FDP for both methods, which seems to show that both methods tend to slightly overselect the total number of edges in the underlying full graph. The BSVDgq-NPN method outperforms MAD\({}_{\text{GQ}}\)-NPN for edge recovery in terms of both TPR and FDP; this can likely be attributed to the approximate low-rank structure often found in the empirical covariance matrices from calcium imaging data (Stringer et al., 2019). Also, as seen in Section 4, all methods perform better with fewer blocks and with larger block sizes.
We then compare the functional neuronal connectivity graphs from the nonparanormal Graph Quilting methods to those found using Graph Quilting with a Gaussian assumption. Specifically, we use the MAD\({}_{\text{GQ}}\) and BSVDgq Graph Quilting methods to obtain graph estimates on the same data set with the same artificial block-missingness pattern as above, and compare the selected edges to functional connectivity graphs estimated by MAD\({}_{\text{GQ}}\)-NPN and BSVDgq-NPN using neural properties from the provided metadata. We first assess each of the estimated functional connectivity networks by the proportion of edges that connect pairs of neurons with the same visual angular tuning category. In the neuroscience literature, it has been hypothesized that neurons are tuned such that they fire in the presence of specific stimuli (Sakai and Miyashita, 1994) and that neurons with similar tunings are more likely to be functionally connected (Stevenson et al., 2012). Thus, we expect a sizable proportion of the edges in the estimated functional connectivity networks to link pairs of neurons within the same tuning category. For this particular calcium imaging data set, neural angular tuning is categorized into 8 different bins, each comprising a 45 degree interval. Hyperparameter tuning for the BSVDgq and BSVDgq-NPN methods is performed using the extended Bayesian information criterion with respect to the original data; for the MAD\({}_{\text{GQ}}\) and MAD\({}_{\text{GQ}}\)-NPN methods, in order to create a fair comparison between methods, we tune the thresholding parameters such that the number of edges is
Figure 5: Estimated functional connectivity networks from Graph Quilting methods on Allen Institute data. Nodes are colored by visual angular tuning category. **(a)** BSVDgq. **(b)** BSVDgq-NPN. **(c)** MAD\({}_{\text{GQ}}\). **(d)** MAD\({}_{\text{GQ}}\)-NPN.
comparable to that of the estimates from BSVDgq and BSVDgq-NPN.
Figures 5(a) and 5(b) show examples of estimated functional connectivity networks from the BSVDgq and BSVDgq-NPN methods, and Figures 5(c) and 5(d) show estimated functional connectivity networks from the MAD\({}_{\text{GQ}}\) and MAD\({}_{\text{GQ}}\)-NPN methods, respectively. Structurally, we see that the graph estimates from the nonparanormal Graph Quilting methods are much more likely to consist of edges between neurons in the same angular tuning category. Specifically, across different replications with synthetic block-missingness, the BSVDgq-NPN graph estimates contain an average of 38.9% of edges that link neurons with the same tuning category, compared to 21.4% of edges in the BSVDgq estimates. Similarly, we see that 46.8% of edges in the MAD\({}_{\text{GQ}}\)-NPN graph estimates are between pairs of neurons in the same tuning bin, as opposed to just 28.7% in the MAD\({}_{\text{GQ}}\) graph estimates.
We also compare the recorded firing activity of one particular example neuron and its selected edge neighbors in the functional connectivity graphs estimated from the BSVDgq and BSVDgq-NPN methods in terms of how closely the neural firing patterns match one another. Specifically, for the problem of functional neuronal connectivity, our goal is to find neurons with consistent synchronous firing activity across time, which is represented by contemporaneous large positive spikes in the fluorescence traces (Smetters et al., 1996; Turaga et al., 2013). In Figures 6(a) and 6(b), we visualize the fluorescence trace of the selected neuron, overlaid with the fluorescence traces of neurons that are edge neighbors unique to the BSVDgq graph; we also do the same in Figures 6(c) and 6(d) for neurons that are edge neighbors unique to the BSVDgq-NPN graph. The top 10 periods of spiking activity of the example selected neuron are represented via blue dotted lines in the plots. From the results, we see that the firing activity of the selected neuron seems to match relatively closely with the edge neighbors selected only in the graph estimate from the BSVDgq-NPN algorithm. On the other hand, the edge neighbors selected only by the BSVDgq graph do not appear to have the same firing pattern. Quantitatively, the top 10 firing times for the selected neuron and its edge neighbors match 72.4% of the time for the BSVDgq-NPN functional connectivity graph estimates, as opposed to just 24.5% for the BSVDgq functional connectivity graphs. Overall, from this real-world calcium imaging data study, we see that nonparanormal Graph Quilting provides more plausible functional connectivity estimates in the neuroscience context compared to the ordinary Graph Quilting procedures.
Figure 6: Fluorescence traces of one particular neuron (in grey) and edge neighbors (in red) from functional connectivity graphs estimated via BSVDgq (**(a, b)**) and BSVDgq-NPN (**(c, d)**).
## 6 Discussion
In this work, we have presented two potential approaches to nonparanormal Graph Quilting, MAD\({}_{\text{GQ}}\)-NPN and BSVDgq-NPN, which broaden the scope of Graph Quilting procedures to be applicable in the nonparanormal graphical model setting. We demonstrate theoretical properties of the MAD\({}_{\text{GQ}}\)-NPN method, showing criteria for exact edge recovery in the observed portion and minimal superset recovery in the missing portion of the graph. Through our empirical studies, we demonstrate that both nonparanormal Graph Quilting methods can be effective for edge selection for non-Gaussian data, depending on the structure of the underlying covariance matrix. Through our real-world calcium imaging data example, we show that the nonparanormal Graph Quilting methods can recover, in the presence of non-simultaneous observations of the full population of neurons, the same functional neuronal connectivity network edges that would be found if all neurons were observed concurrently, and that these methods can be applied to estimate more appropriate functional neuronal connectivity networks from calcium imaging data compared to Gaussian methods.
There are many potential directions for future research that can be taken from our work. While we have characterized the theoretical performance of the MAD\({}_{\text{GQ}}\)-NPN approach, we do not currently have any guarantees for the LRGQ-NPN methods. Methodologically, extensions to the nonparanormal Graph Quilting procedure to account for potential other data effects such as latent variables, autocorrelations, or covariates could improve graph estimation and edge selection accuracy. Also, the nonparanormal graphical models could be applied to research problems in other fields where joint observations may be missing, such as RNA-seq in genomics and signal processing in power systems. In conclusion, our work has helped to extend graph inference for nonparanormal graphical models in the presence of block-missingness in the observed covariance matrix, with theoretical guarantees for performance and promising empirical results for calcium imaging data.
## Acknowledgements
The authors gratefully acknowledge support by NSF NeuroNex-1707400, NIH 1R01GM140468, and NSF DMS-2210837.
|
2309.02649 | Controllability Backbone in Networks | This paper studies the controllability backbone problem in dynamical networks
defined over graphs. The main idea of the controllability backbone is to
identify a small subset of edges in a given network such that any subnetwork
containing those edges/links has at least the same network controllability as
the original network while assuming the same set of input/leader vertices. We
consider the strong structural controllability (SSC) in our work, which is
useful but computationally challenging. Thus, we utilize two lower bounds on
the network's SSC based on the zero forcing notion and graph distances. We
provide algorithms to compute controllability backbones while preserving these
lower bounds. We thoroughly analyze the proposed algorithms and compute the
number of edges in the controllability backbones. Finally, we compare and
numerically evaluate our methods on random graphs. | Obaid Ullah Ahmad, Waseem Abbas, Mudassir Shabbir | 2023-09-06T01:21:45Z | http://arxiv.org/abs/2309.02649v1 | # Controllability Backbone in Networks
###### Abstract
This paper studies the controllability backbone problem in dynamical networks defined over graphs. The main idea of the controllability backbone is to identify a small subset of edges in a given network such that any subnetwork containing those edges/links has at least the same network controllability as the original network while assuming the same set of input/leader vertices. We consider the strong structural controllability (SSC) in our work, which is useful but computationally challenging. Thus, we utilize two lower bounds on the network's SSC based on the zero forcing notion and graph distances. We provide algorithms to compute controllability backbones while preserving these lower bounds. We thoroughly analyze the proposed algorithms and compute the number of edges in the controllability backbones. Finally, we compare and numerically evaluate our methods on random graphs.
Strong structural controllability, network control, zero forcing, graph distances.
## I Introduction
Network structure profoundly influences the dynamical behavior of networked multiagent systems. For instance, network controllability, connectivity, robustness to failures, information dissemination, and influence evolution in networks rely on the underlying network topology [1]. Therefore, any changes to the network's structural organization, such as adding or removing links between agents, may alter the system-level properties of the network, which could be either beneficial or detrimental. Thus, for a survivable network design and to avoid deterioration of the desired network behavior, a practical approach is to identify a sparse sub-network (or backbone) whose maintenance would guarantee the preservation of the desired network property in the face of modifications. For example, to maintain connectivity, preserving edges in the minimum spanning tree ensures a path between every pair of agents. Similarly, in communication infrastructure networks, connected dominating sets are used to identify the minimum number of agents necessary to form the backbone network [2].
This paper studies the _controllability backbone_ problem in a networked dynamical system defined over a graph \(G=(V,E)\). Network controllability concerns the ability to manipulate the agents within a network as desired through external control signals injected via a subset of agents called _input agents_ or _leaders_. The network controllability depends on the choice of leaders \(V_{\ell}\subseteq V\) and the interconnections between agents [3, 4, 5]. Moreover, the network controllability may deteriorate if the connections/edges between agents change [6, 7, 8, 9]. The main idea of the controllability backbone is to determine a small subset of edges \(E_{B}\subseteq E\) such that _any_ subnetwork of \(G\) containing \(E_{B}\) has at least the same network controllability as \(G\) with the same leaders. In other words, maintaining \(E_{B}\) implies that the minimum network controllability is preserved despite edge modifications.
We consider the _strong structural controllability (SSC)_ for the backbone problem. SSC is advantageous as it depends on the edge set \(E\) and not on the edge weights (which represent the coupling strengths between vertices and often are not precisely known). However, determining the SSC of a network is a challenging computational problem [8, 10, 11]. So a typical approach is to obtain tight lower bounds. Therefore, we aim to identify a controllability backbone for a given network \(G=(V,E)\) and leader set \(V_{\ell}\), where the backbone preserves a tight lower bound on the network's SSC. As for the SSC lower bounds, we consider two widely used bounds based on the zero forcing sets and distances in graphs [12, 13, 14]. Our main contributions are as follows:
1. We present a novel approach to identifying a sparse subgraph in a graph that guarantees the same level of controllability (SSC) as the original graph. We call this subgraph the _controllability backbone_ (Section II).
2. We provide a polynomial algorithm to compute a minimum controllability backbone, which preserves a lower bound on the network's SSC based on zero forcing sets in graphs (Section III).
3. Additionally, we consider a distance-based lower bound on SSC and compute a controllability backbone preserving the distance bound. We derive tight bounds on the number of edges in the distance-based backbone (Section IV).
4. Finally, we illustrate our results and compare different controllability backbones (Section V).
Previous works have dealt with the densification problem, i.e., how to add edges to a graph while maintaining its controllability (e.g., [15, 16]). In contrast, this paper studies the inverse, i.e., the sparsification problem, of identifying a small subset of crucial edges whose existence within any subgraph guarantees the same controllability as the original graph. While some studies have considered identifying edges whose removal from the graph does not deteriorate the network controllability of the remaining graph (e.g., [17, 18, 19, 20]), our problem setup is distinct. We require that _any_ subgraph containing the backbone edges be at least as controllable as the original graph, resulting in a more general problem formulation. Furthermore, our formulation considers the concept of strong structural controllability, which adds to its generality.
The rest of the paper is organized as follows: Section II |
2301.01562 | An interference detection strategy for Apertif based on AOFlagger 3 | Context. Apertif is a multi-beam receiver system for the Westerbork Synthesis
Radio Telescope that operates at 1.1-1.5 GHz, which overlaps with various radio
services, resulting in contamination of astronomical signals with
radio-frequency interference (RFI). Aims. We analyze approaches to mitigate
Apertif interference and design an automated detection procedure for its
imaging mode. Using this approach, we present long-term RFI detection results
of over 300 Apertif observations. Methods. Our approach is based on the
AOFlagger detection approach. We introduce several new features, including ways
to deal with ranges of invalid data (e.g. caused by shadowing) in both the
SumThreshold and scale-invariant rank operator steps; pre-calibration bandpass
calibration; auto-correlation flagging; and HI flagging avoidance. These
methods are implemented in a new framework that uses the Lua language for
scripting, which is new in AOFlagger version 3. Results. Our approach removes
RFI fully automatically, and is robust and effective enough for further
calibration and (continuum) imaging of these data. Analysis of 304 observations
show an average of 11.1% of lost data due to RFI with a large spread. We
observe 14.6% RFI in auto-correlations. Computationally, AOFlagger achieves a
throughput of 370 MB/s on a single computing node. Compared to published
machine learning results, the method is one to two orders of magnitude faster. | A. R. Offringa, B. Adebahr, A. Kutkin, E. A. K. Adams, T. A. Oosterloo, J. M. van der Hulst, H. Dénes, C. G. Bassa, D. L. Lucero, W. J. G. Blok, K. M. Hess, J. van Leeuwen, G. M. Loose, Y. Maan, L. C. Oostrum, E. Orrú, D. Vohl, J. Ziemke | 2023-01-04T12:09:45Z | http://arxiv.org/abs/2301.01562v1 | # An interference detection strategy for Apertif
###### Abstract
Context:Apertif is a multi-beam receiver system for the Westerbork Synthesis Radio Telescope that operates at 1.1-1.5 GHz, which overlaps with various radio services, resulting in contamination of astronomical signals with radio-frequency interference (RFI).
Aims:We analyze approaches to mitigate Apertif interference and design an automated detection procedure for its imaging mode. Using this approach, we present long-term RFI detection results of over 300 Apertif observations.
Methods:Our approach is based on the AOFlagger detection approach. We introduce several new features, including ways to deal with ranges of invalid data (e.g. caused by shadowing) in both the SumThreshold and scale-invariant rank operator steps; pre-calibration bandpass calibration; auto-correlation flagging; and HI flagging avoidance. These methods are implemented in a new framework that uses the Lua language for scripting, which is new in AOFlagger version 3.
Results:Our approach removes RFI fully automatically, and is robust and effective enough for further calibration and (continuum) imaging of these data. Analysis of 304 observations show an average of 11.1% of lost data due to RFI with a large spread. We observe 14.6% RFI in auto-correlations. Computationally, AOFlagger achieves a throughput of 370 MB/s on a single computing node. Compared to published machine learning results, the method is one to two orders of magnitude faster.
Conclusions:
## 1 Introduction
Technical advancement of mankind is driving an increase of man-made radio-frequency transmitters, both terrestrial and in space. This raises the bar for radio astronomical studies that try to detect sky signals that are many orders of magnitude fainter than man-made transmissions. Now that radio-astronomy is evolving into a science where it is the norm to measure data volumes in petabytes, mitigation of radio-frequency interference (RFI) needs to be computationally efficient and fully automated.
Apertif is a receiver system upgrade for the Westerbork Synthesis Radio Telescope (WSRT) that makes use of phased-array feeds to allow for 40 simultaneous adjacent beams on the sky (Van Cappellen et al. 2022). Observations are performed at a central frequency of 1280 or 1370 MHz with an instantaneous bandwidth of 300 MHz.
The data volume produced by Apertif is considerable. Voltages from the 12 dishes with Apertif receivers are correlated for all beams, typically integrated for 30 seconds and recorded with four polarizations. The bandwidth of 300 MHz is split into 384 sub-bands, each with 64 channels of 12.2 kHz. Because of the large bandwidth, it overlaps with various services, including GPS and air-traffic communications. Although the WSRT resides in a radio protected zone, it is not shielded from satellites and air-traffic. Moreover, starting 2020, 5G transmissions make use of the \(1452-1492\) MHz bandwidth. For these reasons, Apertif requires an efficient approach to deal with RFI. Due to the large amount of data, such an approach has to work fully automatically.
The most common method to deal with RFI is to detect data samples that have a significant contribution of RFI and ignore the affected data in the processing (e.g. Winkel et al. 2006; Middelberg 2006; Offringa et al. 2010a; Prasad & Chengalur 2012; Peck & Fenech 2013; Yang et al. 2020; Sun et al. 2022). This process is referred to as data flagging, and is also our method of choice for dealing with RFI in Apertif data in this work. Our detection methodology builds upon the RFI detection pipelines for the Low-Frequency Array (LOFAR; Van Haarlem et al. 2013; Offringa et al. 2010b) and the Murchison Widefield Array (MWA;
Tingay et al., 2013; Offringa et al., 2015). Those pipelines integrate an AOFlagger strategy, which combines filtering, SumThresholding, morphological operations and heuristics. Details of the AOFlagger approach will be discussed in §2.1.
Apertif supports a transient (beam-formed) mode and an imaging mode. The RFI detection approaches for these two modes are fundamentally different. In this work we aim at RFI detection in imaging mode, i.e., after having correlated and integrated the voltages from all the antennas. See Sclocco et al. (2019) for an approach to mitigate RFI in beam-forming mode. Our approach is part of a fully automated Apertif imaging pipeline called apercal (Adebahr et al., 2022).
A multi-beam receiver makes it possible to perform spatial filtering techniques to suppress interference (Kocz et al., 2010, 2012; Hellbourg et al., 2014). This requires fast dedicated computing hardware that processes the raw signals from all the beams, which for Apertif is not available. Spatial filtering is also mainly used to filter out a limited number of known transmitters, which for Apertif is likely not sufficient by itself, although it might save some part of the bandwidth.
Another approach to detect interference is by using the spectral kurtosis statistic (Gary et al., 2010; Taylor et al., 2019; Purver et al., 2021). This has shown results that are competitive with amplitude-based detection. However, this requires a specialized correlator and a doubling of the data volume to be able to calculate the kurtosis.
Recently, machine learning has been used to address the issue of RFI detection (Harrison and Mishra, 2019; Yang et al., 2020; Xiao et al., 2022; Sun et al., 2022). Yang et al. (2020) argue that convolutional neural networks can achieve an accuracy that is higher than that of their SumThreshold implementation. For this comparison, the authors use their own customized implementation of the SumThreshold method, whereas in platforms such as AOFlagger the method is typically applied iteratively and combined with filters (Offringa et al., 2010a,b) and morphological operators (Offringa et al., 2012; Van de Gronde et al., 2016) to enhance the accuracy. With these additions, it has been shown that pipelines such as AOFlagger typically detect all interference that astronomers would manually flag. In this work, we will showcase what can be achieved with traditional methods -- including their computational requirements -- thereby providing an updated baseline to compare against.
In this paper, we introduce a flagging strategy for Apertif data using the AOFlagger framework, and demonstrate our designed strategy on Apertif data. In §2, we will start by introducing the AOFlagger steps used to construct the Apertif approach, and introduce several new operations that are integrated into the Apertif flagging strategy. In §3, results of applying this strategy are presented, including long-term statistics and the computational requirements. Finally, in §4 we discuss the results and draw conclusions.
## 2 Method
For this work, we have designed an interference detection approach for Apertif based on the existing AOFlagger approach and integrated this into the apercal pipeline. apercal is an automated processing pipeline for Apertif imaging observations (Adebahr et al., 2022), consisting of common steps such as data formatting, interference detection, calibration and imaging. Interference detection is one of the first steps during data reduction and is fundamental for achieving good and persistent calibration and image quality in the later steps of the processing.
To improve the detection quality, several modifications to AOFlagger are required. These consist of extensions of existing algorithms and optimization of parameters for Apertif, which we will discuss in this section. We will start with an overview of the detection approach.
### Overview
Fig. 1 shows an overview of the steps that the default AOFlagger strategy performs. The AOFlagger approach to RFI detection in a subset can be summarized as i) estimation and subtraction of the sky signal by applying a Gaussian high-pass filter in time-frequency space (see §2.5); and ii) detection of excessive values, with increased sensitivity towards spectral-line and broadband features. The detection is performed with the SumThreshold algorithm (Offringa et al., 2010a). Steps i) and ii) are typically iterated three times with increased sensitivity to make sure that the final sky signal estimate is minimally biased by interference. As a final step, the flags from different polarizations are combined and are extended in time and frequency, using the scale-invariant rank (SIR) operator (Offringa et al., 2012; Van de Gronde et al., 2016). This latter step improves detection of interference that tapers off below the noise floor and fills gaps in the flag mask when a persistent transmitter is not fully detected.
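As a rough illustration of step i), the slowly varying sky and bandpass contribution can be estimated with a flag-aware Gaussian low-pass filter (normalized convolution) and subtracted; the Python sketch below is schematic, and the kernel widths are placeholders rather than the values used in the Apertif strategy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_residual(amplitudes, flags, sigma_time=20.0, sigma_freq=10.0):
    """Subtract a Gaussian-smoothed estimate of the slowly varying sky and
    bandpass signal from a (time, frequency) amplitude plane, ignoring samples
    that are already flagged."""
    weights = (~flags).astype(float)
    data = np.where(flags, 0.0, amplitudes)
    smooth = gaussian_filter(data, sigma=(sigma_time, sigma_freq))
    norm = gaussian_filter(weights, sigma=(sigma_time, sigma_freq))
    background = np.where(norm > 1e-12, smooth / np.maximum(norm, 1e-12), 0.0)
    return amplitudes - background
```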
With AOFlagger, detection of interference is performed independently on subsets of the data, and the pipeline of Fig. 1 runs independently for each subset. For Apertif, such a subset was chosen to contain the data from all four linearly polarized correlations (XX, XY, YX, YY), the full bandwidth (300 MHz), an interval of typically half an hour for a single beam and a single correlated baseline. Hence, the detection of interference for different beams, baselines and time intervals is independently performed, even though these are part of the same observation. The motivation for flagging these subsets independently is twofold:
* It improves performance: it allows parallel and distributed detection of subsets. The independent flagging of beams and time intervals matches with the format of the data. Despite this, data access is still not ideal, because the data for one baseline is stored dispersed over the time direction.
* Combined detection does not significantly improve detection: the added value of detection on combined subsets of data is small, i.e., one subset contains little information about the RFI in another subset. This is because the impact of RFI can vary greatly between different beams and different baselines. Furthermore, it rarely occurs that RFI which affects image quality is not detectable in half an hour of data, but is detectable when multiple half hour intervals are combined.
Performing detection on integrated baselines has, in some cases, been shown to make faint RFI detectable (Offringa et al., 2015; Wilensky et al., 2019). Early tests with Apertif data, however, indicated that there is no gain in combining baselines. We have also performed tests that flag after integrating over multiple beams, but again found no improvement in doing so. These tests were not exhaustive and it could be that combined detection on baselines or beams could still improve the accuracy somewhat.
AOFlagger aims to take out RFI that requires raw, high-resolution data flagging. Because of the high resolution of the processed data, the computational performance of detection is critical. It is important to perform high-resolution flagging early, because it results in the highest accuracy and the impact of flagging is reduced compared to low-resolution flagging (Offringa et al., 2013). On the other hand, some phenomena cause the loss
of large time intervals or frequency ranges. Common instrumental causes are correlator failures, temporary local RFI or strong broadband transmitters. Detection of such issues does not require the high-resolution data, and it is therefore less critical to detect such issues in the first AOFlagger detection run. Such issues can be found in post-processing of lower-resolution data, for which the performance is less critical.
### Invalid data
There are several instrumental issues that may result in data with invalid values that interrupt the data in time or frequency. A few examples of such issues are correlator malfunctions, dish shadowing, incorrectly set sub-band gains, network failures (between stations and the correlator) or data corruption. Such instrumental issues result in visibilities that may have non-physical values for certain times, frequencies, feeds or antennas, or could lead to visibilities with a not-a-number (NaN) value. We will refer to such data as invalid data.
In most cases, invalid data can be detected and flagged early in the processing. For example, shadowing can be determined from the target direction and the layout of the array, and missing sub-band data caused by network congestion can be detected by the correlator. In this paper, we consider the detection of such issues outside the context of interference detection. It does, however, make it necessary for the detector to continue to work in the presence of (pre-detected) invalid data, which may affect only specific times, frequencies or some other selection of data.
Making the AOFlagger algorithm aware of invalid data is one of the changes that was required for Apertif. The AOFlagger algorithm was originally designed to work on raw high-resolution single-subband LOFAR data. It rarely happens that such a span of data is partially invalid, and the initial AOFlagger algorithms therefore do not take invalid data into account. In the case of Apertif, the full bandwidth is offered to AOFlagger, and the loss or corruption of a single subband therefore causes gaps in the bandwidth. Being a different instrument, Apertif is also affected by different issues that may not affect LOFAR, such as shadowing. For these reasons, we have extended the AOFlagger algorithm to take invalid data into account. This requires changes to the SumThreshold and SIR-operator steps of the algorithm, which we will discuss in the next two sections.
### Extension of the SumThreshold algorithm
The SumThreshold algorithm is a combinatorial thresholding method that detects line-like structures in the time-frequency data (Offringa et al. 2010a). This method is effective for the detection of RFI, because most RFI raises the amplitude of consecutive time or frequency samples. The method iteratively thresholds the average over an increasing number of neighbouring samples with a decreasing threshold. With \(i\) the zero-indexed iteration number, \(M_{i}\) the number of samples, \(\chi_{i}\) the threshold and \(\rho\) a constant normally chosen to be 1.5,
\[M_{i} = 2^{i} \tag{1}\] \[\chi_{i} = \chi_{0}\,\rho^{-\log_{2}M_{i}}. \tag{2}\]
\(\chi_{0}\) is a user parameter that controls the total sensitivity of the method. The various default AOFlagger algorithms use values of \(\chi_{0}=6\dots 8.5\,\sigma\). The mode of the noise \(\sigma\) is determined from the data that is (at that point in the detection) determined to be RFI free, and is estimated by calculating the truncated mode of the RFI-free data, skipping 20% of the outlier values (the 10% minimum and maximum values), thereby assuming that the inner 80% follow a Rayleigh distribution. Assuming that the contribution of the noise is Gaussian distributed in the real and imaginary components of the visibilities, this results in a stable estimate of its standard deviation (Fridman 2008).
A single iteration consists of thresholding all sequences of size \(M_{i}\) in both the time and the frequency direction (unless \(M_{i}=1\)), possibly with different thresholds for the two dimensions, to separately control the sensitivity towards spectral-line RFI and transient broadband RFI. Typically, a total of 9 of these iterations are performed, giving a maximum size of \(M_{8}=256\). A sample that is flagged in an earlier iteration or direction is (temporarily) replaced by the mean of the non-flagged samples in the sequence. The following description demonstrates the first three iterations, using \(\chi_{0}=6\) and \(\rho=1.5\):
1. Flag samples with an absolute value \(\geq 6\sigma\).
2. (a) Flag every sequence of 2 consecutive samples in time with an absolute average \(\geq 4\sigma\) (because \(\chi_{2}=6\sigma\times 1.5^{-\log_{2}2}=4\sigma\)). (b) Flag every sequence of 2 consecutive samples in frequency with an absolute average \(\geq 4\sigma\).
3. Repeat 2.(a) and (b) with 4 samples and a threshold of \(\chi_{4}=6\sigma\times 1.5^{-\log_{2}4}=2\frac{2}{3}\sigma\).
Figure 1: The default AOFlagger strategy for RFI detection (before modifications for Apertif). These steps are independently performed on smaller subsets of the data. The input data of one independent run through these steps typically consists of approximately an hour of correlations from a single pair of antennas and a single beam, with the full bandwidth and all four linearly polarized cross-correlations present.
Subsequent iterations will threshold sequences of 8, 16, 32, \(\dots\) samples with an average above \(\chi_{8}\approx 1.8\sigma\), \(\chi_{16}\approx 1.2\sigma\), etc.
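A direct, unoptimised Python sketch of this iteration scheme is given below. The production AOFlagger implementation uses an equivalent sliding-window formulation, and the treatment of already-flagged samples is simplified here by excluding them from the window average.

```python
import numpy as np

def sumthreshold_1d(values, flags, M, chi):
    """Flag every window of M consecutive samples whose mean absolute value,
    taken over not-yet-flagged samples, exceeds chi."""
    out = flags.copy()
    for start in range(len(values) - M + 1):
        window = slice(start, start + M)
        keep = ~flags[window]
        if keep.any() and np.mean(np.abs(values[window][keep])) > chi:
            out[window] = True
    return out

def sumthreshold_2d(data, chi0=6.0, rho=1.5, n_iter=9):
    """Apply the pass with window sizes 1, 2, 4, ... and thresholds
    chi0 * rho**(-log2(M)); data is a (time, channel) amplitude array."""
    flags = np.zeros(data.shape, dtype=bool)
    for i in range(n_iter):
        M = 2 ** i
        chi = chi0 * rho ** (-np.log2(M))      # chi = chi0 for M = 1
        for ch in range(data.shape[1]):        # sequences in the time direction
            flags[:, ch] = sumthreshold_1d(data[:, ch], flags[:, ch], M, chi)
        if M > 1:                              # for M = 1, one direction suffices
            for t in range(data.shape[0]):     # sequences in the frequency direction
                flags[t, :] = sumthreshold_1d(data[t, :], flags[t, :], M, chi)
    return flags
```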
In the form described by Offringa et al. (2010a), pre-existing classification of invalid data is not taken into account in the SumThreshold method. An example of such a case is shown in Fig. 2, which considers a simulated observation with 200 timesteps and 100 channels. The observation contains spectral-line interference that affects one channel out of every ten channels and increases in power towards higher frequencies. Timesteps 50--100 are known to be invalid data, and are set to high values by raising them by 10 times the standard deviation.
The second row of Fig. 2 zooms in on time indices 30-60. The first image of the second row shows the result of a basic application of SumThreshold. For this result, the knowledge that some data was invalid is not used. As a result, the invalid data is considered to be RFI, and samples before and after the block of invalid data are flagged with an increased sensitivity. Consequently, the false-positive rate is clearly increased.
A simple approach to mitigate this is to consider invalid values to be zero when applying the SumThreshold method. This results in the plot shown in the middle of the second row of Fig. 2. This result does not show increased false positives because of the invalid data. With this approach, information about flagged samples on either side (before/after) of the missing data does not (significantly) aid detection, because the invalid data is considered to be zero, and this lowers the average absolute sum in the iterations of the SumThreshold method that consider longer consecutive ranges. This results in a higher false-negative rate than would theoretically be possible if the information on both sides of the invalid data had been used together. In particular, the faintest interfering line at channel index 5 is no longer detected.
While the loss in accuracy is minimal, there is a simple method to aid the detection of interference on one side of the block of invalid data with information from the other block: by completely skipping data in the summed direction (time or frequency). In other words, samples that are directly before and after a block of invalid data are treated as if they are consecutive. The result of this is shown in the third column of Fig. 2, which indeed shows a lower false-negative rate. In particular, the faintest spectral line at channel 5 is now fully detected.
When comparing these two approaches to deal with invalid data, the approach to exclude the invalid data leads to a small increase in false-positive detections when the RFI is not consistently present in time or frequency, i.e. when it is present on one side of the invalid data block and not present on the other side. This should be weighed against the increased sensitivity when the RFI is consistently present. The optimal choice therefore depends on the behaviour of the RFI. Because persistent RFI is common, and because it is more important to avoid false negatives in persistent RFI (which might negatively affect later processing steps) than to avoid false positives in transient RFI (which would lead to a small loss of data), we use the method of excluding invalid data in our Apertif strategy.
We have implemented this in two ways: i) stack all valid data into a temporary storage, run the normal sumthreshold algorithm on these data and reverse the stacking operation on the resulting mask; and ii) skip over the invalid data inside the sumthreshold method. We have timed these two implementations on simulated complex Gaussian data with 10,000 timesteps \(\times\) 256 channels. Each run is repeated 100 times. The first implementation runs about 2.5\(\times\) faster (0.18 s per data set) than the second implementation (0.45 s per data set). The first method is still 6\(\times\) slower than the regular algorithm
Figure 2: Three methods of handling invalid data in the sumthreshold step. The top image shows the simulated input data, which consists of Gaussian complex noise, spectral line RFI every 10 channels that increases in strength in frequency direction, and a block of invalid data (time indices 50–100), simulating e.g. a temporary correlator failure. The bottom images show a zoom in on the left edge of the invalid data. Flagged data is marked in yellow. Bottom-left: normal sumthreshold without using knowledge of the invalid data; bottom-centre: invalid samples are set to zero before sumthreshold; bottom-right: invalid samples are removed before sumthreshold.
(which takes 0.03 s per data set). This can be explained by the extra copying of data that is required in each iteration (both for the time and for the frequency direction).
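For a single one-dimensional sequence, the first implementation corresponds to the following sketch (our own code; `detect` stands for any one-dimensional detector, e.g. the sumthreshold sketch above): invalid samples are compressed out so that the detector sees the remaining samples as consecutive, and the resulting flags are scattered back to their original positions.

```python
import numpy as np

def flag_with_invalid_excluded(values, invalid, detect):
    """Implementation (i): remove invalid samples, run the detector on the
    remaining (now consecutive) samples, and scatter the flags back."""
    valid = ~invalid
    flags = np.zeros(values.shape, dtype=bool)
    flags[valid] = detect(values[valid])
    return flags
```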
### Extension of the scale-invariant rank operator
The SIR-operator is a morphological operation that is used in AOFlagger to extend the detected RFI mask in the time and frequency direction. It is an effective step to follow threshold-based methods to detect faint RFI based on the morphology of detected flags (Offringa et al. 2012; Van de Gronde et al. 2016). It is scale invariant, which implies that the fractional increase in flags in one dimension is constant, i.e., independent of the scale of that feature in that dimension.
The SIR-operator is essentially a one-dimensional operator that can be applied to a sequence of flag values. To apply it to radio interferometric data, Offringa et al. (2012) apply the operator in both the time and frequency dimensions: in time it is separately applied to all the channels, and in frequency it is applied separately to all timesteps. The union of these two steps is taken as the result.
Assume that \(X\) is a single sequence of flag values, such that \(X[i]\) holds a Boolean value that represents the state of the flag. The output \(\rho(X)\) of the SIR-operator applied to \(X\) is defined as the union of all subsequences of the input \(X\), for which
\[\#_{\mathcal{F}}^{i,j}\geq(1-\eta)\left(j-i\right). \tag{3}\]
Here, \(\#_{\mathcal{F}}^{i,j}\) is shorthand for \(\#_{\mathcal{F}}(X[i:j])\), where \(\#_{\mathcal{F}}\) is the count-operator that returns the number of flagged samples in a sequence. \(X[i:j]\) is the subsequence of samples consisting of all elements \(X[k]\) for \(i\leq k<j\), and \(\eta\in[0,1]\) is a tunable parameter that sets the aggressiveness of the operator.
Eq. (3) implies that a sequence of flags caused by invalid data is extended on both sides by a ratio of \(\eta\). An example of this is given in the centre-left panel of Fig. 3. This behaviour is undesirable because, unlike most RFI signals, invalid data typically has a sharp boundary and should be flagged as such. The extension of flags around invalid data causes a high number of false positives.
A simple solution is to count invalid data as unflagged data in the SIR operator. This implies that Eq. (3) is modified so that the count operator only counts the number of flags corresponding to valid data:
\[\#_{\mathcal{F}\mathcal{V}}^{i,j}\geq(1-\eta)\left(j-i\right), \tag{4}\]
where \(\#_{\mathcal{F}\mathcal{V}}\) is the number of valid samples that are flagged in the interval \(X[i:j]\) (as opposed to \(\#_{\mathcal{F}}\), which counts flagged values that can be either valid or invalid). Because the right side is unchanged and the left side remains equal or is decreased compared to Eq. (3), this modification always flags an equal or smaller number of samples. An application of this approach is demonstrated in the centre-right panel of Fig. 3. This approach remedies the extension of flags around invalid data.
The downside of the approach of Eq. (4) is that a continuous transmitter is assumed not to be present in the invalid data range, causing samples on either side to have a decreased probability of getting flagged. For example, in case a correlator fails for a minute during which a transmitter remains present in one channel with decreasing power, the transmitter is less likely to be flagged after the correlator failure. To address this, we further modify Eq. (3) to:
\[\#_{\mathcal{F}}^{i,j}\geq(1-\eta)\,\#_{\mathcal{V}}^{i,j}, \tag{5}\]
where \(\#_{\mathcal{V}}^{i,j}\) is the number of valid (flagged or unflagged) samples in interval \(X[i:j]\). This approach is effectively the same as removing the invalid samples from the sequence before applying Eq. (3). Therefore, a transmitter that gets interrupted by invalid data receives a higher probability to get flagged. An example of this approach is given in the bottom-left panel of Fig. 3. Invalid samples are skipped in this approach, and flagged samples on one side of a sequence of invalid samples may increase the probability that samples on the other side of the sequence are flagged, regardless of the size of the invalid sample sequence.
The approach of Eq. (5) can overstep its goal of using information from before and after a sequence of invalid data, in particular in the case of very long sequences of invalid samples. For example, when considering a transmitter that is active for one minute before the receiving antenna is shadowed for 6 hours (causing invalid data), it is undesirable that samples after shadowing receive higher detection probability because of what happened 6 hours ago. A final modification to the SIR operator we consider is therefore to introduce a penalty parameter \(\rho\) that can balance between Eqs. (4) and (5):
\[\#_{\mathcal{F}}^{i,j}\geq(1-\eta)\left((j-i)\rho+\,\#_{\mathcal{V}}^{i,j}(1- \rho)\right). \tag{6}\]
With \(\rho=0\), invalid samples are skipped, making the method equal to Eq. (5) and with \(\rho=1\), invalid samples are counted as unflagged samples, making the method equal to Eq. (4). A value of \(\rho=0.2\) implies that five invalid samples count as one unflagged sample, thereby lowering the probability of flagging through a block of invalid data, but still transferring some of the flag information from before to after the invalid data and vice versa. This method is demonstrated with a setting of \(\rho=0.1\) in the bottom-right panel of Fig. 3.
Considering the results of all approaches in Fig. 3, it is clearly undesirable to extend the flags around invalid data as is done by the traditional SIR-operator defined in Eq. (3). Any of the three variations of the algorithm (Eqs. (4), (5) and (6)), which can be described by choosing different \(\rho\)-values in Eq. (6), solves this issue. The different values of \(\rho\) do not cause significant changes. We have tested values of \(\rho\) on a few observations, some with artificially added invalid data, and visually compared the flagging results. Based on these results and the arguments given earlier about finding a balance between Eqs. (4) and (5), we use \(\rho=0.1\).
Introducing the parameter for invalid-data weighting \(\rho\) has no significant effect on the speed of the algorithm. The original algorithm can be implemented with a computational complexity of \(\mathcal{O}(N)\)(Offringa et al. 2012), and the same holds for the algorithm that includes the invalid-data penalty parameter.
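To make the modified criterion concrete, the sketch below evaluates Eq. (6) for every sample of a one-dimensional flag sequence in a single linear pass, by turning the interval condition into a comparison of running extrema of prefix sums; this is consistent with the \(\mathcal{O}(N)\) scaling stated above. It follows Eq. (6) literally, with \(\#_{\mathcal{F}}\) taken as the raw count of flagged samples in the interval, and it is our own illustrative code, not the AOFlagger implementation.

```python
import numpy as np

def sir_operator_penalized(flags, invalid, eta, rho):
    """One-dimensional SIR operator with the invalid-data penalty of Eq. (6):
    a sample is flagged in the output if it lies in any interval [i, j) with
    #F >= (1 - eta) * ((j - i) * rho + #V * (1 - rho))."""
    f = flags.astype(float)
    v = (~invalid).astype(float)
    # Per-sample weight: an interval satisfies Eq. (6) iff its summed weight is >= 0.
    w = f - (1.0 - eta) * (rho + v * (1.0 - rho))
    p = np.concatenate(([0.0], np.cumsum(w)))               # prefix sums P[0..N]
    min_before = np.minimum.accumulate(p)[:-1]               # min over i <= m of P[i]
    max_after = np.maximum.accumulate(p[::-1])[::-1][1:]     # max over j > m of P[j]
    return max_after >= min_before
```

When all samples are valid, the per-sample weight reduces to that of the standard SIR criterion of Eq. (3), independently of \(\rho\).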
### High-pass filtering
The high-pass filter that is applied to remove the astronomical source contribution before thresholding is, for computational reasons, implemented by applying a Gaussian low-pass filter and subtracting the low-pass filtered result from the input. The high frequency resolution of Apertif makes it necessary to use a large filtering kernel in the frequency direction. Effectively, a kernel with a Gaussian sigma of 875 channels and 2.5 timesteps is used. Before filtering, the data is averaged in the frequency direction by a factor of 175, and after low-pass filtering, the result is upscaled to the original resolution using nearest neighbour resampling. This allows a convolution with a much smaller kernel, improving the speed of this operation. The result is an approximation of a Gaussian high-pass filter, but for the purpose of removing the sky signal, this is sufficiently accurate.
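A compact sketch of this approximation, using the kernel sizes and averaging factor quoted above, could look as follows (our own code, assuming a real-valued amplitude array whose channel count is a multiple of the averaging factor):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def approximate_highpass(amplitudes, sigma_time=2.5, sigma_freq=875.0, freq_avg=175):
    """Average by `freq_avg` in frequency, Gaussian low-pass filter at the reduced
    resolution, upsample with nearest-neighbour resampling, and subtract the smooth
    background.  `amplitudes` has shape (n_times, n_channels)."""
    n_time, n_chan = amplitudes.shape
    lowres = amplitudes.reshape(n_time, n_chan // freq_avg, freq_avg).mean(axis=2)
    smooth = gaussian_filter(lowres, sigma=(sigma_time, sigma_freq / freq_avg))
    background = np.repeat(smooth, freq_avg, axis=1)   # nearest-neighbour upsample
    return amplitudes - background
```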
### Bandpass correction
In the Apercal Apertif processing pipeline, the entire bandwidth of Apertif is used at once during RFI detection. This is different from the original LOFAR strategy, which flagged small (200 kHz) subbands independently. Using the entire bandwidth has the benefit that broadband RFI that covers several sub-bands can be detected. This is relevant for Apertif observations, which are affected by broadband transmitting satellites and radar.
Because the bandwidth of Apertif is subdivided into subbands using a poly-phase filter bank, the band shape of the poly-phase filter is imprinted on the data. An example of this is shown in the top-left panel of Fig. 4. This is corrected for during calibration, but during flagging (which needs to be done before calibration) the shape is still present.
Performing detection using the entire bandwidth but without correcting for the poly-phase filter bank causes sub-band edge channels to be flagged, because the edges cause sharp transitions that trigger the detector. Moreover, the deviations in the data caused by the band-edges decrease the sensitivity of the detection toward actual RFI. The top-right panel of Fig. 4 shows an example of flagging without bandpass correction.
To remedy this, we implement a sub-band band-pass correction step in the detector. This step corrects the poly-phase filter shape using a static, observation-independent correction. We determine the shape by performing gain-calibration on a clean region of the band, and average the solutions over the subbands. The bottom-left panel of Fig. 4 shows the resulting corrected data set, and the bottom-right panel of Fig. 4 shows the result of flagging the band-pass-corrected data. As can be seen, the band-pass
Figure 3: Different ways of handling invalid data in the SIR-operator step on a simulated data set with a Gaussian burst of interference in a few channels. Purple marks invalid data, yellow is detected as interference. The SIR-operator operates on the flag mask, hence the visibility values are not used. Top: input data. Centre-left: Invalid data is counted as flagged data. Centre-right: Invalid data is counted as unflagged data. Bottom-left: Invalid data is removed before applying the SIR-operator. Bottom-right: Invalid data is penalized with \(\rho=0.1\).
correction has decreased the number of false detections considerably. Some edge channels are still flagged, even after correction. This is caused by aliasing in the sub-band edge channels, which changes the statistics of those edge channels slightly. This can lead to artefacts which are very similar to RFI, hence they are occasionally flagged. This flagging is normally of limited concern, because those sub-band edge channels that are flagged are of lower quality. Because of this, they are often discarded during imaging.
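Conceptually, the correction amounts to dividing each subband by a single static gain template, tiled across the full band; a minimal sketch (our own code; the template values would come from the gain calibration described above):

```python
import numpy as np

def apply_subband_bandpass(data, template):
    """Divide out a static band-pass shape.  `data` has shape (n_times, n_channels);
    `template` holds one gain value per channel within a subband and is tiled over
    all subbands (n_channels must be a multiple of len(template))."""
    n_sub = data.shape[1] // len(template)
    return data / np.tile(template, n_sub)
```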
### Flagging of auto-correlations
Given the output voltages of the two feeds of the same antenna, \(\mathbf{e}=\left(e_{x},e_{y}\right)\), auto-correlated visibilities are formed by taking the products of the elements of \(\mathbf{e}\) with their complex conjugates (i.e., the outer product \(\mathbf{e}\otimes\mathbf{e}^{*}\)) and integrating, resulting in **XX**, **XY**, **YX** and **YY** visibilities. While auto-correlations are not often used for scientific data products, they are useful for system monitoring and quantifying the system noise. For such analyses, it is desirable to flag RFI.
Compared to cross-correlated visibilities, auto-correlated visibilities have different properties: in the **XX** and **YY** correlations, system noise and RFI will not decorrelate, and auto-correlated visibilities are sensitive to the global sky signal instead of fluctuations in the sky signal.
An example of an auto-correlation dynamic spectrum from Apertif is shown in the top image of Fig. 5 (after sub-band band-pass correction as described in Sec. 2.6). Compared to cross-correlations such as shown in Fig. 4, the dynamic spectrum of auto-correlated visibilities appears much smoother, is systematically offset from zero and contains stronger structure in the frequency direction.
The flagging strategy that was optimized for the cross-correlations detects RFI by comparing high-pass filtered amplitudes of visibilities to the variance of these amplitudes. Because the amplitude variance is much lower compared to cross-correlations, this results in flagging auto-correlations with increased sensitivity. At the same time, the auto-correlations contain stronger instrumental frequency-structure. These two effects combined cause the cross-correlation flagging strategy to flag all of the visibilities of the auto-correlations of Fig. 5.
To solve this, we use a different flagging configuration for the auto-correlations. The difference with the cross-correlation strategy is as follows:
* The time-direction sumthreshold step (sensitive to consistently high values in the time direction, e.g. band-pass structure) is reduced in sensitivity by a factor of 6.
* The frequency-direction sumthreshold step (sensitive to consistently high values in the frequency direction, e.g. broad-band RFI) is reduced in sensitivity by a factor of 2.
Figure 4: Static sub-band band-pass correction before flagging with the Apertif flagging strategy. Top-left: input before correction; top-right: flagged without correction; bottom-left: input after correction; bottom-right: flagged with correction.
* The size of the high-pass filter kernel is reduced by a factor of 3.5 in the frequency direction, to filter out more of the spectral gain fluctuations of the instrument.
* The number of iterations is increased from 3 to 5. This increases the required computations but improves robustness in the presence of a large dynamic range, as is the case for auto-correlations.
* Only the **XX**, **XY** and **YY** correlations are used for detection, to reduce unnecessary computations. **YX** correlations are equal to the conjugated **XY** correlations, and using these for flagging does not provide additional information.
A result of this auto-correlation strategy is shown in the bottom image of Fig. 5. Visual inspection shows that all visible RFI is indeed detected, and the number of false detections appears low. Because we do not have a ground truth, we do not try to quantify these results. Similar to the cross-correlation strategy, the auto-correlation strategy flags parts of the sub-band edges. The centre image of Fig. 5 shows the high-pass filtered data of the final iteration.
### Avoiding HI removal
In observations that cover bright nearby galaxies or the Galactic plane, the 1420 MHz HI line may be detectable in the visibilities from a single cross-correlated baseline. For example, the top-left image of Fig. 6 shows one baseline from a M31 observation, which clearly shows a contribution from HI-emission around 1420 MHz. This poses a challenge for RFI detection, because such a fine, spectrally-consistent signal is quite similar to RFI. As shown in the top-right image of Fig. 6, when standard flagging is performed on these data, the HI emission is detected as RFI.
We analyze different ways to mitigate this. In the Netherlands, frequencies between 1400-1427 MHz are reserved for radio astronomy and other forms of passive research1, and transmitting inside this band is not allowed. As a result, these frequencies are almost free of man-made emission. A simple mitigation strategy is therefore to disable RFI detection inside this band. Unfortunately, the recorded visibilities do occasionally contain strong, non-astronomical values inside this band. The three vertical lines in the images of Fig. 6 are an example of such an observation. Most frequently, these are caused by saturation of a receiver, causing a broadband-like signal in the recorded visibilities, although they might occasionally be caused by RFI emitted at these frequencies (e.g. from a sparking device or lightning). Leaving these broadband contaminants in the data causes degradation of the images. In particular, they cause visible stripes in continuum, full bandwidth images.
Footnote 1: The Dutch spectrum allocations can be found at [https://www.agentschaptelecom.nl/](https://www.agentschaptelecom.nl/)
Another approach is to flag only based on Stokes Q, U and V. Man-made RFI is often polarized, whereas the sky emission in these polarizations is generally much fainter. The result of this approach is shown in the bottom-left image of Fig. 6. While a part of the HI emission has been left intact, it is still bright enough in these polarizations to get detected. This is even the case when flagging on only one of these polarizations: the HI emission is present in all of the polarizations. Moreover, we occasionally observe RFI that is only visible in Stokes I, and removing any of the polarizations decreases the effectiveness of RFI detection. In Fig. 6, the transmitter around 1425 MHz / 0:00 UTC is for example not as well detected in this approach compared to standard flagging.
Because none of these approaches give good results, we consider another approach, and run the flagger twice: in run A) we flag the data with the normal detection strategy, and in run B) we run the detection with a strategy that is insensitive to spectral lines. For frequencies outside the HI range we use the flags from run A), and inside the HI range (1418-1424 MHz) we use B). The result of this approach is shown in the bottom-right image of Fig. 6. With this approach, broadband structures have been detected as RFI and HI emission is left in the data.
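The combination of the two runs then reduces to a frequency-dependent selection between the two flag masks, as in the following sketch (our own code; the 1418-1424 MHz window follows the text above):

```python
import numpy as np

def combine_flag_runs(flags_normal, flags_line_safe, freqs_mhz, lo=1418.0, hi=1424.0):
    """Use the line-insensitive flags from run B) inside the HI window and the flags
    from run A) everywhere else.  Flag arrays have shape (n_times, n_channels);
    `freqs_mhz` gives the channel frequencies."""
    in_hi_window = (freqs_mhz >= lo) & (freqs_mhz <= hi)
    return np.where(in_hi_window[None, :], flags_line_safe, flags_normal)
```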
To avoid flagging spectral lines in run B), we adjust the following flagging settings during this run:
Figure 5: Flagging of auto-correlations. Top image: input after sub-band band-pass correction; centre image: same after iterative high-pass filtering and with 10x more sensitive colour scale; bottom image: after flagging with the auto-correlation specific strategy. Because auto-correlations have different properties compared to cross-correlations, they require a specialized flagging strategy.
* The high-pass filter in frequency direction is set to have a kernel size of one channel, to filter out fluctuations in frequency.
* The sensitivity of the time-direction sumthreshold step is decreased by a factor of 4, to reduce flagging of line-like structures.
* The sensitivity of the frequency-direction sumthreshold step is decreased by a factor of 2. This reduces flagging of temporal fringes in HI emission.
* The number of iterations is increased to remain robust in the presence of strong HI emission.
Overall, the resulting strategy is almost entirely insensitive to spectral-line-like structures. The sensitivity to broadband structures will also be reduced because of these changes, but given that this strategy remains sensitive to faint broadband structures such as shown in Fig. 6, we consider this tolerable.
Because run B) requires only a small part of the full bandwidth, the second flagging run is relatively fast, hence the increase in computations caused by this is modest (about 20%).
### Reading overhead and memory considerations
During the AOFlagger stage of the Apercal pipeline, observations are stored in the Casacore Measurement Set format. In this format, the data of an observation is lexicographically sorted in time, and then in baseline and frequency. While this ordering is suitable for calibration, flagging requires the data baseline by baseline. Unfortunately, the data for a single baseline is spread throughout the file. Therefore, reading a baseline requires reading the file from beginning to end. Because of the block size and caching of storage media, it is inefficient to read the baselines one by one with this approach.
AOFlagger supports three methods for accessing the data:
* Direct reading. In this mode, the data is directly read from the measurement set just before they are needed. Because multiple baselines are processed in parallel using multi-threading, a few baselines are read from the measurement set at once. This mode results in scanning through the input data multiple times, which is computationally costly.
* Reorder before processing. In this mode, the whole measurement set is reordered by baseline, frequency and then time and rewritten to disk in a binary, internal format before processing is started. This results in reading the data only twice and is generally faster than the direct reading mode, but requires disk space to store the copy of the data.
* In-memory data. In this mode, the whole measurement set is read into memory before starting processing. This results in reading the data only once and is generally the fastest mode, but requires a considerable amount of memory.
Apertif data sets are large and expensive to read: reading the data more than once is undesirable. As a result, the only acceptable reading mode is the in-memory mode. In the particular computing environment in which Apercal runs, the amount of memory required by this mode is a considerable constraint, and requires a dedicated node for each flagging operation performed.
Other observatories have solved this issue by integrating AOFlagger into a multi-step preprocessing pipeline that streams through the data, splits the data in time for flagging and hands these data over part by part to AOFlagger via its application programming interface. Examples of such pipelines are Cotter (Offringa et al. 2015) and DP3 (Van Diepen et al. 2018), which are preprocessing pipelines for the Murchison Widefield Array and the Low-Frequency Array, respectively. In this approach, several tasks (e.g. conversion, phase rotation, flagging, averaging, compression) can be applied with a single read through the data, thereby reducing the read overhead. In the case of Apertif, such
Figure 6: Band-pass corrected M31 data from WSRT RT9 \(\times\) RTA with a strong HI signal. Top-left image: input data. The bright emission around 1420 MHz is from HI and should not be flagged. The vertical lines are instrument or RFI artefacts that should be flagged. Top-right image: after RFI detection without HI modifications, showing in pink what is flagged. Bottom-left image: after RFI detection using Stokes Q, U and V. Bottom-right image: after RFI detection using a specialized strategy for 1418-1424 MHz.
a streaming pipeline does not exist. Instead, aoflagger runs as a stand-alone tool inside Apercal.
To solve the memory and reading issue for Apertif, we implemented a time-chunking approach in AOFlagger. In this mode, AOFlagger reads small chunks in time and flags these independently. This makes it possible to use the memory reading mode, because the data for individual chunks is small enough to fit in memory. It does imply that the algorithm has less information available to do its RFI detection. Therefore, it is important to let time chunks still have a significant size, because AOFlagger would otherwise not be able to find faint RFI that is persistent in time but not detectable in a small chunk. For Apertif, we use a chunk size corresponding to about half an hour of data.
### Use of Lua
Before AOFlagger version 3, AOFlagger strategies were written in the extensible markup language (XML). An XML file specifies a sequence of steps and is interpreted by AOFlagger, and this sequence is executed separately for the data from every baseline. The sequences run multi-threaded, and reading and writing of data is done outside of the strategy. Examples of XML steps are to calculate visibility amplitudes; to run sumthreshold or SIR operations on the data; or to combine the flags of all polarizations.
Over the years, the use of AOFlagger extended to more and more use-cases: different telescopes, flagging after calibration, high-resolution flagging, etc. It became desirable to make the strategies more flexible. In particular, it became desirable to support standard scripting structures such as loops, conditionals and variables, and to provide standardized documentation of the steps. The idea was therefore formed to embed a standard interpreter into AOFlagger and provide a function interface for each step. The data-intensive computations are still performed by high-performance precompiled C++ code, while these are glued together using an interpreted script, thereby combining flexibility with high performance.
Our first approach was to embed the Python interpreter, because of the popularity of Python in astronomical data science. After having implemented a prototype that embeds the Python interpreter into AOFlagger, it turned out that some features of the Python interpreter conflict with how AOFlagger runs these scripts. Particular challenges were to deal with the global interpreter lock; memory management; and fast restarts of the interpreter. While there are various ways to work around these issues, the design goals of the Python language and interpreters do not focus specifically on making the language embeddable.
Lua2 is a scripting language that is widely used for embedding scripts in applications, notably in computer games to implement scripted game sequences. This scenario is close to the AOFlagger use-case: the interpreter is integrated into such games, called many times and supports multi-threaded script execution. Algorithmic code that requires high performance can be implemented in compiled languages (C++ in the AOFlagger case). With this idea in mind, we decided to integrate the Lua interpreter into AOFlagger and implement all steps as Lua functions.
Footnote 2: [https://www.lua.org/](https://www.lua.org/)
The use of a full scripting language has increased the possibilities inside the flagging strategies considerably. For example, it is now possible to adapt the strategy based on properties such as the baseline length, frequency, auto- or cross-correlation, etc. A consequence of the new interface is that existing strategies need to be rewritten, which can not be done automatically. All default strategies have been rewritten to use Lua, which currently includes specialized scripts for 11 observatories (Aartfaac, Apertif, Arecibo, ATCA, Bighorns, JVLA, MWA, WSRT, LOFAR, NenuFAR). These have all been verified to produce the same result as the old XML-based strategies. Because the new function interface gives better control over what steps need to be run, the speed of the new strategies is slightly higher (several percent). We do not notice any significant overhead from using Lua: the computational time is dominated by the computations inside the function calls.
## 3 Results
Apertif observations are processed by the automated Apercal pipeline. This pipeline includes the flagging strategy as described in Sec. 2. In this section, we present results of the full flagging step on Apertif observations. The data that we look at has been recorded between 2019 and 2022. Science products from the first year of observing have been described in the first Apertif data release (Adams et al. in press; Kutkin et al. in press).
### RFI detection examples
The detection strategy described in Sec. 2 runs fully automated, and does not require further flagging before calibration and continuum imaging. In general, manual inspection of data after RFI detection shows no residual RFI and few false positives. Fig. 7 shows the 1280-1430 MHz range of a typical observation. The top plot shows the data before RFI detection, and the bottom plot shows in white what has been detected as RFI. Fig. 8 shows a challenging case with wider bandwidth, with a moderate amount of RFI, missing data (1200-1220 MHz) and strong fringes. Top and bottom plots show again before and after detection. This also demonstrates the challenging situation for radio astronomical science between 1150 and 1300 MHz.
For continuum imaging, it is often useful (or at least pragmatic) to take out any visibility that appears to have a contribution from RFI. For spectral imaging, a flagging result such as shown in Fig. 8 is problematic, because many channels are fully removed. In those cases, it is possible to reduce the sensitivity of the RFI detection. The sensitivity is specified as a variable in the script. For the detection result shown in Fig. 9, the sensitivity was decreased by a factor of 3. Compared with the result in Fig. 8, this reduced the flagging from 49% to 33%. This takes out the strongest RFI, but leaves weak (but visible) RFI in the data. Decreasing the sensitivity further continues to trade the availability of visibilities against a lower quality of those visibilities.
### RFI characteristics and long-term statistics
During the flagging step, statistics are collected that summarize the (detected) RFI occupancy and data quality. We have collected these statistics for 304 of the currently processed observations. Averaged over all these observations and the full bandwidth, the total detected RFI occupancy is 11.1% in the cross-correlated baselines and 14.6% in auto-correlated baselines. Fig. 10 shows the detected spectral RFI occupancy for each observation, as well as the occupancy averaged over all observations. Only cross-correlated data is included. At most frequencies, the average loss of data due to RFI is about 10%, but with a spread of approximately 0-15% between observations, and a few larger outliers.
Frequencies between 1400 and 1427 MHz are reserved for radio astronomy. At these frequencies, the average RFI occupancy is slightly lower (approximately 8%), but is evidently still affected by instrumental effects (such as receiver saturation) or natural and unintended RFI (such as lightning). Fig. 6 shows data that is affected by such broadband artefacts. It is likely that the \(\sim\)10% base-level of occupancy is caused by such artefacts.
Some observations show a small excess RFI occupancy at 1420 MHz. This is caused by HI that is detected as RFI. The methods to avoid flagging HI that are described in Sec. 2.8 were implemented only halfway through 2021. Some of the observations that were flagged before then still show false-positive detections at HI frequencies, but all observations flagged after the HI-avoidance was implemented indeed show no HI flagging.
The same base level of 10% is not visible at frequencies above 1430 MHz. The reason for this difference is that only a relatively small number of observations cover frequencies above 1430 MHz. Frequencies between 1427 and 1492 MHz are allocated to various services, including mobile communication and fixed transmissions3. Some of these are satellite based. In 2020, the 1452--1492 MHz band was auctioned in the Netherlands and thereafter allocated for the use of 5G mobile phone downlink. As shown in Fig. 10, the use of data above 1430 MHz is limited.
Footnote 3: See [https://www.agentschaptelecom.nl/](https://www.agentschaptelecom.nl/)
Some channels between 1300-1400 MHz contain a few outlier RFI occupancies. These are caused by a nearby radar station that is occasionally turned on. Frequencies between 1130 and 1300 MHz are predominantly affected by RFI from Global Navigation Satellite Systems (GNSS), such as the US GPS, Russian GLONASS, Chinese BeiDou, and European Galileo satellite constellations. All these constellations use satellites in orbits at \(\sim\)20,000 km and with high orbital inclinations (\(i\) = 54-65\({}^{\circ}\)) to provide global coverage. Frequencies for wide band transmissions are assigned to, and shared between, these systems at 1176.45, 1191.795, 1207.14, 1227.6, 1278.75 MHz (for GPS, BeiDou, Galileo) and 1202.025 and 1242.9375-1251.6875 MHz (for GLONASS).
Wide band signals are detected at these frequencies throughout the entire observation of Fig. 8 covering the band down to 1130 MHz. Using orbital ephemerides of these satellite constellations, we find that the strong temporal RFI observed in Fig. 8 at 13:06, 14:46, 16:29, 18:13 and 19:54 UTC is caused by BeiDou satellites passing within 5\({}^{\circ}\) from the pointing of the Apertif compound beam. The pass of 18:13 UTC had a minimum separation of 0.31\({}^{\circ}\) and led to saturation of the receiver, affecting the entire observing band. Two GPS satellites passed at 1.47\({}^{\circ}\) and 2.30\({}^{\circ}\) separation from the beam pointing at 22:02 and 23:02 UTC, and one Galileo satellite at 3.72\({}^{\circ}\) at 22:59 UTC, and coincident increases of the RFI levels are observed, but not as strong as with the passes of BeiDou satellites. The GNSS signals observed away from these passes near the primary Apertif beam are likely due to far sidelobes or multi-path reflections of GNSS signals from the WSRT focus structure or other nearby structures directly into the receiver.
### Computational requirements
In this section we summarize the computational requirements of the Apertif RFI detection strategy, with the aim of making it possible to approximate the computational requirements for other telescopes when a similar flagging strategy is used. Since the total throughput depends on many complex factors of the computing platform (e.g. clock speed, cores, memory bandwidth, instruction set, vectorization), we aim at giving a first-order estimate only.
We measure the performance of flagging a set of visibilities from a single observation. We use an Apertif observation with 1346 timesteps, 24572 channels and 4 polarizations, for a total of 132M visibilities. This makes the visibility data, which consists of 4-byte single-precision real and imaginary values, 1.1 GB in size.
We perform our test on a desktop machine with an AMD Ryzen 7 2700X 8-Core processor and 64 GB of memory. This processor can perform hyper-threading, and thus we run 16 detections in parallel. We load the data in memory before detection and do not store the results, to avoid any disk access. Averaged over 10 runs, it takes 46 seconds to run 16 detections, which amounts to a throughput of 370 MB/s (or 46M visibilities/second). At the time of writing, a typical fast spinning disk achieves a sustained reading throughput of a few hundred MB/s. Hence, disk access can be a significant cost of a stand-alone RFI detection step. This can be problematic for supercomputers, because they have high computing power, but not a high I/O throughput.
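For reference, a single-detection benchmark of this kind can be scripted against AOFlagger's Python interface, which operates on in-memory data and thus avoids disk access entirely. The sketch below follows the structure of the documented Python bindings; the exact function names (`AOFlagger()`, `find_strategy_file()`, `make_image_set()`, `set_image_buffer()`, `Strategy.run()`), the generic telescope identifier and the random test data are assumptions to be checked against the installed AOFlagger version, and the array dimensions simply mirror the observation described above.

```python
import time
import numpy as np
import aoflagger  # Python bindings shipped with AOFlagger 3

ntimes, nchan, nimages = 1346, 24572, 8   # 8 = 4 polarizations x (real, imaginary)
flagger = aoflagger.AOFlagger()
strategy = flagger.load_strategy_file(
    flagger.find_strategy_file(aoflagger.TelescopeId.Generic))

data = flagger.make_image_set(ntimes, nchan, nimages)
for img in range(nimages):
    data.set_image_buffer(img, np.random.normal(0, 1, [nchan, ntimes]))

start = time.perf_counter()
flags = strategy.run(data)                # detection on in-memory data only
elapsed = time.perf_counter() - start
nbytes = ntimes * nchan * 4 * 8           # 4 polarizations x 8 bytes per visibility
print(f"{nbytes / elapsed / 1e6:.0f} MB/s, {flags.get_buffer().mean():.1%} flagged")
```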
### Comparison against a machine learning approach
Some studies have found that machine learning can improve the accuracy of RFI detection. In Yang et al. (2020), the authors test their own sumthreshold implementation against a machine learning approach, using a ground truth flag mask that is manually determined by an engineer. Such a ground truth mask is difficult to make in general, including for Apertif data, where broadband RFI tapers off and it is unclear beyond which point samples are truly unaffected by RFI. We can however conclude that, after our pipeline, all visibly affected samples have been identified. Moreover, imaging results have achieved the thermal noise of the instrument, thereby indicating that the accuracy of interference detection is not a limitation.
This conflicts somewhat with the conclusions made by Yang et al. (2020). The sumthreshold implementation that is used there for comparison does not achieve the published accuracy of AOFlagger, because residual interference is visually present. Potential explanations for these differences could be i) that Yang et al. train their network for a specific scenario but did not optimize their sumthreshold approach; or ii) that they do not use a full (i.e. AOFlagger-like) sumthreshold-based pipeline that includes the SIR operation and that is similarly optimized for their instrument. An important consideration is that morphological operations are aimed at detecting RFI that is below the noise, therefore invisible to scientists that manually classify RFI. In the comparisons done in Yang et al. (2020), samples detected by the morphological operator would all be counted as false positives, whereas this operator has been shown to improve the final science results (Offringa et al. 2012). It can therefore not yet be stated that, based on accuracy, machine learning methods are outperforming traditional threshold-based methods. Rather, it is clear that both methods are competitive and are accurate enough to largely mitigate the problem of interference in radio data.
There are differences in the computational performance though. In Xiao et al. (2022), machine learning methods flag a one-hour FAST observation of 67 GB in 61% of the observing time using 8 computing nodes. This amounts to a single-node computational performance of 14 GB/hour. On the other hand, the single-node performance of the AOFlagger approach listed in Sec. 3.3 is 370 MB/s, or 1.3 TB/hour, and AOFlagger
is therefore almost two orders of magnitude faster. While the performance of the computing nodes used for the computational performance analyses may differ somewhat, and it is therefore not a direct comparison, it is evident that the AOFlagger approach is significantly faster. In Sun et al. (2022), the authors compare the run-time of AOFlagger to their convolutional neural network (CNN) approach and find that AOFlagger is two to four times faster. However, the authors measured the total runtime of the AOFlagger executable, which would include disk access, start-up overhead and time spent in the casacore library to transfer the measurement set data. Because the flagging speed is near the disk access speed, this overhead can be substantial. A better benchmark is possible by using the C++ or Python API of AOFlagger directly. On their Sim_RFI-1 dataset, they reach an AOFlagger speed of 250 GB/hour, while in this work, with a more advanced strategy, we reach 1.3 TB/hour on similar hardware. Their CNN method reaches a speed of 145 GB/hour, which is an order of magnitude faster than what is reached by Xiao et al. (2022), but is an order of magnitude below what we reach with our AOFlagger approach.
## 4 Discussion & conclusions
We have described and demonstrated an automated RFI detection strategy aimed at flagging Apertif data. Our detection strategy implements novel sumthreshold and SIR-operator algorithms that take prior information about invalid data into account. It also avoids the flagging of HI emission, works on auto-correlations, corrects the sub-band band-pass and contains some further parameter optimizations for Apertif. The change from the AOFlagger XML strategies towards fully scripted strategies provides flexibility that made these changes quite easy to implement and supports experimentation. Besides making the process easier and faster, an automated RFI detection strategy also makes the results reproducible, compared to when RFI is flagged manually, and it allows reducing the data size by averaging early on in the data reduction processing.
We expect that our RFI detection strategy will work for data from other instruments, in particular those with a frequency coverage comparable to Apertif, such as MeerKAT, ASKAP, JVLA and future SKA-mid observations around 1.0--1.5 GHz. Different bands might require some changes to the strategy parameters, but should be able to reuse a large part of the approach.
While machine learning techniques may compete with the accuracy of AOFlagger, they do not compete with its speed. Moreover, we have shown it is possible to add new features to AOFlagger, such as avoiding the 21-cm HI signal, accurate detection in the presence of invalid data and flagging of auto-correlations. None of the currently available machine learning techniques support these scenarios. Most parameters, such as the sensitivity towards broadband and line RFI, or the expected smoothness of the data, are intuitive and easy to tweak for science cases that e.g. require that transients do not get flagged, or that require a different balance between taking out all visible RFI on one hand, and keeping as much data available for further processing on the other hand. This will be challenging, if at all possible, to implement in a machine learning framework.
In this work, we have not made use of the multi-beaming capabilities of Apertif: beams are flagged independently. While some first-order testing indicates that using data integrated over all beams does not improve flagging accuracy, it can be expected that RFI does correlate somewhat over beams. A strategy where the integrated data is searched for RFI, and where this is used as additional input for the flagging of individual beams, might be effective for detecting RFI that is below the noise for a single beam.
###### Acknowledgements.
This work makes use of data from the Apertif system installed at the Westerbork Synthesis Radio Telescope owned by ASTRON. ASTRON, the Netherlands Institute for Radio Astronomy, is an institute of the Dutch Research Council (de Nederlandse Organisatie voor Wetenschappelijk Onderzoek, NWO). BA acknowledges funding from the German Science Foundation DFG, within the Collaborative Research Center SFB1491 "Cosmic Interacting Matters - From Source to Signal". EAKA is supported by the WISE research programme, which is financed by NWO. Mvdrh and KMH acknowledge funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement No. 291531 "HIStbinomN". JvL, YM and ICO acknowledge funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement No. 617199 "ALERT"; PI: JvL. KMH further acknowledges financial support from the State Agency for Research of the Spanish Ministry of Science, Innovation and Universities through the "Center of Excellence Severo Ochoa" awarded to the Instituto de Astrofísica de Andalucía (SEV-2017-0709) and from the coordination of the participation in SKA-SPAIN, funded by the Ministry of Science and Innovation (MICIN) and grant RTI2018-096228-B-C3 (MCIU/AEI/FEDER, UE). JvL further acknowledges funding from the Vici research programme "ARGO" with project number 639.043.815, financed by NWO. DV acknowledges support from the Netherlands eScience Center (NLeSC) under grant ASDL15.406.
|
2306.14876 | Quantum trajectories for time-local non-Lindblad master equations | For the efficient simulation of open quantum systems we often use quantum
jump trajectories given by pure states that evolve stochastically to unravel
the dynamics of the underlying master equation. In the Markovian regime, when
the dynamics is described by a Gorini-Kossakowski-Sudarshan-Lindblad (GKSL)
master equation, this procedure is known as Monte-Carlo wavefunction (MCWF)
approach. However, beyond ultraweak system-bath coupling, the dynamics of the
system is not described by an equation of GKSL type, but rather by the Redfield
equation, which can be brought into pseudo-Lindblad form. Here negative
dissipation strengths prohibit the conventional approach. To overcome this
problem, we propose a pseudo-Lindblad quantum trajectory (PLQT) unraveling. It
does not require an effective extension of the state space, like other
approaches, except for the addition of a single classical bit. We test the PLQT
for the eternal non-Markovian master equation for a single qubit and an
interacting Fermi Hubbard chain coupled to a thermal bath and discuss its
computational effort compared to solving the full master equation. | Tobias Becker, Ché Netzer, André Eckardt | 2023-06-26T17:45:36Z | http://arxiv.org/abs/2306.14876v3 | # Quantum trajectories for time-local non-Lindblad master equations
###### Abstract
For the efficient simulation of open quantum systems we often use quantum jump trajectories given by pure states that evolve stochastically to unravel the dynamics of the underlying master equation. In the Markovian regime, when the dynamics is described by a Lindblad master equation, this procedure is known as Monte Carlo wavefunction (MCWF) approach [1; 2]. However, beyond ultraweak system-bath coupling, the dynamics of the system is not described by an equation of Lindblad type, but rather by the Redfield equation, which can be brought into pseudo-Lindblad form. Here negative dissipation strengths prohibit the conventional approach. To overcome this problem, we propose a pseudo-Lindblad quantum trajectory (PLQT) unraveling. It does not require an effective extension of the state space, like other approaches, except for the addition of a single classical bit. We test the PLQT for the eternal non-Markovian master equation for a single qubit and an interacting Fermi Hubbard chain coupled to a thermal bath and discuss its computational effort compared to solving the full master equation.
_Introduction.-_ Away from thermodynamic equilibrium the properties of an open quantum system do not simply follow from the fundamental principles of statistical mechanics, but depend on the very details of the surrounding environment. This includes both transient dynamics, such as the operation of a quantum computer or the relaxation following a quantum quench, and non-equilibrium steady states. Therefore, it is crucial to find an effective equation of motion for the open system that accurately captures the impact of the environment. At the same time, and equally important, the theoretical description should allow for efficient numerical simulations. A powerful approach for the latter is provided by quantum trajectory simulations, where a stochastic process for the evolution of pure states is considered, the ensemble average of which describes the open system. Compared to the evolution of the full density operator (scaling quadratically with the state-space dimension \(D\)), these simulations require less memory, since pure states scale only linearly with \(D\). Moreover such unravelings can also directly describe stochastic processes of measured systems [3; 4; 5].
Quantum trajectory simulations are rather straightforward in the ultraweak-coupling limit, where the system-bath coupling is weak compared to the (quasi)energy level splitting in the system. Here the system is described by a master equation of GKSL (Gorini-Kossakowski-Sudarshan-Lindblad) form [6; 7] (\(\hbar=1\)),
\[\dot{\varrho}=-\mathrm{i}[H,\varrho]+\sum_{i}\gamma_{i}\left(L_{i}\varrho L_{i}^{ \dagger}-\frac{1}{2}\{L_{i}^{\dagger}L_{i},\varrho\}\right), \tag{1}\]
with the coherent evolution given by some Hamiltonian \(H\) and dissipation described by jump operators \(L_{i}\) with associated non-negative strengths \(\gamma_{i}\). Here, \(H\), \(\gamma_{i}\), and \(L_{i}\) can be time dependent.
From this equation, we can immediately obtain a stochastic process for the evolution of pure states, known as Monte-Carlo wavefunction (MCWF) approach [1; 2; 8; 9; 10; 11; 12; 13]. In each time step \(\delta t\), the state either evolves coherently according to \(\ket{\psi(t+\delta t)}\propto(1-\mathrm{i}\delta tH_{\mathrm{eff}}(t))\ket{ \psi(t)}\) with probability \(1-\sum_{i}r_{i}(t)\delta t\) and effective Hamiltonian
\[H_{\mathrm{eff}}(t)=H-\frac{\mathrm{i}}{2}\sum_{i}\gamma_{i}L_{i}^{\dagger}L _{i}, \tag{2}\]
or a quantum jump occurs, \(\ket{\psi(t+\delta t)}\propto L_{i}\ket{\psi(t)}\), with probability \(r_{i}(t)\delta t\), with jump rates \(r_{i}(t)=\gamma_{i}\bra{\psi(t)}L_{i}^{\dagger}L_{i}\ket{\psi(t)}\). The state of the system is then given (approximated) by the ensemble average \(\rho(t)=\overline{\ket{\psi(t)}\overline{\psi(t)}\overline{\psi(t)}}\) over an infinitely (sufficiently) large number \(N\) of trajectories \(\ket{\psi_{n}(t)}\), where \(\overline{X}=\frac{1}{N}\sum_{n=1}^{N}X_{n}\).
However, the assumption of ultraweak coupling is questionable in various situations, for instance in large systems, with small finite-size gaps and tiny avoided crossings between many-body states, as well as in Floquet systems, where for driving frequency \(\omega\) the average quasienergy level spacing is given by \(\omega/D\).
Beyond ultraweak coupling, master equations in pseudo-Lindblad form can be found, which look like a GKSL master equation Eq. (1), except for the fact that the coefficients \(\gamma_{i}\) also take negative values. For instance, the Redfield equation obtained in (Floquet)-Born-Markov approximation can be brought to this form [14]. Generally, negative relaxation strengths are relevant for non-Markovian dynamics [15], stochastic Hamiltonians with non-Markovian noise [16], gauge transformed Lindbladians [17] and exact master equations [18; 19; 20]. These negative values are incompatible with the conventional MCWF, since the probability \(r_{i}(t)\delta t\) for a quantum jump would become negative. To overcome this problem different quantum jump unravelings have been formulated, which, however, require significantly more computational resources [21; 22; 23; 24; 25; 26; 27; 28; 29]. In many approaches the system's state space needs to be extended, so that its dimensionality at least doubles [21; 22; 23; 24; 25; 30; 31; 32]. For oscillating strengths between positive and negative values, moreover, an alternative non-Markovian quantum jump method (NMQJ) has been proposed in which jump processes are inverted [26; 27; 28; 33]. This method does
not work, if \(\gamma_{i}<0\) for all times and does not admit the parallel evaluation of trajectories. For non-oscillatory strengths the rate operator quantum jump approach can be applied [29], however, it requires a rather costly diagonalization of a state-dependent operator in every time step of the evolution. A generalization of the non-Markovian jump method for many-body systems has been proposed in Ref. [34] and can be used to study measurement induced phase transitions.
In this work we propose pseudo-Lindblad quantum trajectories (PLQT), which work for arbitrary \(\gamma_{i}\), for which the trajectories evolve independently, and which do not require the doubling of the state space. In the following, this is realized by extending the system's state space in a minimal (and for the memory requirement of simulations practically irrelevant) fashion by a single classical bit \(s\in\{-1,+1\}\), \(\ket{\psi(t)}\rightarrow\{\ket{\psi(t)},s(t)\}\).
_Algorithm.-_ To unravel the dynamics by quantum trajectories \(\{\ket{\psi(t)},s(t)\}\) first choose a time step \(\delta t\), which is sufficiently short for the first-order time integration, and jump rates \(r_{i}(t)>0\) for each jump operator \(L_{i}\) (to be specified below). Within one time step a quantum jump occurs described by
\[\begin{split}\ket{\psi^{(i)}(t+\delta t)}&=\frac{\sqrt{|\gamma_{i}|}\,L_{i}\ket{\psi(t)}}{\sqrt{r_{i}(t)}},\\ s^{(i)}(t+\delta t)&=\frac{\gamma_{i}}{|\gamma_{i}|}\,s(t),\end{split} \tag{3}\]
with probability \(r_{i}(t)\delta t\) or alternatively, with the remaining probability \(1-\sum_{i}r_{i}(t)\delta t\), the state evolves coherently with \(H_{\text{eff}}\) [Eq. (2)] [35]
\[\ket{\psi^{(0)}(t+\delta t)} =\frac{(1-\mathrm{i}\delta tH_{\text{eff}}(t))\ket{\psi(t)}}{ \sqrt{1-\delta t\sum_{i}r_{i}(t)}}, \tag{4}\] \[s^{(0)}(t+\delta t) =s(t). \tag{5}\]
We now show that
\[\varrho(t)=\overline{s(t)\ket{\psi(t)}\!\!\bra{\psi(t)}}. \tag{6}\]
For a pure initial state, \(\sigma(t)=s(t)\ket{\psi(t)}\!\!\bra{\psi(t)}\), on average the update scheme is the weighted sum of these processes,
\[\begin{split}\overline{\sigma(t+\delta t)}=&\sum _{i}r_{i}(t)\delta t\ \sigma^{(i)}(t+\delta t)\\ &+\bigg{(}1-\sum_{i}r_{i}(t)\delta t\bigg{)}\sigma^{(0)}(t+ \delta t),\end{split} \tag{7}\]
with \(\sigma^{(i)}=s^{(i)}\ket{\psi^{(i)}}\!\!\bra{\psi^{(i)}}\) and \(\sigma^{(0)}=s^{(0)}\ket{\psi^{(0)}}\!\!\bra{\psi^{(0)}}\). By inserting Eqs. (3) and (4) above, the jump rates \(r_{i}(t)\) cancel out and one arrives at
\[\begin{split}\overline{\sigma(t+\delta t)}=&\sigma( t)+\delta t\bigg{(}\sum_{i}\gamma_{i}L_{i}\sigma(t)L_{i}^{\dagger}\\ &-\mathrm{i}H_{\text{eff}}(t)\sigma(t)+\mathrm{i}\sigma(t)H_{ \text{eff}}(t)^{\dagger}\bigg{)},\end{split} \tag{8}\]
almost corresponding to the action of the master Eq. (1). The final step to arrive at Eq. (1) is to average Eq. (8) also over an ensemble of pure states at time \(t\), so that \(\overline{\sigma(t+\delta t)}\rightarrow\varrho(t+\delta t)\) and \(\sigma(t)\rightarrow\varrho(t)\). As will be discussed below, one consequence of the presence of negative weights \(\gamma_{i}<0\) is that individual wave functions \(\psi_{n}\) are not normalized. As a result, the ensemble averaged trace is preserved only in the limit \(N\rightarrow\infty\) of an infinite ensemble [36]. Therefore, in a finite ensemble, one obtains better convergence by explicit normalization, \(\varrho_{N}=\frac{1}{\mathcal{N}}\sum_{n}^{N}s_{n}\ket{\psi_{n}}\!\bra{\psi_{n}}\), with \(\mathcal{N}=\sum_{n}^{N}s_{n}\left\langle\psi_{n}|\psi_{n}\right\rangle\), at every time \(t\). A rigorous proof using the Ito formalism is outlined in the supplemental material [37]. In case that all \(\gamma_{i}\) are positive the sign bits do not change and the algorithm corresponds to the conventional MCWF approach. Note that recently also another unraveling of non-Lindblad master equations was proposed in Ref. [38]. It is different from our approach, but also involves an effective classical degree of freedom, given by a real number of constant average, rather than our single bit, whose average is time-dependent, as will be seen below.
Here, as for other unraveling schemes [22], the jump rates \(r_{i}(t)>0\) can, in principle, be chosen arbitrarily. In practice there is, however, a trade off. Whereas for too small rates \(r_{i}\), large ensembles of trajectories are required to sample each jump process \(i\) sufficiently, we also have to require that the probability \(1-\sum_{i}r_{i}\delta t\) remains positive and large enough for the given time step \(\delta t\). A typical choice is [13],
\[r_{i}(t)=|\gamma_{i}|\,\frac{\lVert L_{i}\ket{\psi(t)}\rVert^{2}}{\lVert\ket{\psi(t)}\rVert^{2}}, \tag{9}\]
for which the quantum jump does not alter the norm \(\lVert\ket{\psi}\rVert\equiv\left\langle\psi|\psi\right\rangle^{1/2}\) of the state, i.e. \(\lVert\ket{\psi^{(i)}(t+\delta t)}\rVert=\lVert\ket{\psi(t)}\rVert\). Note, however, that for \(\gamma_{i}<0\) this choice implies that the norm increases during the coherent time evolution with \(H_{\text{eff}}\), \(\lVert\ket{\psi^{(0)}(t+\delta t)}\rVert=\big{(}1+\delta t\sum_{\gamma_{i}<0}r_{i}(t)\big{)}\lVert\ket{\psi(t)}\rVert\) [39]. This is not the case for the conventional MCWF approach, where \(\gamma_{i}\geq 0\).
_Non-Markovian dephasing for single qubit.-_ As a proof of principle we implement the PLQT algorithm for a qubit subjected to purely dissipative dynamics,
\[\dot{\varrho}(t)=\frac{1}{2}\Big{[}\mathcal{L}_{x}+\mathcal{L}_{y}-\tanh(t) \mathcal{L}_{z}\Big{]}\varrho(t), \tag{10}\]
with GKSL channels \(\mathcal{L}_{i}\varrho=\sigma_{i}\varrho\sigma_{i}-\varrho\), where \(\sigma_{i}\) are Pauli operators, with \(\sigma_{i}^{\dagger}\sigma_{i}=\sigma_{i}^{2}=1\). This equation is known as the eternal non-Markovian master equation [40; 15]. The existence of a negative relaxation rate makes it inaccessible to the standard MCWF, while also the NMQJ approach fails, since \(-\tanh(t)<0\) for all \(t\).
However, for this model the PLQT approach is easily implemented. Because the jump operators are unitary, the jump rates are state independent, i.e. \(\lVert\sigma_{i}\ket{\psi(t)}\rVert^{2}=\lVert\ket{\psi(t)}\rVert^{2}\) leads to a simplification in Eq. (9), and one has
\(r_{x}=r_{y}=1/2\), \(r_{z}(t)=\tanh(t)/2\). Also the effective Hamiltonian \(H_{\text{eff}}(t)=-\frac{\mathrm{i}}{2}\big{(}1-\tanh(t)/2\big{)}\) entering Eq. (4) is not state-dependent (i.e. proportional to the identity).
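As an illustration of the algorithm, the following minimal numpy sketch unravels Eq. (10) with the update rules of Eqs. (3)-(5), the rates of Eq. (9) and the explicit ensemble normalization discussed above. It is our own illustrative code (ensemble size, time step and seed are arbitrary choices), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
jump_ops = [sx, sy, sz]

def gammas(t):
    # dissipation strengths of the three pseudo-Lindblad channels in Eq. (10)
    return np.array([0.5, 0.5, -0.5 * np.tanh(t)])

def plqt_trajectory(psi0, tmax, dt):
    """Evolve one trajectory {psi(t), s(t)} and return s|psi><psi| on the time grid."""
    psi, s = psi0.astype(complex), 1.0
    nsteps = int(round(tmax / dt))
    out = np.empty((nsteps + 1, 2, 2), dtype=complex)
    out[0] = s * np.outer(psi, psi.conj())
    for k in range(nsteps):
        g = gammas(k * dt)
        r = np.abs(g)          # jump rates of Eq. (9); here ||sigma_i psi|| = ||psi||
        u = rng.random()
        if u < dt * r.sum():   # a quantum jump occurs, Eq. (3)
            i = np.searchsorted(np.cumsum(dt * r), u)
            psi = np.sqrt(np.abs(g[i])) * (jump_ops[i] @ psi) / np.sqrt(r[i])
            s *= np.sign(g[i])                       # classical sign bit
        else:                  # deterministic step with H_eff, Eqs. (2) and (4)
            heff = -0.5j * g.sum() * np.eye(2)
            psi = (psi - 1j * dt * (heff @ psi)) / np.sqrt(1 - dt * r.sum())
        out[k + 1] = s * np.outer(psi, psi.conj())
    return out

# ensemble average with explicit normalization rho_N = (1/N_norm) sum_n s_n |psi_n><psi_n|
psi0 = np.array([np.cos(np.pi / 8), np.exp(1j * np.pi / 4) * np.sin(np.pi / 8)])
trajs = [plqt_trajectory(psi0, tmax=4.0, dt=0.01) for _ in range(2000)]
sigma = np.sum(trajs, axis=0)
rho = sigma / np.trace(sigma, axis1=1, axis2=2)[:, None, None]
print(np.round(rho[-1], 3))   # compare with the analytical curves of Fig. 1
```

For this model each jump simply applies \(\sigma_{i}\) and, for the \(\sigma_{z}\) channel, flips the sign bit, while the deterministic step only rescales the norm.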
On average the sign follows the rate equation \(\frac{\mathrm{d}}{\mathrm{d}t}\overline{s(t)}=-2r_{z}(t)\overline{s(t)}\), which is solved by \(\overline{s(t)}=1/\cosh(t)\), as shown in Fig. 1 (c) [37]. Quantum jumps are realized in the Bloch-vector representation by reflections at the \(y\)-\(z\)-plane and \(x\)-\(z\)-plane for \(\sigma_{x}\) and \(\sigma_{y}\), respectively [Fig. 1 (d)]. The \(\sigma_{z}\) quantum jump is a reflection at the \(x\)-\(y\)-plane and due to the negative relaxation strength the sign flip is accounted for by an additional point reflection at the origin.
By simulating \(N=10^{5}\) trajectories in Fig. 1 (a), (b) we obtain accurate results for the transient dynamics until at \(t_{\text{R}}\sim 2\) the system reaches the steady-state regime. Besides this physical relaxation time, we also find an algorithmic relaxation time \(t_{A}\sim 4\), at which the numbers of negative and positive trajectories become equal and the averaged sign decays to zero [Fig. 1 (c)]. Beyond this algorithmic relaxation time, fluctuations are typically increased [Fig. 1 (a) and (b)]. This effect can be understood by noting that a stochastic process of a real variable \(x_{n}\) with positive mean \(\overline{x}\) will have bounded fluctuations \(\Delta x=\overline{(x-\overline{x})^{2}}^{1/2}\leq\overline{x}\) as long as \(x_{n}>0\), whereas \(\Delta x\) is not bounded, when \(x_{n}\) can also take negative values. Thus, ideally, \(t_{A}\) should be large compared to the time span of the simulation (which is \(t_{R}\), if we are interested in computing the steady state). The algorithmic relaxation time is determined by the inverse sign-flip rate \(r_{\text{sf}}=\sum_{i,\gamma_{i}<0}r_{i}\), e.g. \(t_{A}=r_{\text{sf}}^{-1}\) for time-independent \(r_{\text{sf}}\). Thus, we can increase \(t_{A}\) simply by lowering the rates for negative processes with "rates" \(\gamma_{i}<0\) relative to positive ones with \(\gamma_{i}>0\). However, this will also increase the number of trajectories needed for properly sampling those negative-"rate" processes. Thus, before doing this, one should first attempt to rewrite the master equation, so that the relative weight of negative processes is reduced. This can be done for pseudo-Lindblad equations derived from the Redfield equation [14], as we will recapitulate now.
_Redfield dynamics.-_ For a microscopic model a master equation is often derived within the Born-Markov-Redfield formalism [41; 42]. We consider a system-bath Hamiltonian of the form \(H_{\text{tot}}=H+\sum_{i}\left(S_{i}\otimes B_{i}+H_{i}\right)\) with system Hamiltonian \(H\) that couples to individual baths \(H_{i}\), where \(S_{i}\) and \(B_{i}\) denote the system and bath coupling operators, respectively. The Redfield equation can then be written in pseudo-Lindblad form [14]
\[\dot{\varrho} =-\mathrm{i}[H+H^{\text{LS}},\varrho] \tag{11}\] \[+\sum_{i,\sigma=\pm}\sigma\left(L_{i\sigma}\varrho L_{i\sigma}^{ \dagger}-\frac{1}{2}\{L_{i\sigma}^{\dagger}L_{i\sigma},\varrho\}\right),\]
with Lamb-shift Hamiltonian \(H^{\text{LS}}=(1/2\mathrm{i})\sum_{i}S_{i}\mathbb{S}_{i}+\text{H.c.}\), convolution operators \(\mathbb{S}_{i}=\int_{0}^{\infty}d\tau\left\langle B(\tau)B\right\rangle e^{ \mathrm{i}H\tau}S_{i}e^{-\mathrm{i}H\tau}\) and Lindblad-like jump operators
\[L_{i\sigma}=\frac{1}{\sqrt{2}}\Bigg{[}\lambda_{i}(t)S_{i}+\sigma\frac{1}{ \lambda_{i}(t)}\mathbb{S}_{i}\Bigg{]} \tag{12}\]
with arbitrary, time-dependent real parameters \(\lambda_{i}(t)\). We see that due to the negative relaxation rates with \(\sigma=-1\), the Redfield equation is generally not of GKSL form, unless further approximations are employed in the limit of ultra-weak coupling [41; 42; 43] or for high bath temperatures [44; 14]. For a purely Ohmic bath, the choice [14]
\[\lambda_{i,\text{glob}}(t)^{2}=\sqrt{\frac{\text{tr}\ \mathbb{S}_{i}^{\dagger} \mathbb{S}_{i}}{\text{tr}\ S_{i}S_{i}}} \tag{13}\]
is the optimum that minimizes the norm of the negative channels in the pseudo-Lindblad equation globally, i.e. on average over all states. A further reduction of negative processes can be achieved by a state-dependent optimization. Namely, according to Eq. (9) (assuming, without loss of generality, a normalized state), the rates for negative quantum jumps with \(L_{i-}\) are controlled by the expectation value
\[\langle\psi(t)|L_{i-}^{\dagger}L_{i-}|\psi(t)\rangle=\frac{1}{2}\Big{[}\lambda_{i}(t)^{2}\,\lVert S_{i}|\psi(t)\rangle\rVert^{2}+\frac{1}{\lambda_{i}(t)^{2}}\,\lVert\mathbb{S}_{i}|\psi(t)\rangle\rVert^{2}\Big{]}-\mathrm{Re}\left\langle\psi(t)|S_{i}\mathbb{S}_{i}|\psi(t)\right\rangle.\]
Thus, the choice,
\[\lambda_{i,\mathrm{loc}}(t)^{2}=\frac{\|\mathbb{S}_{i}\left|\psi(t)\right\rangle\|}{\|S_{i}\left|\psi(t)\right\rangle\|}, \tag{14}\]
minimizes the rates for negative quantum jumps in the unraveling of the Redfield equation. Since the states in the numerator and the denominator of Eq. (14) have to be evaluated for evolving the state anyway, this local optimization (which is not described in Ref. [14]) can be implemented efficiently.
Figure 1: Non-Markovian dynamics [Eq. (10)] for the density matrix elements \(\varrho_{00}\) (a) and \(\text{Re}\varrho_{01}\) (solid), \(\text{Im}\varrho_{01}\) (dashed) (b), and the Bloch vector in the \(x\)-\(y\)-plane (d). Analytical solution (black) and unraveling with \(N=10^{5}\) PLQTs with time step \(\delta t=0.01\) in blue (\(N=10,10^{3}\) in thin and intermediate grey lines) for an initial Bloch state with \(\phi=\Theta=\pi/4\). (c) Shows the averaged sign bit.
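In practice this step costs only two additional vector norms per time step. A minimal sketch (assuming dense NumPy representations of \(S_{i}\), \(\mathbb{S}_{i}\) and of the current state vector; not taken from Ref. [14]):

```python
import numpy as np

def lambda_loc(S, Sconv, psi):
    """State-dependent parameter of Eq. (14): lambda^2 = ||Sconv|psi>|| / ||S|psi>||."""
    return np.sqrt(np.linalg.norm(Sconv @ psi) / np.linalg.norm(S @ psi))

def jump_operators(S, Sconv, lam):
    """Lindblad-like operators of Eq. (12) for sigma = +1 and sigma = -1."""
    L_plus  = (lam * S + Sconv / lam) / np.sqrt(2)
    L_minus = (lam * S - Sconv / lam) / np.sqrt(2)
    return L_plus, L_minus
```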
We test our method for the extended Hubbard chain of spinless fermions,
\[H=-J\sum_{\ell=0}^{M-1}\left(a_{\ell}^{\dagger}a_{\ell+1}+a_{ \ell+1}^{\dagger}a_{\ell}\right)+V\sum_{\ell=0}^{M-1}a_{\ell}^{\dagger}a_{\ell }a_{\ell+1}^{\dagger}a_{\ell+1}, \tag{15}\]
with fermionic operators \(a_{\ell}\), tunneling strength \(J\) and nearest-neighbour interaction strength \(V\). For the dissipator, we have
\[\left\langle n|\mathrm{S}_{\ell}|m\right\rangle=\frac{J(\Delta_{ nm})}{e^{\Delta_{nm}/T}-1}\left\langle n|S_{\ell}|m\right\rangle, \tag{16}\]
with system operator \(S_{\ell}=a_{\ell}^{\dagger}a_{\ell}\), level splitting \(\Delta_{nm}=E_{n}-E_{m}\) and bath temperature \(T\). We consider a purely Ohmic bath with spectral density \(J(E)=\gamma E\) and coupling strength \(\gamma\).
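The model input, Eqs. (15) and (16), is straightforward to set up numerically. The following is a rough sketch (not the authors' code): it builds \(H\) in the fixed-particle-number occupation basis with open boundary conditions (the sum in Eq. (15) may be meant periodically, which would add boundary terms with Jordan-Wigner signs), diagonalizes it, and assembles the convolution operators \(\mathbb{S}_{\ell}\) from Eq. (16); the default \(\gamma\) and \(T\) follow the values used below.

```python
import numpy as np
from itertools import combinations

def hubbard_spinless(M=4, n_part=2, J=1.0, V=7.0):
    """Extended Hubbard chain of spinless fermions, Eq. (15), open boundary conditions."""
    states = [sum(1 << i for i in occ) for occ in combinations(range(M), n_part)]
    index = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    number_ops = np.zeros((M, len(states)))            # diagonals of S_l = a_l^dag a_l
    for k, st in enumerate(states):
        occ = [(st >> i) & 1 for i in range(M)]
        number_ops[:, k] = occ
        H[k, k] += V * sum(occ[l]*occ[l+1] for l in range(M-1))
        for l in range(M-1):                           # hopping; no fermionic sign between neighbours
            if occ[l] == 0 and occ[l+1] == 1:
                t = st ^ ((1 << l) | (1 << (l+1)))     # move the particle from l+1 to l
                H[index[t], k] += -J
                H[k, index[t]] += -J
    return H, number_ops

def convolution_operator(S_diag, E, U, gamma=0.02, T=1.0):
    """Matrix elements of Sconv_l following Eq. (16), with Ohmic spectral density J(E)=gamma*E."""
    S_eig = U.T @ np.diag(S_diag) @ U                  # S_l is diagonal in the occupation basis
    D = E[:, None] - E[None, :]
    with np.errstate(divide='ignore', invalid='ignore'):
        w = gamma * D / np.expm1(D / T)
    w[np.isclose(D, 0.0)] = gamma * T                  # limit Delta -> 0 of J(Delta)/(e^{Delta/T}-1)
    return U @ (w * S_eig) @ U.T                       # back to the occupation basis

H, n_ops = hubbard_spinless()
E, U = np.linalg.eigh(H)
Sconv = [convolution_operator(n_ops[l], E, U) for l in range(4)]
```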
In Fig. 2 we depict the decay of the interaction energy for an initial state in which pairs of adjacent sites are occupied, \(|\psi(0)\rangle=|011011\ldots\rangle\). Quench dynamics for such a charge density wave in a spin-polarized Fermi-Hubbard model have been realized recently in the Bakr group [45]. We assume strong interactions \(V/J=7\), for which the doublon pairs can only be broken when the system exchanges energy with the bath. This leads to a decay of the energy of the open system as depicted in Fig. 2 (a), where the transient oscillations are well reproduced.
_Numerical implementation.-_ Let us now discuss the numerical implementation of the PLQT approach. Since the trajectories are independent we run them in parallel. Depending on the physical quantity of interest, say an observable \(A\), it is often reasonable not to store the actual time-dependent state, as large vectors with complex entries, but rather expectation values \(\langle\psi_{n}(t)|A|\psi_{n}(t)\rangle\) together with the norm \(\|\psi_{n}(t)\|\) and the sign \(s_{n}(t)\). While the storage of the trajectory data boils down to a few real numbers, the time evolution requires the full state vector. The memory needed for the time integration of a quantum trajectory would grow linearly with the state-space dimension \(D\), if not only the Hamiltonian, but also the jump operators were sparse. The latter is the case, however, mainly in phenomenological master equations with local jump operators and not for the Redfield equation, so that the memory needed usually scales like \(D^{2}\). The memory needed for integrating the Redfield equation scales equally like \(D^{2}\) (since it is sufficient to store and apply the jump operators rather than the full superoperator). Nevertheless, we find the memory requirement for quantum trajectories to be much lower than that for integrating the master equation. In Fig. 3 the required memory (a) and (b) and single-time-step run time (c) and (d) are compared for solving the full Redfield master equation (blue) and a single trajectory (red). We find that the required memory is noticeably reduced for the quantum trajectory simulation, even though, as discussed above, it still scales like \(D^{2}\). (The latter is not specific to our approach, but generically the case also for other forms of quantum trajectory simulations.) For the run time the relative reduction is even stronger and shows a different scaling with \(D\). Essentially the difference amounts to two matrix-matrix products needed for the Redfield integration versus one matrix-vector product for the PLQTs. Note that the unravelling can also be combined with matrix product states (e.g. [46; 47]). It would be interesting to see to what extent such an approach compares to a representation of the density operator by matrix product operators (e.g. Refs. [48; 49]).
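A possible structure for such a parallel implementation is sketched below; this is only a skeleton, in which the single-step update of Eqs. (2)-(9) is left abstract as a user-supplied function `plqt_step`, and the weighting of the stored records when averaging must follow the conventions of the main text.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_trajectory(args):
    """One PLQT trajectory: per time step only a few real numbers are stored,
    namely the expectation value of the observable A, the squared norm and the sign bit."""
    seed, psi0, A, dt, n_steps, plqt_step = args      # plqt_step must be a module-level function
    rng = np.random.default_rng(seed)
    psi, sign = psi0.astype(complex), 1.0
    rec = np.empty((n_steps, 3))
    for k in range(n_steps):
        rec[k] = (np.vdot(psi, A @ psi).real, np.vdot(psi, psi).real, sign)
        psi, sign = plqt_step(psi, sign, k*dt, dt, rng)   # Eqs. (2)-(9), not spelled out here
    return rec

def run_ensemble(tasks, max_workers=None):
    """Trajectories are independent, so they are simply mapped onto a process pool."""
    with ProcessPoolExecutor(max_workers=max_workers) as ex:
        return np.stack(list(ex.map(run_trajectory, tasks)))  # shape (n_traj, n_steps, 3)
```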
Figure 2: Dynamics of scaled interaction energy of extended Hubbard chain of 2 [8] particles on 4 [13] sites (a) [(b)] with \(V/J=7\). We compare the dynamics of the isolated (grey line) and open system with \(\gamma/J=0.02\) and \(T/J=1\) (blue line for PLQT, thin orange for Redfield equation). The decrease of interactions is related to bath-induced doublon-breaking processes.
_Conclusion.-_ We have developed an efficient unraveling of master equations of pseudo-Lindblad form, which includes the Redfield equation as an important case [14]. Different from previous approaches, it requires a minimal extension of the state space by one classical sign bit only, is applicable also for dissipation strengths that are always negative, does not require matrix diagonalization during the time integration, and allows for a parallel implementation, since all trajectories are independent from each other. We believe that it will be a useful tool for the simulation of open many-body systems beyond ultra-weak system-bath coupling. In future work it will be interesting to systematically investigate the impact of negative dissipation strengths \(\gamma_{i}\) on the required ensemble size and the optimal choice of the corresponding rates for efficient simulation. Moreover, it would be interesting to systematically compare the performance of our PLQT to that of the influence-martingale approach of Ref. [38].
We thank Francesco Intravaia, Charlotte Maurer and Alexander Schnell for fruitful discussions. This research was funded by the Deutsche Forschungsgemeinschaft (DFG) via the Research Unit FOR 2414 under project No. 277974659.
|
2308.02206 | Large deviations for an obstacle problem with T-monotone operator and
multiplicative noise | In this paper, we study the large deviation principle (LDP) for obstacle
problems governed by a T-monotone operator and small multiplicative stochastic
reaction. Our approach relies on a combination of new sufficient condition to
prove LDP by Matoussi, Sabbagh and Zhang [Appl. Math. Optim. 2021] and
Lewy-Stampacchia inequalities to manage the Lagrange-multiplier associated with
the obstacle. | Yassine Tahraoui | 2023-08-04T08:47:20Z | http://arxiv.org/abs/2308.02206v2 | # Large deviations for an obstacle problem with T-monotone operator and multiplicative noise
###### Abstract.
In this paper, we study the large deviation principle (LDP) for obstacle problems governed by a T-monotone operator and small multiplicative stochastic reaction. Our approach relies on a combination of new sufficient condition to prove LDP by Matoussi, Sabbagh and Zhang [_Appl. Math. Optim._ 83: 849-879, 2021] and Lewy-Stampacchia inequalities to manage the Lagrange multiplier associated with the obstacle.
_Center for Mathematics and Applications (NOVA Math), NOVA SST, Portugal_
**Keywords:** Variational inequality, Stochastic PDEs, Obstacle problems, Large deviations.
**MSC:** 35K86, 35R35, 60F10, 60H15.
###### Contents
* 1 Introduction
* 2 Content of the study
* 3 Formulation of the problem and main results
* 4 Well-posedness of skeleton Equations (3.3)
* 5 Proof of large deviation principle
## 1. Introduction
We are interested in investigating an asymptotic properties of the solution to a stochastic obstacle problems with small noise. More precisely, our aim is to prove the large deviation principle for the solution \((u_{\delta})_{\delta\downarrow 0}\) to some obstacle problems which can be written (see Section 3)
\[\partial I_{K}(u_{\delta})\ni f-\partial_{t}\Big{(}u_{\delta}- \delta\int_{0}^{.}G(u_{\delta},\cdot)dW\Big{)}-A(u_{\delta},\cdot);\quad \delta>0, \tag{1.1}\]
where \(K\) is a closed convex subset of \(L^{p}(\Omega;L^{p}(0,T;V))\) related to a constraint \(\psi\), \(A\) is a nonlinear T-monotone operator defined on some Banach space \(V\), \((\Omega,\mathscr{F},(\mathscr{F}_{t})_{t\geq 0},P)\) is a filtered probability space and \(W(t)\) is a \(Q\)-Wiener process in some separable Hilbert space \(H\).
Abstract problems of the type (1.1) can be used to describe several phenomena and arise in many mathematical models associated with a number of fields including Physics, Biology and Finance. For example, the evolution of damage in a continuous medium taking into account the microscopic description [2], the question of American option pricing, some questions of flows in porous media and phase transition questions, some optimal stopping-time problems and population dynamics models, see _e.g._[2, 3, 21, 26] and the references therein. In the deterministic setting, it can be formulated by using variational inequalities, see _e.g._[11, 18, 26] and their references. Concerning well-posedness questions, many authors have been interested in the topic; without seeking to be exhaustive, let us mention [9] about the reflected solutions of parabolic SPDEs driven by a space-time white noise on the spatial interval \([0,1]\), quasilinear stochastic PDEs with obstacle in [8], and [12] for stochastic variational inequalities. Besides the well-posedness, regularity properties of the reflected stochastic measure were studied in [27], where stochastic Lewy-Stampacchia inequalities were proved in the case of stochastic T-monotone obstacle problems, and then in [4, 24] for stochastic scalar conservation laws with constraint and pseudomonotone parabolic obstacle problems.
The large deviation principle concerns the exponential decay, with the corresponding rate functions, of the laws of solutions to different stochastic dynamics. There exist many works on large deviations for dynamical systems in finite/infinite dimensional settings driven by white noises. Without being exhaustive, let us refer to the work of Freidlin and Wentzell [10] on small perturbations of stochastic differential equations, to [5] about the variational representation for functionals of the Wiener process and its application to prove the large deviation principle, and to [6, 7, 13, 14, 16, 19] for large deviations results about different stochastic dynamics related to stochastic equations.
We recall that (1.1) is a free-boundary-type problem, which can also be understood as a coupled system of stochastic PDEs; this generates many mathematical difficulties in the analysis of such a problem, in particular the management of the singularities caused by the obstacle. Therefore, one needs to manage in an appropriate way the stochastic reflected measure resulting from the interaction between the solution and the obstacle near the contact set. Concerning large deviations for obstacle problems, let us mention [25] for the large deviation principle for SPDEs with reflection, studied in [9], by using the weak convergence approach. Another type of result concerns the large deviation principle for invariant measures of solutions to reflected SPDEs driven by a space-time white noise in 1D, see [28]. Recently, the authors in [17] provided a new sufficient condition to verify the criteria of weak convergence and used it to study the large deviation principle for quasilinear stochastic obstacle problems. Our aim is to present a result about large deviations for stochastic obstacle problems governed by a T-monotone operator, including p-Laplace type operators, a multiplicative noise and a large class of obstacles. The method is based on the combination of proving the new sufficient condition of [17, Theorem 3.2] in our setting and using Lewy-Stampacchia inequalities to manage the stochastic reflected measure.
The paper is organized in the following way: after giving the hypotheses, we recall some basics about stochastic obstacle problems and Large deviations. Section 3 is devoted to the mathematical formulation of the problem and the main results of this paper. In Section 4, we prove the well-posedness and the corresponding Lewy-Stampacchia inequalities for the skeleton equation (3.3), by using penalization and approximation techniques. Section 5 concerns the proof of the large deviation principle for the stochastic dynamics associated with (1.1), where Lewy-Stampacchia inequalities will be used to manage the singularities caused by the obstacle,
which serves to study the continuity of the solution map of the skeleton equation (3.3) from the Cameron-Martin Hilbert space endowed with the weak topology to some Polish space endowed with the strong topology. Finally, we study the limit as \(\delta\to 0\) in an appropriate sense.
## 2. Content of the study
### Notation and functional spaces
Let us denote by \(D\subset\mathbb{R}^{d},d\geq 1\), a Lipschitz bounded domain, let \(T>0\) and consider \(\max(1,\frac{2d}{d+2})<p<+\infty\). As usual, \(p^{\prime}=\frac{p}{p-1}\) denotes the conjugate exponent of \(p\), and \(V=W^{1,p}_{0}(D)\) is the subspace of elements of \(W^{1,p}(D)\) with null trace, endowed with Poincaré's norm. The space \(H=L^{2}(D)\) is identified with its dual, so that the dual space of \(V\), denoted \(V^{\prime}\), is \(W^{-1,p^{\prime}}(D)\) and the Lions-Guelfand triple \(V\hookrightarrow_{d}H=L^{2}(D)\hookrightarrow_{d}V^{\prime}\) holds. Denote by \(p^{*}=\frac{pd}{d-p}\), if \(p<d\), the Sobolev embedding exponent and recall that
\[\text{if }p<d, \quad V\hookrightarrow L^{a}(D),\ \forall a\in[1,p^{*}]\text{ and compact if }a\in[1,p^{*}),\] \[\text{if }p=d, \quad V\hookrightarrow L^{a}(D),\ \forall a<+\infty\text{ and compactly},\] \[\text{if }p>d, \quad V\hookrightarrow C(\overline{D})\text{ and compactly }.\]
Since \(\max(1,\frac{2d}{d+2})<p<+\infty\), the compactness of the embeddings holds in the Lions-Guelfand triple. The duality bracket for \(T\in V^{\prime}\) and \(v\in V\) is denoted \(\langle T,v\rangle\) and the scalar product in \(H\) is denoted by \((\cdot,\cdot)\).
Let \((\Omega,\mathscr{F},P)\) be a complete probability space endowed with a right-continuous filtration \(\{\mathscr{F}_{t}\}_{t\geq 0}\)1 completed with respect to the measure \(P\). \(W(t)\) is a \(\{\mathscr{F}_{t}\}_{t\geq 0}\)-adapted \(Q\)-Wiener process in \(H\), where \(Q\) is a non-negative symmetric operator with finite trace, _i.e._ \(\mathrm{tr}\,Q<\infty\). Denote by \(\Omega_{T}=(0,T)\times\Omega\) and by \(\mathscr{P}_{T}\) the predictable \(\sigma\)-algebra on \(\Omega_{T}\)2. Set \(H_{0}=Q^{1/2}H\) and recall that \(H_{0}\) is a separable Hilbert space endowed with the inner product \((u,v)_{0}=(Q^{-1/2}u,Q^{-1/2}v)\), for any \(u,v\in H_{0}\). The space \((L_{2}(H_{0},H),\|\cdot\|_{L_{2}(H_{0},H)})\) stands for the space of Hilbert-Schmidt operators from \(H_{0}\) to \(H\)3, and \(\mathbb{E}\) stands for the expectation with respect to \(P\).
Footnote 1: For example, \((\mathscr{F}_{t})_{t\geq 0}\) is the augmentation of the filtration generated by \(\{W(s),0\leq s\leq t\}_{0\leq t\leq T}\).
Footnote 2: \(\mathscr{P}_{T}:=\sigma(\{]s,t[\}\times F_{s}|0\leq s<t\leq T,F_{s}\in\mathscr{ F}_{s}\}\cup\{\{0\}\times F_{0}|F_{0}\in\mathscr{F}_{0}\})\) (see [15, p. 33]). Then, a process defined on \(\Omega_{T}\) with values in a given space \(E\) is predictable if it is \(\mathscr{P}_{T}\)-measurable.
Footnote 3: \(\|T\|_{L_{2}(H_{0},H)}^{2}=\sum_{k\in\mathbb{N}}\|Te_{k}\|_{H}^{2}\) where \(\{e_{k}\}_{k\in\mathbb{N}}\) is an orthonormal basis for \(H_{0}\), see _e.g._[15, Appendix B].
We recall that an element \(\xi\in L^{p^{\prime}}(0,T;V^{\prime})\) (resp. \(L^{p^{\prime}}(\Omega_{T};V^{\prime})\)) is called _non-negative i.e._\(\xi\in(L^{p^{\prime}}(0,T;V^{\prime}))^{+}\)(resp. \((L^{p^{\prime}}(\Omega_{T};V^{\prime}))^{+}\)) iff
\[\int_{0}^{T}\langle\xi,\varphi\rangle_{V^{\prime},V}\,dt\geq 0\quad(\text{ resp. }\mathbb{E}\int_{0}^{T}\langle\xi,\varphi\rangle_{V^{\prime},V}\,dt\geq 0)\]
holds for all \(\varphi\in L^{p}(0,T;V)\) (resp. \(L^{p}(\Omega_{T};V)\)) such that \(\varphi\geq 0\). In this case, with a slight abuse of notation, we will often write \(\xi\geq 0\).
Denote by \(L^{p}(0,T;V)^{*}=(L^{p^{\prime}}(0,T;V^{\prime}))^{+}-(L^{p^{\prime}}(0,T;V^{ \prime}))^{+}\varsubsetneq L^{p^{\prime}}(0,T;V^{\prime})\) the order dual as being the difference of two non-negative elements of \(L^{p^{\prime}}(0,T;V^{\prime})\), more precisely \(h\in L^{p}(0,T;V)^{*}\) iff \(h=h^{+}-h^{-}\) with \(h^{+},h^{-}\in L^{p^{\prime}}(0,T;V^{\prime})^{+}\).
### Assumptions
We will consider in the sequel the following assumptions:
* (\(H_{1}\)) Let \(A:V\times[0,T]\to V^{\prime}\), \(G:H\times[0,T]\to L_{2}(H_{0},H)\), \(\psi:[0,T]\to V\), \(f:[0,T]\to V^{\prime}\) be given, such that:
* (\(H_{2}\)) \(\exists\alpha,\bar{K}>0,\lambda_{T},\lambda\in\mathbb{R}\), \(l_{1}\in L^{\infty}([0,T])\) and \(g\in L^{\infty}([0,T])\) such that:
* (\(H_{2,1}\)) \(t\in[0,T]\) a.e., \(\forall v\in V,\quad\langle A(v,t),v\rangle+\lambda\|v\|_{H}^{2}+l_{1}(t)\geq\alpha\|v\|_{V}^{p}\).
* (\(H_{2,2}\)) (T-monotonicity) \(t\in[0,T]\) a.e., \(\forall v_{1},v_{2}\in V\), \(\lambda_{T}\|(v_{1}-v_{2})^{+}\|_{H}^{2}+\langle A(v_{1},t)-A(v_{2},t),(v_{1}-v_{2})^{+}\rangle\geq 0\). Note that since \(v_{1}-v_{2}=(v_{1}-v_{2})^{+}-(v_{2}-v_{1})^{+}\), \(\lambda_{T}Id+A\) is also monotone.
* (\(H_{2,3}\)) \(t\in[0,T]\) a.e., \(\forall v\in V,\quad\|A(v,t)\|_{V^{\prime}}\leq\bar{K}\|v\|_{V}^{p-1}+g(t)\).
* (\(H_{2,4}\)) (Hemi-continuity) \(t\in[0,T]\) a.e., \(\forall v,v_{1},v_{2}\in V\), \(\eta\in\mathbb{R}\mapsto\langle A(v_{1}+\eta v_{2},t),v\rangle\) is continuous.
* (\(H_{3}\)) \(\exists M,L>0\) such that
* (\(H_{3,1}\)) \(t\in[0,T]\) a.e., \(\forall\theta,\sigma\in H\), \(\|G(\theta,t)-G(\sigma,t)\|_{L_{2}(H_{0},H)}^{2}\leq M\|\theta-\sigma\|_{H}^{2}\).
* (\(H_{3,2}\)) \(t\in[0,T]\) a.e., \(\forall u\in H\), \(\|G(u,t)\|_{L_{2}(H_{0},H)}^{2}\leq L(1+\|u\|_{H}^{2})\).
* (\(H_{4}\)) \(\psi\in L^{\infty}(0,T;V)\), \(\frac{d\psi}{dt}\in L^{p^{\prime}}(0,T;V^{\prime})\) and \(G(\psi,t)=0\) a.e. \(t\in[0,T]\).
* (\(H_{5}\)) \(f\in L^{\infty}(0,T;V^{\prime})\) and assume that \(h^{+}-h^{-}=h=f-\partial_{t}\psi-A(\psi,\cdot)\in L^{p}(0,T;V)^{*}\).
* (\(H_{6}\)) \(u_{0}\in H\) satisfies the constraint, _i.e._\(u_{0}\geq\psi(0)\).
* (\(H_{7}\)) (Strong monotonicity) \(\exists\lambda_{T}\in\mathbb{R},\exists\bar{\alpha}>0\): \(t\in[0,T]\) a.e., \(\forall v_{1},v_{2}\in V\), \(\langle A(v_{1},t)-A(v_{2},t),v_{1}-v_{2}\rangle\geq\bar{\alpha}\|v_{1}-v_{2}\|_{V}^{p}-\lambda_{T}\|v_{1}-v_{2}\|_{H}^{2}\).
**Remark 2.1**.: Note that the condition \(G(\psi,t)=0\) a.e. in \([0,T]\) in \(H_{4}\) plays a crucial role in the analysis of the skeleton equation (3.3). More precisely, it serves in the proof of the (deterministic) Lewy-Stampacchia inequality (3.4) associated with (3.3), by using the natural assumption \(H_{5}\) (see _e.g._[11, Thm. 2.2]), to manage the reflected measure related to the obstacle in an appropriate sense. Moreover, a large class of \(V\)-valued obstacle problems can be reduced to a positivity obstacle problem with a stochastic reaction term vanishing at \(0\), see [27, Rmk. 3].
**Remark 2.2**.:
* In the case \(p\geq 2\), the assumptions on \(l_{1},g,\psi\) and \(f\) can be relaxed to weaker ones; more precisely, we can consider \(l_{1}\in L^{1}([0,T]),g\in L^{p^{\prime}}([0,T])\) in \(\mathsf{H}_{2}\), \(\psi\in L^{p}(0,T;V)\) in \(\mathsf{H}_{4}\) and \(f\in L^{p^{\prime}}(0,T;V^{\prime})\) in \(H_{5}\).
* The assumptions \(l_{1},g\in L^{\infty}([0,T])\) in \(\mathsf{H}_{2}\), \(\psi\in L^{\infty}(0,T;V)\) in \(\mathsf{H}_{4}\) and \(f\in L^{\infty}(0,T;V^{\prime})\) in \(H_{5}\) are assumed to ensure the existence of a solution to the skeleton equation (3.3) if \(\max(1,\frac{2d}{d+2})<p<2\). Otherwise, extra assumptions on \(G\) have to be considered; we refer to _e.g._[14, Section 2] for such assumptions.
* In the case \(1<p\leq\frac{2d}{d+2}\) (\(d\geq 3\)), one needs either to use a method of finite dimensional approximation of the diffusion \(G\) (to establish the large deviation principle in the case of equations), which leads to extra assumptions on \(G\) [14, Section 2 (A4)], or some temporal regularity on \(G\), see _e.g._[16, Section 2 (A4)]. The situation seems more delicate because of the reflected measure, which depends on the solution as well.
* The role of \(H_{7}\) is to establish the large deviation principle in \(L^{p}(0,T;V)\cap C([0,T];H)\). Otherwise, we obtain the large deviation principle only in \(C([0,T];H)\), see Theorem 3.2.
### Stochastic obstacle problems
Denote by \(K\) the convex set of admissible functions
\[K=\{v\in L^{p}(\Omega_{T};V),\quad v(x,t,\omega)\geq\psi(x,t)\quad\text{a.e. in }D\times\Omega_{T}\}.\]
The question is to find \((u,k),\) in a space defined straight after, solution to
\[\begin{cases}du+A(u,\cdot)ds+kds=fds+G(u,\cdot)dW&\text{in}\quad D\times\Omega_{T},\\ u(t=0)=u_{0}&\text{in}\quad H,\ a.s.,\\ u\geq\psi&\text{in}\quad D\times\Omega_{T},\\ u=0&\text{on}\quad\partial D\times\Omega_{T},\\ \langle k,u-\psi\rangle=0\,\ \text{and}\,\ k\leq 0&\text{in}\quad\Omega_{T}.\end{cases} \tag{2.1}\]
**Definition 2.1**.: The pair \((u,k)\) is a solution to Problem (2.1) if:
* \(u:\Omega_{T}\to H\) and \(k:\Omega_{T}\to V^{\prime}\) are predictable processes, \(u\in L^{2}(\Omega;C([0,T];H)).\)
* \(u\in L^{p}(\Omega_{T};V)\), \(u(0,\cdot)=u_{0}\) and \(u\geq\psi\), _i.e._, \(u\in K\).
* \(k\in L^{p^{\prime}}(\Omega_{T};V^{\prime})\) with \[-k\in(L^{p^{\prime}}(\Omega_{T};V^{\prime}))^{+}\ \text{and}\ \mathbb{E}\!\int_{0}^{T}\langle k,u-\psi\rangle_{V^{\prime},V}\,dt=0.\] (2.2)
* P-a.s, for all \(t\in[0,T]\): \(u(t)+\int_{0}^{t}[A(u,\cdot)+k]ds=u_{0}+\int_{0}^{t}G(u,\cdot)dW(s)+\int_{0}^ {t}fds\) in \(V^{\prime}\).
**Remark 2.3**.: Condition (2.2) can be understood as a minimality condition on \(k\) in the sense that \(k\) vanishes on the set \(\{u>\psi\}\). Moreover, (2.2) implies that, for all \(v\in K\),
\[\mathbb{E}\!\int_{0}^{T}\langle k,u-v\rangle_{V^{\prime},V}\,dt\geq 0.\]
**Remark 2.4**.: Note that Problem (2.1) can be written in the equivalent form (see [1, p.7\(-\)8]):
\[\partial I_{K}(u)\ni f-\partial_{t}\left(u-\int_{0}^{\cdot}G(u)\,dW\right)-A (u,\cdot),\]
where \(\partial I_{K}(u)\) represents the sub-differential of \(I_{K}:L^{p}(\Omega_{T};V)\to\overline{\mathbb{R}}\) defined as
\[I_{K}(u)=\left\{\begin{array}{ll}0,&u\in K,\\ +\infty,&u\notin K,\end{array}\right.\]
and \(\partial I_{K}(u)=N_{K}(u):=\left\{y\in L^{p^{\prime}}(\Omega_{T};V^{\prime} );\ \mathbb{E}\int_{0}^{T}\langle y,u-v\rangle_{V^{\prime},V}dt\geq 0,\ \text{for all}\ v\in K \right\}.\)
**Theorem 2.1**.: _[_27_, Theorem 1]_ _Under Assumptions (H\({}_{1}\))-(H\({}_{6}\)), there exists a unique solution \((u,k)\) to Problem (2.1) in the sense of Definition 2.1. Moreover, the following Lewy-Stampacchia's inequality4 holds \(0\leq\partial_{t}\!\left(u\!-\!\int_{0}^{\cdot}G(u,\cdot)dW\right)\!+A(u, \cdot)\!-f\leq h^{-}=\left(f-\partial_{t}\psi-A(\psi,\cdot)\right)^{-}.\)_
Footnote 4: See [24, Definition 1.3.].
### Large deviations
Let \(E\) be a Polish space with the Borel \(\sigma-\)field \(\mathscr{B}(E)\).
**Definition 2.2**.: [5, Def. 4.1] (Rate function) A function \(I:E\to[0,\infty]\) is called a rate function on \(E\) if for each \(M<\infty\) the level set \(\{x\in E:I(x)\leq M\}\) is a compact subset of \(E\).
**Definition 2.3**.: (Large deviation principle) A family \(\{X^{\epsilon}\}_{\epsilon}\) of \(E-\)valued random elements is said to satisfy a large deviation principle with rate function \(I\) if for each Borel subset \(G\) of \(E\)
\[-\inf_{x\in G^{0}}I(x)\leq\liminf_{\epsilon\to 0}\epsilon^{2}\log P(X^{ \epsilon}\in G)\leq\limsup_{\epsilon\to 0}\epsilon^{2}\log P(X^{\epsilon}\in G) \leq-\inf_{x\in\bar{G}}I(x),\]
where \(G^{0}\) and \(\bar{G}\) are respectively the interior and the closure of \(G\) in \(E\).
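As a classical illustration (see [10]), consider the finite-dimensional small-noise diffusion \(dX^{\epsilon}=b(X^{\epsilon})dt+\epsilon\,dW\), \(X^{\epsilon}(0)=x\), with a Lipschitz drift \(b\). Then \(\{X^{\epsilon}\}_{\epsilon}\) satisfies the large deviation principle on \(C([0,T];\mathbb{R}^{d})\) with rate function
\[I(f)=\frac{1}{2}\int_{0}^{T}|\dot{f}(t)-b(f(t))|^{2}dt\]
for absolutely continuous \(f\) with \(f(0)=x\), and \(I(f)=+\infty\) otherwise. Equivalently, \(I(f)\) is the minimal energy \(\frac{1}{2}\int_{0}^{T}|\phi(t)|^{2}dt\) of a control \(\phi\) driving the skeleton equation \(\dot{f}=b(f)+\phi\); the rate functions (2.5) and (3.6) below have exactly this structure, with the skeleton ODE replaced by the skeleton obstacle problem (3.3).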
**Definition 2.4**.: _[_5_, Def. 4.2]_ _Let \(I\) be a rate function on \(E\). A family \(\{X^{\epsilon}\}_{\epsilon>0}\) of \(E\)-valued random elements is said to satisfy the Laplace principle on \(E\) with rate function \(I\) if for all real-valued, bounded and continuous functions \(h\) on \(E\):_
\[\lim_{\epsilon\to 0}\epsilon\log\mathbb{E}\big{\{}\exp[-\frac{1}{\epsilon}h(X^{ \epsilon})]\big{\}}=-\inf_{x\in E}\{h(x)+I(x)\}. \tag{2.3}\]
We recall that the large deviation principle and the Laplace principle are equivalent for \(E\)-valued random elements, see _e.g._[5].
Now, let us introduce some notations to formulate LDP.
\[\text{For }N\in\mathbb{N}:\quad S_{N}=\Big{\{}\phi\in L^{2}([0,T];H_{0}); \quad\int_{0}^{T}\|\phi(s)\|_{H_{0}}^{2}ds\leq N\Big{\}},\]
\[\mathscr{A}=\Big{\{}v:\quad v\text{ is }H_{0}\text{-valued }\text{ predictable process and }P\big{(}\int_{0}^{T}\|v(s)\|_{H_{0}}^{2}ds<\infty\big{)}=1\Big{\}}.\]
Recall that the set \(S_{N}\) endowed with the weak topology is a Polish space and define
\[\mathscr{A}_{N}=\{v\in\mathscr{A}:v(\omega)\in S_{N}\text{ P-a.s.}\}. \tag{2.4}\]
Let \(g^{\delta}:C([0,T],H)\to E\) be a measurable map such that \(g^{\delta}(W(\cdot))=X^{\delta}\).
_Assumption (H)_.: Suppose that there exists a measurable map \(g^{0}:C([0,T];H)\to E\) such that
1. For every \(N<\infty\), any family \(\{v^{\delta}:\delta>0\}\subset\mathscr{A}_{N}\) and any \(\epsilon>0\), \[\lim_{\delta\to 0}P(\rho(Y^{\delta},Z^{\delta})>\epsilon)=0,\] where \(Y^{\delta}=g^{\delta}(W(\cdot)+\frac{1}{\delta}\int_{0}^{\cdot}v _{s}^{\delta}ds),Z^{\delta}=g^{0}(\int_{0}^{\cdot}v_{s}^{\delta}ds)\) and \(\rho(\cdot,\cdot)\) stands for the metric in the space \(E\).
2. For every \(N<\infty\) and any family \(\{v^{\delta}:\delta>0\}\subset S_{N}\) such that \(v^{\delta}\) converges weakly to some element \(v\) as \(\delta\to 0\), \(g^{0}(\int_{0}^{\cdot}v_{s}^{\delta}ds)\) converges to \(g^{0}(\int_{0}^{\cdot}v_{s}ds)\) in the space \(E\).
Let us recall the following result from [17], which will be used to prove Theorem 3.2.
**Theorem 2.2**.: _[_17_, Theorem 3.2]_ _If \(X^{\delta}=g^{\delta}(W(\cdot))\) and the Assumption (H) holds, then the family \(\{X^{\delta}\}_{\delta>0}\) satisfies the Large deviation principle on \(E\) with the rate function \(I\) given by_
\[I(f)=\inf_{\{\phi\in L^{2}([0,T],H_{0});f=g^{0}(\int_{0}^{\cdot}\phi_{s}ds)\} }\big{\{}\frac{1}{2}\int_{0}^{T}\|\phi(s)\|_{H_{0}}^{2}ds\big{\}}, \tag{2.5}\]
_with the convention \(\inf\{\emptyset\}=\infty\)._
## 3. Formulation of the problem and main results
### Obstacle problem with small noise
Let \(\delta>0\), we are concerned with the small noise large deviation principle (LDP) of the following problem:
\[\begin{cases}du_{\delta}+A(u_{\delta},\cdot)ds+k_{\delta}ds=fds+\delta G(\cdot,u_{\delta})dW&\text{in}\quad D\times\Omega_{T},\\ u_{\delta}(t=0)=u_{0}&\text{in}\quad D,\\ u_{\delta}\geq\psi&\text{in}\quad D\times\Omega_{T},\\ u_{\delta}=0&\text{on}\quad\partial D\times\Omega_{T},\\ \langle k_{\delta},u_{\delta}-\psi\rangle=0\,\ \text{and}\,\ k_{\delta}\leq 0&\text{a.e. in}\quad\Omega_{T}.\end{cases} \tag{3.1}\]
Thanks to Theorem 2.1, there exists a unique solution \(X_{\delta}:=(u_{\delta},k_{\delta})\) to Problem (3.1) in the sense of Definition 2.1. In particular, \(\{X_{\delta}\}_{\delta}\) takes values in
\[(C([0,T];H)\cap L^{p}(0,T;V))\times L^{p^{\prime}}(0,T;V^{\prime})\quad P-a.s.\]
It is well known that \((C([0,T];H)\cap L^{p}(0,T;V),|\cdot|_{T})\) is a Polish space with the following norm
\[|f-g|_{T}=\sup_{t\in[0,T]}\|f-g\|_{H}+\Big{(}\int_{0}^{T}\|f-g\|_{V}^{p}ds\Big{)} ^{1/p}.\]
The existence of a unique strong solution of the obstacle problem (3.1) ensures the existence of a Borel-measurable map (see _e.g._[20, Thm. 2.1 ])
\[g^{\delta}:C([0,T];H)\to C([0,T];H)\cap L^{p}(0,T;V) \tag{3.2}\]
such that \(u_{\delta}=g^{\delta}(W)\) P-a.s. In order to state the main results, let us introduce the skeleton equation associated to (3.1). Let \(\phi\in L^{2}([0,T];H_{0})\), we are looking for \((y^{\phi},R^{\phi})\) solution to
\[\begin{cases}\dfrac{dy^{\phi}}{dt}+A(y^{\phi},\cdot)+R^{\phi}=f+G( \cdot,y^{\phi})\phi\quad\text{ in }V^{\prime},y^{\phi}(0)=u_{0}\text{ in }H,\\ y^{\phi}\geq\psi\text{ a.e. in }\quad D\times[0,T],\quad-R^{\phi}\in(V^{\prime})^{+}: \langle R^{\phi},y^{\phi}-\psi\rangle_{V^{\prime},V}=0\text{ a.e. in }[0,T].\end{cases} \tag{3.3}\]
**Definition 3.1**.: The pair \((y^{\phi},R^{\phi})\) is a solution to (3.3) if and only if:
* \((y^{\phi},R^{\phi})\in(L^{p}(0,T;V)\cap C([0,T];H))\times L^{p^{\prime}}(0,T;V ^{\prime})\), \(y^{\phi}(0)=u_{0}\text{ and }y^{\phi}\geq\psi\).
* \(-R^{\phi}\in(L^{p^{\prime}}(0,T;V^{\prime}))^{+}\text{ and }\int_{0}^{T}\langle R^{\phi},y^{\phi}-\psi \rangle ds=0\).
* For all \(t\in[0,T]\),
\[(y^{\phi}(t),\Phi)+\int_{0}^{t}\langle R^{\phi},\Phi\rangle ds+\int_{0}^{t} \langle A(y^{\phi},\cdot),\Phi\rangle ds=(u_{0},\Phi)+\int_{0}^{t}(f+G(\cdot, y^{\phi})\phi,\Phi)ds,\quad\forall\Phi\in V.\]
**Remark 3.1**.: If \((y^{\phi},R^{\phi})\) is a solution to (3.3) in the sense of Definition 3.1. Then, \(y^{\phi}\) satisfies the following variational inequality: for any \(v\in L^{p}(0,T;V)\) such that \(v\geq\psi\)
\[\int_{0}^{T}\langle\dfrac{dy^{\phi}}{ds}+A(\cdot,y^{\phi})-G(\cdot,y^{\phi}) \phi-f,v-y^{\phi}\rangle ds\geq 0.\]
**Proposition 3.1**.: _Under Assumptions H\({}_{1}\)-H\({}_{6}\), there exists a unique solution \((y^{\phi},R^{\phi})\) to (3.3) in the sense of Definition 3.1. Moreover, the following Lewy-Stampacchia inequality holds:_
\[0\leq\dfrac{dy^{\phi}}{ds}+A(y^{\phi},\cdot)-G(y^{\phi},\cdot)\phi-f\leq h^{ -}=\left(f-\partial_{t}\psi-A(\psi,\cdot)\right)^{-}. \tag{3.4}\]
Proof.: The proof of Proposition 3.1 will be given in Section 4.
Now, let us define the measurable mapping \(g^{0}\) as follows
\[g^{0}:C([0,T];H) \to L^{p}(0,T;V)\cap C([0,T];H)\] \[\int_{0}^{\cdot}\phi ds \mapsto g^{0}(\int_{0}^{\cdot}\phi ds):=y^{\phi}\quad\text{ for }\phi\in L^{2}([0,T];H_{0}), \tag{3.5}\]
where \(y^{\phi}\in L^{p}(0,T;V)\cap C([0,T];H)\) is the unique solution of (3.3). Now,
introduce the following rate function
\[I(y)=\inf_{\{\phi\in L^{2}([0,T],H_{0})\}}\big{\{}\frac{1}{2}\int_{0}^{T}\| \phi(s)\|_{H_{0}}^{2}ds;\quad y:=y^{\phi}=g^{0}(\int_{0}^{\cdot}\phi ds)\big{\}}. \tag{3.6}\]
The main result is given as follows.
**Theorem 3.2**.: _Assume \(H_{1}\)-\(H_{6}\) hold. Let \(\{u^{\delta}\}_{\delta>0}\) be the unique solution of (3.1). Then_
1. \(\{u^{\delta}\}_{\delta}\) _satisfies LDP on_ \(C([0,T];H)\)_, as_ \(\delta\to 0\)_, with the rate function_ \(I\) _given by (_3.6_)._
2. _If moreover_ \(H_{7}\) _holds. Then_ \(\{u^{\delta}\}_{\delta}\) _satisfies LDP on_ \(C([0,T];H)\cap L^{p}(0,T;V)\)_, as_ \(\delta\to 0\)_, with the rate function_ \(I\) _given by (_3.6_)._
Proof.: Theorem 3.2 is a consequence of Lemma 5.1 and Lemma 5.2, see Section 5.
_Example_.: For \(1<p<\infty\), consider the p-Laplace operator given by
\[B(u)=-\text{div}[|\nabla u|^{p-2}\nabla u],\quad u\in W_{0}^{1,p}(D)=V.\]
The unique solution \((u_{\delta})_{\delta>0}\) of (3.1), with \(A=B\) and \(f,\psi,u_{0}\) and \(G\) satisfying \(H_{3}\)-\(H_{6}\), satisfies the LDP in \(C([0,T];H)\cap L^{p}(0,T;V)\) when \(p\geq 2\), and the LDP in \(C([0,T];H)\) when \(\max(1,\frac{2d}{d+2})<p<2\).
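For completeness, let us indicate (a standard computation, not spelled out above) why the p-Laplace operator fits these assumptions when \(p\geq 2\): the elementary vector inequality \((|a|^{p-2}a-|b|^{p-2}b)\cdot(a-b)\geq c_{p}|a-b|^{p}\), valid for all \(a,b\in\mathbb{R}^{d}\) with a constant \(c_{p}>0\) depending only on \(p\geq 2\), yields
\[\langle B(u)-B(v),u-v\rangle=\int_{D}\big{(}|\nabla u|^{p-2}\nabla u-|\nabla v|^{p-2}\nabla v\big{)}\cdot\nabla(u-v)\,dx\geq c_{p}\|u-v\|_{V}^{p},\]
so that \(B\) satisfies the strong monotonicity \(H_{7}\) (with \(\lambda_{T}=0\)), in addition to the coercivity and growth conditions in \(H_{2}\). For \(1<p<2\) this inequality is no longer available with the exponent \(p\), and only the (T-)monotonicity of \(H_{2,2}\) remains, which explains the distinction between the two regimes in the example.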
## 4. Well-posedness of skeleton Equations (3.3)
Let us start with the following result, which ensures the uniqueness of the solution to (3.3).
**Lemma 4.1**.: _Let \((y_{1},R_{1})\) and \((y_{2},R_{2})\) be two solutions to (3.3) in the sense of Definition 3.1 associated with \((f_{1},\phi_{1})\) and \((f_{2},\phi_{2})\). Then, there exists a positive constant \(C>0\) such that_
\[\sup_{t\in[0,T]}\|(y_{1}-y_{2})(t)\|_{H}^{2}\leq C\|f_{1}-f_{2}\|_{L^{p^{\prime }}(0,T;V^{\prime})}\|y_{1}-y_{2}\|_{L^{p}(0,T;V)}+\int_{0}^{T}\|\phi_{1}-\phi_{ 2}\|_{H_{0}}^{2}ds.\]
_Moreover, if \(H_{7}\) holds then_
\[\sup_{t\in[0,T]}\|(y_{1}-y_{2})(t)\|_{H}^{2}+\int_{0}^{T}\|y_{1}-y_{2}\|_{V}^{ p}ds\leq C\|f_{1}-f_{2}\|_{L^{p^{\prime}}(0,T;V^{\prime})}\|y_{1}-y_{2}\|_{L^{p}( 0,T;V)}+C\int_{0}^{T}\|\phi_{1}-\phi_{2}\|_{H_{0}}^{2}ds,\]
_where \(C:=C(M,\phi,G)\) and \(\phi_{1},\phi_{2}\in L^{2}([0,T];H_{0})\)._
Proof of Lemma 4.1.: Let \(t\in[0,T]\) and let \((y_{1},R_{1})\) and \((y_{2},R_{2})\) be two solutions to (3.3) in the sense of Definition 3.1 with \((f_{1},\phi_{1})\) and \((f_{2},\phi_{2})\). By using \(y_{1}-y_{2}\) as a test function in the equation satisfied by \(y_{1}-y_{2}\) and integrating in time from \(0\) to \(t\), one gets
\[\frac{1}{2}\|(y_{1}-y_{2})(t)\|_{H}^{2}+\int_{0}^{t}\langle A(y_{1},\cdot)-A(y_{2},\cdot),y_{1}-y_{2}\rangle ds+\int_{0}^{t}\langle R_{1}-R_{2},y_{1}-y_{2}\rangle ds\] \[=\int_{0}^{t}(G(\cdot,y_{1})\phi_{1}-G(\cdot,y_{2})\phi_{2},y_{1}-y_{2})ds+\int_{0}^{t}\langle f_{1}-f_{2},y_{1}-y_{2}\rangle ds.\]
* Since \(y_{1},y_{2}\) are solutions to (3.3) in the sense of Definition 3.1, it yields \[\langle R_{1}-R_{2},y_{1}-y_{2}\rangle=\langle R_{1},y_{1}-y_{2}\rangle+\langle R _{2},y_{2}-y_{1}\rangle\geq 0\quad\text{a.e. in }[0,T].\] Therefore \(\int_{0}^{t}\langle R_{1}-R_{2},y_{1}-y_{2}\rangle ds=\int_{0}^{t}(\langle R _{1},y_{1}-y_{2}\rangle+\langle R_{2},y_{2}-y_{1}\rangle)ds\geq 0\).
* By using \(H_{3}\), we get \[\int_{0}^{t}(G(\cdot,y_{1})\phi_{1}-G(\cdot,y_{2})\phi_{2},y_{1}-y _{2})ds\] \[\leq\int_{0}^{t}\|G(\cdot,y_{1})\|_{L_{2}(H_{0},H)}\|\phi_{1}- \phi_{2}\|_{H_{0}}\|y_{1}-y_{2}\|_{H}ds\] \[\qquad+\int_{0}^{t}\|G(\cdot,y_{1})-G(\cdot,y_{2})\|_{L_{2}(H_{0},H)}\|\phi_{2}\|_{H_{0}}\|y_{1}-y_{2}\|_{H}ds\]
\[\leq\int_{0}^{t}\|\phi_{1}-\phi_{2}\|_{H_{0}}^{2}ds+\int_{0}^{t}(M+\|\phi_{2}\|_{H _{0}}^{2}+\|G(\cdot,y_{1})\|_{L_{2}(H_{0},H)}^{2})\|y_{1}-y_{2}\|_{H}^{2}ds.\]
* From \(H_{2}\), one has \(\int_{0}^{t}\langle A(y_{1},\cdot)-A(y_{2},\cdot),y_{1}-y_{2}\rangle ds\geq-\lambda_{T}\int_{0}^{t}\|y_{1}-y_{2}\|_{H}^{2}ds\). Moreover, if \(\mathsf{H}_{7}\) holds, we have \[\int_{0}^{t}\langle A(y_{1},\cdot)-A(y_{2},\cdot),y_{1}-y_{2}\rangle ds\geq-\lambda_{T}\int_{0}^{t}\|y_{1}-y_{2}\|_{H}^{2}ds+\bar{\alpha}\int_{0}^{t}\|y_{1}-y_{2}\|_{V}^{p}ds.\]
* Hölder's inequality gives \(\sup_{t\in[0,T]}|\int_{0}^{t}\langle f_{1}-f_{2},y_{1}-y_{2}\rangle ds|\,\leq\,\|f_{1}-f_{2}\|_{L^{p^{\prime}}(0,T;V^{\prime})}\|y_{1}-y_{2}\|_{L^{p}(0,T;V)}.\)
Therefore, Gronwall's lemma ensures the existence of \(C>0\) such that
\[\int_{0}^{T}\|y_{1}-y_{2}\|_{V}^{p}ds+\sup_{t\in[0,T]}\|(y_{1}-y_{2})(t)\|_{H}^{2}\] \[\leq C\big{(}\|f_{1}-f_{2}\|_{L^{p^{\prime}}(0,T;V^{\prime})}\|y_{1}-y_{2}\|_{L^{p}(0,T;V)}+\int_{0}^{T}\|\phi_{1}-\phi_{2}\|_{H_{0}}^{2}ds\big{)},\]
where \(C:=C(M,\phi,G)<\infty\) and the first term on the left-hand side is present only when \(H_{7}\) holds.
We will prove Proposition 3.1 in two steps. First, Subsection 4.1 is devoted to the proof of Proposition 3.1 with regular data. Next, we will get the result in the general setting by using some approximations in Subsection 4.2.
### Well-posedeness of (3.3) with regular data
First, consider the following assumptions:
\(H^{*}\): Assume that \(\phi\in L^{\infty}([0,T];H_{0})\) and denote \(C_{\phi}=\|\phi\|_{L^{\infty}(0,T;H_{0})}\).
\(H^{**}\): Assume that \(h^{-}\) is a non-negative element of \(L^{\tilde{q}^{\prime}}(0,T;L^{\tilde{q}^{\prime}}(D))\), where \(\tilde{q}=\min(2,p)\). The proof of the following Theorem 4.2 results from Subsubsections 4.1.1-4.1.4.
**Theorem 4.2**.: _Under Assumptions (\(\mathsf{H}_{1}\))-(\(\mathsf{H}_{6}\)), and assuming moreover that \(H^{*}\) and \(H^{**}\) hold (i.e. \(h^{-}\in L^{\tilde{q}^{\prime}}(0,T;L^{\tilde{q}^{\prime}}(D))\)), there exists a unique solution \((y,\rho)\in L^{p}(0,T;V)\times L^{\tilde{q}^{\prime}}(0,T;L^{\tilde{q}^{\prime}}(D))\) such that:_
1. \(y\in C([0,T];H)\)_,_ \(y\geq\psi\)_,_ \(y(0)=u_{0}\) _and_ \(\rho\leq 0\)_._
2. \(\rho(y-\psi)=0\) _a.e. in_ \([0,T]\times D\) _and_ \(\forall v\in L^{p}(0,T;V);v\geq\psi\)_:_ \(\langle\rho,y-v\rangle\geq 0\) _a.e. in_ \([0,T]\)_._
3. _For all_ \(t\in[0,T]\)_,_ \[y(t)+\int_{0}^{t}\rho ds+\int_{0}^{t}A(y,\cdot)ds=u_{0}+\int_{0}^{t}G(y,\cdot) \phi ds+\int_{0}^{t}fds.\]
4. _The following Lewy-Stampacchia inequality holds:_ \[0\leq\frac{dy}{dt}+A(y,\cdot)-G(y,\cdot)\phi-f\leq h^{-}=\left(f-\partial_{t} \psi-A(\psi,\cdot)\right)^{-}.\]
#### 4.1.1. Penalization
Let \(\epsilon>0\) and consider the following approximation:
\[\left\{\begin{array}{l}\frac{dy_{\epsilon}}{dt}+A(y_{\epsilon},\cdot)-\widetilde{G}(y_{\epsilon},\cdot)\phi-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}=f\\ y_{\epsilon}(0)=u_{0},\end{array}\right. \tag{4.1}\]
where \(\widetilde{G}(y_{\epsilon},\cdot)=G(\max(y_{\epsilon},\psi),\cdot)\)5. Denote by \(\bar{A}(y_{\epsilon},\cdot)=A(y_{\epsilon},\cdot)-\widetilde{G}(\cdot,y_{ \epsilon})\phi-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}-f\).
Footnote 5: The proposed perturbation of \(\widetilde{G}\) makes the term coming from \(G\) formally vanish on the free set where the constraint is violated, which plays a crucial role in estimating the reflection due to the obstacle.
1. Note that \(\widetilde{G}(\cdot,y_{\epsilon})\phi:[0,T]\times H\to H\) satisfies \[\|\widetilde{G}(\cdot,y_{\epsilon})\phi\|_{H}^{2}\leq\|G(\cdot,\max(y_{ \epsilon},\psi))\|_{L_{2}(H_{0},H)}^{2}\|\phi\|_{H_{0}}^{2}\leq LC_{\phi}^{2}(1 +\|y_{\epsilon}\|_{H}^{2}+\|\psi\|_{H}^{2}).\] Thus, by the properties of the penalization term, \(A\) and \(f\) (see [27, Section 3.1]), we get that \(\bar{A}\) is an operator defined on \(V\times[0,T]\) with values in \(V^{\prime}\).
2. (Local monotonicity) Let \(y_{1},y_{2}\in V\), by using \(H_{3,1}\) we get \[([\widetilde{G}(\cdot,y_{1})-\widetilde{G}(\cdot,y_{2})]\phi,y_{1}-y_{2})\leq M\|\phi\|_{H_{0}}\|y_{1}-y_{2}\|_{H}^{2}\leq MC_{\phi}\|y_{1}-y_{ 2}\|_{H}^{2}.\] Thanks to \(H_{2,2}\), \(H_{3,1}\) and \(x\mapsto-x^{-}\) is non-decreasing, there exists \(C_{1}\in\mathbb{R}\) such that \[\langle\bar{A}(y_{1},\cdot)-\bar{A}(y_{2},\cdot),y_{1}-y_{2}\rangle\geq C_{1 }\|y_{1}-y_{2}\|_{H}^{2}\]
3. The structure of the penalization operator and \(H_{3,1}\) yield the hemi-continuity of \(\bar{A}\).
4. (Coercivity): Note that for any \(\delta>0\), there exists \(C_{\delta,\epsilon}>0\) such that: \(\forall v\in V,\) \[\langle f,v\rangle\leq C_{\delta}\|f\|_{V^{\prime}}^{p^{\prime}}+\delta\|v\|_{V}^{p} \quad,\] \[\langle-\frac{1}{\epsilon}[(v-\psi)^{-}]^{\tilde{q}-1},v\rangle\geq \langle-\frac{1}{\epsilon}[(v-\psi)^{-}]^{\tilde{q}-1},\psi\rangle\] \[\geq -\delta\|v\|_{L^{\tilde{q}}(D)}^{\tilde{q}}-C_{\delta,\epsilon} \|\psi\|_{L^{\tilde{q}}(D)}^{\tilde{q}}\geq-\delta C\|v\|_{V}^{p}-C_{\delta, \epsilon}\|\psi\|_{L^{\tilde{q}}(D)}^{\tilde{q}}-C_{\delta,\epsilon},\] \[(\widetilde{G}(\cdot,v)\phi,v)\leq \|\widetilde{G}(\cdot,v)\|_{L_{2}(H_{0},H)}\|\phi\|_{H_{0}}\|v\|_{ H}\leq\|\widetilde{G}(\cdot,v)\|_{L_{2}(H_{0},H)}^{2}+C_{\phi}^{2}\|v\|_{H}^{2}\] \[\leq (L+C_{\phi}^{2})\|v\|_{H}^{2}+L(1+\|\psi\|_{H}^{2}),\] where \(C\) is related to the continuous embedding of \(V\) in \(L^{\tilde{q}}(D)\). Denote by \(\tilde{l}_{1}=L(1+\|\psi\|_{H}^{2})+l_{1}+C_{\delta}\|f\|_{V^{\prime}}^{p^{ \prime}}+C_{\delta,\epsilon}\|\psi\|_{L^{\tilde{q}}(D)}^{\tilde{q}}\). It is a \(L^{\infty}([0,T])\) thanks to the assumptions on \(f\) and \(\psi\), depending only on the data. Therefore, by a convenient choice of \(\delta\), \(\bar{A}\) satisfies \(H_{2,1}\) by considering \(\tilde{l}_{1}\) instead of \(l_{1}\).
5. (Growth): Let \(v\in V,\) \[\|-\frac{1}{\epsilon}[(v-\psi)^{-}]^{\tilde{q}-1}\|_{L^{\tilde{q} }(D)}=\frac{1}{\epsilon}\|(v-\psi)^{-}\|_{L^{\tilde{q}}(D)}^{\tilde{q}-1} \leq C_{\epsilon}\left(\|v\|_{L^{\tilde{q}}(D)}^{\tilde{q}-1}+\|\psi\|_{L^ {\tilde{q}}(D)}^{\tilde{q}-1}\right)\] \[\leq C_{\epsilon}\left(\|v\|_{L^{\tilde{q}}(D)}^{p-1}+\|\psi\|_{L^ {\tilde{q}}(D)}^{p-1}\right)+C_{p}\] since \(\tilde{q}<p\) may be possible. Now, since the embeddings of \(L^{\tilde{q}}\left(D\right)\) in \(V^{\prime}\) and of \(V\) in \(L^{\tilde{q}}(D)\) are continuous, one has \(\|-\frac{1}{\epsilon}[(v-\psi)^{-}]^{\tilde{q}-1}\|_{V^{\prime}}\leq C_{ \epsilon}\left(\|v\|_{V}^{p-1}+\|\psi\|_{V}^{p-1}\right)+C_{p}\). Moreover, \(\|\widetilde{G}(\cdot,v)\phi\|_{H}\leq\|\widetilde{G}(\cdot,v)\|_{L_{2}(H_{0}, H)}\|\phi\|_{H_{0}}\leq C_{\phi}^{2}+L\|v\|_{H}^{2}+L(1+\|\psi\|_{H}^{2})\). Since the embeddings of \(H\) in \(V^{\prime}\) is continuous, we get \[\|\bar{A}(\cdot,v)\|_{V^{\prime}} \leq C_{\phi}^{2}+L\|v\|_{H}^{2}+L(1+\|\psi\|_{H}^{2})+C_{\epsilon }\left(\|v\|_{V}^{p-1}+\|\psi\|_{V}^{p-1}\right)+C_{p}+g+\bar{K}\|v\|_{V}^{p-1}\] \[\leq K_{\epsilon}\|v\|_{V}^{p-1}+L\|v\|_{H}^{2}+\tilde{g},\] where \(\tilde{g}=C_{\epsilon}\|\psi\|_{V}^{p-1}++L(1+\|\psi\|_{H}^{2})+g+C_{\phi}^{2}+C _{p}\in L^{\infty}([0,T])\). Therefore \(\bar{A}\) satisfies \(H_{2,3}\) with \(\tilde{g}\) instead of \(g\).
By using [15, Thm. 5.1.3], for all \(\epsilon>0\), there exists a unique solution \(y_{\epsilon}\in L^{p}(0,T;V)\cap C([0,T];H)\) to (4.1), satisfying:
\[y_{\epsilon}(t)+\int_{0}^{t}(A(y_{\epsilon},\cdot)-\widetilde{G}(y_{\epsilon}, \cdot)\phi-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1})ds=u_{0}+ \int_{0}^{t}fds\quad\text{ in }V^{\prime}\text{ for all }t\in[0,T].\]
#### 4.1.2. Uniform estimates
**Lemma 4.3**.:
1. \((y_{\epsilon})_{\epsilon>0}\) _is bounded in_ \(L^{p}(0,T;V)\cap C([0,T];H)\)_._
2. \((A(y_{\epsilon},\cdot))_{\epsilon>0}\) _is bounded in_ \(L^{p^{\prime}}(0,T;V^{\prime})\)_._
3. \((\frac{(y_{\epsilon}-\psi)^{-}}{\epsilon^{\frac{1}{\tilde{q}}}})_{\epsilon}\) _is bounded in_ \(L^{\tilde{q}}([0,T]\times D)\)_._
Proof.: Let \(\epsilon>0\) and note that \(\frac{d(y_{\epsilon}-\psi)}{dt}+\Big{(}A(y_{\epsilon},\cdot)- \widetilde{G}(\cdot,y_{\epsilon})\phi-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{ -}]^{\tilde{q}-1})=[f-\frac{d\psi}{dt}]\).
By using \(y_{\epsilon}-\psi\) as a test function and integrating in time from \(0\) to \(t\), we get
\[\|y_{\epsilon}-\psi\|_{H}^{2}(t)+2\int_{0}^{t}\langle A(y_{\epsilon},\cdot),y_{\epsilon}-\psi\rangle ds-2\int_{0}^{t}(\widetilde{G}(\cdot,y_{\epsilon})\phi,y_{\epsilon}-\psi)ds\] \[-2\int_{0}^{t}\int_{D}\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}(y_{\epsilon}-\psi)dxds=\|u_{0}-\psi(0)\|_{H}^{2}+2\int_{0}^{t}\langle f-\frac{d\psi}{ds},y_{\epsilon}-\psi\rangle ds.\]
Note that \(-2\int_{0}^{t}\int_{D}\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}(y_{\epsilon}-\psi)dxds=\frac{2}{\epsilon}\int_{0}^{t}\!\!\!\int_{D}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}}dxds\). By using \(H_{2,1}\), \(H_{2,3}\) and \(H_{3}\), we get
\[(\widetilde{G}(\cdot,y_{\epsilon})\phi,y_{\epsilon}-\psi)\leq\| \widetilde{G}(\cdot,y_{\epsilon})\|_{L_{2}(H_{0},H)}\|\phi\|_{H_{0}}\|y_{ \epsilon}-\psi\|_{H}\leq L\|y_{\epsilon}\|_{H}^{2}+C_{\phi}^{2}\|y_{\epsilon}- \psi\|_{H}^{2}+C(\psi);\] \[\langle A(y_{\epsilon},\cdot),y_{\epsilon}-\psi\rangle\geq \alpha\|y_{\epsilon}\|_{V}^{p}-\lambda\|y_{\epsilon}\|_{H}^{2}-l_ {1}-\langle A(y_{\epsilon},\cdot),\psi\rangle\] \[\geq \alpha\|y_{\epsilon}\|_{V}^{p}-\lambda\|y_{\epsilon}\|_{H}^{2}-l_ {1}-\bar{K}\|y_{\epsilon}\|_{V}^{p-1}\|\psi\|_{V}-g\|\psi\|_{V}\] \[\geq \frac{\alpha}{2}\|y_{\epsilon}\|_{V}^{p}-\lambda\|y_{\epsilon}\|_ {H}^{2}-l_{1}-C(\psi),\]
where \(C(\psi)\in L^{1}([0,T])\). Thus, for any positive \(\gamma\), Young's inequality yields the existence of a positive constant \(C_{\gamma}\) that may change form line to line, such that
\[\|y_{\epsilon}-\psi\|_{H}^{2}(t)+2\int_{0}^{t}\frac{\alpha}{2}\|y _{\epsilon}\|_{V}^{p}(s)ds+\frac{2}{\epsilon}\int_{0}^{t}\!\!\!\int_{D}[(y_{ \epsilon}-\psi)^{-}]^{\tilde{q}}dxds\leq(\lambda+L)\int_{0}^{t}\|y_{\epsilon} \|_{H}^{2}(s)ds\] \[+\|l_{1}+C(\psi)\|_{L^{1}([0,T])}+C_{\gamma}(f,\frac{d\psi}{dt})+ \gamma\int_{0}^{t}\|y_{\epsilon}-\psi\|_{V}^{p}(s)ds+C_{\phi}^{2}\int_{0}^{t }\|y_{\epsilon}-\psi\|_{H}^{2}(s)ds+Lt\] \[\qquad\leq C\int_{0}^{t}\|y_{\epsilon}-\psi\|_{H}^{2}(s)ds+\frac{ \alpha}{2}\int_{0}^{t}\|y_{\epsilon}\|_{V}^{p}(s)ds+C,\]
for a suitable choice of \(\gamma\). Then, the first and the third parts of the lemma are proved by Gronwall's lemma, and the second follows by adding \(\mathsf{H}_{2,3}\).
Note that Lemma 4.3\({}_{(3)}\) is not sufficient to pass to the limit in the penalization term. Thus, we will prove the next result, where \(H_{4}\), \(H_{5}\) and the perturbation of \(G\) will play a crucial role.
**Lemma 4.4**.: \((\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1})_{\epsilon>0}\) _is bounded in \(L^{\tilde{q}^{\prime}}([0,T]\times D)\)._
Proof.: From (4.1), we have
\[\frac{d(y_{\epsilon}-\psi)}{dt}+\Big{(}A(y_{\epsilon},\cdot)-A(\psi,\cdot)- \widetilde{G}(\cdot,y_{\epsilon})\phi-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^ {-}]^{\tilde{q}-1})=[f-\frac{d\psi}{dt}-A(\psi,\cdot)]\]
By using the admissible test function \(-(y_{\epsilon}-\psi)^{-}\) in (4.1), integrating in time from \(0\) to \(t\) (see [11, Corollary 4.5]) and using that \(u_{0}\geq\psi(0)\), we obtain
\[\|(y_{\epsilon}-\psi)^{-}\|_{H}^{2}(t)+2\int_{0}^{t}\langle A(y_{ \epsilon},\cdot)-A(\psi,\cdot),-(y_{\epsilon}-\psi)^{-}\rangle ds+2\int_{0}^{t }\int_{D}\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}}dxds\\ =2\int_{0}^{t}(\widetilde{G}(\cdot,y_{\epsilon})\phi,-(y_{ \epsilon}-\psi)^{-})ds+2\int_{0}^{t}\langle f-\frac{d\psi}{ds}-A(\psi,\cdot),-( y_{\epsilon}-\psi)^{-}\rangle ds\\ \leq 2\int_{0}^{t}\langle-h^{-},-(y_{\epsilon}-\psi)^{-}\rangle ds.\]
Note that \((\widetilde{G}(\cdot,y_{\epsilon})\phi,-(y_{\epsilon}-\psi)^{-})=(G(\cdot,\psi)\phi,-(y_{\epsilon}-\psi)^{-})=0\) a.e. in \([0,T]\), since \(\widetilde{G}(\cdot,y_{\epsilon})=G(\cdot,\psi)\) on the set \(\{y_{\epsilon}\leq\psi\}\) and \(G(\psi,\cdot)=0\) by \(H_{4}\).
On the other hand, \(H_{2,2}\) ensures \(\langle A(y_{\epsilon},\cdot)-A(\psi,\cdot),-2(y_{\epsilon}-\psi)^{-}\rangle\geq-2\lambda_{T}\|(y_{\epsilon}-\psi)^{-}\|_{H}^{2}\), a.e. \(t\in[0,T]\), since the left-hand side is equal to \(2\langle A(\psi,\cdot)-A(y_{\epsilon},\cdot),(\psi-y_{\epsilon})^{+}\rangle\geq-2\lambda_{T}\|(\psi-y_{\epsilon})^{+}\|_{H}^{2}\). By using \(H_{5}\), we get
\[\|(y_{\epsilon}-\psi)^{-}(t)\|_{L^{2}(D)}^{2}+\frac{2}{\epsilon} \int_{0}^{t}\|(y_{\epsilon}-\psi)^{-}(s)\|_{L^{\tilde{q}}(D)}^{\tilde{q}}ds\\ \leq 2\int_{0}^{t}\langle h^{-}(s),(y_{\epsilon}-\psi)^{-}(s) \rangle ds+2\lambda_{T}\int_{0}^{t}\|(y_{\epsilon}-\psi)^{-}(s)\|_{H}^{2}ds. \tag{4.2}\]
We are in a position to use arguments similar to those used in the last part of the proof of [27, Lemma 3] to conclude.
As a consequence, the following lemma holds.
**Lemma 4.5**.: \((y_{\epsilon})_{\epsilon>0}\) _is a Cauchy sequence in the space \(C([0,T];H)\)._
Proof.: Let \(1>\epsilon\geq\eta>0\) and consider \(y_{\epsilon}-y_{\eta}\), which satisfies the following equation
\[y_{\epsilon}(t)-y_{\eta}(t)+\int_{0}^{t}(A(y_{\epsilon},\cdot)-A (y_{\eta},\cdot)) +(-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}+ \frac{1}{\eta}[(y_{\eta}-\psi)^{-}]^{\tilde{q}-1})ds\] \[=\int_{0}^{t}(\widetilde{G}(y_{\epsilon},\cdot)\phi-\widetilde{G }(y_{\eta},\cdot)\phi)ds.\]
By using \(y_{\epsilon}-y_{\eta}\) as a test function and integrating from \(0\) to \(t\), one gets for any \(t\in[0,T]\)
\[\frac{1}{2}\|(y_{\epsilon}-y_{\eta})(t)\|_{H}^{2}+\int_{0}^{t}\langle A(y_{\epsilon},\cdot)-A(y_{\eta},\cdot),y_{\epsilon}-y_{\eta}\rangle ds\\ +\int_{0}^{t}\langle-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}+\frac{1}{\eta}[(y_{\eta}-\psi)^{-}]^{\tilde{q}-1},y_{\epsilon}-y_{\eta}\rangle ds=\int_{0}^{t}(\widetilde{G}(y_{\epsilon},\cdot)\phi-\widetilde{G}(y_{\eta},\cdot)\phi,y_{\epsilon}-y_{\eta})ds\]
We argue as in the proof of Lemma 4.1 with \(f_{1}=f_{2}\) and note that we need only to discuss the penalization term. By using the monotonicity of the penalization operator, arguments already detailed in the proof of [27, Lemma 4] lead to
\[\sup_{t\in[0,T]}\|(y_{\epsilon}-y_{\eta})(t)\|_{H}^{2}\leq C(\epsilon+ \epsilon^{\frac{1}{p-1}})+C\int_{0}^{T}\sup_{\tau\in[0,s]}\|(y_{\epsilon}-y_{ \eta})(\tau)\|_{H}^{2}ds\]
and Gronwall's lemma ensures that \((y_{\epsilon})_{\epsilon>0}\) is a Cauchy sequence in the space \(C([0,T];H)\)
#### 4.1.3. Existence of solution
**Lemma 4.6**.: _There exist \(y\in L^{p}(0,T;V)\cap C([0,T];H)\) and \((\rho,\chi)\in L^{\tilde{q}^{\prime}}(0,T;L^{\tilde{q}^{\prime}}(D))\times L^{p^{\prime}}(0,T;V^{\prime})\) such that the following convergences hold, up to sub-sequences denoted in the same way,_
\[y_{\epsilon}\rightharpoonup y\quad\text{in}\quad L^{p}(0,T;V), \tag{4.3}\] \[y_{\epsilon}\to y\quad\text{in}\quad C([0,T];H), \tag{4.4}\] \[A(y_{\epsilon},\cdot)\rightharpoonup\chi\quad\text{in}\quad L^{p^{\prime}}(0,T;V^{\prime}), \tag{4.5}\] \[-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}\rightharpoonup\rho,\quad\rho\leq 0\quad\text{in}\quad L^{\tilde{q}^{\prime}}([0,T]\times D). \tag{4.6}\]
Proof.: By compactness with respect to the weak topology in the spaces \(L^{p}(0,T;V)\), \(L^{p^{\prime}}(0,T;V^{\prime})\) and \(L^{\tilde{q}^{\prime}}([0,T]\times D)\), there exist \(y\in L^{p}(0,T;V)\), \(\chi\in L^{p^{\prime}}(0,T;V^{\prime})\) and \(\rho\in L^{\tilde{q}^{\prime}}([0,T]\times D)\) such that (4.3), (4.5) and (4.6) hold (for sub-sequences). Thanks to Lemma 4.5, we get the strong convergence of \(y_{\epsilon}\) to \(y\) in \(C([0,T];H)\hookrightarrow L^{2}([0,T]\times D)\). Moreover, \(\rho\leq 0\) since the set of non-positive functions of \(L^{\tilde{q}^{\prime}}([0,T]\times D)\) is a closed convex subset of \(L^{\tilde{q}^{\prime}}([0,T]\times D)\).
Concerning the initial condition and constraint, we get
* Lemma 4.5 ensures that \(y_{\epsilon}(0)=u_{0}\) converges to \(y(0)\) in \(H\), hence \(y(0)=u_{0}\) in \(H\).
* Thanks to Lemma 4.4 and Lemma 4.5, we deduce that \((y_{\epsilon}-\psi)^{-}\to(y-\psi)^{-}=0\) in \(L^{\tilde{q}}([0,T]\times D)\) and \(y\geq\psi\) a.e.
**Lemma 4.7**.: \(\widetilde{G}(\cdot,y_{\epsilon})\phi\to\widetilde{G}(\cdot,y)\phi=G(\cdot,y )\phi\) _in \(L^{2}(0,T;H)\), as \(\epsilon\to 0\)._
Proof.: We have
\[\int_{0}^{T}\|\widetilde{G}(\cdot,y_{\epsilon})\phi-\widetilde{G }(\cdot,y)\phi\|_{H}^{2}ds \leq\int_{0}^{T}\|\widetilde{G}(\cdot,y_{\epsilon})-\widetilde{G }(\cdot,y)\|_{L_{2}(H_{0},H)}^{2}\|\phi\|_{H_{0}}^{2}dt\] \[\leq MC_{\phi}^{2}\int_{0}^{T}\|y_{\epsilon}-y\|_{H}^{2}dt\to 0.\]
\(\widetilde{G}(\cdot,y)\phi=G(\cdot,y)\phi\) is a consequence of \(y\geq\psi\).
**Lemma 4.8**.: \(\rho(y-\psi)=0\) a.e. in \([0,T]\times D\) and for any \(v\in L^{p}(0,T;V);v\geq\psi\), \(\rho(y-v)\geq 0\) a.e. in \([0,T]\times D\).
Proof.: On one hand, by Lemma 4.4, we have
\[0\leq-\frac{1}{\epsilon}\int_{0}^{t}\langle[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1},y_{\epsilon}-\psi\rangle ds=\frac{1}{\epsilon}\int_{0}^{t}\|(y_{\epsilon}-\psi)^{-}(s)\|_{L^{\tilde{q}}}^{\tilde{q}}ds\leq C\epsilon^{\tilde{q}^{\prime}-1}\to 0.\]
On the other hand, by Lemma 4.6, we distinguish two cases:
* If \(p\geq 2\) then \(-\frac{1}{\epsilon}(y_{\epsilon}-\psi)^{-}\rightharpoonup\rho\) in \(L^{2}([0,T]\times D)\) and \(y_{\epsilon}-\psi\to y-\psi\) in \(L^{2}([0,T]\times D)\) by Lemma 4.5. Hence \(\int_{0}^{T}\int_{D}\rho(y-\psi)dxdt=0\) and \(\rho(y-\psi)=0\) a.e. in \([0,T]\times D\), since the integrand is always non-positive.
* If \(2>p>1\) then \(-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{p-1}\rightharpoonup\rho\) in \(L^{p^{\prime}}([0,T]\times D)\) and \(y_{\epsilon}-\psi\to y-\psi\) in \(L^{p}([0,T]\times D)\) by Lemma 4.5 and the same conclusion holds.
One finishes the proof by noticing that if \(v\in L^{p}(0,T;V);v\geq\psi\), one has a.e. in \([0,T]\times D\) that,
\[\rho(y-v)=\overbrace{\rho(y-\psi)}^{=0}+\overbrace{\rho(\psi-v)}^{\geq 0} \geq 0.\]
Our aim now is to prove that \(A(y,\cdot)=\chi\). For any \(t\in[0,T]\), we have
\[y_{\epsilon}(t)-y(t)+\int_{0}^{t}[(A(y_{\epsilon},\cdot)-\chi)+(-\frac{1}{ \epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}-\rho)]ds=\int_{0}^{t}[G(y_{ \epsilon},\cdot)-G(y,\cdot)]\phi ds\text{ in }V^{\prime}.\]
By using \(y_{\epsilon}-y\) as a test function and integrating from \(0\) to \(t\), we obtain
\[\frac{1}{2}\|(y_{\epsilon}-y)(t)\|_{H}^{2}+\overbrace{\int_{0}^{t}\langle A( y_{\epsilon},\cdot)-\chi,y_{\epsilon}-y\rangle ds}^{I_{1}}+\overbrace{\int_{0}^{t}\langle- \frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1}-\rho,y_{\epsilon}- y\rangle ds}^{I_{2}}\]
\[=\overbrace{\int_{0}^{t}\langle(G(y_{\epsilon},\cdot)-G(y,\cdot))\phi,y_{ \epsilon}-y\rangle ds}^{I_{3}}\]
Let \(v\in L^{p}(0,T;V)\cap C([0,T];H)\) and \(t\in]0,T]\) and note the following:
* \(I_{1}=\int_{0}^{t}\langle A(y_{\epsilon},\cdot),y_{\epsilon}\rangle ds-\int_{0}^{t}\langle A(y_{\epsilon},\cdot),y\rangle ds-\int_{0}^{t}\langle\chi,y_{\epsilon}-y\rangle ds\) and \[\int_{0}^{t}\langle A(y_{\epsilon},\cdot),y_{\epsilon}\rangle ds=\int_{0}^{t}\langle A(y_{\epsilon},\cdot)-A(v,\cdot),y_{\epsilon}-v\rangle ds+\int_{0}^{t}\langle A(v,\cdot),y_{\epsilon}-v\rangle ds+\int_{0}^{t}\langle A(y_{\epsilon},\cdot),v\rangle ds\] \[\geq\int_{0}^{t}\langle A(v,\cdot),y_{\epsilon}-v\rangle ds+\int_{0}^{t}\langle A(y_{\epsilon},\cdot),v\rangle ds-\lambda_{T}\int_{0}^{t}\|y_{\epsilon}-v\|_{H}^{2}ds.\]
* We have \(\int_{0}^{t}\langle-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1 },y_{\epsilon}-y\rangle ds\geq\int_{0}^{t}\langle-\frac{1}{\epsilon}[(y_{ \epsilon}-\psi)^{-}]^{\tilde{q}-1},\psi-y\rangle ds.\)
Thanks to \(H_{3}\) we have \(|I_{3}|\leq MC_{\phi}^{2}\int_{0}^{t}\|y_{\epsilon}(s)-y(s)\|_{H}^{2}ds.\) Thus, we are able to infer
\[\frac{1}{2}\|(y_{\epsilon}-y)(t)\|_{H}^{2}+\int_{0}^{t}\langle A(v,\cdot),y_{ \epsilon}-v\rangle ds+\int_{0}^{t}\langle A(y_{\epsilon},\cdot),v-y\rangle ds -\int_{0}^{t}\langle\chi,y_{\epsilon}-y\rangle ds\]
\[+\int_{0}^{t}\langle-\frac{1}{\epsilon}[(y_{\epsilon}-\psi)^{-}]^{\tilde{q}-1 },\psi-y\rangle ds-\int_{0}^{t}\langle\rho,y_{\epsilon}-y\rangle ds\leq(MC_ {\phi}^{2}+\lambda_{T})\int_{0}^{t}\|y_{\epsilon}(s)-y(s)\|_{H}^{2}ds.\]
By setting \(t=T\) and passing to the limit as \(\epsilon\to 0\), thanks to Lemmas 4.6 and 4.8, we get
\[\int_{0}^{T}\langle A(v,\cdot)-\chi,y-v\rangle ds\leq\int_{0}^{T}\langle\rho,y -\psi\rangle ds=0.\]
We are now in a position to use "Minty's trick" [22, Lemma 2.13 p.35] and deduce that \(A(y,\cdot)=\chi\).
#### 4.1.4. Lewy-Stampacchia's inequality
In order to treat a more general setting, one needs to estimate \(\rho\) in a satisfactory way. Hence, we prove a Lewy-Stampacchia inequality, which gives lower and upper bounds for \(\rho\) in a dual sense, where the dual order assumption in \(H_{5}\) is crucial. First, note that \(\frac{dy}{dt}+A(y,\cdot)-G(y,\cdot)\phi-f=-\rho\geq 0.\) Moreover, we have
\[0\leq\frac{dy}{dt}+A(y,\cdot)-G(y,\cdot)\phi-f\leq h^{-}\text{ in the }L^{\tilde{q}^{\prime}}([0,T]\times D)\text{-sense}. \tag{4.7}\]
Indeed, let \(y\) be the unique solution given at the end of Subsection 4.1.3. Denote by \(K_{1}\) the closed convex set \(K_{1}=\{v\in L^{p}(0,T;V),\quad v\leq y\quad\text{a.e. in }D\times[0,T]\}.\) We recall that \(y\) satisfies
\[(f+h^{-})-\frac{dy}{dt}-A(y,\cdot)+G(y,\cdot)\phi=h^{-}+\rho,\quad\rho\leq 0, \quad\rho\in L^{\tilde{q}^{\prime}}([0,T]\times D).\]
Consider the following auxiliary problem: \((z,\nu)\in L^{p}(0,T;V)\times L^{\tilde{q}^{\prime}}(0,T;L^{\tilde{q}^{\prime} }(D))\) such that
\[\left\{\begin{array}{ll}i.)&z\in C([0,T];H),\quad z(0)=u_{0}\quad\text{and} \quad z\in K_{1},\\ ii.)&\nu\geq 0,\quad\langle\nu,z-y\rangle=0\text{ a.e. in }[0,T]\text{ and }\forall v\in K_{1},\ \langle\nu,z-v\rangle\geq 0\text{ a.e. in }[0,T].\\ iii.)&\text{For any }t\in[0,T]:\\ z(t)+\int_{0}^{t}\nu ds+\int_{0}^{t}A(z,\cdot)ds=u_{0}+\int_{0}^{t}G(z,\cdot) \phi ds+\int_{0}^{t}(f+h^{-})ds.\end{array}\right. \tag{4.8}\]
Note that the existence and uniqueness of the solution \((z,\nu)\) to (4.8) can be proved, with minor modifications of the arguments of Subsections 4.1.1-4.1.3, by passing to the limit in the following penalized problem:
\[\left\{\begin{array}{ll}z_{\epsilon}(t)+\int_{0}^{t}(A(z_{\epsilon},\cdot)+ \frac{1}{\epsilon}[(z_{\epsilon}-y)^{+}]^{\tilde{q}-1}-(f+h^{-}))ds=u_{0}+\int _{0}^{t}\widetilde{G}(z_{\epsilon},\cdot)\phi ds\\ z_{\epsilon}(0)=u_{0},\end{array}\right.\]
where \(\widetilde{G}(z_{\epsilon},\cdot)=G(\min(z_{\epsilon},y),\cdot).\) Moreover,
\[\frac{dz}{dt}+A(z,\cdot)-G(z,\cdot)\phi-(f+h^{-})=-\nu\leq 0\quad\text{ in }\quad L^{\tilde{q}^{\prime}}([0,T]\times D)\]
and \(z\) satisfies the following Lewy-Stampacchia inequality:
\[\frac{dz}{dt}+A(z,\cdot)-G(z,\cdot)\phi-f\leq h^{-}\quad\text{ in }\quad L^{\tilde{q}^{\prime}}([0,T]\times D).\]
Since the solution of (3.3) in Theorem 4.2 is unique, the proof of Lewy-Stampacchia's inequality (4.7) follows by using the same arguments presented in [27, Subsection 3.2], by showing that \(z=y\).
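The penalization mechanism used throughout this subsection can be illustrated on a much simpler deterministic model. The sketch below is only a toy example (it is not the scheme analysed above): it approximates the heat equation constrained by \(u\geq\psi\) by adding the penalty term \(\frac{1}{\epsilon}(\psi-u)^{+}\); the grid, \(\epsilon\), the obstacle and the initial datum are illustrative choices.

```python
import numpy as np

# Toy illustration of the penalisation idea (NOT the scheme analysed in the
# paper): u_t = u_xx + (1/eps)*(psi - u)^+ on (0,1) with homogeneous
# Dirichlet data, explicit Euler in time.  All parameters are illustrative.
N = 100
dx = 1.0 / N
dt = 0.25 * dx**2            # stable explicit step (dt/dx^2 <= 1/2)
eps = 1e-3                   # penalisation parameter
T = 0.1

x = np.linspace(0.0, 1.0, N + 1)
psi = 0.2 - (x - 0.5)**2      # obstacle, negative near the boundary
u = np.maximum(np.sin(np.pi * x), psi)   # admissible initial datum

t = 0.0
while t < T:
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
    penalty = np.maximum(psi - u, 0.0) / eps   # pushes u back above psi
    u = u + dt * (lap + penalty)
    u[0] = u[-1] = 0.0        # homogeneous Dirichlet boundary data
    t += dt

# The constraint is violated only by an amount that vanishes with eps.
print("min(u - psi) on the interior:", float(np.min((u - psi)[1:-1])))
```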
### Well-posedness of (3.3) with general data
We will proceed in two steps.
#### 4.2.1. The case \(\phi\in L^{2}(0,T;H_{0})\)
Assume only in this part that \(H^{**}\) holds. Let \(m\in\mathbb{N}\) and let \(\phi_{m}\in L^{\infty}([0,T];H_{0})\)6 be such that \(\phi_{m}\to\phi\) in \(L^{2}([0,T];H_{0})\). By Theorem 4.2, there exists a unique \((y_{m},\rho_{m})\in L^{p}(0,T;V)\times L^{\tilde{q}^{\prime}}(0,T;L^{\tilde{q} ^{\prime}}(D))\) satisfying:
Footnote 6: \((\phi_{m})_{m}\) can be constructed _e.g._ by using standard cut-off techniques.
* \(y_{m}\in C([0,T],H)\), \(y_{m}\geq\psi\) and \(\rho_{m}\leq 0\).
* For any \(t\in[0,T]\): \(y_{m}(t)+\int_{0}^{t}(A(y_{m},\cdot)+\rho_{m})ds=u_{0}+\int_{0}^{t}G(y_{m}, \cdot)\phi_{m}+\int_{0}^{t}fds\) in \(V^{\prime}\).
* \(\langle\rho_{m},y_{m}-\psi\rangle=0\) a.e. in \((0,T)\) and \(\forall v\in L^{p}(0,T;V);v\geq\psi\), \(\langle\rho_{m},y_{m}-v\rangle\geq 0\) a.e. in \((0,T)\).
* The following Lewy-Stampacchia inequality holds: \(0\leq-\rho_{m}\leq h^{-}\).
We will pass to the limit as \(m\to+\infty\). The first step is to obtain uniform estimates independent of \(m\), similar to the ones developed in Subsubsection 4.1.2. Next, we conclude by using the same arguments as in Subsubsection 4.1.3.
We have \(\frac{dy_{m}}{dt}+A(y_{m},\cdot)-G(\cdot,y_{m})\phi_{m}-\rho_{m}=f\). Let \(0\leq t\leq T\), by using \(y_{m}\) as a test function and integrating in time from \(0\) to \(t\), we get
\[\|y_{m}\|_{H}^{2}(t)+2\int_{0}^{t}\langle A(y_{m},\cdot),y_{m} \rangle ds-2\int_{0}^{t}(G(\cdot,y_{m})\phi_{m},y_{m})ds-2\int_{0}^{t}\int_{D} \rho_{m}y_{m}dxds\\ =\|u_{0}\|_{H}^{2}+\int_{0}^{t}\langle f,y_{m}\rangle ds.\]
Note that, by using Lewy-Stampacchia and Young inequalities, for any \(\gamma>0\)
\[2|\int_{0}^{t}\int_{D}\rho_{m}y_{m}dxds| \leq 2\int_{0}^{t}\!\!\!\int_{D}|\rho_{m}||y_{m}|dxds\] \[\leq\gamma\|y_{m}\|_{L^{\tilde{q}}([0,t]\times D)}^{\tilde{q}}+C _{\gamma}\|h^{-}\|_{L^{\tilde{q}^{\prime}}([0,T]\times D)}^{\tilde{q}^{ \prime}}\leq C_{\gamma}+\gamma\|y_{m}\|_{L^{\tilde{q}}([0,t]\times D)}^{\tilde {q}}.\]
By using \(H_{3}\), one has \(2\int_{0}^{t}(G(\cdot,y_{m})\phi_{m},y_{m})ds\leq Lt+\int_{0}^{t}(\|\phi_{m}\|_ {H_{0}}^{2}+L)\|y_{m}\|_{H}^{2}ds\). Thus, arguments already detailed in the proof of Lemma 4.3 yield the existence of \(C>0\), independent of \(m\), such that
\[\|y_{m}\|_{H}^{2}(t)+2\int_{0}^{t}\frac{\alpha}{2}\|y_{m}\|_{V}^{p}(s)ds\leq C \int_{0}^{t}(1+\|\phi_{m}\|_{H_{0}}^{2})\|y_{m}\|_{H}^{2}(s)ds+\frac{\alpha}{2 }\int_{0}^{t}\|y_{m}\|_{V}^{p}(s)ds+C.\]
Since \((\phi_{m})_{m}\) is bounded in \(L^{2}([0,T];H_{0})\), we obtain
**Lemma 4.9**.:
* \((y_{m})_{m}\) _is bounded in_ \(L^{p}(0,T;V)\cap C([0,T];H)\)_._
* \((A(y_{m},\cdot))_{m}\) _is bounded in_ \(L^{p^{\prime}}(0,T;V^{\prime})\) _and_ \((\rho_{m})_{m}\) _is bounded in_ \(L^{\tilde{q}^{\prime}}([0,T]\times D)\)_._
Similarly to the proof of Lemma 4.1 with \(f_{1}=f_{2}\), and using that \(\phi_{m}\) converges strongly to \(\phi\) in \(L^{2}(0,T;H_{0})\), we deduce that
\[(y_{m})_{m}\text{ is a Cauchy sequence in the space }C([0,T];H).\]
Now, we are in a position to use the same arguments as in Subsubsection 4.1.3 to deduce that Theorem 4.2 holds with \(\phi\in L^{2}([0,T];H_{0})\). The Lewy-Stampacchia inequality is a consequence of the passage to the limit in the one satisfied by \(\rho_{m}\).
#### 4.2.2. The general case
Let \(\phi\in L^{2}([0,T];H_{0})\) and \(h^{-}\in(L^{p^{\prime}}(0,T;V^{\prime}))^{+}\). Thanks to [11, Lemma 4.1], there exists a nonnegative \(h_{n}\in L^{\tilde{q}^{\prime}}(0,T;L^{\tilde{q}^{\prime}}(D))\) such that \(h_{n}\longrightarrow h^{-}\) in \(L^{p^{\prime}}(\Omega_{T},V^{\prime})\).
Associated with \(h_{n}\), we define \(f_{n}\) by
\[f_{n}=\frac{d\psi}{dt}+A(\psi,\cdot)+h^{+}-h_{n},\quad h^{+}\in(L^{p^{\prime} }(0,T;V^{\prime}))^{+}.\]
Note that \(f_{n}\in L^{p^{\prime}}(0,T;V^{\prime})\) and converges strongly to \(f\) in \(L^{p^{\prime}}(0,T;V^{\prime})\). Denote by \((y_{n},k_{n})\) the sequence of solutions given by Theorem 4.2 with \(\phi\in L^{2}([0,T];H_{0})\) where \(h^{-}\) is replaced by \(h_{n}\).
By Lewy-Stampacchia inequality, one has \(0\leq-k_{n}\leq h_{n}\). For any \(\varphi\in L^{p}(0,T;V)\), it holds that
\[\int_{0}^{T}|\langle k_{n},\varphi\rangle|ds\leq\int_{0}^{T}\langle-k_{n}, \varphi^{+}\rangle ds+\int_{0}^{T}\langle-k_{n},\varphi^{-}\rangle ds\]
\[\leq\int_{0}^{T}\langle h_{n},\varphi^{+}\rangle ds+\int_{0}^{T}\langle h_{n}, \varphi^{-}\rangle ds\leq 2\|h_{n}\|_{L^{p^{\prime}}(\Omega_{T},V^{\prime})}\| \varphi\|_{L^{p}(\Omega_{T},V)}.\]
Since \((h_{n})_{n}\) converges to \(h^{-}\) in \(L^{p^{\prime}}(0,T;V^{\prime}),\) one gets that \((h_{n})_{n}\) is bounded independently of \(n\) in \(L^{p^{\prime}}(0,T;V^{\prime})\) and therefore
\[(k_{n})_{n}\text{ is bounded independently of $n$ in }L^{p^{\prime}}(0,T;V^{\prime}). \tag{4.9}\]
Let \(t\in[0,T]\) and \(n\in\mathbb{N}^{*}\). By using \(y_{n}\) as a test function in the equation satisfied by \(y_{n}\), one gets
\[\frac{1}{2}\|y_{n}(t)\|_{H}^{2}+\int_{0}^{t}\langle A(y_{n},\cdot),y_{n} \rangle ds=\frac{1}{2}\|u_{0}\|_{H}^{2}+\int_{0}^{t}\langle-k_{n},y_{n} \rangle ds+\int_{0}^{t}\langle f_{n},y_{n}\rangle ds+\int_{0}^{t}(G(y_{n}, \cdot)\phi,y_{n})ds\]
Since \(f_{n}\) converges to \(f\) in \(L^{p^{\prime}}(0,T;V^{\prime})\), it holds that \((f_{n})_{n}\) is bounded independently of \(n\) in \(L^{p^{\prime}}(0,T;V^{\prime})\). Therefore, by Young's inequality, we get
\[\int_{0}^{T}|\langle f_{n}-k_{n},y_{n}\rangle|ds\leq\frac{\alpha}{2}\int_{0}^{ T}\|y_{n}(s)\|_{V}^{p}ds+C\|f_{n}-k_{n}\|_{L^{p^{\prime}}(0,T;V^{\prime})}^{p^{ \prime}}.\]
By using \(H_{3}\), we get \((G(\cdot,y_{n})\phi,y_{n})\leq(L+\|\phi\|_{H_{0}}^{2})\|y_{n}\|_{H}^{2}+L\). Therefore
\[\sup_{s\in[0,t]}\|y_{n}(s)\|_{H}^{2}+\int_{0}^{t}\|y_{n}(s)\|_{V}^{p}ds\leq C( 1+\int_{0}^{t}(1+\|\phi\|_{H_{0}}^{2})\sup_{\tau\in[0,s]}\|y_{n}(\tau)\|_{H}^{ 2}ds).\]
By using Gronwall's lemma and \(H_{2,3}\), one concludes that
\[(y_{n})_{n}\text{ and }(A(y_{n},\cdot))_{n}\text{ are bounded in }L^{p}(0,T;V)\cap C([0,T];H)\text{ and }L^{p^{\prime}}(0,T;V^{\prime}),\text{ resp.} \tag{4.10}\]
Now, by using similar arguments to the proof of Lemma 4.1 with \(\phi_{1}=\phi_{2}\), and that \(f_{n}\) converges strongly to \(f\) in \(L^{p^{\prime}}(0,T;V^{\prime})\), we get
\[(y_{n})_{n}\text{ is a Cauchy sequence in the space }C([0,T];H) \tag{4.11}\]
Now, by using the same arguments as in Subsubsection 4.1.3 (see also [27, Subsect. 3.3]), we deduce Theorem 4.2 for general \(f\). Finally, the Lewy-Stampacchia inequality is a consequence of the passage to the limit in the one satisfied by \(k_{n}\), which completes the proof of Proposition 3.1.
## 5. Proof of large deviation principle
### Continuity of skeleton equations with respect to the signals
**Lemma 5.1**.: _Assume that (\(H_{1}\))-(\(H_{6}\)) hold and let \(N<\infty\). For any family \(\{v^{\delta}:\delta>0\}\subset S_{N}\) such that \(v^{\delta}\) converges weakly to some element \(v\) as \(\delta\to 0\), we have_
* \(g^{0}(\int_{0}^{\cdot}v_{s}^{\delta}ds)\) _converges to_ \(g^{0}(\int_{0}^{\cdot}v_{s}ds)\) _in the space_ \(C([0,T];H)\)_,_
* _If moreover_ \(H_{7}\) _holds, then_ \[g^{0}(\int_{0}^{\cdot}v_{s}^{\delta}ds)\text{ converges to }g^{0}(\int_{0}^{\cdot}v_{s}ds)\text{ in }L^{p}(0,T;V)\cap C([0,T];H),\]
_where \(g^{0}\) is given in (3.5)._
Proof.: Let \((\phi_{n})_{n}\subset S_{N}\) such that \(\phi_{n}\) converges to \(\phi\) weakly in \(L^{2}(0,T;H_{0})\). Denote by \(y_{n}\) and \(y^{\phi}\) the solutions of (3.3) corresponding to \(\phi_{n}\) and \(\phi\) respectively. To prove Lemma 5.1, we show
\[y_{n}\to y^{\phi}\text{ in }L^{p}(0,T;V)\cap C([0,T];H),\]
if \(H_{1}\)-\(H_{7}\) hold; the convergence in \(C([0,T];H)\) follows similarly if \(H_{7}\) does not hold.
_The first step._ Thanks to Proposition 3.1, there exists \((y_{n},R_{n})\) satisfying
* \((y_{n},-R_{n})\in(L^{p}(0,T;V)\cap C([0,T];H))\times(L^{p}(0,T;V^{\prime}))^{+}\)_,_ \(y_{n}(0)=u_{0}\) _and_ \(y_{n}\geq\psi\)_._
* _For any_ \(v\in L^{p}(0,T;V);\quad v\geq\psi,\quad\langle R_{n},y_{n}-v\rangle\geq 0\) _a.e. in_ \([0,T]\)_._
* _For all_ \(t\in[0,T]\)_:_ \(y_{n}(t)+\int_{0}^{t}R_{n}ds+\int_{0}^{t}A(y_{n},\cdot)ds=u_{0}+ \int_{0}^{t}(f+G(\cdot,y_{n})\phi_{n})ds\) _in_ \(V^{\prime}\)_._
* _The following Lewy-Stampacchia inequality holds_ \[0\leq\frac{dy_{n}}{dt}+A(y_{n},\cdot)-G(y_{n},\cdot)\phi_{n}-f=-R_{n}\leq h^{-}= \left(f-\partial_{t}\psi-A(\psi,\cdot)\right)^{-}.\] (5.1)
First, let us establish some uniform estimates w.r.t. \(n\). We have
\[\frac{dy_{n}}{dt}+A(y_{n},\cdot)-G(\cdot,y_{n})\phi_{n}+R_{n}=f \tag{5.2}\]
Let \(t\in[0,T]\), by using \(y_{n}\) as a test function and integrating in time from \(0\) to \(t\), we get
\[\|y_{n}\|_{H}^{2}(t)+2\int_{0}^{t}\langle A(y_{n},\cdot),y_{n}\rangle ds-2\int _{0}^{t}(G(\cdot,y_{n})\phi_{n},y_{n})ds+2\int_{0}^{t}\langle R_{n},y_{n} \rangle ds=\|u_{0}\|_{H}^{2}+2\int_{0}^{t}\langle f,y_{n}\rangle ds\]
By using (5.1), we obtain that \((R_{n})_{n}\) is bounded by \(\|h^{-}\|_{L^{p^{\prime}}(0,T;V^{\prime})}:=C\) in \(L^{p^{\prime}}(0,T;V^{\prime})\). Thus, by using Young's inequality, for any \(\gamma>0\),
\[2|\int_{0}^{t}\langle R_{n},y_{n}\rangle ds|\leq 2\int_{0}^{t}\|R_{n}\|_{V^{\prime}}\|y_{ n}\|_{V}ds\leq C_{\gamma}+\gamma\|y_{n}\|_{L^{p}(0,T;V)}^{p}.\]
By using \(H_{3}\), one has \(\int_{0}^{t}(G(\cdot,y_{n})\phi_{n},y_{n})ds\leq Lt+\int_{0}^{t}(\|\phi_{n}\|_ {H_{0}}^{2}+L)\|y_{n}\|_{H}^{2}ds\). Therefore, arguments already detailed in the proof of Lemma 4.3 ensure
\[\|y_{n}\|_{H}^{2}(t)+2\int_{0}^{t}\frac{\alpha}{2}\|y_{n}\|_{V}^{p}(s)ds\leq C \int_{0}^{t}(1+\|\phi_{n}\|_{H_{0}}^{2})\|y_{n}\|_{H}^{2}(s)ds+\frac{\alpha}{2 }\int_{0}^{t}\|y_{n}\|_{V}^{p}(s)ds+C(f,T).\]
Gronwall inequality ensures \(\|y_{n}\|_{H}^{2}(t)+\int_{0}^{t}\frac{\alpha}{2}\|y_{n}\|_{V}^{p}(s)ds \leq C(f,T)e^{T+N}\). Thus, we get
\[\left\{\begin{array}{l}(y_{n})_{n}\mbox{ is bounded in }L^{p}(0,T;V)\cap C([0,T];H),\\ (A(y_{n},\cdot))_{n}\mbox{ and }(R_{n})_{n}\mbox{ are bounded in }L^{p^{\prime}}(0,T;V^{\prime}),\\ (G(\cdot,y_{n})\phi_{n})_{n}\mbox{ is bounded in }L^{2}(0,T;H).\end{array}\right. \tag{5.3}\]
From (5.2), we have \(\frac{dy_{n}}{dt}=f-A(y_{n},\cdot)+G(\cdot,y_{n})\phi_{n}-R_{n}\) and \((\frac{dy_{n}}{dt})_{n}\) is bounded in \(L^{r}(0,T;V^{\prime})\) where \(r=\min(2,p^{\prime}),\) thanks to (5.3). Set \(\mathbb{V}=\{v\in L^{p}(0,T;V);\quad\frac{dv}{dt}\in L^{r}(0,T;V^{\prime})\}.\) Thanks to [23, Corollary 6], note that \(\mathbb{V}\cap L^{\infty}(0,T;H)\) is compactly embedded into \(L^{q}(0,T;H),\) for any finite \(q>1\). By using (5.3), there exist \(y\in L^{\infty}(0,T;H)\cap\mathbb{V}\) and \((R,\chi)\in(L^{p^{\prime}}(0,T;V^{\prime}))^{2}\) such that, up to sub-sequences denoted by the same way,
\[y_{n}\rightharpoonup y\quad\mbox{in}\quad\mathbb{V}\mbox{ and }y_{n} \to y\mbox{ in}\quad L^{2}(0,T;H), \tag{5.4}\] \[y_{n}\rightharpoonup y\quad\mbox{in}\quad L^{\infty}(0,T;H),\] (5.5) \[(A(y_{n},\cdot),R_{n})\rightharpoonup(\chi,R)\quad\mbox{in}\quad L ^{p^{\prime}}(0,T;V^{\prime})\times L^{p^{\prime}}(0,T;V^{\prime}),\] (5.6) \[y_{n}(t)\rightharpoonup y(t),\forall t\in[0,T]\quad\mbox{in}\quad H. \tag{5.7}\]
Indeed, by compactness with respect to the weak topology in the spaces \(\mathbb{V}\), \(L^{p^{\prime}}(0,T;V^{\prime})\) and the compact embedding of \(\mathbb{V}\hookrightarrow L^{2}(0,T;H)\), there exist \(y\in\mathbb{V}\), \(\chi,R\in L^{p^{\prime}}(0,T;V^{\prime})\)
such that (5.4) and (5.6) hold (up to subsequences). Moreover, (5.5) follows from the compactness with respect to the weak-* topology in \(L^{\infty}(0,T;H)\). Since \(\mathbb{V}\hookrightarrow C([0,T];V^{\prime})\), one obtains that \(y_{n}(t)\rightharpoonup y(t)\) in \(V^{\prime}\) and, by taking into account the boundedness of \((y_{n})_{n}\) in \(C([0,T];H)\), we obtain (5.7). On the other hand, \(-R\in(L^{p^{\prime}}(0,T;V^{\prime}))^{+}\) since the set of nonpositive elements of \(L^{p^{\prime}}(0,T;V^{\prime})\) is a closed convex subset of \(L^{p^{\prime}}(0,T;V^{\prime})\). Moreover, note that \(y(0)=u_{0}\) thanks to (5.7) and since \(y_{n}(0)=u_{0}\). Finally, \(y\) satisfies the constraint, _i.e._, \(y\geq\psi\), thanks to (5.4).
As a consequence of (5.4), we get
\[\lim_{n\to\infty}\int_{0}^{t}(G(\cdot,y_{n})\phi_{n},\Phi)ds=\int_{0}^{t}(G( \cdot,y)\phi,\Phi)ds,\quad\forall\Phi\in V,\forall t\in[0,T].\]
Indeed, let \(\Phi\in V\) and note that
\[|\int_{0}^{t}(G(\cdot,y_{n})\phi_{n}-G(\cdot,y)\phi,\Phi)ds|\] \[\leq|\int_{0}^{t}(G(\cdot,y_{n})\phi_{n}-G(\cdot,y)\phi_{n},\Phi) ds+\int_{0}^{t}(G(\cdot,y)\phi_{n}-G(\cdot,y)\phi,\Phi)ds|\] \[\leq\int_{0}^{t}\|G(\cdot,y_{n})-G(\cdot,y)\|_{L_{2}(H_{0},H)}\| \phi_{n}\|_{H_{0}}\|\Phi\|_{H}ds+|\int_{0}^{t}(G(\cdot,y)\phi_{n}-G(\cdot,y) \phi,\Phi)ds|=I_{1}^{n}+I_{2}^{n}.\]
Thanks to Hölder's inequality, we write
\[I_{1}^{n}\leq C_{\Phi}\int_{0}^{T}\|G(\cdot,y_{n})-G(\cdot,y)\|_{L_{2}(H_{0},H )}\|\phi_{n}\|_{H_{0}}ds\leq\sqrt{N\cdot MC_{\Phi}}(\int_{0}^{T}\|y_{n}-y\|_{H }^{2}dt)^{\frac{1}{2}}\to 0.\]
Recall that \(G(\cdot,y)\in L^{2}(0,T;L_{2}(H_{0},H))\); since \(\phi_{n}\) converges weakly to \(\phi\) in \(L^{2}(0,T;H_{0})\), we obtain that \(\lim_{n\to\infty}I_{2}^{n}=0\). By passing to the limit in (5.2), we get
\[(y(t),\Phi)+\int_{0}^{t}\langle R,\Phi\rangle ds+\int_{0}^{t}\langle\chi,\Phi \rangle ds=(u_{0},\Phi)+\int_{0}^{t}(f+G(\cdot,y)\phi,\Phi)ds,\quad\forall \Phi\in V.\]
We will prove that \(A(y,\cdot)=\chi\) and \(\langle R,\psi-y\rangle=0\) a.e. First, we take the difference between the last equation and (5.2).
Let \(t\in[0,T]\), thanks to Proposition 3.1, we know that \(\langle R_{n},y_{n}-\psi\rangle=0\) a.e. in \([0,T]\), then
\[\int_{0}^{t}\langle R_{n},y_{n}-y\rangle ds-\int_{0}^{t}\langle R,y_{n}-y\rangle ds =\int_{0}^{t}\langle R_{n},y_{n}-\psi\rangle ds+\int_{0}^{t} \langle R_{n},\psi-y\rangle ds-\int_{0}^{t}\langle R,y_{n}-y\rangle ds\] \[=\int_{0}^{t}\langle R_{n},\psi-y\rangle ds-\int_{0}^{t}\langle R,y_{n}-y\rangle ds.\]
By using the last equality, (5.2) and the monotonicity of \(A\) (see Subsubsection 4.1.3), we obtain
\[\frac{1}{2}\|(y_{n}-y)(t)\|_{H}^{2}+\int_{0}^{t}\langle A(v,\cdot),y_{n}-v \rangle ds+\int_{0}^{t}\langle A(y_{n},\cdot),v-y\rangle ds-\int_{0}^{t} \langle\chi,y_{n}-y\rangle ds \tag{5.8}\]
\[+\int_{0}^{t}\langle R_{n},\psi-y\rangle ds-\int_{0}^{t}\langle R,y_{n}-y \rangle ds\leq|\int_{0}^{t}\langle G(y_{n},\cdot)\phi_{n}-G(y,\cdot)\phi,y_{n} -y\rangle ds|=|I|,\]
where \(v\in L^{p}(0,T;V)\). On the other hand, thanks to \(H_{3}\) and (5.3), we get
\[\int_{0}^{T}\|G(y_{n},\cdot)\phi_{n}\|_{H}^{2}ds\leq\int_{0}^{T}\|G(y_{n}, \cdot)\|_{L_{2}(H_{0},H)}^{2}\|\phi_{n}\|_{H_{0}}^{2}ds\]
\[\leq L\int_{0}^{T}\big{(}1+\|y_{n}\|_{H}^{2}\big{)}\|\phi_{n}\|_{H_{0}}^{2}ds \leq C(L)\cdot N.\]
and similarly we get \(\int_{0}^{T}\|G(y,\cdot)\phi\|_{H}^{2}ds\leq C(L)\cdot N\). Thus, by using (5.4) and Cauchy Schwarz inequality, we deduce \(\lim\limits_{n\to\infty}|I|=0\), since \(|I|\leq\int_{0}^{T}\big{(}\|G(y_{n},\cdot)\phi_{n}\|_{H}+\|G(y,\cdot)\phi\|_{H }\big{)}\|y_{n}-y\|_{H}ds\). Note that \(\liminf\limits_{n}\|(y_{n}-y)(t)\|_{H}^{2}=\liminf\limits_{n}\|y_{n}(t)\|_{H} ^{2}-\|y(t)\|_{H}^{2}\geq 0,\forall t\in[0,T]\), thanks to (5.7). By setting \(t=T\) and passing to the limit as \(n\to\infty\) in (5.8), we get
\[\int_{0}^{T}\langle R,\psi-y\rangle ds+\int_{0}^{T}\langle A(v,\cdot)-\chi,y-v \rangle ds\leq 0.\]
By setting \(v=y\) in the last inequality, we get \(\int_{0}^{T}\langle R,\psi-y\rangle ds\leq 0\). We know already that \(-R\in(L^{p^{\prime}}(0,T;V^{\prime}))^{+}\) and \(y\geq\psi\), which gives \(\int_{0}^{T}\langle R,\psi-y\rangle ds\geq 0\). Hence, \(\int_{0}^{T}\langle R,\psi-y\rangle ds=0\) and \(\langle R,\psi-y\rangle=0\) a.e. in \([0,T]\), since the integrand is always non negative (see [27, Remark 5]). Finally, one has \(\int_{0}^{T}\langle A(v,\cdot)-\chi,y-v\rangle ds\leq 0\). By using "Minty's trick" [22, Lemma 2.13 p.35], we deduce that \(A(y,\cdot)=\chi\). In conclusion, \(y\) satisfies
* \((y,-R)\in(L^{p}(0,T;V)\cap L^{\infty}(0,T;H))\times(L^{p^{\prime}}(0,T;V^{ \prime}))^{+}\), \(y(0)=u_{0}\) and \(y\geq\psi\).
* \(\langle R,\psi-y\rangle=0\) a.e. in \([0,T]\) and \(\forall v\in L^{p}(0,T;V):\quad v\geq\psi\), \(\langle R,y-v\rangle\geq 0\) a.e. in \([0,T]\).
* For all \(t\in[0,T]\), \[(y(t),\Phi)+\int_{0}^{t}\langle R,\Phi\rangle ds+\int_{0}^{t}\langle A(y,\cdot ),\Phi\rangle ds=(u_{0},\Phi)+\int_{0}^{t}(f+G(\cdot,y)\phi,\Phi)ds,\quad \forall\Phi\in V.\] (5.9)
_The second step._ From (5.4), we have (up to subsequence, _a priori_)
\[y_{n}\text{ converges to }y\text{ strongly in }L^{2}(0,T;H), \tag{5.10}\]
but the uniqueness of the limit yields the convergence of the whole sequence. Let us prove that the following convergence holds
\[y_{n}\to y\text{ in }L^{p}(0,T;V)\cap C([0,T];H).\]
By taking the difference between (5.2) and (5.9) and using \(y_{n}-y\) as test function, we get
\[\frac{1}{2}\|(y_{n}-y)(t)\|_{H}^{2}+\int_{0}^{t}\langle A(y_{n}, \cdot)-A(y,\cdot),y_{n}-y\rangle ds+\int_{0}^{t}\langle R_{n}-R,y_{n}-y\rangle ds\] \[\quad=\int_{0}^{t}\langle G(y_{n},\cdot)\phi_{n}-G(y,\cdot)\phi,y _{n}-y\rangle ds=\mathscr{I}(t),\quad\forall t\in[0,T].\]
Since \(y_{n},y\in L^{p}(0,T;V)\) satisfy \(y_{n},y\geq\psi\), we obtain
\[\int_{0}^{t}\langle R_{n}-R,y_{n}-y\rangle ds=\int_{0}^{t}\langle R_{n},y_{n} -y\rangle ds+\int_{0}^{t}\langle R,y-y_{n}\rangle ds\geq 0.\]
Since \(\lambda_{T}Id+A\) is T-monotone, one has \(\int_{0}^{t}\langle A(y_{n},\cdot)-A(y,\cdot),y_{n}-y\rangle ds\geq-\lambda_{T }\int_{0}^{t}\|y_{n}-y\|_{H}^{2}ds\). If moreover \(H_{7}\) holds, we have
\[\int_{0}^{t}\langle A(y_{n},\cdot)-A(y,\cdot),y_{n}-y\rangle ds\geq-\lambda_{ T}\int_{0}^{t}\|y_{n}-y\|_{H}^{2}ds+\int_{0}^{t}\bar{\alpha}\|y_{n}-y\|_{V}^{p}ds.\]
Therefore,
\[\frac{1}{2}\sup_{t\in[0,T]}\|(y_{n}-y)(t)\|_{H}^{2}+\int_{0}^{T}\bar{ \alpha}\|y_{n}-y\|_{V}^{p}ds\] \[\qquad\qquad\leq\lambda_{T}\int_{0}^{T}\|y_{n}-y\|_{H}^{2}ds+\sup_ {t\in[0,T]}|\int_{0}^{t}\langle G(y_{n},\cdot)\phi_{n}-G(y,\cdot)\phi,y_{n}-y \rangle ds|,\quad\forall t\in[0,T].\]
Similarly to \(I\) (see (5.8)), by using (5.10) one has
\[\lim_{n\to\infty}\sup_{t\in[0,T]}|\mathscr{I}(t)|=\lim_{n\to\infty}\sup_{t\in[ 0,T]}|\int_{0}^{t}\langle G(y_{n},\cdot)\phi_{n}-G(y,\cdot)\phi,y_{n}-y\rangle ds |=0.\]
By using again (5.10), we conclude \(y_{n}\to y\) in \(L^{p}(0,T;V)\cap C([0,T];H)\). Finally, from Proposition 3.1, \(y\) is the unique solution to (5.9). Thus, \(y=y^{\phi}\), which completes the proof of Lemma 5.1.
### On vanishing multiplicative noise \(\delta\downarrow 0\)
Let \(\{\phi^{\delta}\}_{\delta>0}\subset\mathscr{A}_{N}\) for some \(N<\infty\), see (2.4). By using Girsanov theorem ([7, Thm. 10.14]) and [7, Prop. 10.17], we obtain the existence of a \(Q\)-Wiener process, denoted by \(W^{\delta}\), with respect to \(\{\mathscr{F}_{t}\}_{t\geq 0}\) on the probability space \((\Omega,\mathscr{F},P^{\delta})\) where \(W^{\delta}(t)=W(t)+\frac{1}{\delta}\int_{0}^{t}\phi^{\delta}(s)ds, \quad t\in[0,T]\) and
\[dP^{\delta}=\exp[-\frac{1}{\delta}\int_{0}^{T}\langle\phi^{\delta}(s),dW(s) \rangle_{H_{0}}-\frac{1}{2\delta^{2}}\int_{0}^{T}\|\phi^{\delta}(s)\|_{H_{0}}^ {2}ds]dP. \tag{5.11}\]
In fact, the probability measures \(P\) and \(P^{\delta}\) are mutually absolutely continuous. By Theorem 2.1, there exists a unique solution \((v_{\delta},r_{\delta})\) satisfying
\[v_{\delta}=g^{\delta}(W(\cdot)+\frac{1}{\delta}\int_{0}^{\cdot}\phi^{\delta}( s)ds)=g^{\delta}(W^{\delta}(\cdot)). \tag{5.12}\]
1. \(v_{\delta}\) is \(L^{p}(0,T;V)\)-adapted process with P-a.s. paths \(v_{\delta}(\omega,\cdot)\in C([0,T];H)\).
2. \(v_{\delta}(t=0)=u_{0}\quad\text{and}\quad v_{\delta}\geq\psi\) P-a.s.
3. \(P\)-a.s, for all \(t\in[0,T]\), \[v_{\delta}(t)+\int_{0}^{t}r_{\delta}ds+\int_{0}^{t}A(v_{\delta},\cdot)ds=u_{0} +\delta\int_{0}^{t}G(v_{\delta},\cdot)dW(s)+\int_{0}^{t}fds+\int_{0}^{t}G( \cdot,v_{\delta})\phi^{\delta}ds\text{ in }V^{\prime}.\] (5.13)
(4) \(r_{\delta}\) is \(L^{p^{\prime}}(0,T;V^{\prime})\)-adapted process, \(-r_{\delta}\in(V^{\prime})^{+}\) and \(\langle r_{\delta},v_{\delta}-\psi\rangle=0\) a.e. in \(\Omega_{T}\)7.
Footnote 7: Note that \(k\in(L^{p}(\Omega,V^{\prime}))^{+}\) if and only if \(k(t,\omega)\in(V^{\prime})^{+}\) a.e. in \(\Omega\times[0,T]\), see [27, Remark 5].
By using Proposition 3.1, there exists a unique random solution \((u_{\delta},k_{\delta})\) satisfying P-a.s.:
1. \((u_{\delta},-k_{\delta})\in(L^{p}(0,T;V)\cap C([0,T];H))\times(L^{p^{\prime}}( 0,T;V^{\prime}))^{+}\), \(u_{\delta}(0)=u_{0}\) and \(u_{\delta}\geq\psi\).
2. \(\langle k_{\delta},u_{\delta}-\psi\rangle=0\) a.e. in \([0,T]\times\Omega\)
3. \(P\)-a.s, for all \(t\in[0,T]\), \[u_{\delta}(t)+\int_{0}^{t}k_{\delta}ds+\int_{0}^{t}A(u_{\delta},\cdot)ds=u_{0} +\int_{0}^{t}(f+G(\cdot,u_{\delta})\phi^{\delta})ds\text{ in }V^{\prime}.\] (5.14)
Moreover, by using \(g^{0}\) defined by (3.5), we write \(u_{\delta}=g^{0}(\phi^{\delta})\) P-a.s.
**Lemma 5.2**.: _Assume \(H_{1}-H_{6}\). Let \(\{\phi^{\delta}:\delta>0\}\subset\mathscr{A}_{N}\) for some \(N<\infty\). Then_
\[\lim_{\delta\to 0}P(\sup_{t\in[0,T]}\|(v_{\delta}-u_{\delta})(t)\|_{H}>\epsilon)=0, \quad\forall\epsilon>0.\]
_Moreover, if \(H_{7}\) holds, then \(\lim_{\delta\to 0}P(|v_{\delta}-u_{\delta}|_{T}>\epsilon)=0,\quad\forall \epsilon>0\)._
Proof of Lemma 5.2.: First, let us prove some uniform estimates with respect to \(\delta\). For that, consider \(t\in[0,T]\), by using (5.13) we have
\[(v_{\delta}-\psi)(t) +\int_{0}^{t}r_{\delta}ds+\int_{0}^{t}A(v_{\delta},\cdot)ds=u_{0} -\psi(0)\] \[\quad+\delta\int_{0}^{t}G(v_{\delta},\cdot)dW(s)+\int_{0}^{t}(f- \frac{d\psi}{ds})ds+\int_{0}^{t}G(\cdot,v_{\delta})\phi^{\delta}ds.\]
Let \(\delta<1\). By using Itô's formula with \(F(u)=\|u\|_{H}^{2}\) for \(v_{\delta}-\psi\), we get
\[\|v_{\delta}-\psi\|_{H}^{2}(t)+2\int_{0}^{t}\langle A(v_{\delta},\cdot),v_{\delta}-\psi\rangle ds+2\int_{0}^{t}\langle r_{\delta},v_{\delta}- \psi\rangle ds\] \[=\|u_{0}-\psi(0)\|_{H}^{2}+\delta^{2}\int_{0}^{t}\|G(\cdot,v_{ \delta})\|_{L_{2}(H_{0},H)}^{2}ds+2\int_{0}^{t}(G(\cdot,v_{\delta})\phi^{ \delta},v_{\delta}-\psi)ds\] \[\quad\quad+2\int_{0}^{t}\langle f-\frac{d\psi}{ds},v_{\delta}- \psi\rangle ds+2\delta\int_{0}^{t}(G(\cdot,v_{\delta}),v_{\delta}-\psi)dW(s).\]
By using \(H_{3}\), we get
\[(G(\cdot,v_{\delta})\phi^{\delta},v_{\delta}-\psi)+\|G(\cdot,v_{ \delta})\|_{L_{2}(H_{0},H)}^{2} \leq\|G(\cdot,v_{\delta})\|_{L_{2}(H_{0},H)}\|\phi^{\delta}\|_{H_ {0}}\|v_{\delta}-\psi\|_{H}+\|G(\cdot,v_{\delta})\|_{L_{2}(H_{0},H)}^{2}\] \[\leq C(L)(1+\|\phi^{\delta}\|_{H_{0}}^{2})\|v_{\delta}\|_{H}^{2}+ C(L)(1+\|\psi\|_{H}^{2}+\|\phi^{\delta}\|_{H_{0}}^{2}).\]
Arguments detailed in the proof of Lemma 4.3 yield
\[\langle A(v_{\delta},\cdot),v_{\delta}-\psi\rangle\geq \frac{\alpha}{2}\|v_{\delta}\|_{V}^{p}-\lambda\|v_{\delta}\|_{H}^{ 2}-l_{1}-C(\psi),\]
where \(C(\psi)\in L^{1}([0,T])\). Let \(\beta>0\), by using Burkholder-Davis-Gundy inequality we get
\[\mathbb{E}\sup_{r\in[0,t]}\left|\int_{0}^{r}(G(v_{\delta},\cdot), v_{\delta}-\psi)_{H}\,dW\right| \leq\beta\mathbb{E}\sup_{r\in[0,t]}\|v_{\delta}-\psi\|_{H}^{2}(r) +\frac{C}{\beta}\mathbb{E}\int_{0}^{t}\|G(\cdot,v_{\delta})\|_{L_{2}(H_{0},H)}^ {2}ds\] \[\leq\beta\mathbb{E}\sup_{r\in[0,t]}\|v_{\delta}-\psi\|_{H}^{2}(r) +C_{\beta}Lt+C_{\beta}L\mathbb{E}\int_{0}^{t}\|v_{\delta}\|_{H}^{2}ds.\]
We recall that \(\langle r_{\delta},v_{\delta}-\psi\rangle=0\) a.e. in \(\Omega_{T}\). Thus, for any positive \(\gamma\), Young's inequality yields the existence of a positive constant \(C_{\gamma}\), which may change from line to line, such that
\[(1-\beta)\mathbb{E}\sup_{r\in[0,t]}\|v_{\delta}-\psi\|_{H}^{2}(r) +2\mathbb{E}\int_{0}^{t}\frac{\alpha}{2}\|v_{\delta}\|_{V}^{p}(s)ds\leq\mathbb{ E}\int_{0}^{t}[\lambda+C(L,\beta)(1+\|\phi^{\delta}\|_{H_{0}}^{2})]\|v_{\delta}\|_{H}^{2}( s)ds\] \[\quad+\|l_{1}+C(\psi)\|_{L^{1}([0,T])}+C_{\gamma}(f,\frac{d\psi}{ dt})+\gamma\mathbb{E}\int_{0}^{t}\|v_{\delta}-\psi\|_{V}^{p}(s)ds+C(L,\beta)(t+N).\] \[\qquad\leq C\mathbb{E}\int_{0}^{t}(1+\|\phi^{\delta}\|_{H_{0}}^{2} )\|v_{\delta}-\psi\|_{H}^{2}(s)ds+\frac{\alpha}{2}\mathbb{E}\int_{0}^{t}\|v_{ \delta}\|_{V}^{p}(s)ds+C(N,\psi,f),\]
for a suitable choice of \(\gamma\). An appropriate choice of \(\beta\) and Gronwall's lemma yield the existence of \(C:=C(\psi,f,N)>0\) independent of \(\delta\) such that
\[\mathbb{E}\sup_{r\in[0,t]}\|v_{\delta}\|_{H}^{2}(r)+\alpha\mathbb{E}\int_{0}^{t }\|v_{\delta}\|_{V}^{p}(s)ds\leq C(\psi,f,N),\quad\forall t\in[0,T]. \tag{5.15}\]
Moreover, we obtain a similar estimate for \(u_{\delta}\), namely
\[\mathbb{E}\sup_{r\in[0,t]}\|u_{\delta}\|_{H}^{2}(r)+\alpha\mathbb{E}\int_{0}^ {t}\|u_{\delta}\|_{V}^{p}(s)ds\leq C(\psi,f,N),\quad\forall t\in[0,T]. \tag{5.16}\]
Next, (5.15) and (5.16) will be used to prove Lemma 5.2.
By taking the difference between (5.13) and (5.14), we have \(P\)-a.s, for all \(t\in[0,T]\),
\[v_{\delta}(t)-u_{\delta}(t) +\int_{0}^{t}(r_{\delta}-k_{\delta})ds+\int_{0}^{t}[A(v_{\delta},\cdot)-A(u_{\delta},\cdot)]ds\] \[=\delta\int_{0}^{t}G(v_{\delta},\cdot)dW(s)+\int_{0}^{t}[G(\cdot,v_{\delta})\phi^{\delta}-G(\cdot,u_{\delta})\phi^{\delta}]ds\text{ in }V^{\prime}.\]
Let \(t\in[0,T]\). By using Itô's formula with \(F(u)=\|u\|_{H}^{2}\) for \(v_{\delta}-u_{\delta}\), we get
\[\|v_{\delta}-u_{\delta}\|_{H}^{2}(t)+2\int_{0}^{t}\langle A(v_{ \delta},\cdot)-A(u_{\delta},\cdot),v_{\delta}-u_{\delta}\rangle ds+2\int_{0}^{ t}\langle r_{\delta}-k_{\delta},v_{\delta}-u_{\delta}\rangle ds\] \[=\delta^{2}\int_{0}^{t}\|G(\cdot,v_{\delta})\|_{L_{2}(H_{0},H)}^{ 2}ds+2\int_{0}^{t}(G(\cdot,v_{\delta})\phi^{\delta}-G(\cdot,u_{\delta})\phi^{ \delta},v_{\delta}-u_{\delta})ds\] \[\qquad+2\delta\int_{0}^{t}(G(\cdot,v_{\delta}),v_{\delta}-u_{ \delta})dW(s).\]
Recall that \(r_{\delta}\) and \(k_{\delta}\) satisfy
\[v_{\delta},u_{\delta}\geq\psi,\quad-r_{\delta},-k_{\delta}\in(V^{\prime})^{+} ;\quad\langle r_{\delta},v_{\delta}-\psi\rangle=0\text{ and }\langle k_{\delta},u_{\delta}-\psi\rangle=0\text{ a.e. in }\Omega_{T}. \tag{5.17}\]
Then \(\int_{0}^{t}\langle r_{\delta}-k_{\delta},v_{\delta}-u_{\delta}\rangle ds\geq 0\) a.e. in \(\Omega\). By using \(H_{2,2}\), one has
\[\int_{0}^{t}\langle A(v_{\delta},\cdot)-A(u_{\delta},\cdot),v_{\delta}-u_{ \delta}\rangle ds\geq-\lambda_{T}\int_{0}^{t}\|v_{\delta}-u_{\delta}\|_{H}^{2 }ds.\]
If moreover \(\mathsf{H}_{7}\) holds, we have
\[\int_{0}^{t}\langle A(v_{\delta},\cdot)-A(u_{\delta},\cdot),v_{\delta}-u_{ \delta}\rangle ds\geq-\lambda_{T}\int_{0}^{t}\|v_{\delta}-u_{\delta}\|_{H}^{ 2}ds+\int_{0}^{t}\bar{\alpha}\|v_{\delta}-u_{\delta}\|_{V}^{p}ds.\]
By using \(H_{3,1}\) and Young's inequality
\[2|\int_{0}^{t}(G(\cdot,v_{\delta})\phi^{\delta}-G(\cdot,u_{\delta })\phi^{\delta},v_{\delta}-u_{\delta})ds| \leq\int_{0}^{t}\|G(\cdot,v_{\delta})-G(\cdot,u_{\delta})\|_{L_{ 2}(H_{0},H)}\|\phi^{\delta}\|_{H_{0}}\|v_{\delta}-u_{\delta}\|_{H}ds\] \[\leq\int_{0}^{t}(M+\|\phi^{\delta}\|_{H_{0}}^{2})\|v_{\delta}-u_{ \delta}\|_{H}^{2}ds.\]
Thus
\[\|v_{\delta}-u_{\delta}\|_{H}^{2}(t)+\int_{0}^{t}\bar{\alpha}\|v_{\delta}-u_{ \delta}\|_{V}^{p}ds\leq\int_{0}^{t}(M+\lambda_{T}+\|\phi^{\delta}\|_{H_{0}}^{2 })\|v_{\delta}-u_{\delta}\|_{H}^{2}ds\]
\[+\delta^{2}\int_{0}^{t}\|G(\cdot,v_{\delta})\|_{L_{2}(H_{0},H)}^{2}ds+2\delta \int_{0}^{t}(G(\cdot,v_{\delta}),v_{\delta}-u_{\delta})dW(s).\]
Taking the supremum in time and the expectation, then using the Burkholder-Davis-Gundy inequality and \(H_{3}\) (recall that \(\delta<1\)), the contribution of the two noise terms is bounded, up to a multiplicative factor \(\delta\), by
\[2\mathbb{E}\sup_{r\in[0,T]}\|v_{\delta}\|_{H}^{2}(r)+2\mathbb{E} \sup_{r\in[0,T]}\|u_{\delta}\|_{H}^{2}(r)+2LT+CL\mathbb{E}\int_{0}^{T}\|v_{ \delta}\|_{H}^{2}ds\leq C,\]
thanks to (5.15) and (5.16). Therefore, Gronwall's lemma yields
\[\mathbb{E}\sup_{t\in[0,T]}\|v_{\delta}-u_{\delta}\|_{H}^{2}(t)+\mathbb{E}\int _{0}^{T}\|v_{\delta}-u_{\delta}\|_{V}^{p}ds\leq\delta C(T,N)\to 0\text{ as }\delta\to 0.\]
Finally, Markov's inequality ensures the conclusion of Lemma 5.2.
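In detail, for every \(\epsilon>0\), Markov's (Chebyshev's) inequality gives
\[P\Big(\sup_{t\in[0,T]}\|(v_{\delta}-u_{\delta})(t)\|_{H}>\epsilon\Big)\leq \frac{1}{\epsilon^{2}}\,\mathbb{E}\sup_{t\in[0,T]}\|(v_{\delta}-u_{\delta})(t) \|_{H}^{2}\leq\frac{\delta\,C(T,N)}{\epsilon^{2}}\longrightarrow 0\text{ as }\delta\to 0,\]
and the same argument applies to the \(L^{p}(0,T;V)\) part of the norm when \(H_{7}\) holds.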
## Acknowledgements
This work is funded by national funds through the FCT - Fundação para a Ciência e a Tecnologia, I.P., under the scope of the projects UIDB/00297/2020 and UIDP/00297/2020 (Center for Mathematics and Applications).
|
2307.12690 | On the stability of a double porous elastic system with visco-porous
dampings | In this paper we consider a one dimensional elastic system with double
porosity structure and with frictional damping in both porous equations. We
introduce two stability numbers $\chi_{0}$ and $\chi_{1}$ and prove that the
solution of the system decays exponentially provided that $\chi_{0}=0$ and
$\chi_{1}\neq0.$ Otherwise, we prove the lack of exponential decay. Our results
improve the results of \cite{Bazarra} and \cite{Nemsi}. | Ahmed Keddi, Aicha Nemsi, Abdelfeteh Fareh | 2023-07-24T11:08:27Z | http://arxiv.org/abs/2307.12690v1 | # On the stability of a double porous elastic system with visco-porous dampings
###### Abstract
In this paper we consider a one dimensional elastic system with double porosity structure and with frictional damping in both porous equations. We introduce two stability numbers \(\chi_{0}\) and \(\chi_{1}\) and prove that the solution of the system decays exponentially provided that \(\chi_{0}=0\) and \(\chi_{1}\neq 0.\) Otherwise, we prove the lack of exponential decay. Our results improve the results of [5] and [14].
+
Footnote †: email: [email protected] [email protected]
Corresponding authors:[email protected].
_2020 Mathematics Subject Classification_: 35B35; 35B40; 35L15; 35Q74; 74F10; 93D05
_Key words and phrases_: double porosity, well-posedness, exponential decay, lack of exponential decay.
## 1 Introduction
In this paper, we are concerned with the following system
\[\left\{\begin{array}{ll}\rho u_{tt}=\mu u_{xx}+b\varphi_{x}+d\psi_{x}&\mbox{ in }(0,\pi)\times\mathbb{R}_{+},\\ \kappa_{1}\varphi_{tt}=\alpha\varphi_{xx}+\beta\psi_{xx}-bu_{x}-\alpha_{1} \varphi-\alpha_{3}\psi-\tau_{1}\varphi_{t}-\tau_{2}\psi_{t}&\mbox{in }(0,\pi)\times \mathbb{R}_{+},\\ \kappa_{2}\psi_{tt}=\beta\varphi_{xx}+\gamma\psi_{xx}-du_{x}-\alpha_{3}\varphi -\alpha_{2}\psi-\tau_{3}\varphi_{t}-\tau_{4}\psi_{t}&\mbox{in }(0,\pi)\times \mathbb{R}_{+},\end{array}\right. \tag{1.1}\]
where \(u\) is the transversal displacement of a one-dimensional porous elastic solid of length \(\pi\), and \(\varphi\) and \(\psi\) are the porous unknown functions, one associated with the pores in the skeleton and the other with the fissures in the material body. The parameters \(\rho,\kappa_{1}\) and \(\kappa_{2}\), which are assumed to be strictly positive, are the mass density and the products of the mass density by the equilibrated inertias, respectively. The coefficients \(\mu,\alpha,\beta,\gamma,\alpha_{1},\alpha_{2},\alpha_{3},b,d,\tau_{1}\), \(\tau_{2},\tau_{3}\) and \(\tau_{4}\) are parameters related to the properties of the material. We assume that they satisfy some restrictions that will be specified later.
The system considered here represents an elastic solid with a double porosity structure in the framework of the theory of elastic materials with voids developed by Nunziato-Cowin [9]. This approach has been used by Iesan and Quintanilla [12] to derive a new theory of thermoelastic solids which have a double porosity structure. In contrast to the classical theory, the new one is not based on Darcy's law, and the porosity structure in the case of equilibrium is influenced by the displacement field.
The origin of the classical theory of elastic materials with double porosity goes back to the works of Barenblatt _et al._[3, 4]. The authors introduced two liquid pressures at each point of the material which allows the body to have a double porosity structure: a macro porosity connected to pores in the body and a micro porosity connected to fissures in the skeleton.
In the last few years, a great interest has been given to the analysis of the longtime behavior of solutions of porous thermoelastic problems. A part of this interest stems from the need to have general results that explain the experimental observations of engineers. The earliest contribution in this direction was achieved by Quintanilla [17]. He considered the porous elastic system
\[\left\{\begin{array}{ll}\rho_{0}u_{tt}=\mu u_{xx}+\beta\varphi_{x}&\mbox{in }\ (0,\pi)\times(0,+\infty),\\ \rho_{0}\kappa\varphi_{tt}=\alpha\varphi_{xx}-\beta u_{x}-\xi\varphi-\tau \varphi_{t}&\mbox{in }\ (0,\pi)\times(0,+\infty),\end{array}\right. \tag{1.2}\]
where \(u\) is the transversal displacement and \(\varphi\) is the volume fraction. He used the Hurwitz theorem and showed that the porous dissipation \(\tau\varphi_{t}\) is not powerful enough to produce exponential stability.
Several dissipative mechanisms have been examined to stabilize system (1.2) exponentially. Casas and Quintanilla [8] coupled system (1.2) (for \(\tau=0\)) with the heat equation, and proved the lack of exponential stability. However, if thermal and porous dissipations or micro-thermal and viscoelastic dissipations are combined, then the solution decays exponentially [7, 13].
Apalara [1] considered the porous thermoelastic system
\[\left\{\begin{array}{ll}\rho u_{tt}-\mu u_{xx}-b\phi_{x}=0,&\mbox{in }\ (0,1)\times(0,+\infty)\\ J\phi_{tt}-\delta\phi_{xx}+bu_{x}+\xi\phi+\tau\phi_{t}=0,&\mbox{in }\ (0,1)\times(0,+\infty)\,,\end{array}\right.\]
with different boundary conditions. He investigated the case of equal wave speeds \(\frac{\mu}{\rho}=\frac{\delta}{J}\) and proved that the single dissipation in the porous equation leads to exponential stability. He also replaced the frictional damping \(\tau\phi_{t}\) by the memory term \(\int_{0}^{t}g(t-s)\phi_{xx}(s)ds\) and obtained a general rate of decay [2]. We notice that the results of [1, 2] disprove Magana and Quintanilla's claim that a porous-elastic system with a single dissipation mechanism cannot be exponentially stable [13].
In the context of double porous thermoelasticity, Bazarra _et al._[5] considered the system
\[\left\{\begin{array}{ll}\rho u_{tt}=\mu u_{xx}+b\varphi_{x}+d\psi_{x}-\beta \theta_{x},\\ \kappa_{1}\varphi_{tt}=\alpha\varphi_{xx}+b_{1}\psi_{xx}-bu_{x}-\alpha_{1} \varphi-\alpha_{3}\psi+\gamma_{1}\theta-\varepsilon_{1}\varphi_{t}-\varepsilon _{2}\psi_{t},\\ \kappa_{2}\psi_{tt}=b_{1}\varphi_{xx}+\gamma\psi_{xx}-du_{x}-\alpha_{3}\varphi -\alpha_{2}\psi+\gamma_{2}\theta-\varepsilon_{3}\varphi_{t}-\varepsilon_{4} \psi_{t},\\ c\theta_{t}=\kappa\theta_{xx}-\beta u_{tx}-\gamma_{1}\varphi_{t}-\gamma_{2} \psi_{t},\end{array}\right. \tag{1.3}\]
with the boundary conditions
\[u\left(x,t\right)=\varphi_{x}\left(x,t\right)=\psi_{x}\left(x,t\right)=\theta _{x}\left(x,t\right)=0,\ x=0,\ x=\pi,\ \ \forall t\geq 0.\]
They proved that the solution decays exponentially when porous dissipation is assumed for each porous equations. If the dissipation is considered only on one porous structure,
the solution cannot be asymptotically stable in general. However, they give a sufficient conditions for which the solutions decay exponentially. See also [6].
Recently, Nemsi and Fareh [14] proved that the solution of the system
\[\left\{\begin{array}{ll}\rho u_{tt}=\mu u_{xx}+b\varphi_{x}+d\psi_{x}+\lambda u _{txx},&\text{in}\ \ (0,L)\times(0,\infty),\\ \kappa_{1}\varphi_{tt}=\alpha\varphi_{xx}+b_{1}\psi_{xx}-bu_{x}-\alpha_{1} \varphi-\alpha_{3}\psi-\tau_{1}\varphi_{t}&\text{in}\ \ (0,L)\times(0,\infty),\\ \kappa_{2}\psi_{tt}=b_{1}\varphi_{xx}+\gamma\psi_{xx}-du_{x}-\alpha_{3} \varphi-\alpha_{2}\psi-\tau_{2}\psi_{t}&\text{in}\ \ (0,L)\times(0,\infty),\end{array}\right. \tag{1.4}\]
with the boundary conditions
\[u(t,0)=u(t,L)=\varphi_{x}(t,0)=\varphi_{x}(t,L)=\psi_{x}(t,0)=\psi_{x}(t,L)=0 \ \ \ \text{in}\ (0,\infty),\]
decays exponentially without any assumption on the wave speeds.
In this paper we consider system (1.1) subjected to the initial data
\[\begin{array}{l}u\left(x,0\right)=u_{0}\left(x\right),\;u_{t}\left(x,0 \right)=u_{1}\left(x\right),\\ \varphi\left(x,0\right)=\varphi_{0}\left(x\right),\;\varphi_{t}\left(x,0 \right)=\varphi_{1}\left(x\right),\\ \psi\left(x,0\right)=\psi_{0}\left(x\right),\;\psi_{t}\left(x,0\right)=\psi_{1 }\left(x\right)\end{array} \tag{1.5}\]
for all \(x\in\left(0,\pi\right)\) and the boundary conditions
\[u_{x}\left(0,t\right)=u_{x}\left(\pi,t\right)=\varphi\left(0,t\right)=\varphi \left(\pi,t\right)=\psi\left(0,t\right)=\psi\left(\pi,t\right)=0,\;t\geq 0, \tag{1.6}\]
or
\[u\left(0,t\right)=u\left(\pi,t\right)=\varphi_{x}\left(0,t\right)=\varphi_{x }\left(\pi,t\right)=\psi_{x}\left(0,t\right)=\psi_{x}\left(\pi,t\right)=0,\;t \geq 0. \tag{1.7}\]
Note that system (1.1) coincides with (1.3) in the isothermal case (\(\beta=\gamma_{1}=\gamma_{2}=0\)) and with (1.4) for \(\lambda=0.\) Therefore, system (1.1) lacks thermal and viscoelastic dissipations. Moreover, in some sense the two porous functions can be viewed as a single vector-valued function \(\left(\varphi,\psi\right)\); consequently, system (1.1) can be viewed as a porous elastic system with only one dissipation. Thus, our exponential stability result extends those of [1, 2], and our stability numbers generalize the equal wave speeds condition.
We assume that the constitutive coefficients \(\mu,\alpha,\beta,\gamma,\alpha_{1}\) and \(\alpha_{2}\) are positive and, since a coupling is considered, the coefficients \(b\) and \(d\) must not vanish simultaneously. Next, we define the energy associated with the solution \(\left(u,\varphi,\psi\right)\) of system (1.1) by
\[E\left(t\right) :=\frac{1}{2}\int_{0}^{\pi}\left[\rho u_{t}^{2}+\kappa_{1} \varphi_{t}^{2}+\kappa_{2}\psi_{t}^{2}+\mu u_{x}^{2}+\alpha\varphi_{x}^{2}+ \gamma\psi_{x}^{2}+\alpha_{1}\varphi^{2}+\alpha_{2}\psi^{2}\right. \tag{1.8}\] \[\left.+2\beta\varphi_{x}\psi_{x}+2bu_{x}\varphi+2du_{x}\psi+2 \alpha_{3}\varphi\psi\right]dx.\]
**Remark 1**.: _To guarantee that the energy \(E\left(t\right)\) is a positive definite form, we assume that the matrix_
\[A=\left(\begin{array}{ccccc}\mu&b&d&0&0\\ b&\alpha_{1}&\alpha_{3}&0&0\\ d&\alpha_{3}&\alpha_{2}&0&0\\ 0&0&0&\alpha&\beta\\ 0&0&0&\beta&\gamma\end{array}\right)\]
_is positive definite._
_Indeed, since any principal submatrix of a positive definite matrix is also positive definite, then_
\[\left(\alpha_{1}-\frac{b^{2}}{\mu}\right)\left(\alpha_{2}-\frac{d^{2}}{\mu} \right)-\left(\alpha_{3}-\frac{bd}{\mu}\right)^{2}>0, \tag{1.9}\]
\[\alpha_{1}\mu-b^{2}>0,\,\,\,\alpha_{2}\mu-d^{2}>0,\,\alpha_{1}\alpha_{2}-\alpha_{3}^ {2}>0\text{ and }\alpha\gamma-\beta^{2}>0. \tag{1.10}\]
_Therefore,_
\[\alpha\varphi_{x}^{2}+\gamma\psi_{x}^{2}+2\beta\varphi_{x}\psi_{x}=\frac{1}{2} \left(\alpha-\frac{\beta^{2}}{\gamma}\right)\varphi_{x}^{2}+\frac{1}{2}\left( \gamma-\frac{\beta^{2}}{\alpha}\right)\psi_{x}^{2}\]
\[+\frac{\alpha}{2}\left(\varphi_{x}+\frac{\beta}{\alpha}\psi_{x}\right)^{2}+ \frac{\gamma}{2}\left(\psi_{x}+\frac{\beta}{\gamma}\varphi_{x}\right)^{2}\geq 0.\]
_Moreover, there exists a \(\varepsilon>0\) such that the matrix_
\[B=\left(\begin{array}{ccc}\mu-\varepsilon&b&d\\ b&\alpha_{1}-\varepsilon&\alpha_{3}\\ d&\alpha_{3}&\alpha_{2}-\varepsilon\end{array}\right)\]
_still positive definite. Thus,_
\[E\left(t\right)=\frac{1}{2}\int_{0}^{\pi}\left(\rho u_{t}^{2}+\kappa_{1} \varphi_{t}^{2}+\kappa_{2}\psi_{t}^{2}+\varepsilon u_{x}^{2}+\varepsilon \varphi^{2}+\varepsilon\psi^{2}\right)dx\]
\[+\frac{1}{2}\int_{0}^{\pi}\left[\left(\mu-\varepsilon\right)u_{x}^{2}+2bu_{x }\varphi+2du_{x}\psi+\left(\alpha_{1}-\varepsilon\right)\varphi^{2}+2\alpha_ {3}\varphi\psi+\left(\alpha_{2}-\varepsilon\right)\psi^{2}\right]dx\]
\[+\frac{1}{2}\int_{0}^{\pi}\left[\alpha\left(\varphi_{x}+\frac{\beta}{\alpha} \psi_{x}\right)^{2}+\gamma\left(\psi_{x}+\frac{\beta}{\gamma}\varphi_{x} \right)^{2}\right]dx\]
\[+\frac{1}{2}\left(\alpha-\frac{\beta^{2}}{\gamma}\right)\int_{0}^{\pi}\left| \varphi_{x}\right|^{2}dx+\frac{1}{2}\left(\gamma-\frac{\beta^{2}}{\alpha} \right)\int_{0}^{\pi}\left|\psi_{x}\right|^{2}dx\geq 0.\]
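For readers who wish to test concrete coefficient sets, the positive definiteness of \(A\) (and hence the inequalities (1.9)-(1.10)) can be checked numerically; the following sketch uses purely illustrative coefficients, not values taken from this paper.

```python
import numpy as np

# Numerical check of the positive definiteness assumption of Remark 1 for a
# set of purely illustrative coefficients.
mu, b, d = 3.0, 0.5, 0.4
alpha1, alpha2, alpha3 = 2.0, 2.0, 0.3
alpha, beta, gamma = 1.5, 0.4, 1.2

A = np.array([
    [mu,  b,      d,      0.0,   0.0],
    [b,   alpha1, alpha3, 0.0,   0.0],
    [d,   alpha3, alpha2, 0.0,   0.0],
    [0.0, 0.0,    0.0,    alpha, beta],
    [0.0, 0.0,    0.0,    beta,  gamma],
])

eig = np.linalg.eigvalsh(A)
print("eigenvalues of A:", eig)
print("A positive definite:", bool(np.all(eig > 0)))
# A few of the derived inequalities (1.9)-(1.10):
print("alpha1*mu - b^2          =", alpha1*mu - b**2)
print("alpha2*mu - d^2          =", alpha2*mu - d**2)
print("alpha1*alpha2 - alpha3^2 =", alpha1*alpha2 - alpha3**2)
print("alpha*gamma - beta^2     =", alpha*gamma - beta**2)
```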
The rest of the paper is organized as follows: in Section 2, we prove the well-posedness of the problem determined by (1.1), (1.5) and (1.7). In Section 3 we define two stability numbers \(\chi_{0}\) and \(\chi_{1}\) and prove, by the use of the multiplier method, that the solution decays exponentially provided that \(\chi_{0}=0\) and \(\chi_{1}\neq 0\). Section 4 is devoted to the proof of the lack of exponential decay when \(\chi_{1}=0\) or \(\chi_{0}\neq 0\).
## 2 Existence and uniqueness
In this section we prove the existence and uniqueness of a solution to the problem determined by system (1.1) and conditions (1.5) and (1.7); the case of the boundary conditions (1.6) is similar.
As Neumann boundary conditions are considered for \(\varphi\) and \(\psi\), Poincaré's inequality cannot be applied. From the second and the third equations of (1.1) and the boundary conditions (1.7), we have
\[\frac{d^{2}}{dt^{2}}\int_{0}^{\pi}\varphi dx=-\frac{\alpha_{1}}{\kappa_{1}} \int_{0}^{\pi}\varphi dx-\frac{\alpha_{3}}{\kappa_{1}}\int_{0}^{\pi}\psi dx- \frac{\tau_{1}}{\kappa_{1}}\frac{d}{dt}\int_{0}^{\pi}\varphi dx-\frac{\tau_{ 2}}{\kappa_{1}}\frac{d}{dt}\int_{0}^{\pi}\psi dx, \tag{2.11}\]
\[\frac{d^{2}}{dt^{2}}\int_{0}^{\pi}\psi dx=-\frac{\alpha_{3}}{\kappa_{2}} \int_{0}^{\pi}\varphi dx-\frac{\alpha_{2}}{\kappa_{2}}\int_{0}^{\pi}\psi dx- \frac{\tau_{3}}{\kappa_{2}}\frac{d}{dt}\int_{0}^{\pi}\varphi dx-\frac{\tau _{4}}{\kappa_{2}}\frac{d}{dt}\int_{0}^{\pi}\psi dx.\]
So if we set \(X=\left(\int_{0}^{\pi}\varphi dx,\int_{0}^{\pi}\varphi_{t}dx,\int_{0}^{\pi} \psi dx,\int_{0}^{\pi}\psi_{t}dx\right)^{T}\) then (2.11) can be written
\[X_{t}\left(t\right)=MX\left(t\right),\,\,X\left(0\right)=X_{0}, \tag{2.12}\]
where
\[M=\left(\begin{array}{cccc}0&1&0&0\\ -\frac{\alpha_{1}}{\kappa_{1}}&-\frac{\tau_{1}}{\kappa_{1}}&-\frac{\alpha_{3}}{ \kappa_{1}}&-\frac{\tau_{2}}{\kappa_{1}}\\ 0&0&0&1\\ -\frac{\alpha_{3}}{\kappa_{2}}&-\frac{\tau_{3}}{\kappa_{2}}&-\frac{\alpha_{2}} {\kappa_{2}}&-\frac{\tau_{4}}{\kappa_{2}}\end{array}\right)\]
and
\[X_{0}=\left(\int_{0}^{\pi}\varphi_{0}dx,\int_{0}^{\pi}\varphi_{1}dx,\int_{0}^{ \pi}\psi_{0}dx,\int_{0}^{\pi}\psi_{1}dx\right)^{T}.\]
Solving (2.12) we get
\[X\left(t\right)=\exp\left(tM\right)X_{0},\]
in particular,
\[\int_{0}^{\pi}\varphi dx=\sum_{k=1}^{4}\left(\exp\left(tM\right)\right)_{1k}X_ {0k},\ \ \int_{0}^{\pi}\psi dx=\sum_{j=1}^{4}\left(\exp\left(tM\right)\right)_{3k}X_{0k}.\]
Therefore, if we set
\[\bar{\varphi}=\varphi-\sum_{k=1}^{4}\left(\exp\left(tM\right)\right)_{1k}X_{ 0k},\ \ \bar{\psi}=\psi-\sum_{j=1}^{4}\left(\exp\left(tM\right)\right)_{3k}X_{0k},\]
then \(\left(u,\overline{\varphi},\overline{\psi}\right)\) solves (1.1) with boundary conditions (1.7), and we have
\[\int_{0}^{\pi}\overline{\varphi}dx=\int_{0}^{\pi}\overline{\psi}dx=0,\]
which allows us to apply Poincaré's inequality. In the sequel we will work with \(\bar{\varphi}\) and \(\bar{\psi}\), but for convenience we write \(\varphi,\psi\) instead of \(\bar{\varphi},\bar{\psi}\), respectively.
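The reduction above only involves the linear ODE system (2.12); the following sketch (with purely illustrative coefficients and initial means) shows how the means of \(\varphi\) and \(\psi\) are propagated by \(\exp(tM)\).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative coefficients only; any values compatible with the assumptions
# of the paper would do.
kappa1, kappa2 = 1.0, 1.0
alpha1, alpha2, alpha3 = 2.0, 2.0, 0.5
tau1, tau2, tau3, tau4 = 1.0, 0.2, 0.2, 1.0

M = np.array([
    [0.0,             1.0,           0.0,             0.0],
    [-alpha1/kappa1, -tau1/kappa1,  -alpha3/kappa1,  -tau2/kappa1],
    [0.0,             0.0,           0.0,             1.0],
    [-alpha3/kappa2, -tau3/kappa2,  -alpha2/kappa2,  -tau4/kappa2],
])

# X(t) = exp(tM) X0 governs the means of (phi, phi_t, psi, psi_t).
X0 = np.array([0.3, 0.0, -0.1, 0.0])     # illustrative initial means
for t in (0.0, 1.0, 5.0):
    X = expm(t * M) @ X0
    print(f"t = {t}: mean(phi) = {X[0]:+.4f}, mean(psi) = {X[2]:+.4f}")
```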
Furthermore, as porous dissipations are considered, the weights of porous dampings \(\tau_{1},\tau_{2},\tau_{3}\) and \(\tau_{4}\) are assumed to satisfy
\[\tau_{1}>0,\ 4\tau_{1}\tau_{4}>\left(\tau_{2}+\tau_{3}\right)^{2}. \tag{2.13}\]
**Lemma 1**.: _The energy \(E\left(t\right)\) satisfies along the solution \(\left(u,\varphi,\psi\right)\) of (1.1)-(1.6) the estimate_
\[E^{\prime}\left(t\right)=-\tau_{1}\int_{0}^{\pi}\varphi_{t}^{2}dx-\tau_{4} \int_{0}^{\pi}\psi_{t}^{2}dx-\left(\tau_{2}+\tau_{3}\right)\int_{0}^{\pi} \varphi_{t}\psi_{t}dx \tag{2.14}\]
_and we have_
\[E^{\prime}\left(t\right)=-\frac{1}{2}\left(\tau_{1}-\frac{\left(\tau_{2}+ \tau_{3}\right)^{2}}{4\tau_{4}}\right)\int_{0}^{\pi}\varphi_{t}^{2}dx-\frac{1} {2}\left(\tau_{4}-\frac{\left(\tau_{2}+\tau_{3}\right)^{2}}{4\tau_{1}}\right) \int_{0}^{\pi}\psi_{t}^{2}dx\]
\[-\frac{\tau_{1}}{2}\int_{0}^{\pi}\left(\varphi_{t}+\frac{\left(\tau_{2}+\tau_ {3}\right)}{2\tau_{1}}\psi_{t}\right)^{2}dx-\frac{\tau_{4}}{2}\int_{0}^{\pi} \left(\psi_{t}+\frac{\left(\tau_{2}+\tau_{3}\right)}{2\tau_{4}}\varphi_{t} \right)^{2}dx\leq 0. \tag{2.15}\]
Proof.: Multiplying the equations of (1.1) by \(u_{t},\varphi_{t}\) and \(\psi_{t}\) respectively, then integrating with respect to \(x\) over \(\left(0,\pi\right)\) and using integration by parts and the boundary conditions (1.7), the estimate (2.14) follows immediately. Estimate (2.15) is then obtained from (2.14) by completing the squares, and its sign follows from assumption (2.13).
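The square-completion identity behind (2.15) can be verified symbolically; the following sketch is included only as a sanity check (the sign of (2.15) then follows from (2.13)).

```python
import sympy as sp

# Symbolic check of the square-completion identity behind (2.15):
# -tau1*p**2 - tau4*q**2 - (tau2+tau3)*p*q equals the quoted decomposition.
p, q, t1, t2, t3, t4 = sp.symbols('p q tau1 tau2 tau3 tau4', positive=True)
c = t2 + t3

lhs = -t1*p**2 - t4*q**2 - c*p*q
rhs = (-sp.Rational(1, 2)*(t1 - c**2/(4*t4))*p**2
       - sp.Rational(1, 2)*(t4 - c**2/(4*t1))*q**2
       - t1/2*(p + c/(2*t1)*q)**2
       - t4/2*(q + c/(2*t4)*p)**2)

print(sp.simplify(lhs - rhs))   # prints 0, confirming the identity
```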
To prove the well-posedness, we use a semigroup approach. First, we introduce the energy space
\[\mathcal{H}=H_{0}^{1}\left(0,\pi\right)\times L^{2}\left(0,\pi\right)\times H_{ \ast}^{1}\left(0,\pi\right)\times L_{\ast}^{2}\left(0,\pi\right)\times H_{\ast} ^{1}\left(0,\pi\right)\times L_{\ast}^{2}\left(0,\pi\right),\]
where,
\[H_{\ast}^{1}\left(0,\pi\right) =\left\{\phi\in H^{1}\left(0,\pi\right):\int_{0}^{\pi}\phi\left(x \right)dx=0\right\},\] \[L_{\ast}^{2}\left(0,\pi\right) =\left\{\phi\in L^{2}\left(0,\pi\right):\int_{0}^{\pi}\phi\left(x \right)dx=0\right\}.\]
We note that \(L_{\ast}^{2}\left(0,\pi\right)\) and \(H_{\ast}^{1}\left(0,\pi\right)\) are closed subspaces of \(L^{2}\left(0,\pi\right)\) and \(H^{1}\left(0,\pi\right)\) respectively. Thus, they are Hilbert spaces and so \(\mathcal{H}\) is.
Next, we rewrite system (1.1) in the setting of the Lumer-Phillips theorem. To do so, we introduce the new variables \(v=u_{t}\), \(\phi=\varphi_{t}\) and \(w=\psi_{t}\); then system (1.1) becomes
\[\left\{\begin{array}{l}u_{t}=v\\ v_{t}=\dfrac{1}{\rho}\left(\mu u_{xx}+b\varphi_{x}+d\psi_{x}\right),\\ \varphi_{t}=\phi\\ \phi_{t}=\dfrac{1}{\kappa_{1}}\left(\alpha\varphi_{xx}+\beta\psi_{xx}-bu_{x}- \alpha_{1}\varphi-\alpha_{3}\psi-\tau_{1}\phi-\tau_{2}w\right),\\ \psi_{t}=w\\ w_{t}=\dfrac{1}{\kappa_{2}}\left(\beta\varphi_{xx}+\gamma\psi_{xx}-du_{x}- \alpha_{3}\varphi-\alpha_{2}\psi-\tau_{3}\phi-\tau_{4}w\right),\end{array}\right.\]
which can be written
\[\left\{\begin{array}{l}U_{t}=\mathcal{A}U,\\ U\left(0\right)=U_{0},\end{array}\right. \tag{2.16}\]
where \(\mathcal{A}:D\left(\mathcal{A}\right)\subset\mathcal{H}\longrightarrow \mathcal{H}\) is the operator defined by
\[\mathcal{A}=\left(\begin{array}{cccccc}0&I&0&0&0&0\\ \frac{\mu}{\rho}\partial_{xx}&0&\frac{b}{\rho}\partial_{x}&0&\frac{d}{\rho} \partial_{x}&0\\ 0&0&0&I&0&0\\ -\frac{b}{\kappa_{1}}\partial_{x}&0&\frac{\alpha}{\kappa_{1}}\partial_{xx}- \frac{\alpha_{1}}{\kappa_{1}}&-\frac{\tau_{1}}{\kappa_{1}}&\frac{\beta}{\kappa_{1 }}\partial_{xx}-\frac{\alpha_{3}}{\kappa_{1}}&-\frac{\tau_{2}}{\kappa_{1}}\\ 0&0&0&0&0&I\\ -\frac{d}{\kappa_{2}}\partial_{x}&0&\frac{\beta}{\kappa_{2}}\partial_{xx}- \frac{\alpha_{3}}{\kappa_{2}}&-\frac{\tau_{3}}{\kappa_{2}}&\frac{\gamma}{\kappa_{2}}\partial_{xx}- \frac{\alpha_{2}}{\kappa_{2}}&-\frac{\tau_{4}}{\kappa_{2}}\end{array}\right)\]
with domain
\[D\left(\mathcal{A}\right)=\left(H^{2}\left(0,\pi\right)\cap H_{0}^{1}\left(0, \pi\right)\right)\times H_{0}^{1}\left(0,\pi\right)\times H_{\ast}^{2}\left(0, \pi\right)\times H_{\ast}^{1}\left(0,\pi\right)\times H_{\ast}^{2}\left(0,\pi \right)\times H_{\ast}^{1}\left(0,\pi\right).\]
Here \(I\) is the identity operator, \(\partial\) denotes the derivative with respect to \(x\) and
\[H_{\ast}^{2}\left(0,\pi\right)=\left\{\phi\in H^{2}\left(0,\pi\right):\phi_{x} \left(0\right)=\phi_{x}\left(\pi\right)=0\right\}.\]
The following two theorems are useful to proof our well posedness result.
**Theorem 1**.: (Lumer-Phillips) _[15, 18] Let \(\mathcal{A}:D(\mathcal{A})\subset H\longrightarrow H\) be a densely defined operator. Then \(\mathcal{A}\) generates a \(C_{0}\)-semigroup of contractions on \(H\) if and only if_
1. \(\mathcal{A}\) _is dissipative;_
2. _there exists_ \(\lambda>0\) _such that_ \(\lambda I-\mathcal{A}\) _is surjective._
**Theorem 2**.: _[_18_]_ _Let \(\mathcal{A}:D(\mathcal{A})\subset H\longrightarrow H\) be the infinitesimal generator of a C\({}_{0}\)-semigroup \(\{S(t);t\geq 0\}\). Then, for each \(\xi\in D(\mathcal{A})\) and each \(t\geq 0\), we have \(S(t)\xi\in D(\mathcal{A})\), and the mapping_
\[t\longrightarrow S(t)\xi\]
_is of class \(C^{1}\) on \([0,+\infty)\) and satisfies_
\[\frac{d}{dt}(S(t)\xi)=\mathcal{A}S(t)\xi=S(t)\mathcal{A}\xi.\]
Now, we state and prove the well-posedness theorem of the problem (1.1), (1.5) and (1.7).
**Theorem 3**.: _For any \(U_{0}=\left(u_{0},u_{1},\varphi_{0},\varphi_{1},\psi_{0},\psi_{1}\right)\in \mathcal{H}\), the problem (1.1), (1.5) and (1.7) has a unique weak solution \(\left(u,\varphi,\psi\right)\) satisfying:_
\[u\in C\left(\left[0,+\infty\right[;H_{0}^{1}\left(0,\pi\right) \right)\cap C^{1}\left(\left[0,+\infty\right[;L^{2}\left(0,\pi\right)\right),\right.\] \[\left.\varphi,\psi\in C\left(\left[0,+\infty\right[;H_{*}^{1}\left(0, \pi\right)\right)\cap C^{1}\left(\left[0,+\infty\right[;L_{*}^{2}\left(0,\pi \right)\right).\]
_Moreover, if \(U_{0}\in D\left(\mathcal{A}\right),\) the solution \(\left(u,\varphi,\psi\right)\) satisfies_
\[u\in C\left(\left[0,+\infty\right[;H^{2}\cap H_{0}^{1}\left(0, \pi\right)\right)\cap C^{1}\left(\left[0,+\infty\right[;H_{0}^{1}\left(0,\pi \right)\right)\cap C^{2}\left(\left[0,+\infty\right[;L^{2}\left(0,\pi\right) \right),\] \[\left.\varphi,\psi\in C\left(\left[0,+\infty\right[;H_{*}^{2}\left(0, \pi\right)\right)\cap C^{1}\left(\left[0,+\infty\right[;H_{*}^{1}\left(0,\pi \right)\right)\cap C^{2}\left(\left[0,+\infty\right[;L_{*}^{2}\left(0,\pi \right)\right).\]
Proof.: According to the Lumer-Phillips theorem, it suffices to prove that the operator \(\mathcal{A}\) is dissipative and maximal.
First, we have for any \(U\in D\left(\mathcal{A}\right)\),
\[\operatorname{Re}\left\langle\mathcal{A}U,U\right\rangle_{\mathcal{H}}=-\tau _{1}\int_{0}^{\pi}\varphi_{t}^{2}dx-\left(\tau_{2}+\tau_{3}\right)\int_{0}^{ \pi}\varphi_{t}\psi_{t}dx-\tau_{4}\int_{0}^{\pi}\psi_{t}^{2}dx\leq 0.\]
Therefore, \(\mathcal{A}\) is dissipative.
Secondly, let \(F=\left(f_{1},f_{2},f_{3},f_{4},f_{5},f_{6}\right)\in\mathcal{H}\); we seek \(U\in D\left(\mathcal{A}\right)\) such that \(\mathcal{A}U=F\), that is,
\[\left\{\begin{array}{l}v=f_{1}\in H_{0}^{1}\\ \mu u_{xx}+b\varphi_{x}+d\psi_{x}=\rho f_{2}\in L^{2},\\ \phi=f_{3}\in H_{*}^{1}\\ \alpha\varphi_{xx}+\beta\psi_{xx}-bu_{x}-\alpha_{1}\varphi-\alpha_{3}\psi- \tau_{1}\phi-\tau_{2}w=\kappa_{1}f_{4}\in L_{*}^{2},\\ w=f_{5}\in H_{*}^{1}\\ \beta\varphi_{xx}+\gamma\psi_{xx}-du_{x}-\alpha_{3}\varphi-\alpha_{2}\psi- \tau_{3}\phi-\tau_{4}w=\kappa_{2}f_{6}\in L_{*}^{2}.\end{array}\right.\]
From the first, the third and the fifth equations we have \(v\in H_{0}^{1}\left(0,\pi\right)\) and \(\phi,w\in H_{*}^{1}\left(0,\pi\right).\) Substituting \(v,\phi\) and \(w\) by \(f_{1},f_{3}\) and \(f_{5}\) respectively we obtain
\[\left\{\begin{array}{l}\mu u_{xx}+b\varphi_{x}+d\psi_{x}=\rho f_{2}\in L^{2}, \\ \alpha\varphi_{xx}+\beta\psi_{xx}-bu_{x}-\alpha_{1}\varphi-\alpha_{3}\psi=\kappa _{1}f_{4}+\tau_{1}f_{3}+\tau_{2}f_{5}=g_{1}\in L_{*}^{2},\\ \beta\varphi_{xx}+\gamma\psi_{xx}-du_{x}-\alpha_{3}\varphi-\alpha_{2}\psi= \kappa_{2}f_{6}+\tau_{3}f_{3}+\tau_{4}f_{5}=g_{2}\in L_{*}^{2},\end{array}\right. \tag{2.17}\]
Taking the \(L^{2}\)-product of (2.17)\({}_{1}\),(2.17)\({}_{2}\) and (2.17)\({}_{3}\) by \(u^{\ast},\varphi^{\ast}\) and \(\psi^{\ast}\) respectively, using integration by parts and adding the obtained equations, we arrive at
\[a\left(V,V^{\ast}\right)=L\left(V^{\ast}\right), \tag{2.18}\]
where \(a\) is the bilinear form defined on \(\mathcal{W}=H_{0}^{1}\left(0,\pi\right)\times H_{\ast}^{1}\left(0,\pi\right)\times H_{\ast}^{1}\left(0,\pi\right)\), for \(V=\left(u,\varphi,\psi\right)\) and \(V^{\ast}=\left(u^{\ast},\varphi^{\ast},\psi^{\ast}\right)\in\mathcal{W}\), by
\[a\left(V,V^{\ast}\right)= \mu\int_{0}^{\pi}u_{x}u_{x}^{\ast}dx+b\int_{0}^{\pi}\varphi u_{x }^{\ast}dx+d\int_{0}^{\pi}\psi u_{x}^{\ast}dx+\alpha\int_{0}^{\pi}\varphi_{x} \varphi_{x}^{\ast}dx\] \[+\beta\int_{0}^{\pi}\psi_{x}\varphi_{x}^{\ast}dx+b\int_{0}^{\pi} u_{x}\varphi^{\ast}dx+\alpha_{1}\int_{0}^{\pi}\varphi\varphi^{\ast}dx+\alpha_{3} \int_{0}^{\pi}\psi\varphi^{\ast}dx\] \[+\beta\int_{0}^{\pi}\varphi_{x}\psi_{x}^{\ast}dx+\gamma\int_{0}^{ \pi}\psi_{x}\psi_{x}^{\ast}dx+d\int_{0}^{\pi}u_{x}\psi^{\ast}dx\] \[+\alpha_{3}\int_{0}^{\pi}\varphi\psi^{\ast}dx+\alpha_{2}\int_{0}^ {\pi}\psi\psi^{\ast}dx\]
and \(L\) is the linear form defined by
\[L\left(V^{\ast}\right)=-\rho\int_{0}^{\pi}f_{2}u^{\ast}dx-\int_{0}^{\pi}g_{1} \varphi^{\ast}dx-\int_{0}^{\pi}g_{2}\psi^{\ast}dx.\]
Clearly, \(a\) and \(L\) are continuous. Furthermore, from Remark 1, there exists \(\varepsilon>0\), such that
\[a\left(V,V\right)= \mu\int_{0}^{\pi}u_{x}^{2}dx+\alpha\int_{0}^{\pi}\varphi_{x}^{2} dx+\gamma\int_{0}^{\pi}\psi_{x}^{2}dx+\alpha_{1}\int_{0}^{\pi}\varphi^{2}dx+ \alpha_{2}\int_{0}^{\pi}\psi^{2}dx\] \[+2b\int_{0}^{\pi}u_{x}\varphi dx+2d\int_{0}^{\pi}\psi u_{x}dx+2 \beta\int_{0}^{\pi}\varphi_{x}\psi_{x}dx+2\alpha_{3}\int_{0}^{\pi}\psi\varphi dx,\] \[\geq \frac{1}{2}\left(\alpha-\frac{\beta^{2}}{\gamma}\right)\int_{0}^{ \pi}\varphi_{x}^{2}dx+\frac{1}{2}\left(\gamma-\frac{\beta^{2}}{\alpha}\right) \int_{0}^{\pi}\psi_{x}^{2}dx+\varepsilon\int_{0}^{\pi}\left(u_{x}^{2}+\varphi ^{2}+\psi^{2}\right)dx.\]
Thus,
\[a\left(V,V\right)\geq c\left\|V\right\|_{\mathcal{W}}^{2},\]
for \(c=\frac{1}{2}\min\{\alpha-\frac{\beta^{2}}{\gamma},\gamma-\frac{\beta^{2}}{ \alpha},2\varepsilon\}\), which shows that \(a\) is coercive. Therefore, Lax-Milgram theorem ensures the existence of a unique \(V=\left(u,\varphi,\psi\right)\)\(\in\mathcal{W}\) satisfying
\[a\left(V,V^{\ast}\right)=L\left(V^{\ast}\right),\ \ \forall V^{\ast}\in\mathcal{W}.\]
Now, taking \(V^{\ast}=\left(u^{\ast},0,0\right)\) in (2.18) we get
\[\mu\int_{0}^{\pi}u_{x}u_{x}^{\ast}dx=-\int_{0}^{\pi}\left(\rho f_{2}-b\varphi_{ x}-d\psi_{x}\right)u^{\ast}dx,\,\forall u^{\ast}\in H_{0}^{1}. \tag{2.19}\]
The elliptic regularity theory shows that
\[u\in H^{2}\left(0,\pi\right),\]
with
\[u_{xx}=\frac{1}{\mu}\left(\rho f_{2}-b\varphi_{x}-d\psi_{x}\right),\]
which solves the first equation of (2.17).
Next, let \(\varphi^{*}\in H^{1}\left(0,\pi\right)\) and define
\[\widetilde{\varphi}\left(x\right)=\varphi^{*}\left(x\right)-\frac{1}{\pi}\int_{0}^{\pi} \varphi^{*}\left(y\right)dy,\]
clearly, \(\widetilde{\varphi}\in H_{*}^{1}\left(0,\pi\right).\) Taking \(V^{*}=\left(0,\widetilde{\varphi},0\right)\) in (2.18) we get
\[\int_{0}^{\pi}\left(\alpha\varphi_{x}+\beta\psi_{x}\right)\widetilde{\varphi} _{x}dx=-\int_{0}^{\pi}\left(g_{1}+bu_{x}+\alpha_{1}\varphi+\alpha_{3}\psi \right)\widetilde{\varphi}dx,\,\forall\widetilde{\varphi}\in H_{*}^{1}, \tag{2.20}\]
which means that
\[\alpha\varphi+\beta\psi\in H^{2}\left(0,\pi\right), \tag{2.21}\]
with
\[\alpha\varphi_{xx}+\beta\psi_{xx}=g_{1}+bu_{x}+\alpha_{1}\varphi+\alpha_{3}\psi.\]
Similarly, we obtain
\[\beta\varphi+\gamma\psi\in H^{2}\left(0,\pi\right) \tag{2.22}\]
\[\beta\varphi_{xx}+\gamma\psi_{xx}=g_{2}+du_{x}+\alpha_{3}\varphi+\alpha_{2}\psi.\]
From (2.21) and (2.22) we get
\[\varphi,\psi\in H^{2}\left(0,\pi\right).\]
To show that \(\varphi\) belongs to \(H_{*}^{2}\left(0,\pi\right)\) we take \(\varphi^{*}\in C^{1}\left(0,\pi\right)\) in (2.20) and define \(\widetilde{\varphi}\) as above, then using integration by parts, we obtain,
\[\left[\left(\alpha\varphi_{x}+\beta\psi_{x}\right)\widetilde{\varphi}\right] _{0}^{\pi}-\int_{0}^{\pi}\left(\alpha\varphi_{xx}+\beta\psi_{xx}-g_{1}-bu_{x}- \alpha_{1}\varphi-\alpha_{3}\psi\right)\widetilde{\varphi}dx=0,\,\forall \widetilde{\varphi}\in H_{*}^{1}. \tag{2.23}\]
First, we take \(\widetilde{\varphi}\in C_{0}^{1}\left(0,\pi\right),\) we get
\[\alpha\varphi_{xx}+\beta\psi_{xx}=g_{1}+bu_{x}+\alpha_{1}\varphi+\alpha_{3} \psi,\quad a.e.\,\text{in}\,\,\left(0,\pi\right).\]
Back to (2.23), we get
\[\left(\alpha\varphi_{x}\left(\pi\right)+\beta\psi_{x}\left(\pi\right)\right) \widetilde{\varphi}\left(\pi\right)-\left(\alpha\varphi_{x}\left(0\right)+ \beta\psi_{x}\left(0\right)\right)\widetilde{\varphi}\left(0\right)=0,\,\forall \widetilde{\varphi}\in H_{*}^{1}.\]
As \(\widetilde{\varphi}\) is arbitrary in \(H_{*}^{1}\left(0,\pi\right)\), we obtain
\[\alpha\varphi_{x}\left(\pi\right)+\beta\psi_{x}\left(\pi\right)=0\,\,\text{ and}\,\,\alpha\varphi_{x}\left(0\right)+\beta\psi_{x}\left(0\right)=0.\]
Similarly, we obtain
\[\beta\varphi_{x}\left(\pi\right)+\gamma\psi_{x}\left(\pi\right)=0\,\,\text{ and}\,\,\beta\varphi_{x}\left(0\right)+\gamma\psi_{x}\left(0\right)=0.\]
Therefore, \(\varphi,\psi\in H_{*}^{2}\left(0,\pi\right)\), and consequently \(U\in D\left(\mathcal{A}\right)\) and \(0\in\rho\left(\mathcal{A}\right).\) Moreover, using a geometric (Neumann) series argument, we prove that \(\lambda I-\mathcal{A}=\mathcal{A}(\lambda\mathcal{A}^{-1}-I)\) is invertible for \(\left|\lambda\right|<\left\|\mathcal{A}^{-1}\right\|^{-1}\), hence \(\lambda\in\rho(\mathcal{A})\). This completes the proof that \(\mathcal{A}\) is the infinitesimal generator of a \(\mathrm{C}_{0}\)-semigroup, and the Lumer-Phillips theorem then ensures the existence of a unique solution to the problem (1.1), (1.5) and (1.7) satisfying the statements of Theorem 3.
**Remark 2**.: _We note that if \(U_{0}\in D(\mathcal{A})\) then the solution \(U(t)=e^{t\mathcal{A}}U_{0}\in C((0,\infty);D(\mathcal{A}))\cap C^{1}((0, \infty);\mathcal{H})\) and (2.16) is satisfied in \(\mathcal{H}\) for every \(t>0\). It turns out that \(u,\varphi,\psi\) satisfy (1.1) in the strong sense._
If \(U_{0}\in\mathcal{H}\) there exists a sequence \(U_{0n}\in D(\mathcal{A})\) converging to \(U_{0}\) in \(\mathcal{H}\). Accordingly, there exists a sequence of solutions \(U_{n}(t)=e^{t\mathcal{A}}U_{0n}\) such that \(u_{n},\varphi_{n},\psi_{n}\) satisfy (1.1) in \(L^{2}\) for every \(t>0\), and for any \(T>0\), \(u_{n}\to u\) in \(C((0,T),H_{0}^{1})\cap C^{1}((0,T);L^{2})\), \(\varphi_{n}\to\varphi\) and \(\psi_{n}\to\psi\) in \(C((0,T),H_{*}^{1})\cap C^{1}((0,T);L^{2})\). Therefore, if we multiply the equations of (1.1) for \(u_{n},\varphi_{n},\psi_{n}\) by \(u^{*}\in H_{0}^{1}\) and \(\varphi^{*},\psi^{*}\in H_{*}^{1}\), respectively, then integrate by parts with respect to \(x\) and integrate with respect to \(t\), finally passing to the limit, we find that \(u,\varphi\) and \(\psi\) are weak solutions to the variational form of system (1.1).
### Exponential stability
In the present section we tackle the main objective of this paper, that is, the proof of the exponential decay of the solution of (1.1). First, we introduce the following two constants
\[\chi_{0}=\left(\frac{\mu\kappa_{1}}{\rho}-\alpha\right)\left(\frac{\mu\kappa_ {2}}{\rho}-\gamma\right)-\beta^{2},\]
and
\[\chi_{1}=d^{2}\left(\frac{\mu\kappa_{1}}{\rho}-\alpha\right)+b^{2}\left(\frac {\mu\kappa_{2}}{\rho}-\gamma\right)+2bd\beta.\]
Our stability result reads as follows:
**Theorem 4**.: _Let \(\left(u,\varphi,\psi\right)\) be a solution of problem (1.1) with boundary conditions (1.7). Assume that_
\[\chi_{0}=0\text{ and }\ \chi_{1}\neq 0. \tag{2.24}\]
_Then the energy functional \(E\left(t\right)\) defined by (1.8) satisfies_
\[E\left(t\right)\leq\lambda e^{-\xi t},\ \forall t\geq 0, \tag{2.25}\]
_where \(\lambda\) and \(\xi\) are two positive constants._
**Remark 3**.: _The hypothesis (2.24) is equivalent to the following:_
_There exist two constants \(\sigma,\omega\in\mathbb{R}^{*},\) such that_
\[\begin{split}\frac{\mu}{\rho}=\frac{\sigma\alpha+\omega\beta}{ \sigma\kappa_{1}}=\frac{\sigma\beta+\omega\gamma}{\omega\kappa_{2}},& \text{if }\ \beta\neq 0,\\ \left(\frac{\mu}{\rho}=\frac{\alpha}{\kappa_{1}}\text{ and }\ b\neq 0 \right)\text{ or }\ \left(\frac{\mu}{\rho}=\frac{\gamma}{\kappa_{2}}\text{ and }\ d\neq 0\right),& \text{if }\ \beta=0.\end{split} \tag{2.26}\]
_It is clear that, in the case \(\beta\neq 0\), if \(\sigma=b\) and \(\omega=d\) solve (2.26), then \(\alpha,\beta,\gamma,\mu,\rho,\kappa_{1}\) and \(\kappa_{2}\) satisfy (2.24)._
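Indeed, taking \(\sigma=b\) and \(\omega=d\) in (2.26) in the case \(\beta\neq 0\), a direct substitution gives

\[\frac{\mu\kappa_{1}}{\rho}-\alpha=\frac{d\beta}{b},\qquad\frac{\mu\kappa_{2}}{\rho}-\gamma=\frac{b\beta}{d},\]

and therefore

\[\chi_{0}=\frac{d\beta}{b}\cdot\frac{b\beta}{d}-\beta^{2}=0,\qquad\chi_{1}=d^{2}\frac{d\beta}{b}+b^{2}\frac{b\beta}{d}+2bd\beta=\frac{\beta\left(b^{2}+d^{2}\right)^{2}}{bd}\neq 0,\]

so that (2.24) holds.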
The proof of Theorem 4 will be established through several lemmas.
**Lemma 2**.: _For \(\left(u,\varphi,\psi\right)\) a solution of (1.1), there exist positive constants \(\hat{\alpha},\hat{\gamma},\hat{\alpha}_{1}\) and \(\hat{\alpha}_{2}\), such that the functional_
\[F_{1}\left(t\right) =\kappa_{1}\int_{0}^{1}\varphi_{t}\varphi dx+\kappa_{2}\int_{0}^{ 1}\psi_{t}\psi dx+\frac{\tau_{1}}{2}\int_{0}^{1}\varphi^{2}dx+\frac{\tau_{4}}{ 2}\int_{0}^{1}\psi^{2}dx\] \[-\frac{\rho}{\mu}\int_{0}^{1}u_{t}\left(\int_{0}^{x}\left(b \varphi+d\psi\right)\left(y\right)dy\right)dx\]
_satisfies for any \(\delta>0\), the estimate_
\[F_{1}^{\prime}\left(t\right) \leq-\hat{\alpha}\int_{0}^{1}\varphi_{x}^{2}dx-\hat{\gamma}\int_{0} ^{1}\psi_{x}^{2}dx-\frac{\hat{\alpha}_{1}}{2}\int_{0}^{1}\varphi^{2}dx-\frac{ \hat{\alpha}_{2}}{2}\int_{0}^{1}\psi^{2}dx\] \[+\delta\int_{0}^{1}u_{t}^{2}dx+m_{\delta}\int_{0}^{1}\varphi_{t}^ {2}dx+m_{\delta}\int_{0}^{1}\psi_{t}^{2}dx. \tag{2.27}\]
Proof.: The differentiation of \(F_{1}\left(t\right)\) gives
\[F_{1}^{\prime}\left(t\right) =\kappa_{1}\int_{0}^{1}\varphi_{tt}\varphi dx+\kappa_{1}\int_{0}^ {1}\varphi_{t}^{2}dx+\kappa_{2}\int_{0}^{1}\psi_{tt}\psi dx+\kappa_{2}\int_{0 }^{1}\psi_{t}^{2}dx\] \[+\tau_{1}\int_{0}^{1}\varphi\varphi_{t}dx+\tau_{4}\int_{0}^{1} \psi_{t}\psi dx-\frac{\rho}{\mu}\int_{0}^{1}u_{tt}\left(\int_{0}^{x}\left(b \varphi+d\psi\right)\left(y\right)dy\right)dx\] \[-\frac{\rho}{\mu}\int_{0}^{1}u_{t}\left(\int_{0}^{x}\left(b \varphi+d\psi\right)_{t}\left(y\right)dy\right)dx.\]
By exploiting the equations of (1.1) and using integration by parts, we get
\[F_{1}^{\prime}\left(t\right) =-\alpha\int_{0}^{1}\varphi_{x}^{2}dx-2\beta\int_{0}^{1}\psi_{x} \varphi_{x}dx-\gamma\int_{0}^{1}\psi_{x}^{2}dx\] \[-\left(\alpha_{1}-\frac{b^{2}}{\mu}\right)\int_{0}^{1}\varphi^{2 }dx-2\left(\alpha_{3}-\frac{bd}{\mu}\right)\int_{0}^{1}\psi\varphi dx-\left( \alpha_{2}-\frac{d^{2}}{\mu}\right)\int_{0}^{1}\psi^{2}dx\] \[+\kappa_{1}\int_{0}^{1}\varphi_{t}^{2}dx+\kappa_{2}\int_{0}^{1} \psi_{t}^{2}dx-\tau_{2}\int_{0}^{1}\psi_{t}\varphi dx-\tau_{3}\int_{0}^{1} \varphi_{t}\psi dx\] \[-\frac{\rho}{\mu}\int_{0}^{1}u_{t}\left(\int_{0}^{x}\left(b \varphi_{t}+d\psi_{t}\right)\left(y\right)dy\right)dx.\]
Then, using Young's and Cauchy Schwarz inequalities, we obtain
\[F_{1}^{\prime}\left(t\right) \leq-\left(\alpha-\beta\varepsilon\right)\int_{0}^{1}\varphi_{x}^{2}dx-\left(\gamma-\frac{\beta}{\varepsilon}\right)\int_{0}^{1}\psi_{x}^{2}dx\] \[-\left[\left(\alpha_{1}-\frac{b^{2}}{\mu}\right)-\left(\alpha_{3}-\frac{bd}{\mu}\right)\eta-\epsilon\right]\int_{0}^{1}\varphi^{2}dx\] \[-\left[\left(\alpha_{2}-\frac{d^{2}}{\mu}\right)-\frac{1}{\eta}\left(\alpha_{3}-\frac{bd}{\mu}\right)-\epsilon\right]\int_{0}^{1}\psi^{2}dx\] \[+m\left(1+\frac{1}{\epsilon}+\frac{1}{\delta}\right)\int_{0}^{1}\varphi_{t}^{2}dx+m\left(1+\frac{1}{\epsilon}+\frac{1}{\delta}\right)\int_{0}^{1}\psi_{t}^{2}dx+\delta\int_{0}^{1}u_{t}^{2}dx,\]
for any \(\varepsilon,\eta,\epsilon,\delta>0\).
First, by virtue of (1.10), we can choose \(\varepsilon>0\) such that
\[\hat{\alpha}=\alpha-\beta\varepsilon>0,\ \text{and}\ \ \hat{\gamma}=\gamma-\frac{\beta}{\varepsilon}>0.\]
Similarly, (1.9) allows us to choose \(\eta>0\) such that
\[\hat{\alpha}_{1}=\left(\alpha_{1}-\frac{b^{2}}{\mu}\right)-\left(\alpha_{3}- \frac{bd}{\mu}\right)\eta>0,\]
\[\hat{\alpha}_{2}=\left(\alpha_{2}-\frac{d^{2}}{\mu}\right)-\frac{1}{\eta}\left( \alpha_{3}-\frac{bd}{\mu}\right)>0.\]
Finally, we choose \(\epsilon>0\) so that
\[\hat{\alpha}_{1}-\epsilon\geq\frac{\hat{\alpha}_{1}}{2},\,\text{and}\,\,\,\, \hat{\alpha}_{2}-\epsilon\geq\frac{\hat{\alpha}_{2}}{2}.\]
Consequently, the estimate (2.27) follows.
**Lemma 3**.: _Let \(\sigma\) and \(\omega\) be two constants that satisfy (2.26), then the functional_
\[F_{2}\left(t\right):=\rho\int_{0}^{\pi}\left(\left(\sigma\alpha+\omega\beta \right)\varphi_{x}+\left(\sigma\beta+\omega\gamma\right)\psi_{x}\right)u_{t} dx+\mu\int_{0}^{\pi}\left(\sigma\kappa_{1}\varphi_{t}+\omega\kappa_{2}\psi_{t} \right)u_{x}dx\]
_satisfies, along the solution \(\left(u,\varphi,\psi\right)\), the estimate_
\[\left(b\sigma+d\omega\right)F_{2}^{\prime}\left(t\right) \leq-\frac{\mu}{2}\left(b\sigma+d\omega\right)^{2}\int_{0}^{\pi}u _{x}^{2}dx\] \[+m\left(\int_{0}^{1}\varphi_{x}^{2}dx+\int_{0}^{1}\psi_{x}^{2}dx +\int_{0}^{1}\varphi_{t}^{2}dx+\int_{0}^{1}\psi_{t}^{2}dx\right). \tag{2.28}\]
Proof.: Differentiating \(F_{2}\left(t\right),\) using integration by parts and boundary conditions (1.7) we get
\[F_{2}^{\prime}\left(t\right) =\rho\left(\frac{\mu\sigma\kappa_{1}}{\rho}-\left(\sigma\alpha+ \omega\beta\right)\right)\int_{0}^{1}u_{xt}\varphi_{t}dx+\rho\left(\frac{\mu \omega\kappa_{2}}{\rho}-\left(\sigma\beta+\omega\gamma\right)\right)\int_{0}^ {1}u_{xt}\psi_{t}dx\] \[-\mu\left(b\sigma+d\omega\right)\int_{0}^{\pi}u_{x}^{2}dx+b\left( \sigma\alpha+\omega\beta\right)\int_{0}^{\pi}\varphi_{x}^{2}dx+d\left(\sigma \beta+\omega\gamma\right)\int_{0}^{\pi}\psi_{x}^{2}dx\] \[+\left(\sigma\left(d\alpha+b\beta\right)+\omega\left(d\beta+b \gamma\right)\right)\int_{0}^{\pi}\varphi_{x}\psi_{x}dx-\mu\left(\sigma\alpha _{1}+\omega\alpha_{3}\right)\int_{0}^{\pi}\varphi u_{x}dx\] \[-\mu\left(\sigma\alpha_{3}+\omega\alpha_{2}\right)\int_{0}^{\pi} \psi u_{x}-\mu\left(\sigma\tau_{1}+\omega\tau_{3}\right)\int_{0}^{\pi}\varphi _{t}u_{x}dx-\mu\left(\sigma\tau_{2}+\omega\tau_{4}\right)\int_{0}^{\pi}\psi_{ t}u_{x}dx.\]
Thus, estimate (2.28) follows immediately by taking into account (2.26) and using Young's and Poincaré's inequalities.
**Lemma 4**.: _Along the solution \(\left(u,\varphi,\psi\right)\) of (1.1), the functional_
\[F_{3}\left(t\right)=-\rho\int_{0}^{1}u_{t}udx\]
_satisfies_
\[F_{3}^{\prime}\left(t\right)\leq-\rho\int_{0}^{1}u_{t}^{2}dx+2\mu\int_{0}^{1} u_{x}^{2}dx+\frac{b^{2}}{2\mu}\int_{0}^{1}\varphi^{2}dx+\frac{d^{2}}{2\mu} \int_{0}^{1}\psi^{2}dx. \tag{2.29}\]
Proof.: Differentiating \(F_{3}\left(t\right)\), using integration by parts and Young's inequality, estimate (2.29) follows immediately.
**End of the proof of Theorem 4**
At this point we define the Lyapunov functional \(\mathcal{L}\left(t\right)\) as follows
\[\mathcal{L}\left(t\right)=NE\left(t\right)+N_{1}F_{1}\left(t\right)+N_{2}\left( b\sigma+d\omega\right)F_{2}\left(t\right)+F_{3}\left(t\right),\]
where \(N,N_{1}\) and \(N_{2}\) are positive constants to be properly chosen later.
First, we have
\[\left|\mathcal{L}\left(t\right)-NE\left(t\right)\right|\leq N_{1}\int_{0}^{ \pi}\left(\kappa_{1}\left|\varphi_{t}\varphi\right|+\kappa_{2}\left|\psi_{t} \psi\right|+\frac{\tau_{1}}{2}\left|\varphi\right|^{2}+\frac{\tau_{4}}{2}\left| \psi\right|^{2}\right)dx\]
\[+\frac{\rho}{\mu}\int_{0}^{\pi}\left|u_{t}\left(\int_{0}^{x}\left(b\varphi+d\psi\right)\left(y\right)dy\right)\right|dx+\rho\int_{0}^{1}\left|u_{t}u\right|dx\]
\[+N_{2}\left|b\sigma+d\omega\right|\int_{0}^{\pi}\left(\rho\left|\sigma\alpha +\omega\beta\right|\left|u_{t}\varphi_{x}\right|+\rho\left|\sigma\beta+ \omega\gamma\right|\left|u_{t}\psi_{x}\right|\right)dx\]
\[+N_{2}\int_{0}^{\pi}\left(\mu\kappa_{1}\left|b\varphi_{t}u_{x}\right|+\mu \kappa_{2}\left|d\psi_{t}u_{x}\right|\right)dx.\]
Using Young's, Cauchy Schwarz and Poincare's inequalities, we obtain
\[\left|\mathcal{L}\left(t\right)-NE\left(t\right)\right| \leq c_{0}\int_{0}^{1}\left(u_{t}^{2}+\varphi_{t}^{2}+\psi_{t}^{2 }+\left(\varphi_{x}+\psi_{x}\right)^{2}+\psi_{x}^{2}+\left(u_{x}+\varphi+\psi \right)^{2}\right)dx\] \[\leq cE\left(t\right).\]
Thus,
\[\left(N-c\right)E\left(t\right)\leq\mathcal{L}\left(t\right)\leq\left(N+c \right)E\left(t\right).\]
Secondly, substituting (2.15),(2.27),(2.28) and (2.29) in the expression of \(\mathcal{L}^{\prime}\left(t\right)\) we get
\[\mathcal{L}^{\prime}\left(t\right) \leq-\left[\frac{1}{2}\left(\tau_{1}-\frac{\left(\tau_{2}+\tau_{ 3}\right)^{2}}{4\tau_{4}}\right)N-m_{\delta}N_{1}-mN_{2}\right]\int_{0}^{1} \varphi_{t}^{2}dx\] \[-\left[\frac{1}{2}\left(\tau_{4}-\frac{\left(\tau_{2}+\tau_{3} \right)^{2}}{4\tau_{1}}\right)N-m_{\delta}N_{1}-mN_{2}\right]\int_{0}^{1} \psi_{t}^{2}dx\] \[-\mu\left(\frac{\left(\sigma b+\omega d\right)^{2}}{2}N_{2}-2 \right)\int_{0}^{1}u_{x}^{2}dx-\left(\rho-\delta N_{1}\right)\int_{0}^{1}u_{t }^{2}dx\] \[-\left(\hat{\alpha}N_{1}-mN_{2}\right)\int_{0}^{1}\varphi_{x}^{2 }dx-\left(\hat{\gamma}N_{1}-mN_{2}\right)\int_{0}^{1}\psi_{x}^{2}dx\] \[-\frac{1}{2}\left(\hat{\alpha}_{1}N_{1}-\frac{b^{2}}{\mu}\right) \int_{0}^{1}\varphi^{2}dx-\frac{1}{2}\left(\hat{\alpha}_{2}N_{1}-\frac{d^{2}}{ \mu}\right)\int_{0}^{1}\psi^{2}dx.\]
Now, we have to choose the coefficients carefully. First, we take
\[\delta=\frac{\rho}{2N_{1}}.\]
Secondly, we choose \(N_{2}\) large enough such that
\[\frac{\left(\sigma b+\omega d\right)^{2}}{2}N_{2}-2>0.\]
Next, we pick \(N_{1}\) large enough such that
\[\hat{\alpha}N_{1}-mN_{2}>0,\ \hat{\gamma}N_{1}-mN_{2}>0,\]
\[\hat{\alpha}_{1}N_{1}-\frac{b^{2}}{\mu}>0,\,\text{and}\ \ \hat{\alpha}_{2}N_{1}-\frac{d^{2}}{\mu}>0.\]
Finally, we take \(N\) large enough such that \(\mathcal{L}\left(t\right)\sim E\left(t\right)\) (i.e. \(N-c>0\)) and
\[\frac{1}{2}\left(\tau_{1}-\frac{\left(\tau_{2}+\tau_{3}\right)^{2}}{4\tau_{4}}\right)N-m_{\delta}N_{1}-mN_{2}>0,\] \[\frac{1}{2}\left(\tau_{4}-\frac{\left(\tau_{2}+\tau_{3}\right)^{2}}{4\tau_{1}}\right)N-m_{\delta}N_{1}-mN_{2}>0.\]
Therefore, there exist two positive constants \(\sigma\) and \(\widetilde{\sigma}\) such that
\[\mathcal{L}^{\prime}\left(t\right) \leq-\sigma\int_{0}^{\pi}\left(\varphi_{t}^{2}+\psi_{t}^{2}+u_{t }^{2}+u_{x}^{2}+\varphi_{x}^{2}+\psi_{x}^{2}+\psi^{2}+\varphi^{2}\right)dx,\] \[\leq-\widetilde{\sigma}E\left(t\right),\qquad\forall t\geq 0.\]
Since \(E\left(t\right)\) is equivalent to \(\mathcal{L}\left(t\right),\) we infer that
\[\mathcal{L}^{\prime}\left(t\right)\leq-\omega\mathcal{L}\left(t\right),\ \ \forall t\geq 0,\]
for some positive constant \(\omega.\) Thus
\[\mathcal{L}\left(t\right)\leq\lambda_{1}\mathcal{L}\left(0\right)e^{-\omega t },\ \ \forall t\geq 0.\]
Using again the equivalence between \(\mathcal{L}\left(t\right)\) and \(E\left(t\right)\) we conclude that
\[E\left(t\right)\leq\lambda e^{-\omega t},\ \ \forall t\geq 0,\]
which completes the proof of Theorem 4.
**Remark 4**.: _The same proof is valid for the following boundary conditions_
\[\begin{array}{l}u_{x}\left(t,\pi\right)=\varphi\left(t,\pi\right)=\psi \left(t,\pi\right)=0,\\ u\left(t,0\right)=\varphi_{x}\left(t,0\right)=\psi_{x}\left(t,0\right)=0, \end{array}\ \ t\geq 0.\]
## 3 Lack of exponential decay
In this section we suppose that (2.24) does not hold, and prove that the solution \(\left(u,\varphi,\psi\right)\) of the system (1.1) is not exponentially stable. The proof is based on the following theorem due to Gearhart-Prüss-Huang [10, 16, 11].
**Theorem 5**.: _Let \(S\left(t\right)=e^{\mathcal{A}t}\) be a \(C_{0}-\)semigroup of contractions on a Hilbert space \(\mathcal{H}\), with infinitesimal generator \(\mathcal{A}\). Then \(S\left(t\right)\) is exponentially stable if and only if:_
* \(i\mathbb{R}\subset\rho\left(\mathcal{A}\right),\)__
* \(\underset{\left|\lambda\right|\longrightarrow\infty}{\overline{\lim}}\left\| \left(\lambda I-\mathcal{A}\right)^{-1}\right\|_{\mathcal{L}\left(\mathcal{H} \right)}<\infty.\)__
Our result on the lack of exponential stability reads as follows.
**Theorem 6**.: _Suppose that (2.24) does not hold, then the energy associated with the solution \(\left(u,\varphi,\psi\right)\) of the system (1.1) is not exponentially stable._
Proof.: It suffices to prove that there exists a sequence \(\left(F_{n}\right)\subset\mathcal{H}\) with bounded norm \(\left\|F_{n}\right\|<1\), such that
\[\underset{\left|\lambda\right|\longrightarrow\infty}{\overline{\lim}}\left\| \left(\lambda I-\mathcal{A}\right)^{-1}F_{n}\right\|_{\mathcal{H}}=\underset {\left|\lambda\right|\longrightarrow\infty}{\overline{\lim}}\left\|U_{n} \right\|_{\mathcal{H}}=\infty. \tag{3.30}\]
Let \(\left(U_{n}\right)_{n\in\mathbb{N}}\subset D\left(\mathcal{A}\right)\) be the solution of \(\left(\lambda I-\mathcal{A}\right)U_{n}=F_{n}\), then, omitting \(n\) we have
\[i\lambda u+v =f_{1}\] \[i\lambda\rho v+\mu u_{xx}+b\varphi_{x}+d\psi_{x} =\rho f_{2}\] \[i\lambda\varphi+\phi =f_{3}\] \[i\lambda\kappa_{1}\phi+\alpha\varphi_{xx}+\beta\psi_{xx}-bu_{x}- \alpha_{1}\varphi-\alpha_{3}\psi-\tau_{1}\phi-\tau_{2}\chi =\kappa_{1}f_{4}\] \[i\lambda\psi+\chi =f_{5}\] \[i\lambda\kappa_{2}\chi+\beta\varphi_{xx}+\gamma\psi_{xx}-du_{x}- \alpha_{3}\varphi-\alpha_{2}\psi-\tau_{3}\phi-\tau_{4}\chi =\kappa_{2}f_{6}.\]
Taking \(f_{1}=f_{3}=f_{4}=f_{5}=f_{6}=0\) and \(f_{2}=\dfrac{1}{\rho}\sin\left(n\pi x\right),\) then eliminating \(v,\phi\) and \(\chi\) we obtain
\[\begin{array}{rl}\lambda^{2}\rho u+\mu u_{xx}+b\varphi_{x}+d\psi_{x}&=\sin\left(n\pi x\right),\\ \lambda^{2}\kappa_{1}\varphi+\alpha\varphi_{xx}+\beta\psi_{xx}-bu_{x}-\left(\alpha_{1}-i\lambda\tau_{1}\right)\varphi-\left(\alpha_{3}-i\lambda\tau_{2}\right)\psi&=0,\\ \lambda^{2}\kappa_{2}\psi+\beta\varphi_{xx}+\gamma\psi_{xx}-du_{x}-\left(\alpha_{3}-i\lambda\tau_{3}\right)\varphi-\left(\alpha_{2}-i\lambda\tau_{4}\right)\psi&=0.\end{array}\]
Taking into account the boundary conditions (1.7), we are looking for \(\left(u,\varphi,\psi\right)\) of the form
\[u=A\sin\left(n\pi x\right),\,\varphi=B\cos\left(n\pi x\right),\,\psi=C\cos \left(n\pi x\right).\]
That is
\[\left\{\begin{array}{c}\left(\rho\lambda^{2}-\mu\pi^{2}n^{2}\right)A-bn\pi B -dn\pi C=1\\ -b\left(n\pi\right)A+\left[\kappa_{1}\lambda^{2}-\left(n\pi\right)^{2}\alpha- \left(\alpha_{1}-i\lambda\tau_{1}\right)\right]B-\left[\beta\left(n\pi\right) ^{2}+\left(\alpha_{3}-i\lambda\tau_{2}\right)\right]C=0\\ -d\left(n\pi\right)A-\left[\beta\left(n\pi\right)^{2}+\left(\alpha_{3}-i \lambda\tau_{3}\right)\right]B+\left[\kappa_{2}\lambda^{2}-\left(n\pi\right)^ {2}\gamma-\left(\alpha_{2}-i\lambda\tau_{4}\right)\right]C=0,\end{array}\right.\]
which can be written
\[\left(\begin{array}{ccc}p_{1}\left(\lambda\right)&-bn\pi&-dn\pi\\ -bn\pi&p_{2}\left(\lambda\right)&p_{4}\left(\lambda\right)\\ -dn\pi&p_{5}\left(\lambda\right)&p_{3}\left(\lambda\right)\end{array}\right) \left(\begin{array}{c}A\\ B\\ C\end{array}\right)=\left(\begin{array}{c}1\\ 0\\ 0\end{array}\right) \tag{3.31}\]
where
\[p_{1}\left(\lambda\right):=\rho\lambda^{2}-\mu\left(\pi n\right)^{2},\,\,p_{2 }\left(\lambda\right):=\kappa_{1}\lambda^{2}-\left(n\pi\right)^{2}\alpha-\left( \alpha_{1}-i\lambda\tau_{1}\right),\]
\[p_{3}\left(\lambda\right):=\kappa_{2}\lambda^{2}-\left(n\pi\right)^{2}\gamma- \left(\alpha_{2}-i\lambda\tau_{4}\right),\,p_{4}\left(\lambda\right):=-\beta \left(n\pi\right)^{2}-\left(\alpha_{3}-i\lambda\tau_{2}\right),\]
\[p_{5}\left(\lambda\right):=-\beta\left(n\pi\right)^{2}-\left(\alpha_{3}-i \lambda\tau_{3}\right).\]
Solving (3.31) we obtain
\[A=\dfrac{K_{1}}{p_{1}K_{1}+K_{2}},\]
where,
\[K_{1}:=p_{2}p_{3}-p_{4}p_{5},\;K_{2}:=b(n\pi)^{2}\left(dp_{4}-bp_{3}\right)-d\left( n\pi\right)^{2}\left(dp_{2}-bp_{5}\right).\]
Let \(\lambda\) be such that \(p_{1}\left(\lambda\right)=0\), then \(\left(n\pi\right)^{2}=\dfrac{\rho\lambda^{2}}{\mu}\) and
\[K_{1}=\dfrac{\rho}{\mu}\left[\left(\dfrac{\mu\kappa_{1}}{\rho}-\alpha\right) \left(\dfrac{\mu\kappa_{2}}{\rho}-\gamma\right)-\beta^{2}\right]\lambda^{4}\]
\[+i\dfrac{\rho}{\mu}\left[\left(\dfrac{\mu\kappa_{1}}{\rho}-\alpha\right)\tau_ {4}+\left(\dfrac{\mu\kappa_{2}}{\rho}-\gamma\right)\tau_{1}+\beta\left(\tau_{2 }+\tau_{3}\right)\right]\lambda^{3}+K_{3},\]
that is
\[K_{1}=\dfrac{\rho}{\mu}\chi_{0}\lambda^{4}+i\dfrac{\rho}{\mu}\left[\left(\dfrac{\mu\kappa_{1}}{\rho}-\alpha\right)\tau_{4}+\left(\dfrac{\mu\kappa_{2}}{\rho}-\gamma\right)\tau_{1}+\beta\left(\tau_{2}+\tau_{3}\right)\right]\lambda^{3}+K_{3}\]
and
\[K_{2}=-\dfrac{\rho^{2}}{\mu^{2}}\left[b^{2}\left(\dfrac{\mu\kappa_{2}}{\rho}- \gamma\right)+d^{2}\left(\dfrac{\mu\kappa_{1}}{\rho}-\alpha\right)+2bd\beta \right]\lambda^{4}\]
\[+i\dfrac{\rho}{\mu}\left[bd\left(\tau_{2}+\tau_{3}\right)-b^{2}\tau_{4}-d^{2} \tau_{1}\right]\lambda^{3}+K_{4}\]
that is
\[K_{2}=-\dfrac{\rho^{2}}{\mu^{2}}\chi_{1}\lambda^{4}+i\dfrac{\rho}{\mu}\left[ bd\left(\tau_{2}+\tau_{3}\right)-b^{2}\tau_{4}-d^{2}\tau_{1}\right]\lambda^{3}+K_{4}\]
where \(K_{3},K_{4}\) are polynomials of degree 2 in \(\lambda\).
At this point we discuss three cases:
**1)**: Suppose that \(\chi_{0}\neq 0\) and \(\chi_{1}\neq 0\), then
\[A=\dfrac{K_{1}}{K_{2}}\approx\dfrac{\mu\chi_{0}}{-\rho\chi_{1}}\equiv c,\]
for some constant \(c\neq 0\).
**2)**: Suppose that \(\chi_{0}=\chi_{1}=0\), then
\[\dfrac{\mu\kappa_{1}}{\rho}-\alpha=-\dfrac{b\beta}{d},\;\dfrac{\mu\kappa_{2}}{\rho}-\gamma=-\dfrac{d\beta}{b}.\]
Consequently
\[\left(\dfrac{\mu\kappa_{1}}{\rho}-\alpha\right)\tau_{4}+\left(\dfrac{\mu\kappa _{2}}{\rho}-\gamma\right)\tau_{1}+\beta\left(\tau_{2}+\tau_{3}\right)\neq 0,\]
\[bd\left(\tau_{2}+\tau_{3}\right)-b^{2}\tau_{4}-d^{2}\tau_{1}\neq 0,\]
by virtue of (2.13) and
\[A=\dfrac{K_{1}}{K_{2}}\approx\dfrac{\left[\left(\dfrac{\mu\kappa_{1}}{\rho}- \alpha\right)\tau_{4}+\left(\dfrac{\mu\kappa_{2}}{\rho}-\gamma\right)\tau_{1} +\beta\left(\tau_{2}+\tau_{3}\right)\right]}{\left[bd\left(\tau_{2}+\tau_{3} \right)-b^{2}\tau_{4}-d^{2}\tau_{1}\right]}\equiv c.\]
Therefore,
\[\left\|U\right\|^{2}\geq\rho\left\|v\right\|^{2}=\rho c^{2}\left|\lambda\right|^{2 }\int_{0}^{1}\sin^{2}\left(n\pi x\right)dx=\frac{\rho c^{2}\left|\lambda\right| ^{2}}{2},\]
and consequently,
\[\lim_{\left|\lambda\right|\longrightarrow\infty}\left\|U\right\|^{2}=\infty.\]
**3)**: Suppose that \(\chi_{0}\neq 0\) and \(\chi_{1}=0\), then
\[A=\frac{K_{1}}{K_{2}}\approx\frac{\chi_{0}\lambda}{-i\left[bd\left(\tau_{2}+ \tau_{3}\right)-b^{2}\tau_{4}-d^{2}\tau_{1}\right]}\approx c\lambda,\]
\[\left\|U\right\|^{2}\geq\mu\left\|u_{x}\right\|^{2}=\mu A^{2}\left(n\pi\right)^{2}\int_{0}^{1}\cos^{2}\left(n\pi x\right)dx=\frac{\rho c^{2}\left|\lambda\right|^{4}}{2}\]
and
\[\lim_{\left|\lambda\right|\longrightarrow\infty}\left\|U\right\|^{2}=\infty.\]
Therefore, in all cases (3.30) holds and consequently, the proof of Theorem 6 is completed.
|
2302.08016 | Unsupervised Domain Adaptation for MRI Volume Segmentation and
Classification Using Image-to-Image Translation | Unsupervised domain adaptation is a type of domain adaptation and exploits
labeled data from the source domain and unlabeled data from the target one. In
the Cross-Modality Domain Adaptation for Medical Image Segmentation challenge
(crossMoDA2022), contrast enhanced T1 MRI volumes for brain are provided as the
source domain data, and high-resolution T2 MRI volumes are provided as the
target domain data. The crossMoDA2022 challenge contains two tasks,
segmentation of vestibular schwannoma (VS) and cochlea, and classification of
VS with Koos grade. In this report, we presented our solution for the
crossMoDA2022 challenge. We employ an image-to-image translation method for
unsupervised domain adaptation and residual U-Net for the segmentation task. We
use SVM for the classification task. The experimental results show that the
mean DSC and ASSD are 0.614 and 2.936 for the segmentation task and MA-MAE is
0.84 for the classification task. | Satoshi Kondo, Satoshi Kasai | 2023-02-16T01:09:50Z | http://arxiv.org/abs/2302.08016v1 | Unsupervised Domain Adaptation for MRI Volume Segmentation and Classification Using Image-to-Image Translation
###### Abstract
Unsupervised domain adaptation is a type of domain adaptation and exploits labeled data from the source domain and unlabeled data from the target one. In the Cross-Modality Domain Adaptation for Medical Image Segmentation challenge (crossMoDA2022), contrast enhanced T1 MRI volumes for brain are provided as the source domain data, and high-resolution T2 MRI volumes are provided as the target domain data. The crossMoDA2022 challenge contains two tasks, segmentation of vestibular schwannoma (VS) and cochlea, and classification of VS with Koos grade. In this report, we presented our solution for the crossMoDA2022 challenge. We employ an image-to-image translation method for unsupervised domain adaptation and a residual U-Net for the segmentation task. We use SVM for the classification task. The experimental results show that the mean DSC and ASSD are 0.614 and 2.936 for the segmentation task and MA-MAE is 0.84 for the classification task.
Keywords: Segmentation, Domain adaptation, Image-to-image translation.
## 1 Introduction
Unsupervised domain adaptation (UDA) is a type of domain adaptation that exploits labeled data from the source domain and unlabeled data from the target one [1, 2]. In the Cross-Modality Domain Adaptation for Medical Image Segmentation challenge held at the MICCAI2022 conference (crossMoDA2022), a large and multi-class dataset for unsupervised domain adaptation is introduced [3]. In this challenge, contrast enhanced T1 MRI volumes for brain are provided as the source domain data, and high-resolution T2 MRI volumes are provided as the target domain data [4, 5]. The crossMoDA2022 challenge contains the following two tasks.
a) Task 1 - Segmentation of two key brain structures (tumor and cochlea) involved in the follow-up and treatment planning of vestibular schwannoma (VS). The diagnosis and surveillance of patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging due to improved patient safety and cost efficacy.
b) Task 2 - Classification of VS according to the Koos grade in hrT2 images. The Koos grading scale is a classification system for VS that characterizes the tumor and its impact on adjacent brain structures. There are four grades. Grade 1 means a small intracanalicular tumor; grade 2 means a small tumor with protrusion into the cerebellopontine cistern and no contact with the brainstem; grade 3 means a tumor occupying the cerebellopontine cistern with no brainstem displacement; and grade 4 means a large tumor with brainstem and cranial nerve displacement. Koos grading is currently performed on ceT1 scans, but hrT2 could be used.
In this report, we present our solution to the crossMoDA2022 challenge. We employ an image-to-image translation method for unsupervised domain adaptation and a residual U-Net with deep supervision for the segmentation task. We use support vector machines (SVM) with hand-crafted features for the classification task.
## 2 Proposed Method
The dataset includes contrast enhanced T1 MRI (ceT1) volumes for brain as the source domain data (the segmentation and the classification labels are provided), and high-resolution T2 MRI (hrT2) volumes as the target domain data without any labels. In the proposed method, ceT1 volumes are translated to hrT2-like volumes by using an image-to-image translation method. The segmentation of VS and cochlea is performed with our segmentation model which is trained by using the translated hrT2-like volumes and the segmentation labels for corresponding ceT1 volumes. Classification of VS according to the Koos grade is conducted by using SVM with hand-crafted features obtained from the segmentation results. We will explain the details of the image-to-image translation method, the segmentation model and the classification model in the followings.
We use DCLGAN [6] as our image-to-image translation method. DCLGAN is an unsupervised image-to-image translation method based on contrastive learning and a dual learning setting (exploiting two encoders) to infer an efficient mapping between unpaired data. We apply DCLGAN to translate ceT1 slices to hrT2-like slices, i.e., the translation is conducted not on 3D volumes but on 2D slices.
We use 3D encoder-decoder networks for the segmentation task. Our base model is a residual U-Net with deep supervision [7]. The input volumes for training are the volumes translated from ceT1 to hrT2 with DCLGAN. Each input volume is first resampled to a voxel spacing of [0.4 mm, 0.4 mm, 0.5 mm] in the x, y and z directions, respectively. MRI intensities are normalized with clipping, where the minimum and maximum values for the clipping are 26 and 486, respectively. In the training phase, we randomly sample 3D patches of 96 x 96 x 96 voxels from the input volumes. The ratio of positive (VS and cochlea) and negative patches sampled from one input volume is 1:1:1. We apply a random intensity shift within 5 % for augmentation.
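As an illustration, this preprocessing and patch sampling could be assembled with MONAI dictionary transforms roughly as sketched below; the specific transform choices and arguments are assumptions, not the authors' code.

```python
# Sketch (illustrative, not the authors' pipeline): preprocessing and patch
# sampling as described above, written with MONAI dictionary transforms.
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Spacingd,
    ScaleIntensityRanged, RandCropByLabelClassesd, RandShiftIntensityd,
)

train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    # Resample to 0.4 x 0.4 x 0.5 mm voxels (trilinear for image, nearest for label).
    Spacingd(keys=["image", "label"], pixdim=(0.4, 0.4, 0.5),
             mode=("bilinear", "nearest")),
    # Clip intensities to [26, 486]; rescaling to [0, 1] is an extra assumption here.
    ScaleIntensityRanged(keys=["image"], a_min=26, a_max=486,
                         b_min=0.0, b_max=1.0, clip=True),
    # Sample 96^3 patches with a 1:1:1 ratio of background / VS / cochlea centers.
    RandCropByLabelClassesd(keys=["image", "label"], label_key="label",
                            spatial_size=(96, 96, 96), ratios=[1, 1, 1],
                            num_classes=3, num_samples=3),
    # Random intensity shift within +/- 5 % for augmentation.
    RandShiftIntensityd(keys=["image"], offsets=0.05, prob=0.5),
])
```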
The loss function is the adaptive t-vMF Dice loss [9]. The parameter \(\lambda\) for the adaptive t-vMF Dice loss is set to 256. We also employ deep supervision for the loss calculation. Intermediate outputs from several layers in the decoder of the model are up-sampled,
a loss value is calculated for each up-sampled output, and the loss values are then aggregated. The number of decoder layers used for deep supervision is three.
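A minimal sketch of this deep-supervision aggregation is given below; the unweighted averaging and the generic `loss_fn` argument are assumptions (in the paper the per-output loss is the adaptive t-vMF Dice loss).

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(outputs, target, loss_fn):
    """Aggregate a segmentation loss over deeply supervised decoder outputs.

    outputs : list of predictions from several decoder depths, each of shape
              (B, C, D, H, W) but possibly at different spatial resolutions.
    target  : ground-truth labels of shape (B, C, D, H, W) at full resolution.
    loss_fn : any callable loss(pred, target), e.g. the adaptive t-vMF Dice loss.
    """
    total = 0.0
    for pred in outputs:
        # Up-sample each intermediate output to the target resolution,
        # compute its loss, then aggregate (here: a simple average).
        pred_up = F.interpolate(pred, size=target.shape[2:], mode="trilinear",
                                align_corners=False)
        total = total + loss_fn(pred_up, target)
    return total / len(outputs)
```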
We train multiple models. Each model is trained independently using different combinations of training and validation datasets, and the inference results are obtained by ensembling the outputs of the models. The final likelihood score is obtained by averaging the likelihood scores from the models. We use five models in our experiments.
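The ensembling step could be sketched as follows; the use of MONAI sliding-window inference and the softmax averaging over patches are assumptions.

```python
import torch
from monai.inferers import sliding_window_inference

@torch.no_grad()
def ensemble_predict(models, volume):
    """Average the likelihood (softmax) maps of several trained models.

    volume: input tensor of shape (B, C, D, H, W); models: list of trained networks.
    """
    probs = []
    for model in models:
        model.eval()
        logits = sliding_window_inference(volume, roi_size=(96, 96, 96),
                                          sw_batch_size=4, predictor=model)
        probs.append(torch.softmax(logits, dim=1))
    mean_prob = torch.stack(probs, dim=0).mean(dim=0)   # averaged likelihoods
    return mean_prob.argmax(dim=1)                      # final label map
```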
We perform the classification of VS according to the Koos grade using an SVM with hand-crafted features. The features used for the classification are the volume of VS and the size of the bounding box of VS in the x, y and z directions, computed from the VS segmentation results. We use a linear SVM, trained using the VS segmentation labels of the ceT1 volumes.
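A sketch of the feature extraction and classifier with scikit-learn is shown below; the feature definitions follow the text, while the voxel spacing, the axis ordering, the synthetic mask and the training call are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def vs_features(mask, spacing=(0.4, 0.4, 0.5)):
    """Hand-crafted features from a binary VS segmentation mask:
    tumor volume (mm^3) and bounding-box extent along x, y, z (mm).
    Assumes the array axes correspond to (x, y, z) with the given spacing."""
    idx = np.argwhere(mask > 0)
    if idx.size == 0:
        return np.zeros(4)
    volume = idx.shape[0] * np.prod(spacing)
    extent = (idx.max(axis=0) - idx.min(axis=0) + 1) * np.array(spacing)
    return np.array([volume, *extent])

# Example with a synthetic mask (placeholder for a real segmentation result):
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 25:40, 15:22] = 1
print(vs_features(mask))        # -> [volume_mm3, dx_mm, dy_mm, dz_mm]

clf = SVC(kernel="linear")      # linear SVM
# Training/prediction then amounts to, with features from labelled ceT1 volumes:
#   clf.fit(feature_matrix, koos_grades)
#   predictions = clf.predict(test_feature_matrix)
```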
## 3 Experiments
Our method is implemented mainly using the PyTorch [10], PyTorch Lightning and MONAI libraries. We use three Nvidia RTX3090 GPUs for training.
The crossMoDA2022 dataset contains 210 ceT1 volumes (source) with segmentation and classification labels and 210 hrT2 volumes (target) without labels for training, and 64 hrT2 volumes for validation.
For the training of DCLGAN, we randomly selected about 4,000 slices from the ceT1 volumes and the hrT2 volumes, respectively. The optimizer for the training of DCLGAN is Adam [7] and the learning rate changes with cosine annealing. The initial learning rate is 0.0002. The number of epochs is 200. The model at the last epoch is selected as the final model. Figure 1 shows an example of ceT1 to hrT2 translation with DCLGAN.
Figure 1: An example of image-to-image translation results. (a) Input ceT1 slice. (b) Translated hrT2-like slice with DCLGAN.
As for the hyper-parameter tuning in DCLGAN, we changed the number of slices from about 500 to about 12,000 for each modality. The learning rates were changed from 1e-5 to 1e-2.
For the training of the segmentation model, the optimizer is Adam and the learning rate changes with cosine annealing. The initial learning rate is 0.001. The number of epochs is 300. The model with the lowest loss value on the validation dataset is selected as the final model.
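For illustration, this training configuration can be set up in PyTorch roughly as sketched below; the placeholder network and the per-epoch scheduler step are assumptions.

```python
import torch

# Sketch of the optimizer / schedule described above. The network is replaced
# by a small placeholder module here; in the paper it is the residual U-Net.
model = torch.nn.Conv3d(1, 3, kernel_size=3, padding=1)          # placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)        # initial lr 0.001
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

for epoch in range(300):
    # ... one training epoch over the translated hrT2-like patches goes here ...
    optimizer.step()     # placeholder for the actual optimization step
    scheduler.step()     # cosine-annealed learning-rate update per epoch
```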
As for the hyper-parameter tuning in the segmentation model, the learning rates were changed from 1e-5 to 1e-2. We also tried different loss functions such as Dice \(+\) cross-entropy loss.
We evaluated our method with the evaluation system provided by the organizers of crossMoDA2022. For task 1 (segmentation), the Dice Score (DSC) and the Average Symmetric Surface Distance (ASSD) are used as evaluation metrics. For task 2 (classification), the macro-averaged mean absolute error (MA-MAE) is used as the evaluation metric.
For our submission in task 1, the mean/std DSC values are \(0.450\pm 0.286\) and \(0.779\pm 0.051\) for VS and cochlea, respectively, and the mean/std ASSD values are \(5.61\pm 8.21\) and \(0.264\pm 0.156\) for VS and cochlea, respectively. For our submission in task 2, the MA-MAE is 0.84.
## 4 Conclusions
In this report, we presented our solution for the crossMoDA2022 challenge. We employ an image-to-image translation method for unsupervised domain adaptation and a residual U-Net for the segmentation task. We use SVM for the classification task. The experimental results show that the mean DSC and ASSD are 0.614 and 2.936 for the segmentation task and the MA-MAE is 0.84 for the classification task.
|
2301.11017 | Chromatic aberrations correction of attosecond high-order harmonic beams
by flat-top spatial shaping of the fundamental beam | Attosecond pulses created by high-order harmonic generation in gases often
exhibit strong chromatic aberrations, arising from the broad bandwidth and
wavelength-dependent nonlinear light-matter interaction. When the driving laser
intensity varies spatially, as for Gaussian driving beams, the apparent source
position of the harmonics differs significantly from one order to the next,
thus affecting the achievable intensity and duration of the attosecond pulses
when they are focused on a target. We show that these chromatic aberrations can
be reduced by spatially shaping the fundamental beam to generate high-order
harmonics with a driver having a flat-top profile inside the gas medium. By
measuring both the intensity profile and wavefront for each harmonic in a
plane, we access the extreme ultra-violet (XUV) beam properties and investigate
these properties near focus. We observe that controlling chromatic aberrations
by flat-top spatial shaping strongly reduces the variation of the XUV spectrum
on the beam axis during propagation and, in return, the longitudinal
sensitivity of both the temporal profiles and the temporal shifts of the
focused attosecond pulses. | K. Veyrinas, M. Plach, J. Peschel, M. Hoflund, F. Catoire, C. Valentin, P. Smorenburg, H. Dacasa, S. Maclot, C. Guo, H. Wikmark, A. Zair, V. Strelkov, C. Picot, C. Arnold, P. Eng-Johnsson, A. L Huillier, E. Mevel, E. Constant | 2023-01-26T10:11:14Z | http://arxiv.org/abs/2301.11017v1 | Chromatic aberrations correction of attosecond high-order harmonic beams by flat-top spatial shaping of the fundamental beam
###### Abstract
Attosecond pulses created by high-order harmonic generation in gases often exhibit strong chromatic aberrations, arising from the broad bandwidth and wavelength-dependent nonlinear light-matter interaction. When the driving laser intensity varies spatially, as for Gaussian driving beams, the apparent source position of the harmonics differs significantly from one order to the next, thus affecting the achievable intensity and duration of the attosecond pulses when they are focused on a target. We show that these chromatic aberrations can be reduced by spatially shaping the fundamental beam to generate high-order harmonics with a driver having a flat-top profile inside the gas medium. By measuring both the intensity profile and wavefront for each harmonic in a plane, we access the extreme ultra-violet (XUV) beam properties and investigate these properties near focus. We observe that controlling chromatic aberrations by flat-top spatial shaping strongly reduces the variation of the XUV spectrum on the beam axis during propagation and, in return, the longitudinal sensitivity of both the temporal profiles and the temporal shifts of the focused attosecond pulses.
Keywords: attosecond pulses, high-order harmonics, chromatic aberration, flat-top, spatial shaping.
## 1 Introduction
High-order harmonic generation (HHG) in gases is a source of phase-locked broadband extreme ultra-violet (XUV) pulses that are now commonly used in applications requiring femtosecond and/or attosecond resolution [1], [2], [3], [4]. Attosecond dynamics is for instance accessible via XUV - Infrared (IR) [5], [6] or XUV-XUV pump-probe experiments [7], [8]. XUV-IR experiments use either a single attosecond pulse (streaking technique) [10] or a train of pulses (Reconstruction of Attosecond Beating by two-photon Transition, or RABBIT technique)[11]. Since both approaches
are mainly based on phase variation measurements of an oscillatory signal, the spatial properties of the XUV beams are not crucial [12]. Analyses are indeed often performed by averaging over space, thus neglecting the influence of any possible spatio-dependent effect such as chromatic aberrations. Attosecond XUV-XUV pump-probe experiments, on the other hand, require high focused intensities. Hence, focusing XUV beams to small spots while maintaining their attosecond temporal structure is crucial [13, 14, 15, 16, 17]. These properties are also essential in high resolution imaging using broadband XUV radiation [18].
To achieve intense attosecond pulses on target, it is necessary to focus all frequency components at the same position. This requires all harmonics to have similar spatial properties, which is often not the case due to intrinsic chromatic aberrations [19, 20]. The origin of the chromatic aberrations lies in the fact that the harmonic dipole phases [21], inherent to HHG, depend on the interplay between the harmonic order and the laser intensity [22]. Furthermore, the dispersive generating medium can have an index that varies with space via the laser intensity and medium ionization yield. In the generating medium, the phase of the emitted XUV radiation is therefore space- and wavelength-dependent and evolves radially when the laser intensity presents a radial dependence. With Gaussian beams, the radial laser intensity variation in the generating medium induces a wavefront curvature that changes with harmonic order and causes chromatic aberrations [19], [20, 23].
Order-dependent far field spatial profiles have been observed in many different generating media, e. g., jets [24], cells [25, 26], semi-infinite cells and filaments [27, 28] or gas filled capillary [29, 30]. In addition, experimental measurements have shown evidence that harmonics originate from different source points which depend strongly on the process order [19, 20, 31, 32, 23, 26]. When refocused, the harmonics are therefore focused on different, longitudinally separated, positions. As a consequence, the attosecond temporal profile varies along the propagation axis [33, 34, 24, 26]. A scientific effort is therefore increasingly devoted to exploring and controlling the spatial properties of high-order harmonics and attosecond pulses [34, 35, 24, 36, 37, 19, 20, 31, 32, 38, 39, 26, 23, 40] which is the aim of the study presented here.
In this work, we generate high-order harmonics with an IR beam having a flat-top profile near focus [41, 42, 43]. While flat-top beams can be obtained from Gaussian beams after propagation in long media [44] when ionization induces intensity shaping, we chose, here, to shape the flat-top beam directly with a phase mask to disentangle shaping and propagation effects. The beam-shaping apparatus is robust, stable, and versatile and allows us to achieve beam profiles that can be super-Gaussian, flat-top or annular, fine-tuned around the flat-top configuration by opening or closing an iris. With flat-top shaping and a short medium, the transverse intensity gradient is reduced in most of the generating volume and the harmonic properties become less dependent on the generating conditions than with a Gaussian fundamental beam. After refocusing the harmonics, we characterize the XUV wavefront curvature and spatial profile with the Spectral Wavefront Optical Reconstruction by Diffraction (SWORD) technique [45] and thereby measure the position and sizes of the XUV focus for each harmonic generated in a gas jet (see supplementary material, SM, for a gas cell). We observe that the harmonic beams generated with a flat-top shaped fundamental beam are spatially much narrower in the far field [42] than with the Gaussian beam which implies that the XUV foci (or apparent sources) are larger. This has a large impact on the attosecond XUV beam quality as we find that the different harmonics can be focused much closer to each other (relative to the XUV confocal parameter) using a flat-top driving field as compared to the standard generation with Gaussian beams. Chromatic aberrations can therefore be controlled thus improving the spatio-temporal characteristics of the attosecond pulses. These results are compared to numerical simulations consistent with our observations.
## 2 Experimental method
The experimental setup is schematically shown in Fig. 1. High-order harmonics are generated at 10 Hz repetition rate in a gas medium (jet or cell) with a 40 fs high-energy (up to 45 mJ after compression) titanium sapphire IR laser driver centered at \(\lambda=808\) nm that is spatially filtered and wavefront-corrected. Wavefront control is performed with a deformable mirror located under vacuum. The IR beam with waist W = 27 mm, is truncated by a motorized iris and focused with an f = 8.7 m focal length mirror in a gas medium (pulsed gas jet with 250 \(\upmu\)m jet nozzle and 5 bar backing pressure or 1 cm long gas cell). The IR focus position vs the gas medium is adjusted by controlling the curvature of the deformable mirror [26]. The IR beam shape at focus is observed with a camera located at a position that mimics the gas medium position. This observation is performed in air with an attenuated beam that is transmitted through a folding mirror (Fig. 1).
A phase mask can be inserted in the path of the fundamental beam to achieve spatial shaping near the IR focus. The mask is an anti-reflection coated, 3 mm thick, SiO\({}_{2}\) plate (see Fig. 1) with an additional 880 nm thick, 20 mm diameter central area. The thickness of the SiO\({}_{2}\) central part is chosen to create a \(\pi\) dephasing between the central and outer parts of the IR beam. It induces destructive interferences at focus on the beam axis between the inner and outer beams when both are focused [42, 46]. These interferences redistribute light away from the axis and lead to a flat-top profile in the radial dimension under proper conditions. When the iris is closed to a diameter
of 20 mm, the phase mask has no shaping effect, and the IR beam is a truncated Gaussian beam. The beam size at focus is then approximately 300 \(\upmu\)m FWHM. Furthermore, this iris diameter is sufficiently large to provide optimum conditions for HHG with a regular apertured Gaussian beam. When the iris is opened to larger diameters, shaping occurs leading to a near flat-top beam at IR focus as detailed in the following.
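The on-axis interference produced by the mask can be illustrated with a simple scalar estimate, sketched below; it assumes a monochromatic, aberration-free beam and approximates the focal field by the Fourier transform of the pupil field, so it only reproduces the qualitative trend, not the measured profiles.

```python
import numpy as np

# Sketch: focal-plane intensity of a Gaussian beam (waist w = 27 mm) carrying
# the 20 mm diameter pi phase step, truncated by an iris and focused by an
# f = 8.7 m mirror.  Focal field ~ scaled 2D Fourier transform of the pupil.
lam, f, w = 808e-9, 8.7, 27e-3
step_radius = 10e-3                       # 20 mm diameter central area
N, L = 2048, 0.2                          # grid points and grid size (m)

x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

def focal_cut(iris_diameter):
    pupil = np.exp(-(r / w) ** 2)                           # Gaussian amplitude
    pupil = pupil * np.exp(1j * np.pi * (r < step_radius))  # pi step on central part
    pupil = pupil * (r < iris_diameter / 2)                 # iris truncation
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    x_f = np.fft.fftshift(np.fft.fftfreq(N, d=L / N)) * lam * f  # focal coordinate
    return x_f, np.abs(field[N // 2, :]) ** 2               # central line-out

for d_iris in (20e-3, 25e-3, 27e-3):      # Gaussian-like, near flat-top, annular
    x_f, I = focal_cut(d_iris)
    fwhm = (I >= I.max() / 2).sum() * (x_f[1] - x_f[0])
    print(f"iris {d_iris*1e3:.0f} mm: cut FWHM ~ {fwhm*1e6:.0f} um, "
          f"on-axis/peak = {I[N // 2] / I.max():.2f}")
```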
The generated harmonics are reflected by an SiO\({}_{2}\) plate that attenuates the IR, filtered by an Al foil and reflected by two toroidal mirrors placed in a Wolter configuration, focusing the XUV beam and providing a 35-fold demagnification of the harmonic source [47]. The XUV focus is located at the entrance of a slit-less flat-field spectrometer that consists of a variable line spacing Hitachi grating and an MCP detector imaged with a CCD camera, and enables the characterization of the full beam.
An additional 120 \(\upmu\)m wide slit, perpendicular to the grating grooves and mobile in the direction of the grooves, can be inserted between the XUV focus and the spectrometer grating to select a small portion of the beam. The transmitted beam hits the MCP at a position that depends on the XUV radiation wave vector at the slit position. Observing the impact position as a function of the slit position provides the radial evolution of the wave vector orientation and thereby the radius of curvature of the XUV beam in the slit plane. The XUV beam intensity profile in this plane is also measured by integrating the transmitted XUV signal as a function of the slit position. This SWORD measurement [45], performed for each harmonic, provides the radial intensity profile and wavefront curvature in the slit plane for each harmonic.
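For illustration, the wavefront retrieval from such a slit scan can be post-processed along the lines sketched below; the small-angle treatment, the linear fit and the slit-to-detector distance used in the example are assumptions of this sketch, not the actual analysis code.

```python
import numpy as np

def sword_radius_of_curvature(slit_positions, impact_positions, distance):
    """Estimate the beam radius of curvature in the slit plane from a SWORD scan.

    slit_positions   : transverse positions r of the scanning slit (m)
    impact_positions : centroid positions of the transmitted beamlet downstream (m)
    distance         : slit-to-detector propagation distance (m)

    In the small-angle limit, the local propagation angle of the beamlet is
    theta(r) ~ (x_detector(r) - r) / distance, i.e. the local wavefront slope.
    For a near-spherical wavefront theta(r) = r / R, so a linear fit of theta
    versus r yields the radius of curvature R.
    """
    theta = (np.asarray(impact_positions) - np.asarray(slit_positions)) / distance
    slope, _ = np.polyfit(slit_positions, theta, 1)
    return 1.0 / slope

# Synthetic check: R = 0.12 m, slit scanned over +/- 1 mm, assumed 0.30 m to detector.
r = np.linspace(-1e-3, 1e-3, 21)
x_det = r + 0.30 * (r / 0.12)
print(f"retrieved R = {sword_radius_of_curvature(r, x_det, 0.30):.3f} m")
```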
_Fig. 1. Experimental setup used for high-order harmonic generation and characterization. The spatially filtered IR laser beam is wavefront corrected by a deformable mirror (DM) and truncated by a motorized iris before focusing on a gas target. A phase mask (anti-reflection coated SiO\({}_{2}\) plate with an additional 20 mm diameter, 880 nm thick SiO\({}_{2}\) step on the central part) can be inserted to spatially shape the IR beam near its focus. Harmonics are generated in a gas jet located at the IR focus (for gas cell see supplementary material, SM). The XUV beam is then filtered spectrally by a fused silica (FS) plate and a 200 nm thick Al filter before being refocused by Wolter optics at the entrance of a slit-less flat-field spectrometer. An additional slit can be translated in the direction of the spectrometer grating grooves to perform a SWORD measurement, thus providing the wavefront and intensity profiles of each harmonic beam in the slit plane. Z is the longitudinal propagation coordinate in the application chamber and Z = 0 is arbitrarily set at the focus position of H15._
Shaping of the fundamental beam is achieved with the phase plate and controlled by a motorized iris. Fig. 2 shows a cut of the shaped IR intensity profiles at focus as a function of the iris diameter. When the iris is closed to 20 mm diameter (lower curve) the IR intensity profile at focus resembles a Gaussian profile. This configuration represents our reference Gaussian configuration in the following. When the iris is opened to 27 mm diameter, the beam exhibits an annular shape with a local intensity minimum in the center of the beam. For intermediate iris diameters, typically between 24 and 26 mm, the beam is shaped to a flat-top or super-Gaussian profile. This shaping arises near focus from on-axis destructive interferences between the inner and outer parts of the beam that are very dependent on the exact beam shape and on the centering of the plate. It is therefore difficult to obtain a perfect symmetry, but the observed beam profile nicely follows our previous simulations [46] (see SM fig. S2). This approach provides a reference Gaussian shape for \(\phi_{\mathrm{iris}}\) = 20 mm and near flat-top beams for \(\phi_{\mathrm{iris}}\) \(\geq\) 24 mm.
The beam size (FWHM) at focus increases with the iris diameter and changes by a factor of approximately 2 between the reference case (\(\phi_{\mathrm{iris}}\) = 20 mm, FWHM = 300 \(\upmu\)m) and the flat-top beam (FWHM = 500 to 540 \(\upmu\)m). The pulse energy increases slightly when the iris is opened. We measure a reduction of the focused laser intensity due to beam shaping by a factor of \(\sim\) 3.
_Fig. 2. Normalized cuts of the IR beam intensity profile at focus as a function of the iris diameter ranging from 20 mm to 27 mm. The reference Gaussian beam (thin grey line) is obtained when the iris is closed to 20 mm diameter, in which case the beam is not shaped by the phase mask._
## 3 Results
### Experimental results
Harmonics are generated in argon with both flat-top and Gaussian beams. The IR intensity decreases with the shaping and only harmonics 11 (noted H11) to 19 (H19) are observed with the flat-top beam while harmonics with higher orders are easily obtained with the truncated Gaussian beam. In the following, the same laser energies are used with flat-top and Gaussian beams and only harmonics that are generated in both configurations (H11 to H19) are considered.
_Fig. 3. XUV beam size (FWHM) on the MCP detector for harmonic orders 13 to 19 for several laser energies with the flat-top spatial shaping (\(\phi_{\mathrm{iris}}=\) 25 mm, circles, blue curves) and the reference Gaussian beam (\(\phi_{\mathrm{iris}}=\) 20 mm, diamond markers, grey curves). The gas jet is located at the IR focus._
Fig. 3 shows that harmonics generated with a Gaussian beam (grey curves, diamond markers) have a size increasing with the harmonic order at a given intensity. This is typical when harmonics generated at focus via the short quantum path are detected. It can also be observed that the XUV beam sizes increase with the intensity. When harmonics are generated with a flat-top shaped beam (blue curves), the XUV beams are smaller by approximately a factor of two [42, 43, 48] and, opposite to the Gaussian beam case, the beam size decreases with the harmonic order at a given intensity. The evolution of the XUV beam size with intensity is also less pronounced with the flat-top than with the Gaussian beam. Since the flat-top spatial shaping is achieved near the IR focus on a limited longitudinal range [46], these observations are performed with the jet located at the IR focus. It is however known that with Gaussian beams, the XUV beam divergence changes with the generating medium position. In general, locating the medium at the focus of the Gaussian fundamental beam does not lead to minimum divergence [19, 20, 49]. We therefore measured the divergence of the XUV beam generated with the Gaussian beam for several longitudinal positions of the jet relative to the IR focus. The XUV divergence remained larger than that observed with the flat-top driver and we observed that the positions of minimum divergence change with harmonic order [19, 20].
In the shaped beam case, we observed similar beam sizes for all harmonics. This indicates that the impact of the order-dependent spatial phase variation is reduced by the shaping. This is expected as the atomic phase, \(\phi_{\mathrm{q}}(\mathrm{r})\), depends on the intensity and on the harmonic order, but its variations are reduced when the intensity is independent of r, the radial coordinate. Flat-top spatial shaping also reduces the radial evolution of the phase term, \(\phi_{\mathrm{q}}(\mathrm{r})\), accumulated by the fundamental during propagation in a partially ionized medium, as ionization is also independent of the radial distance with a flat-top beam. This reduces plasma-induced defocusing that is also known to affect the divergence of harmonics [50]. At the low intensity used here for the flat-top beam, ionization must be very limited. For the Gaussian case, the IR intensity is higher, and simulations show that ionization affects the XUV beams (see supplementary material, SM).
These observations are corroborated by a spectrally integrated direct observation of the XUV beam on an X-ray CCD camera (Andor). We systematically observe that the XUV beams are smaller when generated with a flat-top beam than when generated with the reference Gaussian beam (see SM). These observations show that the flat-top shaping is a way to minimize the influence of the intensity dependent spatial phase variation on the harmonic beam divergence.
The SWORD measurements also show noticeable differences between the wavefront radii of curvature of the harmonics generated by flat-top and Gaussian beams. Fig. 4 presents the outcome of the measurements performed with the Gaussian and flat-top IR beams when harmonics are generated in a gas jet (see SM for gas cell, Fig. S5). The radii of curvature, measured in the slit plane, are on the order of the geometrical distance between the Wolter focus and the slit (approximately 12 cm) but their evolution with harmonic order is different. Harmonics generated with a Gaussian beam exhibit radii of curvature, R\({}_{\mathrm{q}}\), that decrease with increasing harmonic order, q. In contrast, when harmonics are generated with a flat-top beam, R\({}_{\mathrm{q}}\) increases with q. Measurements performed with a gas cell show similar trends (see SM, Fig. S5).
The exact values of R\({}_{\mathrm{q}}\) extracted from these measurements strongly depend on the calibration and the accuracy of the measurement, limited by the spatial resolution on the MCP.
However, the relative evolution of \(\rm R_{q}\) with harmonic order is less sensitive to calibration and we estimate that a relative error of less than 1% is achieved. The error bars can also be directly estimated from a comparison between the first and second orders of diffraction of the XUV grating which should give the same radii of curvature. Indeed, observed differences are less than 0.5% of the radii of curvature.
### Analysis within Gaussian approach
We use these measurements to estimate the positions of the harmonic foci. Harmonic beams have profiles that are close to Gaussian (see SM Fig. S3, S10 and S11) and we use the Gaussian approach presented in Quintard _et al._[19] to extract the positions and sizes of the harmonic foci (a similar approach is presented in Wikmark _et al._[20]). From the positions and sizes of the foci, the spatial properties of the harmonic beam (beam size and radius of curvature) can be determined in any plane. Our model relies on the assumption that the XUV beams are ideal Gaussian beams with quadratic wavefronts, which corresponds well to our observations. This approach also assumes that the \(\rm M^{2}\) factor of each harmonic beam is equal to 1, which is not measured here but provides a good fit of the experimental data (see SM figures S10 and S11). Under these assumptions, the size \(\rm W_{0q}\) and position \(\rm Z_{0q}\) of the harmonic waist with respect to the slit position are uniquely defined from the measurement of \(\rm R_{q}\), the radius of curvature (Fig. 4 (a)), and \(\rm W_{q}\), the beam size in the slit plane (Fig. 4 (b)). With \(\lambda_{q}\) the wavelength of the harmonic, we have:
\[W_{0q}=\sqrt{\frac{\left[\frac{R_{q}\lambda_{q}}{\pi W_{q}}\right]^{2}}{1+\left[\frac{R_{q}\lambda_{q}}{\pi W_{q}^{2}}\right]^{2}}}\]
Eq. (1)
\[Z_{0q}=-\frac{R_{q}}{1+\left[\frac{R_{q}\lambda_{q}}{\pi W_{q}^{2}}\right]^{2}}\]
Eq. (2)
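A direct numerical transcription of Eqs. (1) and (2) is sketched below; the radius of curvature and beam size used in the example are placeholders of realistic magnitude, not the measured values.

```python
import numpy as np

def gaussian_waist_from_plane(R, W, lam):
    """Invert the ideal Gaussian-beam relations: from the radius of curvature R
    and the beam size W measured in one plane, return the waist size W0 and the
    waist position Z0 relative to that plane (Eqs. (1) and (2))."""
    a = R * lam / (np.pi * W)        # length scale R*lam/(pi*W)
    b = R * lam / (np.pi * W**2)     # dimensionless parameter
    W0 = np.sqrt(a**2 / (1.0 + b**2))
    Z0 = -R / (1.0 + b**2)
    return W0, Z0

# Placeholder example for harmonic 15 of an 808 nm driver (assumed R and W):
q = 15
lam_q = 808e-9 / q
W0, Z0 = gaussian_waist_from_plane(R=0.12, W=800e-6, lam=lam_q)
z_R = np.pi * W0**2 / lam_q          # Rayleigh range of the harmonic beam
print(f"H{q}: W0 = {W0*1e6:.2f} um, Z0 = {Z0*1e3:.1f} mm, z_R = {z_R*1e3:.2f} mm")
```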
The foci sizes and positions, represented in Fig. 5 (a) and (b), change with harmonic order. Positions are represented with reference to the \(15^{\rm th}\) harmonic focus. The XUV foci sizes obtained with a Gaussian fundamental beam are found to be in the range of 2 to 3 \(\rm\upmu\)m, in agreement with former measurements on this source [26, 47, 51]. In the case of a flat-top fundamental beam, the waist sizes are larger and reach 4 to 5 \(\rm\upmu\)m. From these values, we can estimate the beam size in any plane. Those estimated in the plane of the MCP agree well with the measured beam profiles (see SM, Fig. S10 and S11) which further validates our approach.
Figure 4: (a) Radii of curvature, \(R_{q}\), and (b) beam sizes (FWHM) of the harmonics generated in a gas jet with a Gaussian beam (grey diamond markers) or with a shaped flat-top beam (blue circle symbols). The measurements are performed in the slit plane located approximately 12 cm after the Wolter focus. The results obtained with a 1 cm gas cell show a similar trend (see SM). Dashed lines are linear fits serving as a guide for the eye.
Fig. 5 (c) and (d) show the harmonic intensity along the propagation axis when harmonics are generated in a gas jet with the reference Gaussian beam (c) and with a flat-top beam (d), using the experimentally measured characteristics. The harmonic foci are separated longitudinally in both cases, but the separation is smaller than the XUV confocal parameters for the flat-top driver while it is larger for the Gaussian beam. Fig. 6 shows the average XUV photon energy along the propagation axis and the XUV bandwidth. The bandwidth is estimated as 2.35 \(\sigma\) (FWHM = 2.35\(\sigma\) for a Gaussian distribution), where \(\sigma\) is the root-mean-square width of the photon energy distribution. Far from focus, the on-axis bandwidth is constant and does not evolve significantly with propagation. Near the focus, we observe a change of the mean photon energy and of the XUV bandwidth. With the Gaussian beam, the bandwidth changes by almost a factor 2 (from 4.8 to 8 eV) with longitudinal position while for the flat-top beam it changes only by 26 % (5.4 to 7.3 eV). In both cases, the on-axis bandwidth decreases near focus.
Fig. 5: _(a) Waist size and (b) focus shift, Z\({}_{0,q}\), divided by the Rayleigh range of each harmonic, Z\({}_{R,q}\), as a function of harmonic order (blue: flat-top, grey: Gaussian), computed from the measured beam sizes and radii of curvature. Subfigures (c) and (d) illustrate how the harmonics are focused: they show the simulated on-axis XUV intensity for harmonic orders 11 to 19 for the fundamental Gaussian beam (c) and for the flat-top beam (d), respectively. Z = 0 is arbitrarily set to the position of the focus of harmonic 15._
_Fig. 6. Evolution of the average photon energy (thick line) and bandwidth (light line) for (a) Gaussian and (b) flat-top shaped fundamental beams_
These observations show that chromatic aberrations are present in XUV harmonic beams and are reduced when the generating beam has a flat-top shape in the generating medium. Chromatic aberrations impact the focusing and local bandwidth and consequently the attosecond pulse structure as illustrated in the following.
The duration of attosecond pulses depends on the bandwidth and on the dephasing between its frequency components ("atto chirp") [52]. Measuring the spatial characteristics of each harmonic beam and their relative amplitude allows us to estimate the temporal profile along the propagation axis using the measured harmonic amplitudes and including the Gouy dephasing. We assume that all harmonics are in phase at infinity and present no attochirp (Fourier limited pulses). This represents the ideal case where the attosecond pulse duration is the shortest compatible with its spectrum. We observe that the pulse duration changes near the focus by a large fraction of its asymptotic value when the harmonics are generated with a Gaussian beam (here 31% with estimated durations between \(\tau_{\text{asympt}}=280\) as and \(\tau_{\text{max}}=362\) as) while the typical variation is of the order of 10 to 20 % for a flat-top shaped fundamental beam (here 16 % with estimated durations between \(\tau_{\text{asympt}}=290\) as and \(\tau_{\text{max}}=336\) as). Similar effects are observed with harmonic generation in a gas cell (see SM, Fig. S6).
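The synthesis of such on-axis temporal profiles from the retrieved Gaussian-beam parameters can be sketched as follows; equal source amplitudes, no attochirp, and the placeholder focus positions and waists are assumptions of this illustration.

```python
import numpy as np

# Sketch: on-axis attosecond temporal profile synthesized from harmonics 11-19,
# each treated as an ideal Gaussian beam.  On axis, the amplitude scales as
# 1/sqrt(1+u^2) and the phase carries the Gouy term arctan(u), u=(z-z0)/zR.
lam_ir, c = 808e-9, 2.998e8
orders = np.arange(11, 21, 2)
# Placeholder focus positions (m) and waists (m) for H11..H19 -- not measured values.
z0 = np.array([0.4, 0.2, 0.0, -0.1, -0.2]) * 1e-3
w0 = np.array([2.8, 2.6, 2.5, 2.4, 2.3]) * 1e-6

def on_axis_intensity(z, t):
    field = 0.0
    for q, z0q, w0q in zip(orders, z0, w0):
        lam_q = lam_ir / q
        zR = np.pi * w0q**2 / lam_q
        u = (z - z0q) / zR
        amp = 1.0 / np.sqrt(1.0 + u**2)      # on-axis Gaussian-beam amplitude
        gouy = np.arctan(u)                  # Gouy phase
        w_q = 2 * np.pi * c / lam_q          # harmonic angular frequency
        field = field + amp * np.cos(w_q * t - gouy)
    return field**2                          # instantaneous intensity

t = np.linspace(-1.5e-15, 1.5e-15, 4001)
for z in (-1.5e-3, 0.0, 1.5e-3):
    I = on_axis_intensity(z, t)
    print(f"z = {z*1e3:+.1f} mm: peak at t = {t[np.argmax(I)]*1e18:+.0f} as")
```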
The change of pulse temporal profile with longitudinal propagation is illustrated in Fig. 7 which shows the on-axis temporal intensity profile as a function of the propagation coordinate after normalization. This normalization suppresses the on-axis intensity evolution that is due to beam divergence. We observe a significant distortion of the pulse near focus and a pulse duration changing with propagation when harmonics are generated with the Gaussian beam (Fig. 7 (a)). The flat-top shaping of the fundamental beam has a strong spatial smoothing effect on the XUV beam and the pulse duration on axis changes only very slightly with propagation (Fig. 7 (b)).
These results also reveal that chromatic aberrations impact the relative dephasing between harmonics, which evolves with propagation. The Gouy phase shift evolution affects the relative harmonic dephasing and is strongest near the XUV foci. It affects the timing of the XUV pulse maximum as compared to t = 0, which represents the center of the XUV pulse at asymptotic positions. Near the XUV focus, we observe a shift of the center of the pulse by more than 100 as when Gaussian beams are used. This shift reduces to 30 as when a flat-top beam is used. For pump-probe experiments involving XUV pulses and the IR fundamental, the observed shifts near focus can affect the temporal resolution.
_Fig. 7. Spatial evolution of the attosecond temporal profile for harmonics 11 to 19 emitted in a jet with a Gaussian fundamental beam (a) or a flat-top shaped fundamental beam (b). The corresponding temporal profiles are shown in sub-figure (c) for five longitudinal positions (Z = -1.5 mm, -0.5 mm, 0 mm, 0.5 mm and + 1.5 mm) as indicated by the dashed line through the temporal profile. The blue line is the profile obtained with the flat-top fundamental and the grey line is obtained for the Gaussian beam. The results obtained with a 1 cm gas cell can be found in the SM, Fig. S7._
### 3.3 TDSE-based simulations
More advanced simulations are performed with accurate atomic response calculations, going beyond the Gaussian model discussed previously. The simulations of the experimental geometry assume that the generating IR field is either a truncated Gaussian beam or a truncated Gaussian beam with the phase mask. In the latter case the field at the target has a radial flat-top spatial profile similar to the one shown in Fig. 2 (a) with \(\phi_{\rm iris}=25\) mm. The microscopic response is calculated by solving the 3D time-dependent Schrödinger equation (TDSE) for a model argon atom. The generating medium is approximated by an infinitely thin plane layer. The amplitude and phase of the atomic response are calculated at a set of transverse positions. The IR pulse duration is 40 fs, and the peak IR intensity on axis is 2\(\times\)10\({}^{14}\) W/cm\({}^{2}\) in one set of calculations and 3.5\(\times\)10\({}^{14}\) W/cm\({}^{2}\) in the other (see SM). Finally, the effect of the focusing Wolter optics is also simulated.
The simulated XUV divergence is found to be smaller with the flat-top beam compared to the Gaussian beam, as observed experimentally. Figure 8 shows that order-dependent XUV focus shifts are present. They are larger with a Gaussian than with a flat-top driver. For the Gaussian beam, the foci of lower-order harmonics are longitudinally shifted downstream with respect to the ones of the high-order harmonics by about 1 mm. This can be observed in the experimental data shown in Fig. 5. The longitudinal separation of the foci for the Gaussian beam leads to a more pronounced bandwidth and pulse duration evolution along the propagation axis in the range where the harmonics are focused (Fig. 8 (c) and (d)), in agreement with the predictions of the Gaussian model. Contrary to these predictions, however, the distances over which the harmonics are focused are smaller with the flat-top than with the Gaussian. As the XUV divergence is smaller for harmonics generated with the flat-top beam, this is a signature of the poor optical quality of the XUV beam generated with the Gaussian beam. Simulations also show a net asymmetry in the intensity evolution along propagation. The differences between the Gaussian model and the TDSE results also illustrate the range of validity of the (simplified) Gaussian model. For instance, this model cannot reproduce the structured XUV foci which are obtained at high intensity in TDSE simulations (see Fig. S13). Also, it does not include plasma-induced defocusing or ionization-induced intensity reshaping, which can be significant for high intensity and long/dense media [44], [53], [54], [55]. Intensity reshaping due to propagation, which is not monitored in the present work, is expected to be weak as the medium is very thin and the intensity not too high. TDSE simulations show nevertheless that plasma-induced defocusing can have an impact at high intensity on the XUV wavefront as it changes the IR wavefront and thereby also the XUV divergence (see SM). In these simulations, the XUV beam profile is found to be more regular when generated with a flat-top driver than with a Gaussian driver.
## 4 Discussion
The study is performed here on a limited spectral range (H11 to H19) that corresponds to the bandwidth generated by the flat-top beam. Similar chromatic aberrations are observed with Gaussian beams over the full bandwidth (see SM, Fig. S8 and S9) and, in general, chromatic aberrations increase with the emitted bandwidth. This may become a strong limiting factor to achieving shorter attosecond pulses with very large bandwidth [56], [57]. The work presented in this article shows that shaping the fundamental beam can be an efficient way to reduce chromatic aberrations. More complex spatial shaping can be performed with advanced technology such as spatial light modulators (SLMs) [48], opening the possibility of reducing chromatic aberrations over a much larger bandwidth. Furthermore, the study of flat-top HHG at higher intensities is of particular interest to see over which maximum bandwidth the focus shifts can be neglected. In return, this would give an estimate of the shortest attosecond pulses that can be used near focus without considering the beam spatial properties. Increasing the intensity for the flat-top IR driver would also increase the XUV flux. This is technically possible and would require the use of shorter focal lengths or higher laser energies.
Figure 8: Evolution of the XUV intensity on the axis as a function of the longitudinal position for harmonics generated with a truncated Gaussian beam (a) or with a flat-top beam (b). Z = 0 is arbitrarily set to the position of the focus of harmonic 15. (c-d) Corresponding evolution of the average photon energy (thick line) and the bandwidth (thin line) on the beam axis as a function of the longitudinal position. These simulations are performed with TDSE (see text).
The chromatic aberrations studied here arise from the fact that inside the generating medium the harmonics have spatial characteristics (wavefront and beam size) evolving with the harmonic order. We observe the same effect in a gas jet and in a 1 cm cell (see SM), both of which are short compared to the IR confocal parameter. It would also be interesting to study chromatic aberrations of harmonics generated with longer media or in a guided configuration [58, 59]. In a longer medium, ionization-induced reshaping of the IR intensity profile can occur and can also lead to a flat-top intensity profile after propagation [44]. It would be particularly interesting to see to which extent both flat-top configurations are equivalent and if both can help control the chromatic aberrations. In a guided configuration, the wavefront of the fundamental beam is flat, as is the case here at the IR focus, but the XUV phases remain dependent on the IR intensity and on the plasma density, which both change radially. Even in a guided configuration, the XUV wavefront will therefore not be flat and will depend on the harmonic order if the harmonics are not guided, which is usually the case. In fact, when defocusing is neglected and for plateau harmonics that are significantly reabsorbed, there is little difference between HHG in a guided configuration and HHG at focus in a gas medium that is shorter than the IR confocal parameter, as performed here. We therefore anticipate that similar chromatic aberrations should also exist in XUV beams generated in a guided configuration and that the observed phenomenon is very general.
## 5 Conclusion
In summary, we show that performing HHG with a flat-top spatially shaped fundamental beam provides control of the XUV beam properties and allows us to reduce XUV chromatic aberrations as compared to the usual case where HHG is performed with a truncated Gaussian beam. The position of each harmonic focus is measured by the SWORD technique, showing that the harmonic foci are separated longitudinally. The chromatic aberrations are strong enough to affect the XUV bandwidth locally, and the attosecond temporal profiles simulated on axis show a pulse duration that changes significantly during propagation when Gaussian beams are used for HHG. For harmonics generated by a flat-top shaped beam, focus separation still exists, but the XUV beam divergence is strongly reduced while the apparent size of the harmonic sources is increased. In this case, the focus offsets are smaller than the XUV Rayleigh length and the impact of XUV chromatic aberrations is reduced. These chromatic aberrations are associated with a strong longitudinal evolution of the XUV bandwidth and of the attosecond temporal profiles when harmonics are generated with a truncated Gaussian beam, especially near the XUV foci. The variation of the attosecond pulse duration along the propagation axis and the associated temporal shift are strongly reduced when flat-top spatially shaped fundamental laser beams are used for HHG. This spatial-shaping-induced control will be highly beneficial for studying attosecond dynamics with high temporal resolution and for achieving high XUV intensities.
## Acknowledgements
The research leading to these results has received funding from LASERLAB-EUROPE (grant agreement no. 654148, European Union's Horizon 2020 research and innovation programme). The authors acknowledge support from the Swedish Research Council, the European Research Council (advanced grant QPAP, 884900), the Knut and Alice Wallenberg Foundation and Region Nouvelle-Aquitaine through the 'OFIMAX' project (contract No. 184289). AL is partly supported by the Wallenberg Center for Quantum Technology (WACQT) funded by the Knut and Alice Wallenberg foundation. M. P. acknowledges the support of the Helmholtz Foundation through the Helmholtz-Lund International Graduate School (HELIOS, HIRS-0018). V.S. acknowledges support from the Theoretical Physics and Mathematics Advancement Foundation "BASIS". We acknowledge the expert assistance of Anders Persson.
## Data availability statement
The data that support the findings of this study are available upon request from the authors.
|
2304.05864 | Scale-Equivariant Deep Learning for 3D Data | The ability of convolutional neural networks (CNNs) to recognize objects
regardless of their position in the image is due to the
translation-equivariance of the convolutional operation. Group-equivariant CNNs
transfer this equivariance to other transformations of the input. Dealing
appropriately with objects and object parts of different scale is challenging,
and scale can vary for multiple reasons such as the underlying object size or
the resolution of the imaging modality. In this paper, we propose a
scale-equivariant convolutional network layer for three-dimensional data that
guarantees scale-equivariance in 3D CNNs. Scale-equivariance lifts the burden
of having to learn each possible scale separately, allowing the neural network
to focus on higher-level learning goals, which leads to better results and
better data-efficiency. We provide an overview of the theoretical foundations
and scientific work on scale-equivariant neural networks in the two-dimensional
domain. We then transfer the concepts from 2D to the three-dimensional space
and create a scale-equivariant convolutional layer for 3D data. Using the
proposed scale-equivariant layer, we create a scale-equivariant U-Net for
medical image segmentation and compare it with a non-scale-equivariant baseline
method. Our experiments demonstrate the effectiveness of the proposed method in
achieving scale-equivariance for 3D medical image analysis. We publish our code
at https://github.com/wimmerth/scale-equivariant-3d-convnet for further
research and application. | Thomas Wimmer, Vladimir Golkov, Hoai Nam Dang, Moritz Zaiss, Andreas Maier, Daniel Cremers | 2023-04-12T13:56:12Z | http://arxiv.org/abs/2304.05864v1 | # Scale-Equivariant Deep Learning for 3D Data
###### Abstract
The ability of convolutional neural networks (CNNs) to recognize objects regardless of their position in the image is due to the translation-equivariance of the convolutional operation. Group-equivariant CNNs transfer this equivariance to other transformations of the input. Dealing appropriately with objects and object parts of different scale is challenging, and scale can vary for multiple reasons such as the underlying object size or the resolution of the imaging modality. In this paper, we propose a scale-equivariant convolutional network layer for three-dimensional data that guarantees scale-equivariance in 3D CNNs. Scale-equivariance lifts the burden of having to learn each possible scale separately, allowing the neural network to focus on higher-level learning goals, which leads to better results and better data-efficiency. We provide an overview of the theoretical foundations and scientific work on scale-equivariant neural networks in the two-dimensional domain. We then transfer the concepts from 2D to the three-dimensional space and create a scale-equivariant convolutional layer for 3D data. Using the proposed scale-equivariant layer, we create a scale-equivariant U-Net for medical image segmentation and compare it with a non-scale-equivariant baseline method. Our experiments demonstrate the effectiveness of the proposed method in achieving scale-equivariance for 3D medical image analysis.1
Footnote 1: We publish our code using PyTorch for further research and application:
[https://github.com/wimmerth/scale-equivariant-3d-convnet](https://github.com/wimmerth/scale-equivariant-3d-convnet)
Keywords: Scale-Equivariance, Data Efficiency, Segmentation
## 1 Introduction
One of the greatest advantages of convolutional neural networks (CNNs) is their _equivariance_ under spatial shifts (translations), i.e. a mathematically guaranteed ability to recognise objects and object parts at any positions they might appear at in images. Recent methods additionally provide equivariance under other transformations of the input such as rotation or scaling, in order to guarantee that features get detected well at different orientations or sizes. Scale-equivariance, i.e. guaranteed detection of features across various scales/sizes, is a beneficial
property of neural networks because an object or object part can have different sizes in different images, and because combining datasets with different image resolutions, for example in medical imaging, allows for constructing richer, more informative training datasets than if large parts of the data are left out. In practice, scale-equivariance often leads to better results [24, 25, 35].
In deep learning, data augmentation is often used as a means to improve the recognition of scaled features. However, data augmentation does not guarantee equivariance, it merely tries to approximate it, and imposes an additional burden on the neural network to learn each possible scale of each feature separately. In many cases, results of such learned equivariance are worse than with guaranteed equivariance [3, 6].
Therefore, intensive research has been conducted in recent years on the development of scale-equivariant neural networks [16, 25]. So far, these methods have been limited to the two-dimensional case. Since the detection of features of different scales is an important aspect in 3D machine learning tasks as well, in the present work we extend the concept of scale-equivariance to three dimensions. We propose novel scale-equivariant neural network layers for 3D data, including convolutions, normalization and pooling, that can be used instead of usual 3D neural network layers to achieve scale-equivariance. Medical image analysis, such as brain tumor segmentation, has been shown to benefit from aggressive data augmentation to approximate scale-equivariance [12] and as 3D data in many applications are available at various image resolutions, it is of high relevance to extend scale-equivariance to the 3D setting.
Scale-equivariant neural networks have been shown to outperform other neural networks especially in the low data regime [24, 35], which is particularly interesting for 3D applications such as MRI and datasets of rare diseases, as there is usually much less training data available than for example photographs in the two-dimensional case. In this paper, we first present the theoretical foundations for scale-equivariant convolutions and review previous works in this field. We lift the concept of scale-equivariant convolutions to the three-dimensional space and evaluate the performance of the proposed layers through a series of experiments based on the brain tumor segmentation task. To demonstrate the efficacy of our approach, we evaluate it against a baseline method based on standard convolutions which are only translation-equivariant.
## 2 Related Work
When image features can appear at a variety of sizes and locations, a primitive approach is to train a neural network that is neither equivariant nor is specifically designed to easily approximate equivariance. Instead, primitive methods tediously learn to approximate equivariance, and their training dataset must contain differently transformed features. That dataset might be obtained by data augmentation (random transformations) if the original dataset lacks such diversity. The additional burden of learning every feature at every possible size distracts
the training from the main learning goals, and in practice yields suboptimal results.
Better results can be achieved by methods that are designed to slightly facilitate the learning of approximate equivariance. An example is to use a branch of a neural network that decides how to transform (for example scale) the input before passing it to another neural-network branch, thus facilitating (but not guaranteeing) equivariance only for features that scale jointly but not independently [10, 13]. Another example are capsule networks, which encourage the separation of visual features and their poses (for example scale) in the latent representations by computing deeper features in ways that benefit from such a separation [23]. Other examples are scale-dependent pooling of latent features [33], or the addition of downsampling and/or upsampling branches in the network [4].
The best results are usually obtained by methods that mathematically guarantee equivariance. Examples include translation-equivariance that is achieved by using convolutional networks, equivariance under 2D rotations and translations [6, 8, 29], 3D rotations and translations [17, 21, 26, 28], or 2D scalings and translations [14, 15, 16, 18, 22, 24, 25, 31, 32, 35]. Special cases of scale-equivariant neural networks use invariant rather than more generally equivariant layers not only as the last layer but also throughout [9, 15], which does not allow information about the relative scale of different features to be used to compute deeper features. Another special case keeps information about different scales separated until the last network layer [14, 16, 32], with a similar effect.
Neural networks achieve scale-equivariance by using differently scaled versions of convolutional filters (or, equivalently, of the feature maps). Recent works reduce discretization artifacts during scaling by representing filters using certain truncated bases, for example Hermite polynomials with Gaussian envelopes [25, 35], Gaussian derivatives [14, 16], or radial harmonics [9, 22]. In the medical domain, scale-equivariant neural networks have so far successfully been applied for histopathology image segmentation [34] and 2D MRI reconstruction [11].
The present paper extends the theory of scale-equivariant neural networks to 3D. We base our method on the work by [25] for the 2D case as it achieved state-of-the-art performance while being the most flexible approach.
## 3 Methods
In this section, we present the theory behind scale-equivariant convolutions and use them to propose a novel scale-equivariant layer for 3D data. We furthermore propose a novel scale-equivariant 3D U-Net that can be used in tasks like medical image segmentation.
A _group_\((G,\cdot)\) is defined as a set \(G\) closed under an associative binary operation \(\cdot:G\times G\to G\), with an identity element \(e\in G\), and where every element has an inverse also in the set. A group \((G,\cdot)\) is often simply denoted by \(G\), with the binary operation \(\cdot\) implied. A mapping \(\Phi:\mathcal{X}\rightarrow\mathcal{Y}\) is _equivariant_ under actions of the group \(G\) when
\[\Phi(L_{g}[f])=L_{g}^{\prime}[\Phi(f)]\quad\forall f\in\mathcal{X}\quad\forall g \in G, \tag{1}\]
where \(f\) is the input of \(\Phi\), for example a medical image or latent feature map, and \(L_{g}\) and \(L_{g}^{\prime}\) are the actions of \(g\in G\) on \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. When \(L_{g}^{\prime}\) is the identity for all \(g\in G\), then \(\Phi\) is called invariant under actions of the group \(G\).
The _scaling group_ can be defined as \(H=(\mathbb{R}_{>0},\cdot)\), i.e. consisting of positive scaling factors and with multiplication \(\cdot\) as the binary operation. A scale transformation \(L_{s}\) of a \(d\)-dimensional real-valued image \(f:\mathbb{R}^{d}\to\mathbb{R}\) (for example a 3D MRI volume with \(d=3\)) by a scaling factor \(s\in H\) (i.e. \(s\in\mathbb{R}_{>0}\)) acts on the image coordinates \(x\in\mathbb{R}^{d}\) and can be formulated as \([L_{s}[f]](x)=f(s^{-1}x)\). We refer to the transformation as an upscaling if \(s>1\) and a downscaling if \(s<1\).
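As an illustration, the action \(L_{s}\) on a discretized 3D volume can be approximated by resampling the voxel grid. The following is a minimal PyTorch sketch; the choice of trilinear resampling and zero padding outside the original volume is an assumption made for illustration:

```python
import torch
import torch.nn.functional as F

def scale_volume(f: torch.Tensor, s: float) -> torch.Tensor:
    """Approximate [L_s f](x) = f(x / s) on a fixed voxel grid.

    f: float tensor of shape (N, C, D, H, W); s > 1 upscales, s < 1 downscales.
    Values sampled from outside the original volume are set to zero.
    """
    n = f.shape[0]
    # grid_sample evaluates f at the given normalized coordinates, so the
    # sampling grid itself must be scaled by 1/s.
    theta = torch.zeros(n, 3, 4, dtype=f.dtype, device=f.device)
    theta[:, 0, 0] = theta[:, 1, 1] = theta[:, 2, 2] = 1.0 / s
    grid = F.affine_grid(theta, size=list(f.shape), align_corners=False)
    return F.grid_sample(f, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=False)
```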
We want our neural network to be equivariant under scalings and translations. By combining the scaling group \(H\) with the group \(T\) of translations using the semi-direct product, we obtain the _group of scalings and translations_\(HT=\{(s,t)\mid s\in H,\;t\in T\}\) with \(s\in\mathbb{R}_{>0},\;t\in\mathbb{R}^{d}\), sometimes denoted as \(HT\cong\mathbb{R}_{>0}\ltimes\mathbb{R}^{d}\)[31], with the identity element \((1,0)\), the binary operation
\[(s_{2},t_{2})\cdot(s_{1},t_{1})=(s_{2}s_{1},s_{2}t_{1}+t_{2}), \tag{2}\]
and the inverse \((s,t)^{-1}=(s^{-1},-s^{-1}t)\). As can be seen from this definition, the order of applying scaling and translation matters, i.e. \((s,t)=(1,t)\cdot(s,0)\neq(s,0)\cdot(1,t)\).
The group actions \(L_{(s,t)}\) of \((s,t)\in HT\) on functions \(f:\mathbb{R}^{d}\to\mathbb{R}^{C}\) (input images) and on functions \(h:HT\to R^{C}\) (latent feature maps) are defined as follows:
\[\begin{split} L_{(s,t)}[f](x)&=f(s^{-1}(x-t)),\\ L_{(s,t)}[h](s^{\prime},t^{\prime})&=h((s,t)^{-1}( s^{\prime},t^{\prime}))=h(s^{-1}s^{\prime},s^{-1}(t^{\prime}-t)).\end{split} \tag{3}\]
A _group-convolution_\(\star_{G}\)[6; 7; 25], using a locally compact group \(G\), of a function \(f:X\to\mathbb{R}^{C}\), for example a medical image (e.g. \(X=\mathbb{Z}^{3}\)) or a latent feature map (\(X=G\)), with a function \(\psi:X\to\mathbb{R}^{C}\) (e.g. a convolutional filter), where \(C\in\mathbb{N}\) is referred to as the number of _channels_ of the input \(f\), is defined as
\[[f\star_{G}\psi](g)=\int_{x\in X}f(x)[L_{g}[\psi]](x)\,\mathrm{d}\mu(x), \tag{4}\]
where \(g\in G\) with its according group action \(L_{g}\), and \(\mu(x)\) is the Haar measure. Thus, the output of a group convolution is a feature map defined on the group \(G\). When that feature map is used as an input to a subsequent group convolution, the filters must also be defined on \(G\), because the input and filter in a group convolution are defined on the same space.
Group-convolutional network layers generalize group convolutions from having several input channels (which we included in the definition of group convolutions above) to additionally having several output channels. This is achieved by performing a group convolution of the input with each of several filters, each yielding a separate output channel. Several channels are used in neural networks to disentangle different features. The filters in these layers are represented as a weighted sum of fixed (truncated-)basis functions. The weights in that sum are the trainable parameters of the layer.
Note that when the group \(G\) is the group \(T=(\mathbb{R}^{d},+)\) of translations, with addition \(+\) as the binary operation, and the group action \(L_{t}\) of a translation \(t\in T\) is \(L_{t}[f](x)=f(x-t)\), then Eq. (4) is the well-known standard convolution.
Group convolution using the (continuous) scaling group computes a value for each of the infinitely many elements of the group. In order to make the computational and output memory requirements finite, usually a discrete subgroup of the continuous group is used, and the subgroup is additionally truncated to a semigroup if it is still infinite [3, 25]. A discrete subgroup of the scaling group can be constructed as \(\{...,s^{-1},1,s^{1},s^{2},...\}\) with a base scale \(s\), for example \(s=0.9\), and truncated for example to \(\{1,s,s^{2},s^{3}\}\). The truncation breaks the equivariance under scales beyond the truncation boundary, but is still locally correct, i.e. guarantees equivariance under discrete scales within the truncated discretized group of scales [31].
We propose scale-equivariant 3D convolutional layers by performing group-convolutions using the discretized truncated version of the group. For filters, truncated bases based on Hermite polynomials with a Gaussian envelope (see Fig. 1) were shown to work the best in practice for scale-equivariant deep learning due to reduced interpolation artifacts during scaling [25]. We generalize the construction of such bases to 3D. We multiply combinations of 1D Hermite polynomials of increasing order (with the maximum order being a hyperparameter) along each of the three dimensions and apply 3D Gaussian envelopes.
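As an illustration of this construction, a minimal NumPy sketch of such a separable 3D Hermite-Gauss basis is given below. The grid size and envelope width are illustrative assumptions; a per-axis maximum order of 2 yields \(3^{3}=27\) basis functions, matching the basis size used in the experiments:

```python
import itertools
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_gauss_1d(order: int, x: np.ndarray, sigma: float) -> np.ndarray:
    """1D (physicists') Hermite polynomial of the given order with a Gaussian envelope."""
    coeffs = np.zeros(order + 1)
    coeffs[order] = 1.0
    return hermval(x / sigma, coeffs) * np.exp(-x**2 / (2.0 * sigma**2))

def hermite_gauss_basis_3d(max_order: int, size: int, sigma: float) -> np.ndarray:
    """Separable 3D basis: products of 1D Hermite-Gauss functions along x, y, z.

    Returns an array of shape (n_basis, size, size, size), with
    n_basis = (max_order + 1) ** 3.
    """
    x = np.arange(size) - (size - 1) / 2.0
    one_d = [hermite_gauss_1d(n, x, sigma) for n in range(max_order + 1)]
    basis = [np.einsum("i,j,k->ijk", one_d[i], one_d[j], one_d[k])
             for i, j, k in itertools.product(range(max_order + 1), repeat=3)]
    return np.stack(basis)

# Copies of the basis for the truncated scale group {1, 0.9, 0.81, 0.729};
# the base envelope width sigma = 1.5 voxels is an illustrative choice.
scales = [0.9 ** k for k in range(4)]
bases = [hermite_gauss_basis_3d(max_order=2, size=5, sigma=1.5 * s) for s in scales]
```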
Figure 1: Basis functions defined as Hermite polynomials \(H_{n}\) (of different degrees \(n\)) with Gaussian envelopes, scaled with different scales \(\sigma\). The kernel basis for three-dimensional scale-equivariant convolutions is formed from the multiplication of three oriented basis functions (oriented in the \(x\), \(y\), and \(z\) directions) with equal or different degrees of Hermite polynomials.
We used replicate-padding along the "scale dimension" of feature maps on the group because this technique is a good compromise between the truncation of the group and the imprecision of equivariance due to padding [35]. We also create additional scale-equivariant layers, namely the addition of bias terms, pointwise nonlinearities, max- and average-pooling over subgroups (e.g. the group of scalings), and normalization (batch normalization, instance normalization) for the 3D setting, analogously to the 2D setting [25]. Neural networks consisting of combinations of these layers are equivariant as well [6]. For segmentation tasks, the last layer of a scale-equivariant network is a pooling layer over the scale-dimension.
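To make the layer structure concrete, the following is a simplified PyTorch sketch of a scale-equivariant "lifting" convolution: the same trainable weights combine a fixed filter basis rendered at every scale of the truncated group, and each scaled filter bank produces one slice of the output along the scale dimension. This is only a sketch under these assumptions, not the published implementation linked in the footnote, and it omits the group convolutions acting on the scale dimension of deeper layers as well as the replicate padding discussed above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleLiftingConv3d(nn.Module):
    """Minimal sketch of a scale-equivariant lifting 3D convolution."""

    def __init__(self, in_ch: int, out_ch: int, basis_per_scale):
        # basis_per_scale: list (one entry per scale) of arrays with shape
        # (n_basis, k, k, k), e.g. the 3D Hermite-Gauss basis sketched above.
        super().__init__()
        basis = torch.stack([torch.as_tensor(b, dtype=torch.float32)
                             for b in basis_per_scale])        # (S, B, k, k, k)
        self.register_buffer("basis", basis)
        n_basis = basis.shape[1]
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, n_basis) * (in_ch * n_basis) ** -0.5)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C_in, D, H, W) -> output: (N, C_out, S, D, H, W)
        outputs = []
        for s in range(self.basis.shape[0]):
            # Filters at this scale: weighted sum of the scaled basis functions.
            filt = torch.einsum("oib,bxyz->oixyz", self.weight, self.basis[s])
            outputs.append(F.conv3d(x, filt, bias=self.bias,
                                    padding=filt.shape[-1] // 2))  # odd kernel size
        return torch.stack(outputs, dim=2)

# A segmentation network would stack further scale-equivariant layers and, as
# the last step, apply max- or average-pooling over the scale dimension (dim=2).
```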
Finally, we propose scale-equivariant transposed convolutions (upconvolutions). Transposed convolutions have been shown to be equivariant in the group-convolution setting in general but have so far only been used in the rotation-equivariant setting [30]. Transposed convolutions can be helpful in creating CNN architectures for medical image segmentation, such as the U-Net [5].
## 4 Experiments
### Experimental Setup
We perform brain tumor segmentation on the BraTS 2020 dataset. The inputs are four MRI contrasts (T1w, T1w with gadolinium, T2w, and T2w-FLAIR) with image registration applied. The output targets are voxel-wise annotations of three different tumor classes (Gd-enhancing tumor, peritumoral edema, and necrotic and non-enhancing tumor core) obtained through annotations of up to four raters and approved by neuroradiologists [1, 2, 19]. In our experiments, we restricted the output target to binary labels representing healthy tissue or tumors. As we did not have access to the validation and test set of the BraTS challenge, our full dataset consists of 369 samples of which we used up to 250 for the training of our models. We applied instance normalization to the samples for feature scaling, which performed slightly better than other feature scaling methods. Since training-data augmentation by random scaling has been shown [25] to be beneficial also for scale-equivariant methods (possibly due to introducing intermediate scales not present in the discretized group, and increasing training-data diversity due to interpolation artifacts), we study the effect of scaling training samples with a random scale between 0.7 and 1.0 in every training step in additional dedicated experiments.

Figure 2: Comparison of 2D slices of the 4D/3D output of scale-equivariant and non-scale-equivariant convolution. Scaling \(L_{s}\) of the input \(f\) followed by scale-equivariant convolution \(\star_{HT}\) with a filter \(\psi\) (first row) yields almost the same result as scale-equivariant convolution followed by scaling and selection \([\cdot]_{s}\) of the respective "scale slice" along the scale dimension of the feature map on \(HT\) (second row). This shows that \(\star_{HT}\) is scale-equivariant apart from small interpolation artifacts. On the other hand, the ordinary (i.e. non-scale-equivariant) convolution \(\star_{T}\) followed by scaling (third row) yields a different result, and thus is not scale-equivariant. Note that due to the three-dimensional nature of the data, the 2D slices are not just scaled versions of each other but also strongly influenced by values of neighbouring slices when scaled, depending on the scale.
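A minimal sketch of the random-scaling augmentation described in the paragraph above, assuming PyTorch tensors; the resampling modes for the image and label volumes are illustrative assumptions:

```python
import random
import torch
import torch.nn.functional as F

def random_scale_augment(image: torch.Tensor, label: torch.Tensor,
                         s_min: float = 0.7, s_max: float = 1.0):
    """Randomly rescale one training sample with a scale drawn from [s_min, s_max].

    image: float tensor of shape (C, D, H, W); label: tensor of shape (1, D, H, W).
    """
    s = random.uniform(s_min, s_max)
    img = F.interpolate(image[None], scale_factor=s, mode="trilinear",
                        align_corners=False)[0]
    # Nearest-neighbour resampling keeps the labels binary.
    lbl = F.interpolate(label[None].float(), scale_factor=s, mode="nearest")[0]
    return img, lbl
```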
We base our model on the U-Net architecture [5] using four downsampling and upsampling blocks (with 4, 8, 16, and 32 channels from top to bottom). Down- and upsampling are performed using strided and transposed scale-equivariant convolutional layers, respectively (kernel size 5, stride 2), each followed by two scale-equivariant convolutional layers, and residual connections are used within blocks. Skip-connections are used between blocks of the same resolution except for the uppermost layer. The truncated scaling group used in the scale-equivariant network is \(\{0.9^{0},0.9^{1},0.9^{2},0.9^{3}\}=\{1.0,0.9,0.81,0.729\}\) and the kernel basis consists of 27 basis functions, constructed as described in Section 3. Experiments were carried out to compare max- vs. avg-pooling over the scaling group in the last layer.
We compare our model to a baseline using the same architecture but with 16, 32, 64 and 128 channels in the respective network blocks. The network width and depth, learning rate and other hyperparameters were tuned using manual search over a wide range separately for the baseline method and the scale-equivariant methods. Training was performed for up to 160 epochs, depending on the size of the training set, using an NVIDIA RTX 8000 GPU, up to 5 GB VRAM and 10 GB RAM. The training lasted about 2 hours on average.
The loss used is a sum of the soft Dice loss [20] and the binary cross-entropy, and the Adam optimizer with a learning rate of 0.01 was used for training with an exponential learning rate decay.
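For concreteness, a minimal PyTorch sketch of such a combined soft-Dice-plus-BCE objective and the optimizer setup is given below; the exponential decay factor is an illustrative assumption, as the text does not specify it:

```python
import torch
import torch.nn as nn

class DiceBCELoss(nn.Module):
    """Sum of the soft Dice loss and binary cross-entropy for binary segmentation."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits and target have shape (N, 1, D, H, W); target is a float mask.
        probs = torch.sigmoid(logits)
        dims = tuple(range(1, probs.ndim))
        intersection = (probs * target).sum(dims)
        denom = probs.sum(dims) + target.sum(dims)
        soft_dice = (2.0 * intersection + self.eps) / (denom + self.eps)
        return (1.0 - soft_dice).mean() + self.bce(logits, target)

# Optimizer and schedule as described in the text (decay factor assumed):
# optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
```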
### Results and Discussion
Our proposed method benefits from its inherent scale-equivariance which is robust against scale changes as can be seen in Fig. 2. It outperformed the baseline on all scalings of the test data (see Fig. 3). The scale-equivariant network using average-pooling and no data augmentation reached a Dice score of \(0.882\pm 0.104\) on the (non-scaled) test set, and thus outperforms the method using max-pooling (\(0.876\pm 0.110\)) and the non-scale-equivariant baseline method (\(0.846\pm 0.128\)) that are using training-data augmentation (see Tab. 1).
Training on an augmented dataset increased the performance of the baseline model in dealing with scaled versions of the test data. Data augmentation proved to be beneficial for the scale-equivariant method as well, as it stabilizes training and improves the networks' capability to deal with interpolation artifacts introduced through artificial scaling of the data. This is evidenced by the better performance of the models on scaled test data (see Fig. 3) and is consistent with observations for scale-equivariant methods in the two-dimensional domain [25]. Our method trained with data augmentation and using max-pooling outperforms the baseline trained with and without data augmentation on all scalings of the test data. Visualizations of the segmentation results are shown in Fig. 4.
In a separate experiment, we evaluate the data efficiency of the proposed method. The scale-equivariant method was less affected by a reduction in training data than the baseline method: for a small number of training data with only 10 or 15 training samples, the proposed method performed up to three times better than the baseline method. Thus, it is a valuable tool in the medical setting, where training data is often limited due to high data-acquisition costs, privacy, or rare diseases.
Figure 3: Comparison of methods in terms of quality of brain tumor segmentation. (Left:) Performance comparison when trained on less training data. The Dice scores are averaged over all test data scalings \(\{0.7,0.8,0.9,1.0\}\); results are similar for each individual scaling factor. Scale-equivariant neural networks considerably outperform the baseline method when trained on only few samples. (Right:) Generalization of trained methods to scaled test data. The scale-equivariant models consistently outperform the baseline method.
Figure 4: Visualization of the ground truth segmentation and the predictions of the scale-equivariant method and the baseline method. True positives are shown in yellow, false positives in blue, and false negatives in red. Our proposed method generally performs better even with complex lesion shapes.
## 5 Conclusions
Our method shows strong results and consistently outperforms the baseline. Max- and average-pooling yield slightly different results depending on the context. Average-pooling uses information from various scales simultaneously and thus can use "fractal (multi-scale) properties" [27] of the image. Due to relying on several scales at once, these image properties get easily destroyed by interpolation artifacts, or move beyond the truncation boundary of the scale dimension (see Section 3), when scaling the training data or test data. This might explain why average-pooling trained without data augmentation outperforms all other methods on unscaled test data, but is more negatively affected by scaling of test data. On the other hand, max-pooling is not affected at all by artifacts at the truncation boundary if they have smaller magnitude than the maximal values selected by max-pooling.
We propose a range of scale-equivariant neural network layers that can be used to analyse medical images, but can also be applied in other fields with 3D voxelized data. The formulation of scale-equivariant networks could further be extended to point cloud data or other 3D data representations. Our proposed method outperformed the non-scale-equivariant baseline and showed its efficiency in a low-resource setting, which can be especially helpful in the medical area.
|
2308.12015 | Experimental and Phenomenological Investigations of the MiniBooNE
Anomaly | This thesis covers a range of experimental and theoretical efforts to
elucidate the origin of the $4.8\sigma$ MiniBooNE low energy excess (LEE). We
begin with the follow-up MicroBooNE experiment, which took data along the BNB
from 2016 to 2021. This thesis specifically presents MicroBooNE's search for
$\nu_e$ charged-current quasi-elastic (CCQE) interactions consistent with
two-body scattering. The two-body CCQE analysis uses a novel reconstruction
process, including a number of deep-learning-based algorithms, to isolate a
sample of $\nu_e$ CCQE interaction candidates with $75\%$ purity. The analysis
rules out an entirely $\nu_e$-based explanation of the MiniBooNE excess at the
$2.4\sigma$ confidence level. We next perform a combined fit of MicroBooNE and
MiniBooNE data to the popular $3+1$ model; even after the MicroBooNE results,
allowed regions in $\Delta m^2$-$\sin^2 2\theta_{\mu e}$ parameter space
exist at the $3\sigma$ confidence level. This thesis also demonstrates that the
MicroBooNE data are consistent with a $\overline{\nu}_e$-based explanation of
the MiniBooNE LEE at the $<2\sigma$ confidence level. Next, we investigate a
phenomenological explanation of the MiniBooNE excess combining the $3+1$ model
with a dipole-coupled heavy neutral lepton (HNL). It is shown that a 500 MeV
HNL can accommodate the energy and angular distributions of the LEE at the
$2\sigma$ confidence level while avoiding stringent constraints derived from
MINER$\nu$A elastic scattering data. Finally, we discuss the Coherent
CAPTAIN-Mills experiment--a 10-ton light-based liquid argon detector at Los
Alamos National Laboratory. The background rejection achieved from a novel
Cherenkov-based reconstruction algorithm will enable world-leading sensitivity
to a number of beyond-the-Standard Model physics scenarios, including
dipole-coupled HNLs. | Nicholas Kamp | 2023-08-23T09:14:52Z | http://arxiv.org/abs/2308.12015v1 | # Experimental and Phenomenological Investigations of the MiniBooNE Anomaly
###### Abstract
The author hereby grants to MIT a nonexclusive, worldwide, irrevocable, royalty-free license to exercise any and all rights under copyright, including to reproduce, preserve, distribute and publicly display copies of the thesis, or release the thesis under an open-access license.
Authored by
Department of Physics
May 23, 2023
Certified by
Janet M. Conrad
Professor
Thesis Supervisor
Accepted by
Lindley Winslow
Associate Department Head of Physics
**Experimental and Phenomenological Investigations of the MiniBooNE Anomaly**
by
Nicholas Kamp
Submitted to the Department of Physics
on May 23, 2023, in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy
### Abstract
The \(4.8\sigma\) excess of electron neutrino-like events reported by the MiniBooNE experiment at Fermilab's Booster Neutrino Beam (BNB) is one of the most significant and longest standing anomalies in particle physics. This thesis covers a range of experimental and theoretical efforts to elucidate the origin of the MiniBooNE low energy excess (LEE). We begin with the follow-up MicroBooNE experiment, which took data along the BNB from 2016 to 2021. The detailed images produced by the MicroBooNE liquid argon time projection chamber enable a suite of measurements that each test a different potential source of the MiniBooNE anomaly. This thesis specifically presents MicroBooNE's search for \(\nu_{\rm e}\) charged-current quasi-elastic (CCQE) interactions consistent with two-body scattering. The two-body CCQE analysis uses a novel reconstruction process, including a number of deep-learning-based algorithms, to isolate a sample of \(\nu_{\rm e}\) CCQE interaction candidates with 75% purity. The analysis rules out an entirely \(\nu_{\rm e}\)-based explanation of the MiniBooNE excess at the \(2.4\sigma\) confidence level. We next perform a combined fit of MicroBooNE and MiniBooNE data to the popular 3 + 1 model; even after the MicroBooNE results, allowed regions in \(\Delta\)m\({}^{2}\)-\(\sin^{2}2\theta_{\mu\rm e}\) parameter space exist at the \(3\sigma\) confidence level. This thesis also demonstrates that, due to nuclear effects in the low-energy cross section behavior, the MicroBooNE data are consistent with a \(\overline{\nu}_{\rm e}\)-based explanation of the MiniBooNE LEE at the < \(2\sigma\) confidence level. Next, we investigate a phenomenological explanation of the MiniBooNE excess involving both an eV-scale sterile neutrino and a dipole-coupled MeV-scale heavy neutral lepton (HNL). It is shown that a 500 MeV HNL can accommodate the energy and angular distributions of the LEE at the \(2\sigma\) confidence level while avoiding stringent constraints derived from MINER\(\nu\)A elastic scattering data. Finally, we discuss the Coherent CAPTAIN-Mills (CCM) experiment, a 10-ton light-based liquid argon detector at Los Alamos National Laboratory. The background rejection achieved from a novel Cherenkov-based reconstruction algorithm will give CCM world-leading sensitivity to a number of beyond-the-Standard Model physics scenarios, including dipole-coupled HNLs.
Thesis Supervisor: Janet M. Conrad
Title: Professor
## Acknowledgments
Development from a starry-eyed first year graduate student into a competent researcher is like the MSW effect: it doesn't happen in a vacuum. There are many people who have helped me along the way in both physics and life, without whom I would never have gotten to the point of writing this thesis.
First and foremost, I owe an immense debt of gratitude to Janet Conrad. Janet has been an incredible mentor to me during my time as a graduate student; her advice and wisdom have helped me become the scientist I wanted to be when I came to MIT four years ago. I'd like to specifically thank Janet for taking my interests seriously and working with me to develop research projects that matched them. Creativity, ingenuity, enthusiasm, and kindness run rampant in the Conrad group-I will always be grateful for being offered a spot in it. I look forward to many years of fruitful collaboration to come.
To my partner, Wenzer: thank you for the love, support, and patience over the last two years. Life is not so hard when we can act as a restoring force for one another-I am grateful for having been able to rely on it while writing this thesis. I look forward with great excitement to our future adventures together.
Thank you to my past mentors: Christine Aidala for introducing me to particle physics through POLARIS, Robert Cooper for asking me about my research at APS DNP 2016, Bill Louis for answering my many questions about neutrino physics in my first summer at LANL, Richard Van de Water for teaching me to follow the data (which greatly influenced my choice of graduate research), and Josh Spitz for helping develop confidence as a neutrino physicist on JSNS\({}^{2}\). I'd also like to thank Christopher Mauger and the rest of the Mini-CAPTAIN team for an introduction to what it takes to run a particle physics experiment at an accelerator.
Thank you to members of the Conrad group past and present: those I worked with, Lauren Yates, Adrian Hourlier, Jarrett Moon, Austin Schneider, Darcy Newmark, Alejandro Diaz, and John Hardin, for being fantastic collaborators, and those I did not, Loyd Waits, Joe Smolsky, Daniel Winklehner, Philip Weigel, and Josh Villareal, for making MIT a brighter place. Thank you especially to Austin for your infinite patience in answering my many questions on statistics and software-each of our projects has been a great pleasure.
To my MicroBooNE collaborators not mentioned above, Taritree Wongjirad, Katie Mason, Joshua Mills, Polina Abratenko, Ran Itay, Mike Shaevitz, Georgia Karagiorgi, Davio Cianci, Rui An, and everyone else: thank you for your excellent teamwork in putting together the Deep Learning analysis.
To my CCM collaborators not mentioned above, Edward Dunton, Mayank Tripathi, Adrian Thompson, Will Thompson, Marisol Chavez Estrada, and everyone else: thank you for the invigorating research and discussion over the last couple of years, and best of luck with CCM200!
Thank you to Carlos Arguelles, Mike Shaevitz, Matheus Hostert, Stefano Vergani, and Melissa Uchida for your excellent mentorship and collaboration in our phenomenological endeavors together. Thank you specifically to Carlos for giving me a welcome introduction to neutrino phenomenology, Mike for ensuring the robust
ness of each analysis, and Matheus for patiently answering my many model-building questions.
To all of my friends not mentioned above; Jack, Ryan, Alexis, Melissa, Ben, Bhaamati, Charlie, Vincent, Patrick, Caolan, Ouali, Artur, Sam, Zhiquan, Rebecca, Felix, Lila, Rahul, Brandon, Field, Kelsey, Woody, Joey, Rory, Cooper, Daniel, Kaliroe, Elena, and everyone else: thank you for all of the great memories-climbing, hiking, skiing, playing music, eating, drinking, commiserating, and laughing-over the past four years. Thank you especially to the last three for making preparation for the oral exam significantly more enjoyable.
Thank you to the MIT administrative staff, including (but not limited to) Lauren Saragosa, Karen Dow, Catherine Modica, Sydney Miller, Alisa Cabral, and Elsye Luc, for helping make graduate school a more manageable endeavor. Thank you also to the rest of my thesis committee, Joseph Formaggio and Washington Taylor, for helping me get through the final part of graduate school.
Finally, thank you to my parents, Jim and Carla, my siblings, Serafina and Daniel, and the rest of my family. From elementary school science fairs to Saturday Morning Physics to today, none of this would have been possible without your love and support.
###### Contents
* 1 Introduction
* 1.1 A Brief History of the Neutrino
* 1.2 Neutrinos in the Standard Model
* 1.3 Massive Neutrinos
* 1.4 Anomalies in the Neutrino Sector
* 2 The MiniBooNE Experiment
* 2.1 Overview of MiniBooNE
* 2.1.1 The Booster Neutrino Beam
* 2.1.2 The MiniBooNE Detector
* 2.2 The MiniBooNE Low Energy Electron-Like Excess
* 3 The MicroBooNE Detector
* 3.1 Liquid Argon Time Projection Chamber
* 3.1.1 Cryogenics
* 3.1.2 LArTPC Drift System
* 3.1.3 Light Collection System
* 3.2 TPC Signal Processing
* 3.2.1 Noise Filtering
* 3.2.2 Deconvolution
* 4 The MicroBooNE Electron Neutrino Analysis: Overview and Selection
* 4.1 Dataset and Simulation
* 4.2 The Electron Low Energy Excess Template
* 4.3 Philosophy Behind the Two-Body CCQE Analysis
* 4.4 Reconstruction
* 4.4.1 Convolutional Neural Networks in LArTPCs
* 4.4.2 Vertex and Track Reconstruction
* 4.4.3 Publication: _Electromagnetic shower reconstruction and energy validation with Michel electrons and \(\pi^{0}\) samples for the deep-learning-based analyses in MicroBooNE_
* 4.5 1e1p Event Selection
* 4.5.1 Basic Data Selection Criteria
* 4.5.2 Boosted Decision Tree Ensemble
* 4.5.3 Particle Identification Cuts
* 4.5.4 The Final 1e1p Sample
* 5 The MicroBooNE Electron Neutrino Analysis: Results and Discussion
* 5.1 First Results from the Two-Body CCQE Analysis
* 5.1.1 Background Estimation
* 5.1.2 Evaluation of Systematic Uncertainties
* 5.1.3 Constraint from the 1\(\mu\)1p Sample
* 5.1.4 Blinded Analysis Approach
* 5.2 Statistical Interpretation
* 5.2.1 Goodness of Fit
* 5.2.2 Two Hypothesis Test
* 5.2.3 Signal Strength Scaling Test
* 5.3 Discussion and Outlook
* 5.3.1 Publication: _MiniBooNE and MicroBooNE Combined Fit to a \(3+1\) Sterile Neutrino Scenario_
* 5.3.2 Publication: _Implications of MicroBooNE's low sensitivity to electron antineutrino interactions in the search for the MiniBooNE excess_
* 6 Neutrissimos: Heavy Neutral Leptons with a Dipole Moment
* 6.1 Dipole-Portal Neutrissimos
* 6.2 Overview of the Mixed Model
* 6.3 Neutrissimos in MiniBooNE
* 6.3.1 Simulation in LeptonInjector
* 6.3.2 Fits to the MiniBooNE Excess
* 6.4 Publication: _Dipole-coupled neutrissimo explanations of the MiniBooNE excess including constraints from MINERvA data_
* 7 The Coherent CAPTAIN-Mills Experiment
* 7.1 The CCM Beamline and Detector
* 7.2 Cherenkov Light Reconstruction
* 7.2.1 Simulation-Based Sensitivity Estimation
* 7.2.2 Identifying Cherenkov Light in Data
* 7.3 Neutrissimos in CCM
* 8 Conclusions and Future Prospects
* A Publication: _Convolutional neural networks for shower energy prediction in liquid argon time projection chambers_
* B Signal Event Displays from the Two-Body CCQE Analysis
###### List of Figures
* 1 Telegram from Fred Reines and Clyde Cowan informing Wolfgang Pauli of their detection of neutrinos from a nuclear reactor.
* 2 The deficit of the observed solar \(\nu_{\rm e}\) flux compared with the theoretical expectation. The Homestake experiment is shown on the far left; follow-up solar neutrino measurements confirming the deficit are also shown, including the 2002 SNO result which brought forth a solution to the solar neutrino problem. Figure from Ref. [1].
* 3 The up-down asymmetry measured in SuperK as a function of lepton momentum, separated into e-like and \(\mu\)-like events as well as fully-contained (FC) and partially-contained (PC) events. The dashed line indicates the best fit to \(\nu_{\mu}\rightarrow\nu_{\tau}\) oscillations. Figure from Ref. [2].
* 4 Measurement of the solar \({}^{8}\)B flux from the SNO collaboration, broken down into the \(\nu_{\rm e}\) and \(\nu_{\mu,\tau}\) sub-components. Measurement of the CC, NC, and ES channels show up as slices in the two-dimensional flux parameter space. Figure from Ref. [3].
* 5 Diagrams contributing to \(\nu{\rm e}^{-}\) elastic scattering
* 6 Diagrams contributing to \(\overline{\nu}{\rm e}^{-}\) elastic scattering
* 7 Diagrams contributing to neutrino-nucleon charged-current quasielastic scattering
* 8 CC inclusive neutrino and antineutrino nucleon scattering cross sections as a function of neutrino energy. Figure from Ref. [4].
* 9 The LSND excess of \(\overline{\nu}_{\rm e}\) events on top of the predicted SM background (green and red regions). The blue region indicates the best fit to \(\overline{\nu}_{\mu}\rightarrow\overline{\nu}_{\rm e}\) oscillations via a sterile neutrino state. Figure from Ref. [5]
* 10 The MiniBooNE electron-like channel data and SM background prediction for the entire neutrino mode dataset, as a function of the reconstructed neutrino energy.
* 11 Data contributing to the reactor antineutrino anomaly, indicating the \(\sim 5\%\) flux deficit observed by short-baseline reactor neutrino experiments. The red line indicates the prediction incorporating SM neutrino oscillations only, while the blue line shows an example prediction including a sterile neutrino. Figure from Ref. [6].
* 12 Data contributing to the gallium anomaly, indicating the \(\sim 20\%\) deficit in the \({}^{71}\)Ge production rate observed by SAGE, GALLEX, and BEST. Figure from Ref. [7].
* 13 Preferred regions in \(\sin^{2}2\theta_{\rm ee}\)-\(\Delta\)m\({}^{2}\) parameter space to explain the RAA [8] (green contour) and gallium anomaly [9] (blue regions). The total excluded region from other experiments (grey region) is also shown. Figure from Ref. [9].
* 14 Preferred regions in \(\sin^{2}2\theta_{\rm\mu e}\)-\(\Delta\)m\({}^{2}\) parameter space to explain the LSND anomaly [5] (filled contours) and MiniBooNE anomaly [10] (open contours). Figure from Ref. [10].
* 15 Graphical representation of the tension observed in 3+1 global fits between different subsets of the experimental landscape. Figure 1-15a shows the tension between \(\nu_{\rm e}\) appearance experiments and \(\nu_{\rm e}/\nu_{\mu}\) disappearance experiments observed in Ref. [11]. Figure 1-15b shows the tension between allowed regions from \(\nu_{\rm e}\) appearance (lower right), \(\nu_{\rm e}\) disappearance (upper right), and \(\nu_{\mu}\) disappearance (upper left) experiments observed in Ref. [12], which includes the latest results from the BEST experiment.
* 2-1 A schematic depiction of the BNB at Fermilab, including the downstream MiniBooNE detector. Figure from Ref. [13].
* 2-2 Breakdown of the neutrino flux at the BNB in neutrino (left) and antineutrino (right) mode. Figure from Ref. [14].
* 2-3 The MiniBooNE detector situated in the cylindrical detector hall (left) and an image of the interior of the MiniBooNE detector (right), showing the PMTs in both the signal and veto regions. Figure from Ref. [15].
* 2-4 Visual representations of particle identification in MiniBooNE. Figure 2-4a shows a schematic representation of the detector signature from the three main particle classes in MiniBooNE: muons, electrons, and neutral pions. Figure 2-4b shows the MiniBooNE log-likelihood-ratio between the e-like and \(\mu\)-like hypothesis as a function of reconstructed neutrino energy, considering both simulated \(\nu_{\rm e}\) CCQE (top) and \(\nu_{\mu}\) CCQE (bottom) interactions.
* 2-5 The E\({}_{\nu}^{\rm QE}\) distribution of the MiniBooNE e-like excess in the total neutrino mode (figure 2-5a) and antineutrino mode (figure 2-5b) datasets. The observation and SM prediction in each bin are shown by the data points and colored histograms, respectively.
* 2-6 The lepton visible energy (figure 2-6a) and \(\cos\theta\) (figure 2-6b) distributions of the MiniBooNE e-like excess in the total neutrino mode dataset. The observation and SM prediction in each bin are shown by the data points and colored histograms, respectively. Figures from Ref. [10].
* 2-7 The MiniBooNE and LSND excesses as a function of the ratio L/E. The MiniBooNE data is separated into neutrino and antineutrino mode. Figure from Ref. [10].
* 3-1 Schematic depictions of the MicroBooNE LArTPC. Figure 3-1a shows the detection process for charged particles from a neutrino interaction in a MicroBooNE-like LArTPC. Figure 3-1b shows a cross-sectional view of the MicroBooNE detector along the -\(\hat{\mathrm{z}}\) direction. Figures from Ref. [16].
* 3-2 Figure 3-2a shows a close-up image of the cathode plane of the MicroBooNE LArTPC. The stainless steel field cage tubes can also be seen surrounding the active volume. Figure 3-2b shows a cross-sectional map of the electric field at the edge of the active volume, considering a cathode plane voltage of -128 kV. The legend shows the field strength in units of V/m. Figures from Ref. [16].
* 3-3 Figure 3-3a shows a photograph of a single wire carrier board with 32 mounted wires. Figure 3-3b shows the fully-assembled MicroBooNE LarTPC, highlighting the anode plane mounted on the stainless steel frame. Figures from Ref. [16].
* 3-4 The LAr scintillation light spectrum, TPB ultra-violet absorption spectrum, TPB emission spectrum, PMT quantum efficiency, and PMT surface transmission efficiency as a function of photon wavelength. Figure from Ref. [17].
* 3-5 Figure 3-5a shows a photograph of a single PMT system used in the MicroBooNE detector. The acrylic window here has not yet been coated with TPB. Figure 3-5b shows the PMT signal from a stopped cosmic muon (top) that decays to a Michel (bottom). Figures from Ref. [16].
* 3-6 2D displays of the signal from wires in one of the induction planes in a single data event, before and after the application of the offline noise filters. Figure from Ref. [18].
* 3-7 A U plane event display of a candidate neutrino interaction in the MicroBooNE data. The impact of the 1D and 2D deconvolution algorithms on the post-noise-filtering signal is shown. Figure from Ref. [19].
* 4-1 Figure 4-1a shows the unfolded \(\nu_{\mathrm{e}}\) prediction in MiniBooNE calculated using the first \(6.46\times 10^{20}\) POT of neutrino mode data [20]. Figure 4-1b shows the unfolded eLEE model weights derived from the first \(12.84\times 10^{20}\) POT of MiniBooNE neutrino mode data, which constitute the MicroBooNE eLEE model.
* 4-2 The expected distribution of \(\nu_{\mathrm{e}}\) interactions in MicroBooNE as a function of the true \(\nu_{\mathrm{e}}\) energy. The dotted line shows the expectation from the MicroBooNE eLEE model of the MiniBooNE excess discussed in section 4.2.
* 4-3 An example candidate 1e1p data event in MicroBooNE, including the raw LArTPC collection plane image (left) and the pixel labels assigned from SparseSSNet (right).
* 4-4 The relationship between the final state lepton (left) and proton (right) energy and scattering angle, for different neutrino energies, in \(\nu_{\mu}\) (top) and \(\nu_{\mathrm{e}}\) (bottom) CCQE scattering.
* 4-5 Figure 4-5a shows a diagram of the SparseSSNet U-ResNet architecture. Figure 4-5b shows the SparseSSNet pixel labels on a simulated image in the training dataset. Figures from Ref. [21].
* 4-6 Figure 4-6a shows a diagram of the MPID architecture. Figure 4-6b shows the MPID image scores on an example simulated event. Figures from Ref. [22].
* 4-7 The fraction of EM showers reconstructed to within 5% of the true deposited energy as a function of the fraction of unresponsive wires in the LArTPC collection plane, considering three different methods. The ResNet and Inception networks significantly outperform the traditional linear calibration between charge and energy. Figure from Ref. [23].
* 4-8 Figure 4-8a shows the angular metric that is minimized to find a 3D neutrino vertex candidate. Figure 4-8b shows the iterative track reconstruction algorithm, which relies on calculating distances (L\({}_{1}\) and L\({}_{2}\)) and angles (\(\theta\) and \(\phi\)) with respect to the previous point and the end of the track. Figures from Ref. [24].
* 4-9 An example of an event that fails the shower energy consistency cut, because the EM shower passes through an unresponsive region of the collection plane.
* 4-10 Figure 4-10a shows the distribution of the fractional shower energy consistency variable for events with an old 1e1p BDT score above/below 0.7. Figure 4-10b shows the efficiency with which events above/below the old 1e1p BDT cutoff pass the fractional consistency cut as a function of the chosen upper bound.
* 4-11 The F score of each variable for one of the 1e1p BDTs in the ensemble from run 1, run 2, and run 3.
* 4-12 Figure 4-12a and figure 4-12b show the MC distribution of the 1e1p BDT ensemble average score over all three run periods for the full [0,1] range and zoomed in to the [0.95,1] range, respectively.
* 4-13 Figure 4-13a and figure 4-13b show the predicted \(\nu_{\mathrm{e}}\) event rate in run period 2 and run period 3, respectively, using both the run period 2 ensemble and the run period 3 ensemble.
* 4-14 Figure 4-14a and figure 4-14b show the fractional difference in average BDT score \((\mathrm{S_{n}-S_{0}})/\mathrm{S_{0}}\) as a function of the number of omitted BDTs n over the simulation from run period 2 and run period 3, respectively. The red histogram shows the actual distribution of the number of omitted BDTs over the run period 2 and run period 3 simulation samples, respectively. Scores are calculated using the run period 3 and run period 2 BDT ensemble, respectively.
* 4-15 Figure 4-15a and figure 4-15b show the MPID electron and muon score, respectively, as a function of the reconstructed electron energy in intrinsic \(\nu_{\mathrm{e}}\) MC events.
* 4-16 The \(\mathrm{E_{\nu}^{range}}\) distribution for the 1e1p signal sample, showing only the predicted event rate from the MC. The prediction from the eLEE model is shown in the dashed blue line.
* 4-17 Figure 4-17a shows the distribution of the fractional error on the neutrino energy for MC events in the 1e1p signal sample, restricted to 200 \(<\) E\({}_{\nu}^{\rm Range}\) [MeV] \(<\) 1200. Figure 4-17b shows the 2D distribution of fractional error as a function of the true neutrino energy.
* 4-18 Figure 4-18a shows the post-vertex-identification efficiency of true \(\nu_{\rm e}\) CCQE selection for subsequent stages of the 1e1p cuts. Figure 4-18b shows the true \(\nu_{\rm e}\) CCQE event rates over the full run 1-3 dataset after subsequent stages of the 1e1p cuts.
* 5-1 Top: pixel intensity (color scale is in PIU as defined in section 4.4); Bottom: SparseSSNet labels; Left to Right: U, V, Y planes. The white circle indicates the reconstructed vertex. The horizontal axis corresponds to the wire plane direction and the vertical axis corresponds to the electron drift direction, which is measured using the arrival time of charge on the wires.
* 5-2 The 1e1p sample E\({}_{\nu}\) distribution, comparing data (black points) to the unconstrained prediction (stacked histogram) in the 200 \(<\) E\({}_{\nu}\)\(<\) 1200 MeV region. The eLEE model prediction is represented by the dashed blue line. The prediction is presented in terms of both interaction type (figure 5-2a) and final state topology (figure 5-2b).
* 5-3 Average 1e1p BDT ensemble score distribution comparing data to the unconstrained prediction.
* 5-4 Comparison between data and unconstrained prediction in the E\({}_{\rm e}\) (figure 5-4a), E\({}_{\rm p}\) (figure 5-4b), \(\theta_{\rm e}\) (figure 5-4c), and E\({}_{\nu}^{\rm QE-\ell}\) (figure 5-4d) distributions of the 1e1p sample.
* 5-5 The fit to the \(\nu_{\mu}\) background distribution to the 1e1p analysis. The shape fit is performed at a loose BDT score cutoff of 0.7 (figure 5-5a) and scaled to the signal cutoff of 0.95 (figure 5-5b). Blue points represent the prediction from the simulation, with error bars representing the Gaussian approximation of the statistical error (quadrature sum of event weights). The orange line and corresponding shaded region represent prediction and uncertainty, respectively, coming from the Landau+linear fit.
* 5-6 The data and MC prediction for events with a 1e1p BDT score inside \([0.7,0.95]\). Good agreement is observed between data and prediction. The prediction incorporating the Landau+linear background fit is shown by the red line.
* 5-7 The uncertainty in each bin of the E\({}_{\nu}^{\rm range}\) distribution of the 1e1p (figure 5-7a) and 1\(\mu\)1p (figure 5-7b) samples.
* 5-8 The joint covariance (figure 5-8a) and correlation (figure 5-8b) matrices for the E\({}_{\nu}^{\rm range}\) distribution of the 1e1p and 1\(\mu\)1p samples.
* 5-9 The E\({}_{\nu}^{\rm range}\) distribution in the 1\(\mu\)1p channel, comparing data to the MC prediction.
* 5-10 Fractional systematic uncertainty in the 1e1p E\({}_{\nu}^{\rm range}\) distribution before and after the 1\(\mu\)1p constraint.
* 5-11 Comparison between data and prediction in the 1e1p \(\mathrm{E}_{\nu}^{\mathrm{range}}\) distribution after applying the 1\(\mu\)1p constraint procedure.
* 5-12 Comparison between data and prediction in the 1e1p BDT ensemble average score distribution within the range \([0.7,0.95]\).
* 5-13 Distributions of the \(\Delta\chi^{2}\) test statistic defined in equation (5.7) for \(\mathrm{H}_{0}\) (red) and \(\mathrm{H}_{1}\) (blue), calculated by generating \(10^{5}\) pseudo-experiments under each hypothesis. The \(\Delta\chi^{2}\) value of the data is also shown.
* 5-14 Confidence intervals on \(\mathrm{x}_{\mathrm{LEE}}\) calculated using the Feldman-Cousins procedure. The solid and dotted lines indicate the confidence level with which a given \(\mathrm{x}_{\mathrm{LEE}}\) is disfavored, calculated using the Feldman-Cousins method [25] and Wilks theorem [26], respectively. The MiniBooNE statistical and systematic errors are shown as a band around \(\mathrm{x}_{\mathrm{LEE}}\) = 1.
* 5-15 Figure 5-15a shows the observation compared to the nominal (\(\mathrm{H}_{0}\)) prediction in all four signal channels from the three MicroBooNE \(\nu_{\mathrm{e}}\) analyses, including statistical errors on the data points and systematic errors on the prediction. The eLEE prediction (\(\mathrm{H}_{1}\)) is also indicated by the red line. Figure 5-15b shows the observed 1\(\sigma\) and 2\(\sigma\) confidence intervals on \(\mathrm{x}_{\mathrm{LEE}}\) from all four signal channels. The 2\(\sigma\) expected sensitivity of each channel is shown in red.
* 6-1 Feynman diagram depicting the effective dipole operator of equation (6.1).
* 6-2 New interactions involving the neutrissimo that are enabled by the dipole operator in equation (6.1), including three-body \(\pi^{0}\) decay (figure 6-2a), Primakoff-like upscattering (figure 6-2b), and neutrissimo decay (figure 6-2c).
* 6-3 3+1 global fits including MiniBooNE, considering global (left), appearance-only (middle), and disappearance-only (right) experiments. The allowed regions in \(3+1\) parameter space at the 90%, 95%, and 99% confidence levels are shown by the red, green, and blue points, respectively. The best-fit point is indicated by the star.
* 6-4 3+1 global fits without MiniBooNE, considering global (left), appearance-only (middle), and disappearance-only (right) experiments. The allowed regions in \(3+1\) parameter space at the 90%, 95%, and 99% confidence levels are shown by the red, green, and blue points, respectively. The best-fit point is indicated by the star.
* 6-5 Schematic depiction of the neutrissimo model in MiniBooNE as simulated using LeptonInjector. Figure 6-5a shows the simulation of Primakoff upscattering along the beamline, and figure 6-5b shows an example of upscattering, neutrissimo decay and pair-production within the MiniBooNE detector.
* 6-6 Allowed regions at the 95% and 3\(\sigma\) confidence level in \(\mathrm{d}_{\mu\mathcal{N}}\)-\(\mathrm{m}_{\mathcal{N}}\) obtained through fits to the MiniBooNE excess in the \(\mathrm{E}_{\nu}^{\mathrm{QE}}\) and \(\cos\theta\) distributions. Existing 2\(\sigma\) constraints on this model are indicated by the grey regions.
* 6-7 Figure 6-7a and figure 6-7b show the \(\mathrm{E_{\nu}^{QE}}\) and \(\cos\theta\) distributions of the MiniBooNE excess, respectively, compared with the prediction from the neutrissimo model indicated by the black star in figure 6-6. The oscillation contribution from the \(3+1\) global fit without MiniBooNE is also shown.
* 6-8 The added time delay in MiniBooNE for a neutrissimo with the indicated parameters, as calculated in Ref. [27].
* 7-1 Schematic depiction of the Lujan TMRS. Figure from Ref. [28].
* 7-2 Figure 7-2a, from Ref. [29], shows the energy distribution of \(\pi^{+}\) decay-at-rest neutrinos from the Lujan beam dump source. Figure 7-2b, from Ref. [30], shows the timing distribution of particles produced in the Lujan beam dump source after traveling through the TMRS.
* 7-3 Figure 7-3a shows a schematic 3D rendering of the CCM200 detector. Figure 7-3b shows an image of the interior of the CCM200 detector.
* 7-4 Two of the veto PMT assemblies constructed at MIT, including the 1-inch PMT, base circuit board, and TPB-coated acrylic window.
* 7-5 Figure 7-5a shows one of the veto PMTs across from the LED in the light-tight box. Figure 7-5b shows the average response of the 20 veto PMTs to 1 V (top) and 2 V (bottom) LED pulses.
* 7-6 The timing distribution of photons from the Lujan source (solid black line) compared with that of neutrons measured in CCM120 (dashed red line). Figure from Ref. [30].
* 7-7 Figure 7-7a shows the integration in equation (7.1) as a function of \(\lambda_{1}\) for \(\lambda_{2}\) = 700 nm and z = 1. Figure 7-7b shows the Cherenkov cone angle \(\cos\theta_{\mathrm{C}}\) for an electron as a function of the photon wavelength and the electron kinetic energy.
* 7-8 Templates of the average number of p.e. detected in each CCM PMT within the first 8 ns of an electron event. Different templates are shown for electron kinetic energies of T\({}_{\mathrm{e}^{-}}\) = 1 MeV and T\({}_{\mathrm{e}^{-}}\) = 5 MeV, both with and without Cherenkov photons. Coated (uncoated) PMTs are indicated by the solid (dashed) circles. Grey PMTs indicate those which registered no hits within the first 8 ns across all simulations. The dimensions on each axis are in units of cm.
* 7-9 Example event displays showing the total number of p.e. detected in each CCM PMT within the first 8 ns of a single simulated electron event. Displays are shown for electron kinetic energies of T\({}_{\mathrm{e}^{-}}\) = 1 MeV and T\({}_{\mathrm{e}^{-}}\) = 5 MeV. Coated (uncoated) PMTs are indicated by the solid (dashed) circles. Grey PMTs indicate those which registered no hits in the first 8 ns of this specific simulation.
* 7-10 Distributions of the test statistic in equation (7.6) over all \(10^{4}\) simulations, considering either all photons or scintillation photons only. Distributions are shown for T\({}_{\mathrm{e}^{-}}\) = 1 MeV and T\({}_{\mathrm{e}^{-}}\) = 5 MeV. The vertical line indicates the lower bound requirement which can reject 99% of scintillation-only backgrounds.
* 7-11 Curves of the efficiency to retain events with Cherenkov light ("Ring efficiency") vs. the fraction of events without Cherenkov light that can be rejected ("No-Ring Rejection Factor"), generated by considering successively larger lower bounds on \(\Delta\log\mathcal{L}\). Different curves are shown for \(\mathrm{T_{e^{-}}}\in\{1,2,3,4,5\}\) MeV.
* 7-12 Figure 7-12a shows a schematic depiction of the derivative-based pulse definition in the current CCM reconstruction. Figure 7-12b shows an example of this pulse finder in a data event, from Ref. [30].
* 7-13 Two example waveforms from CCM200 beam data. The regions identified by the derivative filter are indicated in red and the result of the fit to equation (7.7) is indicated by the purple curves.
* 7-14 Figure 7-14a shows an image of the six CosmicWatch pairs on top of the CCM detector. Figure 7-14b shows a schematic diagram of the cosmic muon trigger in CCM200.
* 7-15 Figure 7-15a shows the summed waveform across all PMTs in CCM for a single example cosmic muon trigger. The delayed signal from a Michel electron can also be seen. Figure 7-15b shows the difference in rise times between coated and uncoated PMT signals in the top and bottom halves of the barrel of the detector (labeled "sides-top" and "sides-bottom", respectively), as described in the text.
* 7-16 Schematic depiction of prompt \(\nu_{\mu}\) from \(\pi^{+}\) decay in the Lujan target upscattering to neutrissimos within shielding along the path to CCM200 and decaying to photons in the detector. The pink circle represents the TMRS shown in figure 7-1.
* 7-17 Figure 7-17a shows the distribution of background and signal prediction in CCM200 for \(\mathrm{m_{\mathcal{N}}}=20.35\) MeV and \(\mathrm{d_{\mu\mathcal{N}}}=3\times 10^{-7}\) GeV\({}^{-1}\), considering a background reduction factor of \(10^{-3}\) compared to CCM120. Figure 7-17b shows the background-subtracted plot, with a red band indicating the expected statistical uncertainty on the background.
* 7-18 Figure 7-18a shows the expected sensitivity of CCM200 to the neutrissimo model, where the blue band corresponds to a background reduction factor between \(10^{-4}\) ("CCM200 Low Background") and \(10^{-2}\) ("CCM200 High Background"). The MiniBooNE \(\mathrm{E_{\nu}^{QE}}\) allowed region (pink) and existing constraints (grey) come from Ref. [31]. Figure 7-18b shows the same plot, but considering \(\mathcal{N}\rightarrow\nu_{\tau}\gamma\) decays with \(\mathrm{d_{\tau\mathcal{N}}}=\mathrm{d_{\mu\mathcal{N}}}\mathrm{m_{\tau}}/\mathrm{m_{\mu}}\).
* 7-19 The time delay of neutrissimo single photon decays within the CCM detector for the indicated mass and coupling, as calculated using LeptonInjector.
## List of Tables
* 4.1 The definition of kinematic variables used throughout the two-body CCQE analysis.
* 4.2 The suite of variables used to isolate and analyze the 1e1p and 1\(\mu\)1p samples. Variables used in the BDT ensemble for each sample are specified. The "\(*\)" character indicates that the variable is calculated in the rest frame of the struck nucleon. The mathematical definitions of many of these variables appear in Table 4.1.
* 4.3 The specific cuts used to define the 1\(\mu\)1p and 1e1p samples. For definitions of kinematic variables, see Table 4.1.
* 5.1 Breakdown of MC events in the "background" category of figures 5-3 and 5-4 over the range 200 \(<\) E\({}_{\nu}\)\(<\) 1200 MeV. The events are partitioned both by the interaction channel and the event topology.
* 5.2 Results from goodness-of-fit tests comparing observed 1e1p data to the H\({}_{0}\) and H\({}_{1}\) predictions, reported via the \(\chi^{2}_{\text{CNP}}\) test statistic and the frequentist p-value. The top half of the table considers the nominal prediction and uncertainties before the 1\(\mu\)1p constraint described in section 5.1.3, while the bottom half considers the post-constraint prediction and uncertainties.
* 6.1 Relevant parameters of the ten most abundant nuclei in the Earth's upper crust according to [32].
## Chapter 1 Introduction
We begin with a brief primer on neutrinos, the surprises they have given physicists throughout recent history, and the mysteries that remain today. Readers already familiar with the mathematical details of massive neutrinos and the Standard Model may wish to read only section 1.1 and section 1.4 before continuing.
### 1.1 A Brief History of the Neutrino
The first indication of what would come to be known as the neutrino came from Wolfgang Pauli in 1930 [33]. Addressing the "radioactive ladies and gentlemen" of Tubingen, Germany, he appealed to the existence of new electrically neutral particles to save the law of energy conservation in nuclear beta decays. This idea was developed further by Enrico Fermi in 1934, who calculated the transition probability for \(\beta\)-decay with a neutrino in the final state [34]. Fermi's theory represents the first study of the weak interaction-the only Standard Model gauge group under which neutrinos are charged.
As the name "weak interaction" suggests, neutrinos interact very feebly with particles in the Standard Model. Thus, it wasn't until 1956 that the neutrino was observed in an experimental setting for the first time. A team of scientists from Los Alamos Scientific Laboratory, led by Frederick Reines and Clyde Cowan, detected a free neutrino from a nuclear reactor via the inverse beta decay interaction (\(\bar{\nu}_{\rm e}{\rm p}\to{\rm e}^{+}{\rm n}\)) [35, 36]. Though it was not known at the time, they had detected electron antineutrinos (\(\bar{\nu}_{\rm e}\)). Electron (anti)neutrinos represent one of the three weak-flavor eigenstates neutrinos can occupy in the Standard Model-specifically, the eigenstate that couples to the \({\rm e}^{\pm}\) charged leptons through the charged-current weak interaction. Upon confirmation of their discovery, Reines and Cowan sent the telegram shown in figure 1.1 to Pauli, alerting him of the definitive existence of the neutral particles he proposed in Tubingen.
Shortly after this, the phenomenon of neutrino oscillations-periodic transitions between different types of neutrinos-started to appear in the literature. In 1958, Bruno Pontecorvo discussed the possibility of mixing between right-handed antineutrinos \(\bar{\nu}_{\rm R}\) and "sterile" right-handed neutrinos \(\nu_{\rm R}\), in analogy with \({\rm K}^{0}\)-\(\bar{\rm K}^{0}\) mixing observed in
the quark sector [37]. A second possible source of neutrino oscillations came following the 1962 experimental discovery of a second neutrino weak-flavor eigenstate-the muon neutrino (\(\nu_{\mu}\)) [38]. After this, the notion of mixing between neutrino flavor and mass eigenstates was introduced by Ziro Maki, Masami Nakagawa, and Shoichi Sakata [39]. In a 1967 paper [40], Pontecorvo introduced the possibility of vacuum \(\nu_{\rm e}\)-\(\nu_{\mu}\) oscillations, even predicting a factor of two suppression in the total solar neutrino flux before such a deficit would actually be observed [41].
The aforementioned deficit, known as the "solar neutrino problem", was established in 1968 through a now-famous experiment at the Homestake Mine in South Dakota led by Raymond Davis [42]. Davis and his colleagues detected the capture of electron neutrinos from the sun on \({}^{37}\)Cl nuclei, allowing a measurement of the solar \(\nu_{\rm e}\) flux. Their result was only \(\sim 1/3\) of the leading prediction from John Bahcall [43]. This is shown in figure 1-2, including confirmations of the deficit following the Homestake experiment. The solution was not a mistake in the experimental measurement or theoretical prediction, as physicists expected at the time; rather, it was a deficiency in our understanding of neutrinos. This was the first piece of the puzzle that would eventually lead to the discovery of neutrino oscillations and nonzero neutrino masses.
The next piece of the puzzle came from atmospheric neutrinos, i.e. neutrinos coming from the decay of mesons created from the interactions of primary cosmic rays in the atmosphere. Around the mid-1980s, two water Cherenkov detectors, IMB-3 [44] and Kamiokande [45], began to measure the interactions of atmospheric \(\nu_{\mu}\) and \(\nu_{\rm e}\) events (initially just as a background for their main physics goal, the search for nucleon decay). The ratio of \(\nu_{\mu}:\nu_{\rm e}\) interactions was found to be lower than the theoretical expectation by a factor of \(\sim 2/3\)[46]. This was known as the "atmospheric neutrino anomaly". The source of this anomaly was not clear at the time; it could have been a deficit of muon neutrinos, an excess of electron neutrinos, or some of both. Systematic issues in the flux prediction or muon identification were also suggested [46].
Figure 1-1: Telegram from Fred Reines and Clyde Cowan informing Wolfgang Pauli of their detection of neutrinos from a nuclear reactor.
It was far from clear that neutrino oscillations could be responsible for the observed deficit.
The solution to the atmospheric neutrino anomaly came from the Super-Kamiokande (SuperK) experiment [47]. SuperK was a much larger version of the Kamiokande detector, allowing the detection of higher energy muons (up to E\({}_{\mu}\sim\) 5 GeV). SuperK also measured the up-down asymmetry of muon-like and electron-like events in their detector, (N\({}_{\text{up}}\) - N\({}_{\text{down}}\))/(N\({}_{\text{up}}\) + N\({}_{\text{down}}\)). Upward-going events have traveled a much longer distance than downward-going events before reaching the SuperK detector-thus positive detection of an asymmetry would be smoking-gun evidence for a baseline-dependent effect like neutrino oscillations. This is precisely what SuperK observed [2]. As shown in figure 1-3, an up-down asymmetry is observed in the muon-like channel, the magnitude of which increases with the observed muon momentum. Such behavior is consistent with muon neutrino oscillations to a third flavor eigenstate, \(\nu_{\tau}\) (the mathematical details of neutrino oscillations will be described in section 1.3). No such effect was observed in the electron-like channel. Thus, the atmospheric neutrino anomaly is a result of muon neutrino disappearance, specifically coming from \(\nu_{\mu}\rightarrow\nu_{\tau}\) oscillations.
The solution to the solar neutrino problem came in 2002 from the Sudbury Neutrino Observatory (SNO) [48]. The SNO experiment used a heavy water Cherenkov detector, specifically relying on the use of deuterium target nuclei to be sensitive to three different neutrino interactions,
\[\begin{array}{ll}\nu_{\text{e}}+\text{d}\rightarrow\text{p}+\text{p}+\text{ e}^{-}&\text{(CC)},\\ \nu_{\text{x}}+\text{d}\rightarrow\text{p}+\text{n}+\nu_{\text{x}}&\text{(NC)},\\ \nu_{\text{x}}+\text{e}^{-}\rightarrow\nu_{\text{x}}+\text{e}^{-}&\text{(ES)}.\end{array} \tag{1.1}\]
Charged-current (CC), neutral-current (NC), and elastic scattering (ES) interactions
Figure 1-2: The deficit of the observed solar \(\nu_{\text{e}}\) flux compared with the theoretical expectation. The Homestake experiment is shown on the far left; follow-up solar neutrino measurements confirming the deficit are also shown, including the 2002 SNO result which brought forth a solution to the solar neutrino problem. Figure from Ref. [1].
were separated based on the visible energy and scattering angle of the final state particles. NC events were further separated by tagging the 6.25 MeV photon released from neutron capture on deuterium. By measuring all three channels, SNO was able to measure the \({}^{8}\)B solar neutrino flux broken down into the \(\nu_{\rm e}\) and \(\nu_{\mu,\tau}\) components. SNO's 2002 result showed that the missing neutrinos from the Homestake experiment were in fact showing up in the \(\nu_{\mu,\tau}\) component [3]. Figure 1-4 shows the flux of each component as constrained by the measured CC, NC, and ES interaction rate. The flavor transitions here come not from vacuum oscillations but rather from matter-enhanced resonant behavior as neutrinos travel through the dense solar medium-a phenomenon known as the Mikheyev-Smirnov-Wolfenstein (MSW) effect [49, 50]. The MSW effect still, however, requires mixing between the neutrino flavor and mass eigenstate as well as non-zero squared differences between the mass eigenstates. It is worth noting here that the KamLAND reactor neutrino experiment was essential in determining the oscillation parameters which led to the SNO observation [51]. Thus, the SNO solution to the solar neutrino problem and the SuperK solution to the atmospheric neutrino anomaly were both evidence for the existence of neutrino oscillations and thus non-zero neutrino masses. The collaborations shared the 2015 Nobel Prize in physics for this discovery [52, 53].
Since SuperK and SNO, neutrino oscillations have been measured extensively by a global program of reactor, accelerator, atmospheric, and solar neutrino experiments. The mixing angle and mass-squared splittings of the three Standard Model neutrinos have been measured to few-percent-level precision in most cases [54, 55, 56]. There are a number of open questions in the standard three-neutrino mixing paradigm, including the ordering of the three mass eigenstates and the value of the charge-parity-violating complex phase \(\delta_{\rm CP}\). Though preliminary results exist on both fronts [54, 55, 56, 57, 58], definitive answers to each will come from next-generation neutrino experiments, including Hyper-K [59], DUNE [60] and JUNO [61].
### 1.2 Neutrinos in the Standard Model
The arguments and notation presented in this section follow closely from chapter 2 of Ref. [62].
The interactions of the known fundamental particles of our Universe are described by a specific quantum field theory known as the Standard Model (SM). Above the electroweak scale, there are three gauge groups contained within the SM:
* \(\rm SU(3)_{\rm c}\), which governs the gluon-mediated "strong interactions" of color-charged fields.
* \(\rm SU(2)_{\rm L}\), one part of the "electro-weak interaction", mediated by the \(\rm W^{\pm}_{\mu}\) and \(\rm W^{0}_{\mu}\) vector bosons.
* \(\rm U(1)_{\rm Y}\), the other part of the "electro-weak interaction", mediated by the \(\rm B_{\mu}\) gauge boson.
After electro-weak symmetry breaking (EWSB) via the Higgs mechanism, the \(\rm SU(2)_{\rm L}\times U(1)_{\rm Y}\) subgroup breaks down to \(\rm U(1)_{\rm Q}\), which describes the electromagnetic (EM)
Figure 1-3: The up-down asymmetry measured in SuperK as a function of lepton momentum, separated into e-like and \(\mu\)-like events as well as fully-contained (FC) and partially-contained (PC) events. The dashed line indicates the best fit to \(\nu_{\mu}\to\nu_{\tau}\) oscillations. Figure from Ref. [2].
interactions of charged fields mediated by the A\({}_{\mu}\) gauge boson, also known as the photon.
Of the three fundamental interactions of the SM, neutrinos are only charged under the weak SU(2)\({}_{\rm L}\) gauge group-they are singlets under the SU(3)\({}_{\rm c}\) and U(1)\({}_{\rm Q}\) gauge groups. Thus, neutrinos only appear in the electro-weak part of the SM Lagrangian, which is given by
\[{\cal L}=\frac{\rm g}{\sqrt{2}}({\rm J}^{\mu}{\rm W}_{\mu}^{+}+{\rm J}^{\mu \dagger}{\rm W}_{\mu}^{-})+\frac{\rm g}{\cos\theta_{\rm W}}{\rm K}^{\mu}{\rm Z} _{\mu}, \tag{1.2}\]
where g = e/\(\sin\theta_{\rm W}\) is the SU(2)\({}_{\rm L}\) gauge coupling of the W\({}_{\mu}\) and Higgs fields, \(\theta_{\rm W}\) is the Weinberg angle describing the rotation that occurs during EWSB between the neutral parts of the SU(2)\({}_{\rm L}\) and U(1)\({}_{\rm Y}\) gauge boson fields, and W\({}_{\mu}^{\pm}\) (Z\({}_{\mu}\)) is the charged (neutral) piece of SU(2)\({}_{\rm L}\) after EWSB. The currents coupled to the W\({}_{\mu}^{\pm}\) and Z\({}_{\mu}\) bosons are given by
\[\begin{split}{\rm J}^{\mu}&=\left(\overline{\rm u}^{0}\quad\overline{\rm c}^{0}\quad\overline{\rm t}^{0}\right)\gamma^{\mu}{\rm P}_{\rm L}\begin{pmatrix}{\rm d}^{0}\\ {\rm s}^{0}\\ {\rm b}^{0}\end{pmatrix}+\left(\overline{\nu_{\rm e}}\quad\overline{\nu_{\mu}}\quad\overline{\nu_{\tau}}\right)\gamma^{\mu}{\rm P}_{\rm L}\begin{pmatrix}{\rm e}\\ \mu\\ \tau\end{pmatrix}\\ {\rm K}^{\mu}&=\sum_{\rm f}\overline{\rm f}\gamma^{\mu}[{\rm I}_{3\rm L}{\rm P}_{\rm L}-\sin^{2}\theta_{\rm W}{\rm Q}_{\rm f}]{\rm f}\\ &=\sum_{\rm q}[\epsilon_{\rm L}({\rm q})\overline{\rm q}\gamma_{\mu}{\rm P}_{\rm L}{\rm q}+\epsilon_{\rm R}({\rm q})\overline{\rm q}\gamma_{\mu}{\rm P}_{\rm R}{\rm q}]\\ &\quad+\frac{1}{2}\sum_{\alpha\in\{{\rm e},\mu,\tau\}}[\overline{\nu_{\alpha}}\gamma^{\mu}{\rm P}_{\rm L}\nu_{\alpha}+\overline{\ell}_{\alpha}\gamma_{\mu}({\rm g}_{\rm V}^{\alpha}-\gamma_{5}{\rm g}_{\rm A}^{\alpha})\ell_{\alpha}],\end{split} \tag{1.3}\]
where P\({}_{\rm R}\)(P\({}_{\rm L}\)) = \((1\pm\gamma^{5})/2\) is the projection operator onto the right-handed (left-handed) chiral state, and the subscript 0 on the quark fields indicates that these are the weak flavor eigenstates rather than the mass eigenstates. The first generation coupling constants in K\({}^{\mu}\), which derive from the specified EM charge and SU(2)\({}_{\rm L}\) representation of each field, are given by
\[\begin{split}\epsilon_{\rm L}({\rm u})&=\frac{1}{2}- \frac{2}{3}\sin^{2}\theta_{\rm W}\quad\ \epsilon_{\rm R}({\rm u})=-\frac{2}{3}\sin^{2}\theta_{\rm W}\\ \epsilon_{\rm L}({\rm d})&=-\frac{1}{2}+\frac{1}{3} \sin^{2}\theta_{\rm W}\quad\epsilon_{\rm R}({\rm d})=\frac{1}{3}\sin^{2} \theta_{\rm W}\\ {\rm g_{\rm V}^{e}}&=-\frac{1}{2}+2\sin^{2}\theta_{ \rm W}\quad\quad{\rm g_{\rm A}^{e}}=-\frac{1}{2}.\end{split} \tag{1.4}\]
The Lagrangian in equation (1.2) can be used to calculate cross sections for the various SM interactions of the neutrino. The first term describes the charged-current interactions of neutrinos such as nuclear beta decay, while the second term describes neutral current interactions such as \(\nu_{\mu}\)e\({}^{-}\) elastic scattering. At energy scales below the electro-weak scale, one can integrate out the W\({}_{\mu}\) and Z\({}_{\mu}\) gauge bosons and describe
interactions in terms of the dimensional Fermi constant
\[\text{G}_{\text{F}}=\frac{\text{g}^{2}}{4\sqrt{2}\text{M}_{\text{W}}^{2}}=1.166 \times 10^{-5}\text{ GeV}^{-2}. \tag{1.5}\]
The low-energy Lagrangian describing 4-fermion interactions can be derived from equation (1.2) as
\[\mathcal{L}_{\text{4f}}=\frac{\text{-4G}_{\text{F}}}{\sqrt{2}}[\text{J}_{\mu} \text{J}^{\mu\dagger}+\text{K}_{\mu}\text{K}^{\mu}]. \tag{1.6}\]
As an example, we consider low-energy neutrino electron elastic scattering (ES) (\(\nu\text{e}^{-}\to\nu\text{e}^{-}\)). This is a purely leptonic process and is therefore relatively clean; specifically, ES models do not need to account for the complex dynamics of the nuclear medium. The Feynman diagrams for the contributing interactions are shown in figure 1-5. Both the charged-current (CC) and neutral-current (NC) diagrams contribute to \(\nu_{\text{e}}\text{e}^{-}\) scattering, while only the NC diagram contributes to \(\nu_{\mu,\tau}\text{e}^{-}\) scattering. Using the Feynman rules associated with equation (1.6), one can calculate the cross sections to be [62]
\[\begin{split}\sigma_{\nu_{\text{e}}\text{e}^{-}\to\nu_{\text{e}} \text{e}^{-}}(\text{E}_{\nu})&=\frac{\text{G}_{\text{F}}^{2} \text{m}_{\text{e}}\text{E}_{\nu}}{2\pi}\bigg{[}(2\sin^{2}\theta_{\text{W}}+1) ^{2}+\frac{4}{3}\sin^{4}\theta_{\text{W}}\bigg{]}\\ &\approx 0.9\times 10^{-43}\bigg{(}\frac{\text{E}_{\nu}}{10\text{ MeV}}\bigg{)}\text{cm}^{2}\\ \sigma_{\nu_{\mu,\tau}\text{e}^{-}\to\nu_{\mu,\tau}\text{e}^{-}}( \text{E}_{\nu})&=\frac{\text{G}_{\text{F}}^{2}\text{m}_{\text{e }}\text{E}_{\nu}}{2\pi}\bigg{[}(2\sin^{2}\theta_{\text{W}}-1)^{2}+\frac{4}{3} \sin^{4}\theta_{\text{W}}\bigg{]}\\ &\approx 0.15\times 10^{-43}\bigg{(}\frac{\text{E}_{\nu}}{10 \text{ MeV}}\bigg{)}\text{cm}^{2},\end{split} \tag{1.7}\]
which is valid for \(\text{E}_{\nu}\) >> \(\text{m}_{\text{e}}\). Similarly, one can calculate the cross section for antineutrino electron ES (\(\overline{\nu}\text{e}^{-}\to\overline{\nu}\text{e}^{-}\)). The diagrams contributing for this process are shown in figure 1-6, and the cross section is given by [62]
\[\begin{split}\sigma_{\overline{\nu}_{\text{e}}\text{e}^{-}\to\overline{\nu}_{\text{e}}\text{e}^{-}}(\text{E}_{\nu})&=\frac{\text{G}_{\text{F}}^{2}\text{m}_{\text{e}}\text{E}_{\nu}}{2\pi}\bigg{[}\frac{1}{3}(2\sin^{2}\theta_{\text{W}}+1)^{2}+4\sin^{4}\theta_{\text{W}}\bigg{]}\\ &\approx 0.378\times 10^{-43}\bigg{(}\frac{\text{E}_{\nu}}{10\text{ MeV}}\bigg{)}\text{cm}^{2}\\ \sigma_{\overline{\nu}_{\mu,\tau}\text{e}^{-}\to\overline{\nu}_{\mu,\tau}\text{e}^{-}}(\text{E}_{\nu})&=\frac{\text{G}_{\text{F}}^{2}\text{m}_{\text{e}}\text{E}_{\nu}}{2\pi}\bigg{[}\frac{1}{3}(2\sin^{2}\theta_{\text{W}}-1)^{2}+4\sin^{4}\theta_{\text{W}}\bigg{]}\\ &\approx 0.14\times 10^{-43}\bigg{(}\frac{\text{E}_{\nu}}{10\text{ MeV}}\bigg{)}\text{cm}^{2}.\end{split} \tag{1.8}\]
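As a quick numerical cross-check of equations (1.7) and (1.8), the short script below evaluates the bracketed factors and the overall prefactor at \(\mathrm{E}_{\nu}=10\) MeV. This is only an illustrative sketch: it assumes \(\sin^{2}\theta_{\mathrm{W}}\approx 0.231\) and the standard conversion \(1~\mathrm{GeV}^{-2}\approx 0.3894\times 10^{-27}~\mathrm{cm}^{2}\), so the printed values agree with the quoted coefficients only up to these input choices.

```python
import math

G_F = 1.166e-5            # Fermi constant [GeV^-2], equation (1.5)
m_e = 0.511e-3            # electron mass [GeV]
sin2w = 0.231             # sin^2(theta_W), assumed low-energy value
GEV2_TO_CM2 = 0.3894e-27  # 1 GeV^-2 in cm^2 (hbar*c conversion)

def sigma_es_cm2(E_nu_GeV, flavor="e", antineutrino=False):
    """Neutrino-electron elastic scattering cross sections of equations (1.7) and (1.8),
    valid for E_nu >> m_e; flavor "e" includes the CC piece, "mu"/"tau" are NC-only."""
    s = 2.0 * sin2w + (1.0 if flavor == "e" else -1.0)
    if antineutrino:
        bracket = s**2 / 3.0 + 4.0 * sin2w**2
    else:
        bracket = s**2 + (4.0 / 3.0) * sin2w**2
    return (G_F**2 * m_e * E_nu_GeV / (2.0 * math.pi)) * bracket * GEV2_TO_CM2

for flavor, anti in [("e", False), ("mu", False), ("e", True), ("mu", True)]:
    label = ("anti-" if anti else "") + "nu_" + flavor
    print(f"{label:10s}: {sigma_es_cm2(0.010, flavor, anti):.2e} cm^2")
# Prints roughly 0.95, 0.16, 0.40, 0.13 (x 1e-43 cm^2), close to the quoted
# 0.9, 0.15, 0.378, 0.14; the residual differences reflect the sin^2(theta_W) choice.
```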
We now turn to the interaction at the core of this thesis: neutrino-nucleon charged-current quasi-elastic (CCQE) scattering. The relevant Feynman diagrams for this process are shown in figure 1-7. Unlike ES, models of CCQE do need to account for
the nuclear environment surrounding the target nucleon. As the final state nucleon travels through the nuclear medium, it may scatter off of other nucleons and/or produce additional mesons through a process known as final state interactions (FSIs). As shown in figure 1-8, CCQE is dominant for \(\mathrm{E}_{\nu}\lesssim 1~{}\mathrm{Ge\kern-1.0ptV}\). Above this energy, nucleon resonance processes start to take over, in which Delta resonances decay to final state mesons. In the regime \(\mathrm{E}_{\nu}\gtrsim 10~{}\mathrm{Ge\kern-1.0ptV}\), neutrinos start to undergo deep inelastic scattering (DIS) off of the constituent quarks within the nucleon.
In order to calculate the CCQE cross section, one considers a theory containing nucleon degrees of freedom. The original calculation for free nucleons (i.e., not bound within a nucleus) was carried out by Llewellyn-Smith in 1972; the differential cross section as a function of the squared four-momentum transfer \(\mathrm{Q}^{2}\) is given by [63, 4]
\[\frac{\mathrm{d}\sigma}{\mathrm{d}\mathrm{Q}^{2}}=\frac{\mathrm{G}_{\mathrm{F} }^{2}\mathrm{M}^{2}|\mathrm{V}_{\mathrm{ud}}|^{2}}{8\pi\mathrm{E}_{\nu}^{2}} \bigg{[}\mathrm{A}\pm\frac{\mathrm{s-u}}{\mathrm{M}^{2}}\mathrm{B}+\frac{ \mathrm{(s-u)}^{2}}{\mathrm{M}^{4}}\mathrm{C}\bigg{]}, \tag{1.9}\]
where +(-) refers to (anti)neutrino scattering, \(\mathrm{M}\) is the nucleon mass, \(\mathrm{m}\) is the lepton mass, \(\mathrm{(s-u)}=4\mathrm{ME}_{\nu}-\mathrm{Q}^{2}-\mathrm{m}^{2}\), and \(\mathrm{A}\), \(\mathrm{B}\), and \(\mathrm{C}\) are functions of the vector, axial-vector, and pseudoscalar form factors of the nucleon (see equations 58, 59, and 60 of Ref. [4] for complete expressions). These form factors describe the composite nature of nucleons under interactions with different Lorentz structures.
For \(\mathrm{E}_{\nu}\) << \(\mathrm{M}\), the \(\nu_{\mathrm{e}}\) CCQE cross section is approximately [62]
\[\begin{split}\sigma_{\nu_{\mathrm{e}}\mathrm{n}\to\mathrm{e^{-} p}}(\mathrm{E}_{\nu})&\approx\frac{\mathrm{G}_{\mathrm{F}}^{2} \mathrm{E}_{\nu}^{2}}{\pi}(\mathrm{g}_{\mathrm{V}}^{2}+3\mathrm{g}_{\mathrm{ A}}^{2})\\ &\approx 9.75\times 10^{-42}\bigg{[}\frac{\mathrm{E}_{\nu}}{10~{} \mathrm{Me\kern-1.0ptV}}\bigg{]}^{2}~{}\mathrm{cm}^{2}.\end{split} \tag{1.10}\]
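As a sanity check on the numerical prefactor in equation (1.10), the snippet below evaluates the low-energy expression at \(\mathrm{E}_{\nu}=10\) MeV. It is a sketch only; the nucleon couplings \(\mathrm{g_{V}}=1\) and \(\mathrm{g_{A}}\approx 1.26\) are assumed inputs rather than values taken from Ref. [62].

```python
import math

G_F = 1.166e-5            # Fermi constant [GeV^-2]
GEV2_TO_CM2 = 0.3894e-27  # 1 GeV^-2 in cm^2
g_V, g_A = 1.0, 1.26      # nucleon vector and axial-vector couplings (assumed inputs)

E_nu = 0.010              # 10 MeV, well below the nucleon mass
sigma = (G_F**2 * E_nu**2 / math.pi) * (g_V**2 + 3.0 * g_A**2) * GEV2_TO_CM2
print(f"sigma(nu_e n -> e- p) ~ {sigma:.2e} cm^2")  # ~9.7e-42, cf. 9.75e-42 in eq. (1.10)
```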
In the regime \(\mathrm{E}_{\nu}\gtrsim 1~{}\mathrm{Ge\kern-1.0ptV}\), the \(\nu_{\mathrm{e}}\) and \(\nu_{\mu}\) CCQE cross sections are no longer suppressed by threshold effects and are thus the same, approximately \(10^{-38}~{}\mathrm{cm}^{2}\)[4]. This cross section is significantly larger than the elastic scattering and lower energy \(\nu_{\mathrm{e}}\) CCQE cross sections and is the dominant neutrino interaction for many accelerator-based neutrino experiments, including two at the heart of this thesis: MiniBooNE and MicroBooNE. Finally, we note that the cross section for antineutrino CCQE tends to be smaller; this will be important in chapter 6.
### 1.3 Massive Neutrinos
The arguments and notation presented in this section follow closely from section 2.5, chapter 4, and chapter 5 of Ref. [62] as well as chapter 11 of Ref. [64].
Neutrinos are massless in the SM. To see this, we will exhaust the two possible forms for a neutrino mass term in the SM Lagrangian: Dirac and Majorana. These refer to the two possible fermionic spinor representations in which neutrinos can be found. Dirac spinors in general have four complex components, or degrees of freedom, while Majorana spinors have only two. The critical question is whether
Figure 1-7: Diagrams contributing to neutrino-nucleon charged-current quasielastic scattering
Figure 1-6: Diagrams contributing to \(\overline{\nu}\)e\({}^{-}\) elastic scattering
the right-handed chiral projection of the neutrino field, \(\nu_{\rm R}\), is the same as \(\overline{\nu}_{\rm R}\), the right-handed chiral projection of the antineutrino field (Majorana case), or if it is a new degree of freedom (Dirac case).
The definition of a free Dirac fermion field is
\[\psi({\rm x})=\int\frac{{\rm d}^{3}{\rm p}}{\sqrt{(2\pi)^{3}2{\rm E}_{\rm P}}} \sum_{{\rm s}=\pm\frac{1}{2}}\Big{(}{\rm f}_{\rm s}({\bf p}){\rm u}_{\rm s}({ \bf p}){\rm e}^{-i{\bf p}\cdot{\bf x}}+\overline{{\rm f}_{\rm s}}^{\dagger}({ \bf p}){\rm v}_{\rm s}({\bf p}){\rm e}^{i{\bf p}\cdot{\bf x}}\Big{)}, \tag{1.11}\]
where \({\rm f}_{\rm s}({\bf p})\) annihilates a single particle of momentum \({\bf p}\) while \(\overline{{\rm f}_{\rm s}}^{\dagger}\) creates the corresponding antiparticle state, and \({\rm u}_{\rm s}({\bf p})\) and \({\rm v}_{\rm s}({\bf p})\) are spinors with positive and negative energy, respectively, satisfying the Dirac equations
\[\begin{split}&(\gamma^{\mu}{\rm p}_{\mu}-{\rm m}){\rm u}_{\rm s}({ \bf p})=0\\ &(\gamma^{\mu}{\rm p}_{\mu}+{\rm m}){\rm v}_{\rm s}({\bf p})=0, \end{split} \tag{1.12}\]
where \(\gamma^{\mu}\) are a set of Lorentz-indexed matrices satisfying \(\{\gamma^{\mu},\gamma^{\nu}\}\) = \(2{\rm g}^{\mu\nu}\). There are many possible representations for the \(\gamma\)-matrices. We consider the Weyl basis, in which [64]
\[\gamma_{\mu}=\begin{pmatrix}0&\sigma_{\mu}\\ \overline{\sigma}_{\mu}&0\end{pmatrix}, \tag{1.13}\]
where \(\sigma_{\mu}\) = \((1,\vec{\sigma})\), \(\overline{\sigma}_{\mu}\) = \((1,-\vec{\sigma})\), and \(\vec{\sigma}\) = \((\sigma_{1},\sigma_{2},\sigma_{3})\) are the Pauli matrices. This representation is convenient for understanding the different chiral components of the Dirac spinor \(\psi\). The Lorentz generators \({\rm S}^{\mu\nu}\equiv\frac{{\rm i}}{4}[\gamma^{\mu},\gamma^{\nu}]\) are block-diagonal in the Weyl basis, so the upper and lower two-component pieces of \(\psi\), denoted \(\psi_{\rm L}\) and \(\psi_{\rm R}\), transform independently under the Lorentz group; these are the left-handed and right-handed chiral components, respectively. Chirality is characterized by the matrix
\(\gamma^{5}\equiv{\rm i}\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\), which takes the form
\[\gamma^{5}=\begin{pmatrix}-\mathbb{1}&0\\ 0&\mathbb{1}\end{pmatrix} \tag{1.15}\]
in the Weyl basis. We can define projection operators \({\rm P}_{\rm L}=\frac{1}{2}(1-\gamma^{5})\) and \({\rm P}_{\rm R}=\frac{1}{2}(1+\gamma^{5})\) such that \({\rm P}_{\rm L}\psi\) = \(\psi_{\rm L}\) and \({\rm P}_{\rm R}\psi\) = \(\psi_{\rm R}\). It is worth noting that while the behavior of these projection operators is especially clear in the Weyl representation, they will isolate the chiral components of \(\psi\) in any representation of \(\gamma^{\mu}\).
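The chiral structure described above is easy to verify numerically. The sketch below builds the Weyl-basis \(\gamma\)-matrices of equation (1.13), confirms the Clifford algebra \(\{\gamma^{\mu},\gamma^{\nu}\}=2{\rm g}^{\mu\nu}\), checks that \(\gamma^{5}\) takes the diagonal form of equation (1.15), and shows that \({\rm P}_{\rm L}\) and \({\rm P}_{\rm R}\) pick out the upper and lower two components of a spinor. The metric signature \((+,-,-,-)\) is an assumed convention.

```python
import numpy as np

# Pauli matrices and 2x2 identity / zero blocks
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

# sigma^mu = (1, sigma), sigmabar^mu = (1, -sigma); Weyl-basis gamma^mu of equation (1.13)
sigma = [s0, s1, s2, s3]
sigmabar = [s0, -s1, -s2, -s3]
gamma = [np.block([[Z, sigma[mu]], [sigmabar[mu], Z]]) for mu in range(4)]

# Clifford algebra: {gamma^mu, gamma^nu} = 2 g^{mu nu}, with metric (+,-,-,-) assumed
g = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2.0 * g[mu, nu] * np.eye(4))

# gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3 is diag(-1,-1,+1,+1) in this basis, equation (1.15)
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
assert np.allclose(gamma5, np.diag([-1, -1, 1, 1]))

# P_L and P_R project onto the upper (psi_L) and lower (psi_R) two components
P_L = 0.5 * (np.eye(4) - gamma5)
P_R = 0.5 * (np.eye(4) + gamma5)
psi = np.arange(1, 5, dtype=complex)   # a generic 4-component spinor
print(P_L @ psi)                        # upper two components survive: [1, 2, 0, 0]
print(P_R @ psi)                        # lower two components survive: [0, 0, 3, 4]
```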
Dirac mass terms couple left-handed and right-handed chiral states. To see this, consider the Dirac equation in the Weyl basis, which takes the form [64]
\[(\gamma^{\mu}{\rm p}_{\mu}-{\rm m})\psi=\begin{pmatrix}-{\rm m}&\sigma^{\mu}{ \rm p}_{\mu}\\ \overline{\sigma}^{\mu}{\rm p}_{\mu}&-{\rm m}\end{pmatrix}\begin{pmatrix}\psi _{\rm L}\\ \psi_{\rm R}\end{pmatrix}=0. \tag{1.16}\]
It is evident that this matrix equation mixes the left-handed and right-handed components of \(\psi\). Dirac mass terms take the form \({\rm m}\psi_{\rm L}^{\dagger}\psi_{\rm R}\) and \({\rm m}\psi_{\rm R}^{\dagger}\psi_{\rm L}\), thus requiring both chiral components. After EWSB, the non-neutrino fermions in the SM acquire a Dirac mass from their interactions with the Higgs field. Neutrinos, however, do not have a right-handed chiral state in the SM; therefore, the SM cannot include a Dirac mass term for neutrinos.
Now we turn to the Majorana mass term. The expression for a Majorana field is the same as equation (1.11), subject to a condition relating particles and antiparticles. We see that the expression for \(\psi^{*}({\rm x})\) would involve \({\rm f}_{\rm S}^{\dagger}({\bf p})\), which creates a particle state, and \(\overline{\rm f}_{\rm S}({\bf p})\), which annihilates an antiparticle state. It turns out the relationship \(\psi({\rm x})=\psi^{*}({\rm x})\) is not Lorentz invariant [62]. To remedy this, we must define the conjugate Dirac field
\[\psi^{\rm C}({\rm x})\equiv\gamma_{0}{\rm C}\psi^{*}({\rm x}), \tag{1.17}\]
where the representation-dependent conjugation matrix \({\rm C}\) is defined by the equation
\[\begin{split}&\gamma_{0}{\rm C}\sigma^{*}_{\mu\nu}=-\sigma_{ \mu\nu}\gamma_{0}{\rm C},\\ &\sigma_{\mu\nu}\equiv\frac{{\rm i}}{2}[\gamma_{\mu},\gamma_{\nu}].\end{split} \tag{1.18}\]
In the Weyl representation, for example, \({\rm C}={\rm i}\gamma_{2}\gamma_{0}\). This requirement for \({\rm C}\) ensures that \(\psi^{\rm C}({\rm x})\) transforms in the same way as \(\psi({\rm x})\) under the Lorentz group [62]. The Lorentz-invariant Majorana condition specifically requires
\[\psi({\rm x})={\rm e}^{{\rm i}\theta}\psi^{\rm C}({\rm x}), \tag{1.19}\]
where \(\theta\) is an arbitrary phase, which we can take to be \(\theta\) = 0. It is important to note that this condition can only be satisfied for fields that carry no additive quantum numbers [64].
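As a small consistency check on the conjugation matrix quoted above, the sketch below verifies numerically that \({\rm C}={\rm i}\gamma_{2}\gamma_{0}\) satisfies the defining relation of equation (1.18) in the Weyl representation; the check is insensitive to the overall phase convention of C, which is an arbitrary choice here.

```python
import numpy as np

# Weyl-basis gamma matrices (equation 1.13), rebuilt here so the check is self-contained
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
sigma, sigmabar = [s0, s1, s2, s3], [s0, -s1, -s2, -s3]
gamma = [np.block([[Z, sigma[mu]], [sigmabar[mu], Z]]) for mu in range(4)]

# Conjugation matrix in the Weyl representation (overall phase convention is immaterial below)
C = 1j * gamma[2] @ gamma[0]

def sigma_munu(mu, nu):
    """sigma_{mu nu} = (i/2) [gamma_mu, gamma_nu], as in equation (1.18)."""
    return 0.5j * (gamma[mu] @ gamma[nu] - gamma[nu] @ gamma[mu])

# Defining property of equation (1.18): gamma_0 C sigma*_{mu nu} = - sigma_{mu nu} gamma_0 C
A = gamma[0] @ C
for mu in range(4):
    for nu in range(4):
        s_mn = sigma_munu(mu, nu)
        assert np.allclose(A @ s_mn.conj(), -s_mn @ A)

print("C = i*gamma_2*gamma_0 satisfies equation (1.18) in the Weyl basis")
```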
In the Weyl basis, equation (1.19) relates the left-handed and right-handed components of \(\psi(\mathrm{x})\) such that [64]
\[\psi(\mathrm{x})=\begin{pmatrix}\psi_{\mathrm{L}}\\ \mathrm{i}\sigma_{2}\psi_{\mathrm{L}}^{*}\end{pmatrix}, \tag{1.20}\]
where the number of degrees of freedom has been reduced from four to two. Since \(\mathrm{i}\sigma_{2}\psi_{\mathrm{L}}^{*}\) transforms like a right-handed spinor, we can now write mass terms of the form \(\mathrm{im}\,\psi_{\mathrm{L}}^{\mathrm{T}}\sigma_{2}\psi_{\mathrm{L}}\) and \(\mathrm{im}\,\psi_{\mathrm{L}}^{\dagger}\sigma_{2}\psi_{\mathrm{L}}^{*}\). These are Majorana mass terms. Note that they couple the same chiral component of the fermion.
The impossibility of a neutrino Majorana mass term is a bit more nuanced. Majorana mass terms for neutrinos in the SM contain the bi-linear expression \(\nu_{\mathrm{L}}^{\mathrm{T}}\sigma_{2}\nu_{\mathrm{L}}\). However, \(\nu_{\mathrm{L}}\) belongs to an \(\mathrm{SU(2)}_{\mathrm{L}}\) doublet in the SM, thus this Majorana mass term transforms as a triplet under \(\mathrm{SU(2)}_{\mathrm{L}}\). It also breaks lepton number by two units, hence it also violates baryon minus lepton number (B - L), which is conserved to all orders of the SM gauge couplings [62]. Therefore, neutrinos also cannot have a Majorana mass term in the SM.
Despite these arguments, neutrino oscillations have given physicists definitive evidence that at least two of the three SM neutrino masses are nonzero (as discussed in section 1.1). This requires the presence of physics beyond the Standard Model (BSM). The minimal extension of the SM which can accommodate nonzero neutrino masses introduces additional right-handed neutrino states \(\mathrm{N_{R}}\)[62, 64]. These fields, which are singlets under the SM gauge group, can generate both Dirac and Majorana mass terms for neutrinos. The most general expression for the neutrino mass Lagrangian is then
\[-\mathcal{L}_{\mathrm{mass}}=\frac{1}{2}\left(\overline{\nu}_{\mathrm{L}}\quad\overline{\mathrm{N}_{\mathrm{L}}^{\mathrm{C}}}\right)\begin{pmatrix}0&\mathrm{M}\\ \mathrm{M}^{\mathrm{T}}&\mathrm{B}\end{pmatrix}\begin{pmatrix}\nu_{\mathrm{R}}^{\mathrm{C}}\\ \mathrm{N}_{\mathrm{R}}\end{pmatrix}+\mathrm{h.c.}, \tag{1.21}\]
where M and B are the Dirac and Majorana mass matrices of the neutrino sector, respectively, and \(\nu_{\mathrm{L}}\) and \(\mathrm{N_{R}}\) are column vectors containing the left-handed and right-handed projections of each neutrino generation.
In order to obtain the mass eigenstates of this theory, one must diagonalize the mass matrix in equation (1.21). If we assume one generation of neutrinos, the eigenvalues of this mass matrix are
\[\mathrm{m_{1,2}}=\frac{1}{2}(\sqrt{\mathrm{B}^{2}+4\mathrm{M}^{2}}\mp\mathrm{ B}). \tag{1.22}\]
In the limit B >> M, the eigenvalues are approximately given by
\[\mathrm{m_{1}}\approx\frac{\mathrm{M}^{2}}{\mathrm{B}},\quad\mathrm{m_{2}} \approx\mathrm{B}. \tag{1.23}\]
This is the famous "seesaw mechanism" for neutrino mass generation [65]. One can see that if B is at roughly the GUT scale (\(10^{16}\) GeV) and M is at roughly the electro-weak scale (\(100\) GeV), we see that \(\mathrm{m_{1}}\) < 1 eV. This is the right order-of-magnitude regime predicted by neutrino oscillation data and is consistent with existing upper bounds on the neutrino mass from KATRIN [66]. Thus, this model is an elegant explanation
of the observed neutrino oscillation phenomenon, though experimental confirmation of right-handed neutrino fields at the GUT scale is probably not feasible for quite a long time.
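To make the orders of magnitude above concrete, the following sketch evaluates the one-generation eigenvalues of equation (1.22) for the illustrative (assumed) inputs B = \(10^{16}\) GeV and M = 100 GeV and compares them with the seesaw approximation of equation (1.23). The small eigenvalue is computed in an algebraically equivalent form to avoid floating-point cancellation.

```python
import math

M = 100.0     # Dirac mass near the electro-weak scale [GeV] (illustrative input)
B = 1.0e16    # Majorana mass near the GUT scale [GeV] (illustrative input)

# Exact eigenvalues of the one-generation matrix [[0, M], [M, B]], equation (1.22).
# m1 is written as 2M^2 / (sqrt(B^2 + 4M^2) + B), which equals (sqrt(B^2 + 4M^2) - B)/2
# algebraically but avoids catastrophic cancellation when B >> M.
root = math.sqrt(B**2 + 4.0 * M**2)
m1 = 2.0 * M**2 / (root + B)
m2 = 0.5 * (root + B)

print(f"exact:  m1 = {m1:.3e} GeV, m2 = {m2:.3e} GeV")
print(f"seesaw: m1 ~ M^2/B = {M**2 / B:.3e} GeV, m2 ~ B = {B:.1e} GeV")
print(f"m1 = {m1 * 1e9:.1e} eV")   # ~1e-3 eV, comfortably below the ~1 eV scale quoted above
```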
While we do not know the mechanism through which neutrinos acquire mass, it is relevant to ask whether the resulting mass terms are Dirac or Majorana in nature. An extensive worldwide experimental program is currently underway to answer this question by searching for neutrino-less double beta decay, a rare decay process in which a nucleus undergoes two simultaneous beta decays without emitting any neutrinos in the final state [67, 68, 69]. A positive observation would imply that neutrinos are Majorana.
As discussed in section 1.1, perhaps the most famous consequence of massive neutrinos is the phenomenon of neutrino oscillations [37, 39, 40]. This arises because the three weak flavor eigenstates \(\nu_{\alpha}\) are not aligned with the three mass eigenstates \(\nu_{\text{i}}\). The two bases are related by the unitary Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix \(\text{U}_{\alpha\text{i}}\),
\[\begin{pmatrix}\nu_{\text{e}}\\ \nu_{\mu}\\ \nu_{\tau}\end{pmatrix}=\begin{pmatrix}\text{U}_{\text{e}1}&\text{U}_{\text{ e}2}&\text{U}_{\text{e}3}\\ \text{U}_{\mu 1}&\text{U}_{\mu 2}&\text{U}_{\mu 3}\\ \text{U}_{\tau 1}&\text{U}_{\tau 2}&\text{U}_{\tau 3}\end{pmatrix} \begin{pmatrix}\nu_{1}\\ \nu_{2}\\ \nu_{3}\end{pmatrix}. \tag{1.24}\]
As seen in equation (1.2), neutrinos interact in the weak flavor eigenstates \(\nu_{\alpha}\). Thus, a neutrino produced alongside a charged anti-lepton \(\overline{\ell}\) is in the state
\[\ket{\nu(\text{t}=0)}=\ket{\nu_{\ell}}=\sum_{\text{i}\in\{1,2,3\}}\text{U}_{ \ell\text{i}}\ket{\nu_{\text{i}}}. \tag{1.25}\]
Neutrinos propagate, however, in their mass eigenstates. Each mass eigenstate \(\nu_{\text{i}}\) is associated with a mass \(\text{m}_{\text{i}}\) and four-momentum \((\text{p}_{\text{i}})_{\mu}\) = \((\text{E}_{\text{i}},\vec{\text{p}_{\text{i}}})\) satisfying the on-shell requirement \((\text{p}_{\text{i}})^{2}\) = \(\text{m}_{\text{i}}^{2}\). Thus, after a time t, the neutrino will be in the state
\[\ket{\nu(\text{t})}=\sum_{\text{i}}\text{e}^{-\text{i}\text{p}_{\text{i}}\cdot \text{x}}\text{U}_{\ell\text{i}}\ket{\nu_{\text{i}}}. \tag{1.26}\]
The overlap with a different weak flavor eigenstate \(\nu_{\ell^{\prime}}\neq\nu_{\ell}\) is non-trivial, given by the expression
\[\begin{split}\bra{\nu_{\ell^{\prime}}}\ket{\nu(\text{t})}& =\sum_{\text{i},\text{j}}\bra{\nu_{\text{j}}}\mathrm{U}_{\text{j} \ell^{\prime}}^{\dagger}\text{e}^{-\text{i}\text{p}_{\text{i}}\cdot\text{x}} \text{U}_{\ell\text{i}}\ket{\nu_{\text{i}}}\\ &=\sum_{\text{i}}\text{e}^{-\text{i}\text{p}_{\text{i}}\cdot \text{x}}\text{U}_{\ell\text{i}}\text{U}_{\ell^{\prime}\text{i}}^{*},\end{split} \tag{1.27}\]
where we have invoked the orthonormality of the mass basis in the last line. The probability of finding a neutrino in flavor eigenstate \(\nu_{\ell^{\prime}}\) given an initial \(\nu_{\ell}\) state is
then
\[\begin{split}\mathrm{P}_{\nu_{\ell}\to\nu_{\ell^{\prime}}}(\mathrm{t})&=|\bra{\nu_{\ell^{\prime}}}\!\ket{\nu(\mathrm{t})}|^{2}\\ &=\sum_{\mathrm{i},\mathrm{j}}|\mathrm{U}_{\ell\mathrm{i}}\mathrm{U}_{\ell^{\prime}\mathrm{i}}^{*}\mathrm{U}_{\ell\mathrm{j}}^{*}\mathrm{U}_{\ell^{\prime}\mathrm{j}}|\,\mathrm{e}^{-\mathrm{i}(\mathrm{p}_{\mathrm{i}}-\mathrm{p}_{\mathrm{j}})\cdot\mathrm{x}+\mathrm{i}\phi_{\ell\ell^{\prime}\mathrm{ij}}}.\end{split} \tag{1.28}\]
where \(\phi_{\ell\ell^{\prime}\mathrm{ij}}\equiv\arg(\mathrm{U}_{\ell\mathrm{i}}\mathrm{U}_{\ell^{\prime}\mathrm{i}}^{*}\mathrm{U}_{\ell\mathrm{j}}^{*}\mathrm{U}_{\ell^{\prime}\mathrm{j}})\).
We now make a simplifying assumption, in which all neutrino mass eigenstates propagate with the same momentum, i.e. \(\vec{\mathrm{p}_{\mathrm{i}}}=\vec{\mathrm{p}_{\mathrm{j}}}\equiv\vec{ \mathrm{p}}\forall\mathrm{i},\mathrm{j}\). This treatment is not necessarily physical. However, for the parameters relevant to most laboratory neutrino experiments, it leads to the same result as the correct but complicated full treatment of the quantum mechanical neutrino wave packet [70]. Given this assumption along with the approximation that \(\mathrm{m_{i}}<<\mathrm{p_{i}}\) (which should hold for all existing and near-future experiments), we can show
\[\begin{split}(\mathrm{p_{i}}-\mathrm{p_{j}})\cdot\mathrm{x}& =(\mathrm{E_{i}}-\mathrm{E_{j}})\mathrm{t}\\ &=\Big{(}\sqrt{\vec{\mathrm{p}}^{2}+\mathrm{m_{i}^{2}}}-\sqrt{ \vec{\mathrm{p}}^{2}+\mathrm{m_{j}^{2}}}\Big{)}\mathrm{t}\\ &\approx\frac{\Delta\mathrm{m_{ij}^{2}}\mathrm{t}}{2|\vec{ \mathrm{p}}|},\end{split} \tag{1.29}\]
where \(\Delta\mathrm{m_{ij}^{2}}=\mathrm{m_{i}^{2}}-\mathrm{m_{j}^{2}}\). Working in natural units (c = h = 1), we note that ultra-relativistic neutrinos satisfy \(|\vec{\mathrm{p}}|\approx\mathrm{E}\) and \(\mathrm{t}\approx\mathrm{L}\), where \(\mathrm{L}\) is the distance traveled by the neutrino. Taking only the real part of the exponential in equation (1.28), we have
\[\mathrm{P}_{\nu_{\ell}\to\nu_{\ell^{\prime}}}(\mathrm{t})=\sum_{\mathrm{i}, \mathrm{j}}|\mathrm{U}_{\ell\mathrm{i}}\mathrm{U}_{\ell^{\prime}\mathrm{i}}^{ *}\mathrm{U}_{\ell\mathrm{j}}^{*}\mathrm{U}_{\ell^{\prime}\mathrm{j}}|\cos \Big{(}\frac{\Delta\mathrm{m_{ij}^{2}}\mathrm{L}}{2\mathrm{E}}-\phi_{\ell\ell^ {\prime}\mathrm{i}\mathrm{j}}\Big{)}. \tag{1.30}\]
If we consider a two-neutrino paradigm, the unitary mixing matrix is real and can be parameterized by a single "mixing angle" \(\theta\),
\[\mathrm{U}\equiv\begin{pmatrix}\mathrm{U}_{\ell 1}&\mathrm{U}_{\ell 2}\\ \mathrm{U}_{\ell^{\prime}1}&\mathrm{U}_{\ell^{\prime}2}\end{pmatrix}=\begin{pmatrix} \cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}. \tag{1.31}\]
In this scenario, summing over the two mass eigenstates as in equation (1.30) gives
\[\mathrm{P}_{\nu_{\ell}\to\nu_{\ell^{\prime}}}(\mathrm{t})=\sin^{2}2\theta\sin^ {2}\Big{(}\frac{\Delta\mathrm{m^{2}}\mathrm{L}}{4\mathrm{E}}\Big{)}. \tag{1.32}\]
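In practical units, the oscillation phase in equation (1.32) is commonly written as \(1.27\,\Delta\mathrm{m}^{2}[\mathrm{eV}^{2}]\,\mathrm{L}[\mathrm{km}]/\mathrm{E}[\mathrm{GeV}]\). The sketch below evaluates the two-flavor appearance probability for two illustrative (assumed, not fitted) parameter points: a short-baseline-like point with \(\Delta\mathrm{m}^{2}=1\) eV\({}^{2}\) and small mixing, and an atmospheric-scale point near its first oscillation maximum.

```python
import math

def p_appearance(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor appearance probability of equation (1.32), with the phase written in
    practical units: Delta m^2 L / (4E) -> 1.27 * dm2[eV^2] * L[km] / E[GeV]."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return sin2_2theta * math.sin(phase) ** 2

# Illustrative short-baseline-like point (assumed values): dm2 = 1 eV^2, sin^2(2theta) = 3e-3,
# L = 0.5 km, E = 0.5 GeV, i.e. L/E ~ 1 m/MeV
print(p_appearance(3e-3, 1.0, 0.5, 0.5))      # ~2.7e-3

# Atmospheric-scale point near the first oscillation maximum: dm2 = 2.5e-3 eV^2, maximal mixing
print(p_appearance(1.0, 2.5e-3, 500.0, 1.0))  # ~1
```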
The extension to the standard three neutrino paradigm can be found in any text on neutrino oscillations. We quote the result here. Three mass eigenstates lead to two independent mass-squared splittings, \(\Delta\mathrm{m_{12}^{2}}\) and \(\Delta\mathrm{m_{23}^{2}}\). The mixing matrix in equation (1.24) can be parameterized by three real mixing angles \(\theta_{\mathrm{i}\mathrm{j}}\) and one complex
CP-violating phase \(\delta\),
\[\mathrm{U}=\begin{pmatrix}1&0&0\\ 0&\mathrm{c}_{23}&\mathrm{s}_{23}\\ 0&-\mathrm{s}_{23}&\mathrm{c}_{23}\end{pmatrix}\begin{pmatrix}\mathrm{c}_{13}&0& \mathrm{s}_{13}\mathrm{e}^{-\mathrm{i}\delta}\\ 0&1&0\\ -\mathrm{s}_{13}\mathrm{e}^{\mathrm{i}\delta}&0&\mathrm{c}_{13}\end{pmatrix} \begin{pmatrix}\mathrm{c}_{12}&\mathrm{s}_{12}&0\\ -\mathrm{s}_{12}&\mathrm{c}_{12}&0\\ 0&0&1\end{pmatrix} \tag{1.33}\]
where \(\mathrm{c}_{\mathrm{i}\mathrm{j}}\equiv\cos\theta_{\mathrm{i}\mathrm{j}}\) and \(\mathrm{s}_{\mathrm{i}\mathrm{j}}\equiv\sin\theta_{\mathrm{i}\mathrm{j}}\). The three mixing angles (\(\theta_{12}\), \(\theta_{13}\), \(\theta_{23}\)) and two relevant mass squared splittings \(\Delta\mathrm{m}^{2}_{12}\) and \(|\Delta\mathrm{m}^{2}_{23}|\) have been measured to a precision of \(\mathcal{O}(1\%\)-\(10\%)\) over the past two decades [54, 55, 56]. An extensive experimental program is planned to measure \(\delta\) to similar precision, as well as the neutrino hierarchy (i.e., the sign of \(\Delta\mathrm{m}^{2}_{23}\)) and the octant of \(\theta_{23}\)[71].
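The parameterization in equation (1.33) is straightforward to implement. The sketch below builds U from the three rotations, using illustrative angles of roughly the measured size (not the values of any particular global fit), and checks unitarity.

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta):
    """PMNS matrix built as in equation (1.33): U = R23 * U13(delta) * R12."""
    c12, s12 = np.cos(theta12), np.sin(theta12)
    c13, s13 = np.cos(theta13), np.sin(theta13)
    c23, s23 = np.cos(theta23), np.sin(theta23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return R23 @ U13 @ R12

# Illustrative angle values of roughly the measured size (radians); delta is set arbitrarily
U = pmns(theta12=0.59, theta13=0.15, theta23=0.84, delta=1.2)

assert np.allclose(U @ U.conj().T, np.eye(3))  # unitarity
print(np.round(np.abs(U), 3))                   # magnitudes |U_{alpha i}|
```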
### 1.4 Anomalies in the Neutrino Sector
Despite the success of the three-neutrino mixing paradigm, several anomalous results have appeared. Perhaps the most famous of these is the excess of \(\bar{\nu}_{\mathrm{e}}\) candidate events observed by the Liquid Scintillator Neutrino Detector (LSND) experiment [5]. LSND took data at Los Alamos Meson Physics Facility (LAMPF) from 1993-1998, observing neutrino interactions from a high-intensity decay-at-rest (DAR) source. The LSND detector was a 167-ton cylindrical tank of mineral oil that collected scintillation and Cherenkov light produced in neutrino interactions. The LAMPF accelerator provided a \(\sim 1\) mA beam of 798 MeV protons, which were then focused on a water or high-Z target. This process created a large number of pions, which then decayed to produce neutrinos. Most \(\pi^{-}\) came to rest and were captured by nuclei in and around the target, and the \(\pi^{+}\to\nu_{\mathrm{e}}\mathrm{e}^{+}\) decay chain is helicity-suppressed due to the interplay between angular momentum conservation and the left-chiral nature of the weak interaction. Thus the dominant neutrino production process was \(\pi^{+}\to\nu_{\mu}(\mu^{+}\to\bar{\nu}_{\mu}\nu_{\mathrm{e}}\mathrm{e}^{+})\).
LSND looked specifically for \(\bar{\nu}_{\mu}\to\bar{\nu}_{\mathrm{e}}\) conversion using \(\bar{\nu}_{\mu}\) from \(\mu^{+}\) DAR. The \(\bar{\nu}_{\mathrm{e}}\) events were observed via the inverse beta decay (IBD) process. This is a very clean channel, as one can require a coincidence between the initial positron emission and the subsequent neutron capture on hydrogen, which releases a characteristic 2.2 MeV photon. The intrinsic \(\bar{\nu}_{\mathrm{e}}\) flux, coming predominately from \(\pi^{-}\) decay-in-flight (DIF), was suppressed compared to intrinsic \(\bar{\nu}_{\mu}\) by a factor of \(\sim 8\times 10^{-4}\). Any significant excess of \(\bar{\nu}_{\mathrm{e}}\) would be evidence for \(\bar{\nu}_{\mu}\to\bar{\nu}_{\mathrm{e}}\) oscillations. This is exactly what LSND observed, as shown in figure 1-9. However, the neutrino energies of \(\mathcal{O}(30)\) MeV and baselines of \(\mathcal{O}(30)\) m required a mass-squared splitting of \(\Delta\mathrm{m}^{2}\sim 1\) eV\({}^{2}\). This is larger than the measured values of \(\Delta\mathrm{m}^{2}_{12}\) and \(|\Delta\mathrm{m}^{2}_{23}|\) by at least three orders of magnitude-therefore, the LSND result cannot be explained by the standard three neutrino oscillation paradigm. One must introduce a fourth neutrino, beyond the three of the SM, in order to facilitate such oscillations. Measurements of the invisible width of the Z boson forbid this neutrino from coupling to the weak force in the same way as the three SM neutrinos [72]. Thus, this fourth neutrino is typically referred to as a "sterile neutrino" (\(\nu_{\mathrm{s}}\)). The sterile neutrino paradigm will be introduced in more detail in section 1.4 and discussed thoroughly throughout this thesis. The LSND anomaly
is currently under direct investigation by the follow-up JSNS[2] experiment [73, 74], which will use a gadolinium-loaded liquid scintillator detector [75] to measure IBD interactions at the J-PARC Materials and Life Science Experimental Facility.
The Mini Booster Neutrino Experiment (MiniBooNE) was designed to follow up on the LSND anomaly [76]. MiniBooNE took data at Fermilab's Booster Neutrino Beam (BNB) from 2002-2019, observing the interactions of neutrinos with energy E \(\sim\) 500 MeV in an 800-metric-ton mineral oil (CH\({}_{2}\)) detector [15]. The Fermilab Booster accelerates protons to a kinetic energy of 8 GeV, at which point they collide with the beryllium target of the BNB. This produces a cascade of mesons, predominately pions. The charged mesons are focused using a magnetic horn and decay in a 50 m decay pipe; in the nominal "neutrino mode", the magnetic field is generated to create a flux of mostly muon neutrinos from \(\pi^{+}\) decay-in-flight [14]. The current in the magnetic horns can be reversed to instead focus \(\pi^{-}\) along the beamline, thus creating a beam of mostly muon antineutrinos-this is referred to as "antineutrino mode". MiniBooNE was situated at a baseline of L \(\sim\) 500 m from the BNB target, resulting in a similar characteristic L/E as that of LSND, \(\approx\) 1 m/MeV. By equation (1.30), this means MiniBooNE would also be sensitive to oscillations at \(\Delta\)m\({}^{2}\sim\) 1 eV\({}^{2}\).
In 2007, MiniBooNE began to report results from their flagship analysis: the search for an excess of \(\nu_{\rm e}\) events in the BNB [76]. MiniBooNE relied primarily on the reconstruction of Cherenkov light from charged final state particles to identify neutrino interactions. Thus, \(\nu_{\rm e}\) CC interactions would show up as a "fuzzy" Cherenkov ring due to multiple scattering of the electron as well as the induced EM shower [77].
Figure 1-9: The LSND excess of \(\overline{\nu}_{\rm e}\) events on top of the predicted SM background (green and red regions). The blue region indicates the best fit to \(\overline{\nu}_{\mu}\rightarrow\overline{\nu}_{\rm e}\) oscillations via a sterile neutrino state. Figure from Ref. [5].
These fuzzy Cherenkov ring events are hereafter referred to as "electron-like" events. Starting with the initial results [76, 78], MiniBooNE has consistently observed an excess of electron-like events above their expected SM background, the significance of which has grown over the 17-year data-taking campaign of the experiment [10]. Figure 1-10 shows the \(4.8\sigma\) MiniBooNE electron-like excess considering the total neutrino mode dataset, corresponding to \(18.75\times 10^{20}\) protons-on-target (POT) [10]. A similar excess was observed in the antineutrino mode dataset [79]. The as-yet-unexplained MiniBooNE excess represents one of the most significant disagreements with the SM to date.
Though the origin of the MiniBooNE excess remains unknown, neutrino physicists have converged on a number of potential explanations. The most famous explanation involves sterile neutrino-driven \(\nu_{\mu}\rightarrow\nu_{\rm e}\) oscillations consistent with the LSND result (\(\Delta\)m\({}^{2}\sim 1\) eV\({}^{2}\)). While this model can explain at least some of the MiniBooNE excess, the excess in the lowest energy region (E\({}_{\nu}\lesssim 400\) MeV) sits above even the best-fit sterile neutrino solution. Due to the Cherenkov nature of the detector, electrons and photons are essentially indistinguishable: both seed EM showers which appear as fuzzy Cherenkov rings. Thus, the MiniBooNE excess could also come from a mismodeled photon background. Though not the subject of this thesis, there have been extensive experimental and theoretical efforts, both within and outside of the MiniBooNE collaboration, to validate the MiniBooNE SM photon background prediction [10, 80, 81, 82]. One can also consider BSM sources of electron-like events in MiniBooNE. Typical models introduce additional sources of photons and/or e\({}^{+}\)e\({}^{-}\) events in MiniBooNE through couplings to new dark sector particles. Resolution of the LSND and MiniBooNE anomalies, often referred to as the short baseline (SBL) anomalies, is a major goal within the particle physics community [83]. This thesis specifically investigates the MiniBooNE anomaly in further detail, covering both experimental and phenomenological studies into the origin of the excess.
Figure 1-10: The MiniBooNE electron-like channel data and SM background prediction for the entire neutrino mode dataset, as a function of the reconstructed neutrino energy.
We now briefly touch on two additional classes of anomalies that have surfaced over the years: the reactor antineutrino anomaly (RAA) and the gallium anomaly. The RAA [8] is a \(\sim 5\%\) deficit in the total \(\overline{\nu}_{\rm e}\) rate observed from nuclear reactors compared to the theoretical expectation from the Huber-Mueller (HM) model [84, 85]. The HM model combines results using the summation method (summing the contributions of all beta-decay branches in the reactor) and the conversion method (converting older measurements of the aggregate fission \(\beta\) spectra of the fissionable isotopes in the reactor into \(\overline{\nu}_{\rm e}\) spectra). The data contributing to the RAA mostly come from reactor neutrino experiments operating at baselines short enough that the effects of SM neutrino oscillations are negligible. One can interpret the RAA as due to \(\overline{\nu}_{\rm e}\) disappearance via oscillations involving a sterile neutrino. Coincidentally, due to the relevant neutrino energies and baselines, such a solution requires \(\Delta{\rm m}^{2}\gtrsim 1\) eV\({}^{2}\), similar to the LSND and MiniBooNE solution [6]. Figure 1-11 shows an overview of the RAA circa 2012, including the suite of short baseline reactor experiments which observe a deficit with respect to the HM model with SM neutrino oscillations (red line), as well as an example sterile neutrino solution to the RAA (blue line). Recently, the reactor \(\overline{\nu}_{\rm e}\) flux calculation has been revisited by various groups, each of which improves upon some aspect of the summation or conversion method used in the HM flux model [86, 87, 88, 89]. The significance of the RAA either diminishes or disappears in some of these models; however, these improved models have difficulty removing the RAA while also explaining the "5-MeV bump" observed by most short baseline reactor experiments with respect to the HM model [89]. Thus, while the RAA story is quickly evolving, our understanding of reactor neutrino fluxes is far from clear.
The gallium anomaly refers to a series of gallium-based detectors that have observed a deficit of \(\nu_{\rm e}\) capture events on \({}^{71}\)Ga with respect to the theoretical expectation. The original harbingers of the anomaly, SAGE [90] and GALLEX [91], were designed to measure solar neutrinos using the \(\nu_{\rm e}+{}^{71}\mathrm{Ga}\rightarrow{}^{71}\mathrm{Ge}+\mathrm{e}^{-}\) capture process. Each detector was calibrated using electron capture \(\nu_{\rm e}\) sources, including \({}^{51}\)Cr and \({}^{37}\)Ar. Combining all available calibration data across both experiments, the ratio of the observed to the expected \({}^{71}\)Ge production rate was \(0.87\pm 0.05\)[90]. Though the statistical significance of the anomaly was only modest (\(2\,\)-\(\,3\sigma\)), the community was already beginning to interpret the anomaly as \(\nu_{\rm e}\rightarrow\nu_{\rm s}\) transitions via an eV-scale sterile neutrino [92]. A follow-up experiment to the SAGE and GALLEX
Figure 1-11: Data contributing to the reactor antineutrino anomaly, indicating the \(\sim 5\%\) flux deficit observed by short-baseline reactor neutrino experiments. The red line indicates the prediction incorporating SM neutrino oscillations only, while the blue line shows an example prediction including a sterile neutrino. Figure from Ref. [6].
anomaly, BEST [9], released their first results in 2021. BEST placed a 3.414 MCi \({}^{51}\)Cr \(\nu_{\rm e}\) source at the center of two nested \({}^{71}\)Ga volumes, each with a different average distance from the source. The ratio of the observed to the predicted \({}^{71}\)Ge production rate was R\({}_{\rm in}\) = \(0.79\pm 0.05\) (R\({}_{\rm out}\) = \(0.77\pm 0.05\)) for the inner (outer) volume, thus reaffirming the gallium anomaly [9]. No evidence was observed for a difference in the deficit between the inner and outer volumes, which would have been a smoking-gun signature of a baseline-dependent effect such as \(\nu_{\rm e}\rightarrow\nu_{\rm s}\) oscillations. However, the statistical significance of the gallium anomaly is now much stronger; the combined SAGE, GALLEX, and BEST results give evidence for a deficit at the \(5.0\sigma\) level [7]. The datasets contributing to this anomaly are summarized in figure 1-12.
As alluded to above, the most common BSM interpretation of the SBL, reactor antineutrino, and gallium anomalies is the "3+1 model", which involves the addition of a new neutrino state, the sterile neutrino, at the eV scale. The sterile neutrino introduces a fourth weak interaction eigenstate \(\nu_{\rm s}\) and mass eigenstate \(\nu_{4}\) to the standard three-neutrino mixing paradigm. Thus, equation (1.24) becomes
\[\begin{pmatrix}\nu_{\rm e}\\ \nu_{\mu}\\ \nu_{\tau}\\ \nu_{\rm s}\end{pmatrix}=\begin{pmatrix}\rm U_{e1}&\rm U_{e2}&\rm U_{e3}&\rm U_{e4}\\ \rm U_{\mu 1}&\rm U_{\mu 2}&\rm U_{\mu 3}&\rm U_{\mu 4}\\ \rm U_{\tau 1}&\rm U_{\tau 2}&\rm U_{\tau 3}&\rm U_{\tau 4}\\ \rm U_{s1}&\rm U_{s2}&\rm U_{s3}&\rm U_{s4}\end{pmatrix}\begin{pmatrix}\nu_{1}\\ \nu_{2}\\ \nu_{3}\\ \nu_{4}\end{pmatrix}. \tag{1.34}\]
As we are interested in an eV-scale sterile neutrino, the mass-squared splittings between the three active neutrinos are smaller by at least 2-3 orders of magnitude
Figure 1-12: Data contributing to the gallium anomaly, indicating the \(\sim 20\%\) deficit in the \({}^{71}\)Ge production rate observed by SAGE, GALLEX, and BEST. Figure from Ref. [7].
compared to their mass-squared splittings with the fourth mass eigenstate. This means that the active neutrino mass splittings are negligible for short-baseline experiments, i.e. those in which the argument of the second \(\sin^{2}\) term in equation (1.32) is small. Experiments contributing to the aforementioned anomalies all satisfy this condition. Thus, when considering sterile neutrino explanations for these anomalies, we can make the approximation
\[\Delta\mathrm{m}_{41}^{2}\approx\Delta\mathrm{m}_{42}^{2}\approx\Delta\mathrm{ m}_{43}^{2}\equiv\Delta\mathrm{m}^{2}, \tag{1.35}\]
where we hereafter use \(\Delta\mathrm{m}^{2}\) to refer to the mass-squared splitting of the fourth mass eigenstate. This approximation holds regardless of the hierarchy of SM neutrino mass eigenstates.
The experiments discussed in this thesis are sensitive only to \(\overset{(-)}{\nu}_{\mathrm{e}}\) and \(\overset{(-)}{\nu}_{\mu}\) interactions. The sterile neutrino can facilitate short-baseline oscillations between these flavor states; the oscillation probability expressions, which can be derived using equation (1.30) within the 3+1 framework, are given by [93]
\[\begin{split}\mathrm{P}_{\nu_{\mathrm{e}}\to\nu_{\mathrm{e}}}&=1-\sin^{2}2\theta_{\mathrm{ee}}\sin^{2}(1.27\Delta\mathrm{m}^{2}\mathrm{L/E})\\ \mathrm{P}_{\nu_{\mu}\to\nu_{\mu}}&=1-\sin^{2}2\theta_{\mu\mu}\sin^{2}(1.27\Delta\mathrm{m}^{2}\mathrm{L/E})\\ \mathrm{P}_{\nu_{\mu}\to\nu_{\mathrm{e}}}&=\sin^{2}2\theta_{\mu\mathrm{e}}\sin^{2}(1.27\Delta\mathrm{m}^{2}\mathrm{L/E}),\end{split} \tag{1.36}\]
where \(\Delta\mathrm{m}^{2}\), \(\mathrm{L}\), and \(\mathrm{E}\) are in units of \(\mathrm{eV}^{2}\), \(\mathrm{km}\), and \(\mathrm{GeV}\), respectively, and
\[\begin{split}\sin^{2}2\theta_{\mathrm{ee}}&=4(1-| \mathrm{U}_{\mathrm{e}4}|^{2})|\mathrm{U}_{\mathrm{e}4}|^{2}\\ \sin^{2}2\theta_{\mu\mu}&=4(1-|\mathrm{U}_{\mu 4}|^{2})| \mathrm{U}_{\mu 4}|^{2}\\ \sin^{2}2\theta_{\mu\mathrm{e}}&=4|\mathrm{U}_{\mu 4}|^{2}| \mathrm{U}_{\mathrm{e}4}|^{2}.\end{split} \tag{1.37}\]
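For concreteness, the short sketch below evaluates the three short-baseline probabilities of equations (1.36)-(1.37) for a given pair of matrix elements \(|\mathrm{U}_{\mathrm{e}4}|^{2}\) and \(|\mathrm{U}_{\mu 4}|^{2}\). The function name and the example inputs are ours and purely illustrative; they are not fit results from LSND or MiniBooNE.

```python
import numpy as np

def sbl_probabilities(Ue4_sq, Umu4_sq, dm2_ev2, L_km, E_GeV):
    """Short-baseline 3+1 oscillation probabilities, following Eqs. (1.36)-(1.37)."""
    sin2_2th_ee   = 4.0 * (1.0 - Ue4_sq) * Ue4_sq
    sin2_2th_mumu = 4.0 * (1.0 - Umu4_sq) * Umu4_sq
    sin2_2th_mue  = 4.0 * Ue4_sq * Umu4_sq
    osc = np.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2
    return {
        "P_ee":   1.0 - sin2_2th_ee * osc,
        "P_mumu": 1.0 - sin2_2th_mumu * osc,
        "P_mue":  sin2_2th_mue * osc,
    }

# Example with illustrative parameters in the general ballpark of SBL fits.
print(sbl_probabilities(Ue4_sq=0.02, Umu4_sq=0.02, dm2_ev2=1.0, L_km=0.5, E_GeV=0.5))
```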
The first expression in equation (1.36) can potentially explain the deficit of \(\overline{\nu}_{\mathrm{e}}\) and \(\nu_{\mathrm{e}}\) events observed in the RAA and gallium anomaly, respectively. Though both anomalies stem qualitatively from the same phenomenon, \(\overset{(-)}{\nu}_{\mathrm{e}}\) disappearance at short baseline, the gallium anomaly in general prefers a larger value of \(\sin^{2}2\theta_{\mathrm{ee}}\) than the RAA. This is evident in figure 1-13, which shows the regions in \(\sin^{2}2\theta_{\mathrm{ee}}\)-\(\Delta\mathrm{m}^{2}\) parameter space preferred by the RAA and gallium anomalies, as well as global constraints from other experiments. These constraints come from short-to-medium-baseline reactor experiments, including NEOS [94], RENO [95], and Daya Bay [96], as well as very-short-baseline reactor experiments, including STEREO [97], DANSS [98], and PROSPECT [99]. Each of these experiments searches for \(\overline{\nu}_{\mathrm{e}}\) disappearance in a reactor-flux-agnostic way: the former through comparisons of the reactor \(\overline{\nu}_{\mathrm{e}}\) spectra measured by different detectors [100], and the latter through the use of modular or movable detectors capable of comparing \(\overline{\nu}_{\mathrm{e}}\) interaction rates across different baselines. The KATRIN experiment, which is sensitive to the neutrino mass via an extremely precise measurement of the tritium beta spectrum endpoint, also places strong constraints on \(\sin^{2}2\theta_{\mathrm{ee}}\) in the \(\Delta\mathrm{m}^{2}\gtrsim 10\) eV\({}^{2}\) region [101].
The third expression in equation (1.36) can potentially explain the SBL anomalies. This is because both LSND and MiniBooNE operated at accelerator neutrino sources for which the neutrino flux was generated mainly by charged pion decay [5, 14]; thus, due to helicity suppression, the flavor composition was dominated by muon-flavored (anti)neutrinos. This means that even a small value of \(\sin^{2}2\theta_{\mu\mathrm{e}}\) could generate an observable level of \(\overset{(-)}{\nu}_{\mathrm{e}}\) appearance on top of the SM \(\overset{(-)}{\nu}_{\mathrm{e}}\) flux prediction. Figure 1-14 shows the allowed regions in \(\sin^{2}2\theta_{\mu\mathrm{e}}\)-\(\Delta\mathrm{m}^{2}\) parameter space from LSND and MiniBooNE [10]. Strikingly, both anomalies generally prefer the same region of parameter space. However, as the MiniBooNE excess tends to peak more sharply at lower energies, the 3+1 fit prefers lower values of \(\Delta\mathrm{m}^{2}\) compared to the LSND result.
It is important to note that the fits performed in figure 1-14 account only for \(\overset{(-)}{\nu}_{\mu}\to\overset{(-)}{\nu}_{\mathrm{e}}\) appearance oscillations, ignoring any potential \(\overset{(-)}{\nu}_{\mathrm{e}}\) or \(\overset{(-)}{\nu}_{\mu}\) disappearance in the SM background prediction. This is a reasonable approximation; however, the inclusion of the latter effects does indeed impact the MiniBooNE allowed regions. This effect was only accounted for recently in Ref. [102], which is presented in section 5.3.1 of this thesis.
While there are indications of short-baseline \(\overset{(-)}{\nu}_{\mathrm{e}}\) appearance and \(\overset{(-)}{\nu}_{\mathrm{e}}\) disappearance in the global anomaly picture, direct observation of \(\overset{(-)}{\nu}_{\mu}\) disappearance via the second expression in equation (1.36) remains elusive. Long baseline experiments such as MINOS/MINOS+ [103, 104] and CCFR84 [105] have searched for muon neutrino disappearance from an accelerator neutrino source. Additionally, the IceCube experiment has searched for a sterile-induced matter resonance impacting muon neutrinos as they transit through the Earth [106]. So far, no definitive evidence for \(\overset{(-)}{\nu}_{\mu}\) disappearance has been found (up to a \(\sim 2\sigma\) preference in the IceCube results [106]).
Figure 1-13: Preferred regions in \(\sin^{2}2\theta_{\mathrm{ee}}\)–\(\Delta\mathrm{m}^{2}\) parameter space to explain the RAA [8] (green contour) and gallium anomaly [9] (blue regions). The total excluded region from other experiments (grey region) is also shown. Figure from Ref. [9].
The lack of \(\overset{(-)}{\nu}_{\mu}\) disappearance introduces significant tension when one tries to fit global neutrino data within a consistent 3+1 model. This conclusion has been reached by multiple 3+1 global fitting efforts [11, 93, 12]; figure 1-15 shows a representation of the tension between appearance and disappearance experiments observed in global fits. This tension persists even with the inclusion of the recent BEST result, which prefers larger values of \(|\text{U}_{\text{e}4}|^{2}\) (thus allowing lower values of \(|\text{U}_{\mu 4}|^{2}\) to fit the \(\overset{(-)}{\nu}_{\mathrm{e}}\) appearance anomalies) [12]. Thus, the 3+1 model, while still an important benchmark BSM scenario, has become disfavored as a solution to all observed anomalies in the neutrino sector. The state of the sterile neutrino explanation of the SBL anomalies is discussed in more detail throughout this thesis.
In recent years, neutrino physicists have begun to turn toward alternative explanations of the anomalies, often involving dark sector particles with additional interactions. Chapter 6 of this thesis covers one such explanation of the MiniBooNE anomaly, involving heavy right-handed neutrinos with a transition magnetic moment coupling to the active neutrinos.
Figure 1-14: Preferred regions in \(\sin^{2}2\theta_{\mu\text{e}}\)–\(\Delta\text{m}^{2}\) parameter space to explain the LSND anomaly [5] (filled contours) and MiniBooNE anomaly [10] (open contours). Figure from Ref. [10].
Figure 1-15: Graphical representation of the tension observed in 3+1 global fits between different subsets of the experimental landscape. Figure 1-15a shows the tension between \(\nu_{\text{e}}\) appearance experiments and \(\nu_{\text{e}}/\nu_{\mu}\) disappearance experiments observed in Ref. [11]. Figure 1-15b shows the tension between allowed regions from \(\nu_{\text{e}}\) appearance (lower right), \(\nu_{\text{e}}\) disappearance (upper right), and \(\nu_{\mu}\) disappearance (upper left) experiments observed in Ref. [12], which includes the latest results from the BEST experiment. |
2301.08546 | Disorder Induced Nonlinear Mode Coupling and Symmetry Breaking in
Amorphous Solids | Applying very small purely radial strains on amorphous solids in radial
geometry one observes elastic responses that break the radial symmetry. Without
any plasticity involved, the responses indicate nonlinear mode coupling
contributions even for minute strains. We show that these symmetry-breaking
responses are due to disorder, typical to amorphous configurations. The
symmetry breaking responses are quantitatively explained using the classical
Michell solutions which are excited by mode coupling. | Avanish Kumar, Itamar Procaccia, Murari Singh | 2023-01-20T13:05:38Z | http://arxiv.org/abs/2301.08546v1 | # Disorder Induced Nonlinear Mode Coupling and Symmetry Breaking in Amorphous Solids
###### Abstract
Applying _very small_ purely radial strains on amorphous solids in radial geometry one observes elastic responses that break the radial symmetry. Without any plasticity involved, the responses indicate nonlinear mode coupling contributions even for minute strains. We show that these symmetry-breaking responses are due to disorder, typical to amorphous configurations. The symmetry breaking responses are quantitatively explained using the classical Michell solutions which are excited by mode coupling.
## I Introduction
It is very customary in physics to assert that the effects of small perturbations on a given system can be faithfully analyzed using linear or linearized theories. This is certainly correct in mechanics, where small external strains on any given solid are expected to induce stresses and displacement fields that are perfectly predictable by linear elasticity theory [1; 2]. But this may not be the case when the solid in question is amorphous. Even with very small strains, the spatial disorder that characterizes amorphous solids is not "small" in any sense, and can result in responses that usually appear in classical solids only at much larger magnitudes of strain. In this paper we demonstrate this using a classical model of amorphous solids consisting of point particles interacting via Lennard-Jones forces with a distribution of interaction diameters. \(N\) particles are confined to an annulus with inner radius \(r_{\rm in}\) and outer radius \(r_{\rm out}\). The inner radius can be inflated, and the inflation is strictly radial, \(r_{\rm in}\to r_{\rm in}+\delta\). In this paper we choose minute values of \(\delta/r_{\rm in}\), of the order of \(10^{-7}\). The outer boundary is rigid, leading to a vanishing radial component of the displacement field there. Having such a small inflation in mind one expects to see only a purely radial displacement field throughout the sample, since only nonlinear effects can lead to the breaking of the radial symmetry. The interesting and perhaps surprising typical result discovered in the simulations is shown in Fig. 1. The actual displacement field is not radial. We should stress that every configuration of the amorphous solid exhibits a different symmetry breaking, depending on the realization of the random structure of the solid, as explained below. The inescapable conclusion is that disorder induces mode coupling to non-radial modes which are exposed and analyzed in this Letter.
The structure of this Letter is as follows: in Sect. II we describe the numerical experiments and extract the components of the displacement field that are responsible for the symmetry-breaking. To understand the nature of these components we turn in Sect. III to the Michell solutions of the radial elasticity problem [2; 3], and demonstrate their relevance for the phenomenon under study. In Sect. IV we demonstrate that the symmetry-breaking modes in the displacement field can be faithfully reconstructed with a few low-order non-radial Michell solutions. We offer a summary and a discussion in Sect. V. In this discussion we assign the mode coupling seen in the numerics to the existence of a disorder length, which typically increases when the pressure in the amorphous solid decreases. When the radius of the inner boundary \(r_{\rm in}\) exceeds this length, the mode-coupling effects all but disappear, and the expected purely elastic response is regained. To complete the assignment of the symmetry breaking to the presence of disorder, we repeat the simulations using perfectly ordered configurations. In these cases no nonlinear mode-coupling is found. Finally we comment on the relevance of the present findings to the modeling of mechanical responses of amorphous solids using elasto-plastic models [4; 5; 6], arguing that disorder-induced nonlinear effects cannot be overlooked.
Figure 1: Magnitude of the displacement field resulting from a minute purely radial inflation \(r_{\rm in}\to r_{\rm in}+\delta\) where \(r_{\rm in}=5\), \(r_{\rm out}=80\) and \(\delta=10^{-6}\). The concern of this paper is the non-radial response seen in this image.
## II Numerical simulations
### System preparation
In the simulations we construct an annulus with two rigid walls, with an inner radius \(r_{\rm in}\) and outer radius \(r_{\rm out}\). The annulus is then filled up with \(N\) point particles put in random positions in the area \(A=\pi(r_{\rm out}^{2}-r_{\rm in}^{2})\). The number of particles is chosen such that the density of the equilibrated glass (as described below) has a value \(\rho=N/A\). In this section we use a standard poly-dispersed model of \(N\) particles of mass \(m=1\)[7]. The binary interactions are
\[\phi(r_{ij})=\epsilon\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12 }+C_{0}+C_{2}\left(\frac{r_{ij}}{\sigma_{ij}}\right)^{2}+C_{4}\left(\frac{r_{ ij}}{\sigma_{ij}}\right)^{4}\] \[\epsilon\!=\!1,C_{0}\!=\!-1.92415,C_{2}\!=\!2.11106,C_{4}\!=\!-0.5 91097 \tag{1}\]
This potential is cut-off at \(r=1.5\sigma\) with two smooth derivatives. The unit of energy is \(\epsilon\) and Boltzmann's constant is unity. The interaction length was drawn from a probability distribution \(P(\sigma)\sim 1/\sigma^{3}\) in a range between \(\sigma_{\rm min}\) and \(\sigma_{\rm max}\) such that the mean \(\bar{\sigma}=1\):
\[\sigma_{ij}=\frac{\sigma_{i}+\sigma_{j}}{2}\Big{[}1-0.2\Big{|} \sigma_{i}-\sigma_{j}\Big{|}\Big{]},\] \[\sigma_{\rm max}=1.61\,\sigma_{\rm min}=\sigma_{\rm max}/2.219. \tag{2}\]
The units of mass and length are \(m\) and \(\bar{\sigma}\) (the average \(\sigma\)). The parameters are chosen to avoid crystallization. The system is thermalized at some "mother temperature" \(T_{m}\) using Swap Monte Carlo and then cooled down to \(T=0\) using conjugate gradient methods. The interaction between the point particles and the two walls is of the same form as Eq. (1), where \(r_{ij}\) and \(\sigma_{ij}\) are replaced by the distance to the wall and by \(\sigma_{i}\).
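A minimal sketch of this interaction model is given below. The helper names are ours, the cutoff is the value quoted above, and the snippet only illustrates Eqs. (1)-(2); it is not the simulation code used for the results reported here.

```python
import numpy as np

EPS, C0, C2, C4 = 1.0, -1.92415, 2.11106, -0.591097
RCUT = 1.5  # cutoff in units of sigma_ij, as quoted in the text

def sigma_pair(sigma_i, sigma_j):
    """Non-additive interaction diameter, Eq. (2)."""
    return 0.5 * (sigma_i + sigma_j) * (1.0 - 0.2 * abs(sigma_i - sigma_j))

def pair_potential(r, sigma_ij):
    """phi(r_ij) of Eq. (1), set to zero beyond the cutoff."""
    x = np.asarray(r, dtype=float) / sigma_ij
    phi = EPS * x**-12 + C0 + C2 * x**2 + C4 * x**4
    return np.where(x < RCUT, phi, 0.0)

# Example: draw two diameters from P(sigma) ~ 1/sigma^3 on [sigma_min, sigma_max]
# by inverse-transform sampling, then evaluate one pair interaction.
rng = np.random.default_rng(0)
smax = 1.61
smin = smax / 2.219
u = rng.random(2)
sig = (smin**-2 - u * (smin**-2 - smax**-2)) ** -0.5   # CDF inversion for 1/s^3
sij = sigma_pair(sig[0], sig[1])
print(sij, pair_potential(0.9 * sij, sij))
```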
Once the system is mechanically equilibrated with the total force on each particle smaller than \(10^{-8}\), we inflate the inner radius \(r_{\rm in}\), forcing a radial displacement of magnitude \(d_{0}\) as reported below. After inflation the radius of the inner circle is \(r_{\rm in}+d_{0}\). It should be stressed that our inflations are instantaneous, not quasi-static. We made sure that no particle gets trapped in the inflated disk; all our particles are confined between the inner and outer boundaries. This of course limits the degree of inflation in our simulations. After inflation we mechanically equilibrate the system again using conjugate gradients, and then measure the displacement of each particle \(\{\mathbf{d}_{i}\}_{i=1}^{N}\), comparing the two equilibrated configurations \(\{r_{i},\theta_{i}\}_{i=1}^{N}\) before and after inflation.
### Numerical Results
Typical plots of the magnitude of the displacement field are shown, in addition to Fig. 1, in Fig. 2. The conditions and protocol leading to the results shown in Fig. 2 are identical to those in Fig. 1. The difference is that each time the configuration was recreated starting from random initial conditions. Thus it is clear that the randomness in the resulting amorphous solid is responsible for the precise expression of the symmetry-breaking. Below we analyze and characterize the non-radial components of the response using the classical Michell solutions. Section IV demonstrates the reconstruction of the symmetry-breaking modes in the displacement field using non-radial Michell solutions.
### Data Analysis
Having the displacement of every particle \(\{\mathbf{d}_{i}\}_{i=1}^{N}\), we consider \(K\) annuli of fixed width, limited by circles of radii \(\{R_{k}<R_{k+1}\}_{k=1}^{K-1}\) such that \(R_{k+1}-R_{k}=\Delta\) and \(R_{K}=R_{\rm out}\). In each annulus we compute the averages
Figure 2: Two further examples of the magnitude of the displacement field resulting from a minute purely radial inflation \(r_{\rm in}\to r_{\rm in}+\delta\) where the parameters are all identical to those of Fig. 1. The point to notice is the variability of results for identical protocols, a reflection of the material disorder.
\(f_{k}^{(m)}\) and \(g_{k}^{(m)}\) which are defined by:
\[f_{k}^{(m)} \equiv 2\pi(k+1/2)\Delta\sum_{i}\mathbf{d}_{i}\cdot\hat{\mathbf{r}}\sin(m \theta_{i})\,\ m=1,2,3\ldots \tag{3}\] \[g_{k}^{(m)} \equiv 2\pi(k+1/2)\Delta\sum_{i}\mathbf{d}_{i}\cdot\hat{\mathbf{r}}\cos(m \theta_{i})\,\ m=0,1,2,3\ldots\]
The factor \((k+1/2)\Delta\) stands for the mid-radius associated with the \(k\)-th annulus. In Fig. 3 we show \(g_{k}^{(0)}\) which is simply the radial component of the displacement field \(d_{r}(r)\) multiplied by \(2\pi r\). In Fig. 4 we show \(f_{k}^{(m)}\) for \(m=1,2\) and \(3\). Similarly in Fig. 5 we show \(g_{k}^{(m)}\) for \(m=1,2\) and \(3\). The data are shown as circles, and the continuous line pertains to the theoretical curve that is explained in the next section.
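For reference, Eqs. (3) can be evaluated from the particle data along the following lines (a sketch in our own notation; the binning convention is an assumption, and dividing each bin by its particle count would give the corresponding per-annulus averages):

```python
import numpy as np

def annulus_fourier_sums(r, theta, d_r, r_out, Delta, m_max=4):
    """
    Discrete version of Eqs. (3): bin particles into annuli of width Delta and
    accumulate sin(m*theta)- and cos(m*theta)-weighted radial displacements,
    multiplied by the mid-radius factor 2*pi*(k+1/2)*Delta of the k-th annulus.
    r, theta : polar positions of the particles; d_r : d_i . r_hat per particle.
    """
    r, theta, d_r = map(lambda a: np.asarray(a, dtype=float), (r, theta, d_r))
    K = int(np.ceil(r_out / Delta))
    k = np.clip((r / Delta).astype(int), 0, K - 1)        # annulus index per particle
    prefac = 2.0 * np.pi * (np.arange(K) + 0.5) * Delta   # 2*pi*(k+1/2)*Delta
    f = np.zeros((K, m_max + 1))                          # sine sums, column m
    g = np.zeros((K, m_max + 1))                          # cosine sums, column m
    for m in range(m_max + 1):
        f[:, m] = prefac * np.bincount(k, weights=d_r * np.sin(m * theta), minlength=K)
        g[:, m] = prefac * np.bincount(k, weights=d_r * np.cos(m * theta), minlength=K)
    return f, g

# Usage sketch: f, g = annulus_fourier_sums(r, theta, d_dot_rhat, r_out=80.0, Delta=2.0)
```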
## III Theory
### Michell Solutions
To understand the numerical results we turn now to the classical Michell solutions [3] for the displacement field in polar coordinates. These solutions refer precisely to the geometry that we have used above. In two dimensions, when no body forces are present, the stresses can be expressed through the Airy stress function \(\chi\). Strain compatibility equations constrain the Airy stress function to satisfy the bi-harmonic equation [1; 8],
\[\nabla^{2}\nabla^{2}\chi=0. \tag{4}\]
Since stresses and displacements must be single-valued and continuous, any solution of the displacements and the stress functions must be periodic functions of \(\theta\). This was used by Michell to write a general solution for \(\chi\) as a Fourier series. Using this series one can directly compute the radial component of the displacement field in the form [2; 3; 8]
\[\begin{split}\vec{d}\cdot\hat{r}&=A_{0}r+B_{0}r( \ln(r)-1)+C_{0}r^{-1}+D_{0}\theta\\ &+\big{[}A_{1}+A_{1}^{\prime}\theta+B_{1}r^{2}+C_{1}r^{-2}+D_{1} \ln(r)\big{]}\sin(\theta)\\ &+\big{[}E_{1}+E_{1}^{\prime}\theta+F_{1}r^{2}+G_{1}r^{-2}+H_{1} \ln(r)\big{]}\cos(\theta)\\ &+\sum_{n=2}^{\infty}\bigg{[}A_{n}r^{n-1}+\frac{B_{n}}{r^{n-1}}+ C_{n}r^{n+1}+\frac{D_{n}}{r^{n+1}}\bigg{]}\sin(n\theta)\\ &+\sum_{n=2}^{\infty}\bigg{[}E_{n}r^{n-1}+\frac{F_{n}}{r^{n-1}}+ G_{n}r^{n+1}+\frac{H_{n}}{r^{n+1}}\bigg{]}\cos(n\theta)\.\end{split} \tag{5}\]
Needless to say, this is the most general solution; depending on the geometry and the boundary conditions, only a subset of these components may survive in the actual solution of our problem.
Integrating Eq. (5) over the angle results in the angle-averaged radial component of the displacement field,
\[d_{r}(r)\equiv\oint_{0}^{2\pi}\vec{d}\cdot\hat{r}d\theta=A_{0}r+B_{0}r(\ln(r) -1)+C_{0}r^{-1}. \tag{6}\]
In our setup we have only two boundary conditions, i.e. that the radial component of the displacement field
Figure 3: The numerically computed function \(g_{k}^{(0)}(r)\) presented by the open dots. The continuous line is \(I_{0}(r)\) defined by Eq. (8).
Figure 4: The functions \(f_{k}^{(m)}\) defined in Eqs. (3) for \(m=1,2\) and \(3\). The continuous lines are \(I_{m}\) of Eq. (9) for \(m=1,2\) and \(3\).
equals \(d_{0}\) at \(r_{\rm in}\) and \(0\) at \(r_{\rm out}\). However, here we have three unknowns, so to reach a unique solution we need another constraint. If we assume that the response is purely linear in the strain perturbation, we can exclude any monopole or multipole in the solution, and this condition results in \(B_{0}\) being zero in Eq. (6). Then, applying the two boundary conditions results in the classical solution [9; 10; 11]
\[d_{r}^{\rm lin}(r)=d_{0}\left(\frac{r^{2}-r_{out}^{2}}{r_{in}^{2}-r_{out}^{2}}\right)\frac{r_{in}}{r}. \tag{7}\]
The numerical solutions shown above indicate that this last assumption is unwarranted, and that modes that are not allowed in a purely linear solution are readily excited. We therefore cannot cancel the \(B_{0}\) term in Eq. (5) a priori, and we will now consider the angular integrals over the various contributions in that equation in full detail. In accordance with Eqs. (3) we will calculate the following integrals:
* The zeroth-order (\(n=0\)) Fourier integral is \[I_{0}(r)=\oint_{0}^{2\pi}\vec{d}\cdot\hat{r}dl=A_{0}+B_{0}r^{2}+C_{0}r^{2}(-1+ \ln(r)),\] (8) where \(dl=rd\theta\).
* Similarly, the first four integrals (\(n=1,2,3,4\)) for the sine component are given by the following equations \[I_{1}(r) =\oint_{0}^{2\pi}\vec{d}\cdot\hat{r}\sin(\theta)dl=A_{1}r+B_{1}r^ {3}+C_{1}r\ln(r)+\frac{D_{1}}{r}\] \[I_{2}(r) =\oint_{0}^{2\pi}\vec{d}\cdot\hat{r}\sin(2\theta)dl=A_{2}+B_{2}r^ {2}+C_{2}r^{4}+\frac{D_{2}}{r^{2}}\] \[I_{3}(r) =\oint_{0}^{2\pi}\vec{d}\cdot\hat{r}\sin(3\theta)dl=A_{3}r^{3}+ \frac{B_{3}}{r}+C_{3}r^{5}+\frac{D_{3}}{r^{3}}\] \[I_{4}(r) =\oint_{0}^{2\pi}\vec{d}\cdot\hat{r}\sin(4\theta)dl=A_{4}r^{4}+ \frac{B_{4}}{r^{2}}+C_{4}r^{6}+\frac{D_{4}}{r^{4}},\] (9) where the coefficients \(A_{i},B_{i},C_{i},D_{i}\) are the material dependent parameters related to the stress components. The Fourier integrals for the cosine components have exactly the same form as sine components but with different coefficients, say \(E_{i},F_{i},G_{i},H_{i}\). We denote the cosine integrals as \(J_{1}\)-\(J_{4}\) in parallel to Eq. (9).
### Comparison with simulations
In our numerical calculations we observe that only a few lower-order (\(n\leq 4\)) Fourier components are appreciably excited by the very small inflation of the inner boundary. Hence it is enough to compare only these lower-order Fourier integrals from Michell's solution to the numerics as summarized by Eqs. (3).
In Figures 3, 4 and 5 we show, as continuous lines, the functional forms of Eq. (9) and their cosine counterparts, with the coefficients fitted to the data. The fits are excellent, supporting our proposition that the Michell solutions provide the correct basis for the numerically computed displacement field. We also note that Michell solutions with \(n>4\) are not appreciably excited, although they may very well become important for larger inflations \(d_{0}\).
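Since each of the forms in Eq. (9) is linear in its unknown coefficients, the continuous curves of Figs. 3-5 can be reproduced by ordinary least squares; a minimal sketch (the helper names are ours) reads:

```python
import numpy as np

# Basis functions of Eq. (9) for the first few Fourier integrals.
MICHELL_BASIS = {
    1: lambda r: np.column_stack([r, r**3, r * np.log(r), 1.0 / r]),
    2: lambda r: np.column_stack([np.ones_like(r), r**2, r**4, 1.0 / r**2]),
    3: lambda r: np.column_stack([r**3, 1.0 / r, r**5, 1.0 / r**3]),
    4: lambda r: np.column_stack([r**4, 1.0 / r**2, r**6, 1.0 / r**4]),
}

def fit_michell_mode(r_mid, I_data, m):
    """Least-squares coefficients (A_m, B_m, C_m, D_m) for the m-th integral."""
    X = MICHELL_BASIS[m](np.asarray(r_mid, dtype=float))
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(I_data, dtype=float), rcond=None)
    return coeffs

# Usage sketch: r_mid are annulus mid-radii, I1 the measured f_k^{(1)} values.
# A1, B1, C1, D1 = fit_michell_mode(r_mid, I1, m=1)
```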
## IV Data reconstruction using the Michell solutions
Finally, to connect the symmetry-breaking modes to the Michell solutions, we subtract in each of our \(K\) annuli the angle-averaged radial component of the displacement field, Eq. (7), from the displacement of each particle, and define new, purely non-radial displacement data \(\mathbf{D}_{i}\):
\[\mathbf{D}_{i}=\mathbf{d}_{i}-\hat{\mathbf{r}}_{i}d_{r}^{\rm lin}\big{(}(k+1/2)\Delta\big{)} \,i\in\text{k'th circular shell}. \tag{10}\]
We plot these data such that for every particle \(i\) we compute the radial component of \(\mathbf{D}_{i}\), thus determining a
Figure 5: The functions \(g_{k}^{(m)}\) defined in Eqs. (3) for \(m=1,2\) and \(3\). The continuous lines are \(J_{m}\), which are the analogs of Eq. (9) but for the cosine modes with \(m=1,2\) and \(3\).
positive or negative sign depending on whether this component is outgoing from the center or incoming towards it. Then we plot the magnitude of \(\mathbf{D}_{i}\cdot\hat{\mathbf{r}}_{i}\) at every \(r_{i},\theta_{i}\) with a color code that includes the sign. The resulting image associated with the data of Fig. 1 is shown in the upper panel of Fig. 6. This presentation of the data underlines the importance of the symmetry-breaking components in the response to the purely radial inflation.
Finally we demonstrate that the displacement field presented in Fig. 6 can be obtained, to a very good approximation, by inflating the inner boundary not radially, but by summing up Fourier components. To this aim we now perform an inflation of the inner boundary according to
\[r_{\rm in}\to r_{\rm in}+\sum_{n=1}^{4}\alpha_{n}\cos(n\theta)+\sum_{n=1}^{4} \beta_{n}\sin(n\theta)\, \tag{11}\]
where
\[\alpha_{n}\equiv\int_{r_{\rm in}}^{r_{\rm out}}J_{n}(r)dr\,\quad\beta_{n} \equiv\int_{r_{\rm in}}^{r_{\rm out}}I_{n}(r)dr. \tag{12}\]
Needless to say, this form depends on the realization since the integrals \(I_{n}\) and \(J_{n}\) vary from system to system, reflecting the randomness in the amorphous configurations. Plotting the resulting signed magnitude of the displacement field (without any need of subtraction of a nonexistent radial part), we obtain the data shown in the lower panel of Fig. 6. The correspondence between the upper and lower panels of Fig. 6 is obvious.
## V Discussion
To understand the reason for the mode-coupling of the radial inflation to the Michell solutions we need to recall that on local scales the disorder in our amorphous solid is never "small". In fact, it can be quantified by a typical length, say \(\xi\), which is expected to diverge in granular packings when the unjamming point is approached [12; 13]. At finite pressure this length is always finite, and we expect that small perturbations applied on larger scales should obey linear elasticity theory. In the present context this implies that by increasing \(r_{\rm in}\) beyond \(\xi\) the mode-coupling observed above should disappear. This is demonstrated in Fig. 7. Taking now \(r_{\rm in}=27\), the mode-coupling effect becomes suppressed. Computing the integrals shown above in Figs. 4 and 5 results in random numbers that cannot be fitted to the expected Michell solutions at all.
To complete the identification of the disorder as an inducer of symmetry breaking, we repeated the simulations of inflating an inner boundary into a perfectly ordered hexagonal crystal. When we choose the inner boundary
Figure 6: Upper panel: The symmetry breaking of the radial component of the displacement field, see Eq. (10) for definition, with positive sign for outgoing and negative sign for incoming vector. Lower panel: The radial component of the response to inflation containing only Fourier modes with \(n=1,2,3\) and \(4\), see Eq. (11)
Figure 7: Magnitude of the displacement field resulting from a minute purely radial inflation \(r_{\rm in}\to r_{\rm in}+\delta\) where \(r_{\rm in}=27\), \(r_{\rm out}=84\) and \(\delta=10^{-6}\). The non-radial response is reduced now to almost random noise.
to be a hexagon, we obtain the displacement field exhibited in the upper panel of Fig. 8. The symmetry is perfectly retained in this case. On the other hand, if we choose the inner boundary to be circular, we obtain the displacement field shown in the lower panel of Fig. 8. Here the clash between the circular and hexagonal symmetry results in a disordered displacement near the inner boundary, but this disorder is quickly suppressed in favor of an ordered displacement field in the far field. No nonlinear mode-coupling to Michell solutions is observed.
In summary, we have shown that the existence of randomness in amorphous solids can lead to appreciable mode-coupling effects when the scale of the applied strain is within the disorder length. Then modes that appear in the Michell analysis are excited even by perturbations of a different symmetry. A radial perturbation can be mode-coupled to higher-order Fourier modes. Linear elasticity theory cannot be trusted and more general solutions for the displacement field should be considered. This is before plasticity effects are taken into account. These are expected to show up when the amplitude of the perturbation increases, leading to other interesting and non-trivial breakdowns of elasticity theory [9; 10; 11; 14].
The phenomenon discussed in this paper appears relevant for modeling plasticity in amorphous solids as well. As is well known, plastic events are generically Eshelby quadrupoles [15] carrying their own eigen-strain, with core sizes that are typically smaller than the disorder length. Assuming therefore that their influence on neighboring regions of amorphous matter can be modeled by the _linear_ Eshelby kernel may need careful reconsideration.
We thank Michael Moshe for various useful discussions on the research presented here. This work has been supported in part by ISF under grant #3492/21 (collaboration with China) and the Minerva Center for "Aging, from physical materials to human tissues" at the Weizmann Institute.
|
2302.11547 | Kerr black hole in de Sitter spacetime and observational redshift:
Toward a new method to measure the Hubble constant | We extract the Hubble law by the frequency-shift considerations of test
particles revolving the Kerr black hole in asymptotically de Sitter spacetime.
To this end, we take into account massive geodesic particles circularly
orbiting the Kerr-de Sitter black holes that emit redshifted photons towards a
distant observer which is moving away from the emitter-black hole system. By
considering this configuration, we obtain an expression for redshift in terms
of the spacetime parameters, such as mass, angular momentum, and the
cosmological constant. Then, we find the frequency shift of photons versus the
Hubble constant with the help of some physically motivated approximations.
Finally, some exact formulas for the Schwarzschild black hole mass and the
Hubble constant in terms of the observational redshift of massive bodies
circularly orbiting this black hole are extracted. Our results suggest a new
independent general relativistic approach to obtaining the late-time Hubble
constant in terms of observable quantities. | Mehrab Momennia, Alfredo Herrera-Aguilar, Ulises Nucamendi | 2023-02-22T18:39:13Z | http://arxiv.org/abs/2302.11547v2 | # Kerr black hole in de Sitter spacetime and observational redshift:
###### Abstract
We extract the Hubble law by the frequency shift considerations of test particles revolving the Kerr black hole in asymptotically de Sitter spacetime. To this end, we take into account massive geodesic particles circularly orbiting the Kerr-de Sitter black holes that emit redshifted photons towards a distant observer which is moving away from the emitter-black hole system. By considering this configuration, we obtain an expression for redshift in terms of the spacetime parameters, such as mass, angular momentum, and the cosmological constant. Then, we find the frequency shift of photons versus the Hubble constant with the help of some physically motivated approximations. Finally, some exact formulas for the Schwarzschild black hole mass and the Hubble constant in terms of the observational redshift of massive bodies circularly orbiting this black hole are extracted. Our results suggest a new independent general relativistic approach to obtaining the late-time Hubble constant in terms of observable quantities.
**Keywords:** Kerr black hole, de Sitter spacetime, Hubble constant, black hole rotation curves, frequency shift.
pacs: 04.70.Bw, 98.80.-k, 04.40.-b, 98.62.Gq
## I Introduction
Black holes are the densest massive objects known in nature and are among the most important and interesting solutions to the Einstein field equations. By now, they have been directly detected through gravitational waves produced by the coalescence events captured in the LIGO and Virgo observatories [1] as well as the shadow images of supermassive black holes hosted at the center of the Milky Way galaxy and the M87 galaxy revealed by the EHT collaboration [2; 3]. Therefore, nowadays, exploring various aspects of black hole physics attracts much attention in the context of the general relativity theory.
Among these aspects, inventing and developing methods to determine the black hole parameters, such as mass, charge, and angular momentum, has a special place. One of the robust methods to obtain the black hole parameters was initially suggested in [4], and then developed to analytically express the mass and spin parameters of the Kerr black hole in terms of a few directly observable quantities [5]. In this general relativistic formalism, the observables are frequency shifts of photons emitted by massive geodesic particles orbiting the central black holes along with their orbital parameters.
From the theoretical point of view, the method of [4] has been applied to several black hole spacetimes, such as higher dimensional Myers-Perry black holes [6], Kerr-Newman black holes in de Sitter (dS) spacetime [7], the Plebanski-Demianski background [8], and spherically symmetric regular black holes [9]. In addition, the boson stars [10] as well as black holes in modified gravity [11], coupled to nonlinear electrodynamics [12], and immersed in a strong magnetic field [13] have been investigated by employing a similar procedure, i.e. finding a relation between frequency shift and compact object parameters. However, all the aforementioned attempts were based on the kinematic redshift which is not a directly measured observational quantity, unlike the total frequency shift of photons. Thus, this fact has motivated us to take into account the total redshifts of photons and obtain concise and elegant analytic formulas for the mass and spin of the Kerr black hole in terms of these directly observable elements [5]. More recently, this method has been also applied to express the parameters of static polymerized black holes in terms of the total frequency shifts [14].
From a practical point of view, the developed prescription of this general relativistic approach has been employed to estimate the mass-to-distance ratio of some supermassive black holes hosted at the core of active galactic nuclei (AGNs), like NGC 4258 [15], TXS-2226-184 [16], and fifteen more galaxies [17; 18]. These AGNs enjoy accretion disks consisting of water vapor clouds that are circularly orbiting the central supermassive black hole and emitting photons toward the distant observer, hence enabling us to estimate the mass-to-distance ratio and quantify the gravitational redshift produced by the spacetime curvature, which is a general relativistic effect.
On the other hand, the so-called \(\Lambda\)-cold dark matter cosmological standard model successfully explains the
current epoch in the evolution of the cosmos. The field equations of Einstein gravity in the presence of a cosmological constant \(\Lambda\) along with an energy-momentum tensor \(T_{\mu\nu}^{m}\) that accounts for the matter content of the Universe read
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=T_{\mu\nu}^{m}, \tag{1}\]
where \(G_{\mu\nu}\) is the Einstein tensor. In order to explain the current accelerated expansion of the Universe, taking into account the contribution of dark energy, and thus adding the \(\Lambda\)-term to the Einstein field equations, is inevitable [19; 20]. Indeed, although observations on small scales could be explained by the first term of the left-hand side (lhs) of Eq. (1), a consistent description of the large-scale structure of the Universe requires considering the second term. Therefore, it is quite natural to attempt to quantitatively clarify the influence of the repulsive cosmological constant on the detected redshift and blueshift of photons coming from massive geodesic particles, stars for instance, orbiting the Kerr black hole.
In order to advance work in this direction, we shall consider the field equations (1) in the absence of matter content as the first step. A family of solutions to these simplified field equations describes black holes in asymptotically dS spacetime. Moreover, the rotating black hole solutions to the Einstein-\(\Lambda\) field equations are described by the Kerr-dS (KdS) line element [21] and the properties of the geodesic motion in this background have been investigated in [22; 23] (see [7] as well). Thus, by taking into account the Universe expansion effect encoded in the cosmological constant through the explicit appearance of the \(\Lambda\) term in the metric, we push forward the formalism developed in [4; 5] for expressing the Kerr black hole parameters in terms of purely observational quantities to the case in which the Hubble constant can also be determined.
The consideration of the accelerated expansion of the Universe in the redshift due to a cosmological constant has potential interest in terms of astrophysical applications, since many of the AGNs with megamaser disks orbiting their central black holes are within the Hubble flow [24; 25; 26; 27; 28; 29; 30; 31; 32]. Therefore, this modeling includes the contribution of the expansion of the Universe in the metric, making it suitable for describing this effect on the total redshift of photons emitted by test particles and detected on Earth. Thus, this new form of accounting for the dS accelerated expansion of the Universe in the expression for total redshift allows us to extract the Hubble law as well. Finally, it is worth noticing that this approach differs from the previous ones in which the expansion effect is taken into account in the total redshift through a composition of redshifts that has no metric origin (see, for instance, [17; 18; 30; 32]).
The outline of this paper is as follows. The next section is devoted to a brief review of the geometrical properties of the KdS black holes and the geodesic motion in this background. Besides, we analytically obtain the valid parameter space for having KdS black holes, and also review our general relativistic formalism that allows expressing the black hole parameters in terms of observational redshift. In Sec. III, we express the redshift of emitters that are circularly orbiting the KdS black holes in terms of the parameters of spacetime while the detector is in radial motion. Then, by considering a physically motivated configuration, we extract the Hubble law in its original formulation from the obtained frequency shift relation. Finally, we find analytic expressions for the Schwarzschild black hole mass and the Hubble constant in terms of the observational frequency shifts of photons emitted by massive particles orbiting circularly a Schwarzschild black hole. We finish our paper with some concluding remarks.
## II Kerr-de Sitter spacetime
Here, we give a short review of the geometrical properties of the rotating black holes in the dS background and analytically obtain the valid parameter space for having KdS black holes in Sec. II.1. Then, we study the geodesic motion of massless/massive particles in this geometry in Sec. II.2 and derive equations that are important for our next purposes. Finally, in Sec. II.3, we briefly review our general relativistic formalism that allows us to express the black hole parameters in terms of observational redshift and orbital parameters of massive geodesic particles orbiting around the black holes. We shall use the general results of this section for a special configuration in Sec. III to extract the Hubble law.
### Properties of the Kerr-dS background
The KdS line element in the standard Boyer-Lindquist coordinates \(\left(t,r,\theta,\varphi\right)\) reads [21] (we use \(c=1=G\) units)
\[ds^{2}=g_{tt}dt^{2}+2g_{t\varphi}dtd\varphi+g_{\varphi\varphi}d\varphi^{2}+g_{ rr}dr^{2}+g_{\theta\theta}d\theta^{2}, \tag{2}\]
with the metric components
\[g_{tt}=-\left(\frac{\Delta_{r}-\Delta_{\theta}a^{2}\sin^{2}\theta}{\Sigma \Xi^{2}}\right),\quad g_{rr}=\frac{\Sigma}{\Delta_{r}},\quad g_{\theta\theta}= \frac{\Sigma}{\Delta_{\theta}}, \tag{3}\]
\[g_{\varphi\varphi}=\frac{\sin^{2}\theta}{\Sigma\Xi^{2}}\left[\Delta_{\theta} \left(r^{2}+a^{2}\right)^{2}-\Delta_{r}a^{2}\sin^{2}\theta\right]\,, \tag{4}\]
\[g_{t\varphi}=-\frac{a\sin^{2}\theta}{\Sigma\Xi^{2}}\left[\Delta_{\theta} \left(r^{2}+a^{2}\right)-\Delta_{r}\right], \tag{5}\]
where the functions \(\Delta_{r}\left(r\right)\), \(\Delta_{\theta}\left(\theta\right)\), \(\Sigma\left(r,\theta\right)\), and \(\Xi\) have the following explicit form
\[\Delta_{r}=r^{2}+a^{2}-2Mr-\frac{\Lambda r^{2}}{3}\left(r^{2}+a^{2}\right), \tag{6}\]
\[\Delta_{\theta}=1+\frac{\Lambda}{3}a^{2}\cos^{2}\theta, \tag{7}\]
\[\Sigma=r^{2}+a^{2}\cos^{2}\theta\,, \tag{8}\]
\[\Xi=1+\frac{\Lambda}{3}a^{2}, \tag{9}\]
and \(M\) is the total mass of the black hole, \(a\) is the angular momentum per unit mass \(a=J/M\), and \(\Lambda\) is the cosmological constant related to the dS radius \(l_{dS}\) as \(\Lambda=3/l_{dS}^{2}\). The KdS metric (2) describes an axially symmetric and stationary spacetime (between the event horizon and the cosmological horizon) that reduces to the standard Kerr black hole in the limit \(\Lambda=0\) and the Schwarzschild-dS (SdS) black hole for \(a=0\). The coordinate singularities of this spacetime are characterized by \(\Delta_{r}=0\) (the positive roots correspond to horizons), while calculation of the invariant curvature scalar reveals that the intrinsic singularity is given by \(\Sigma=0\), which is located at \(\{r=0,\theta=\pi/2\}\) for \(a\neq 0\). Therefore, the presence of a cosmological horizon, characterized by the largest root of \(\Delta_{r}\), is one of the consequences of non-vanishing \(\Lambda\).
In order to find the extreme values of \(a\) and \(\Lambda\) for having black holes, we find it convenient to introduce the normalized variables \(x\), \(\alpha\), and \(\lambda\) as below
\[x=\frac{r}{M},\quad\alpha=\frac{a}{M},\quad\lambda=\frac{\Lambda M^{2}}{3}, \tag{10}\]
and express the \(\Delta_{r}\) function in terms of the new variables as follows
\[M^{-2}\Delta_{r}=x^{2}+\alpha^{2}-2x-\lambda x^{2}\,\left(x^{2}+\alpha^{2} \right). \tag{11}\]
Generally, one can show that \(\Delta_{r}\) has (at most) four distinct roots that can be regarded as the cosmological horizon (\(x=x_{c}\)), the event horizon (\(x=x_{+}\)), the inner horizon (\(x=x_{-}\)), and a negative root (\(x=x_{0}<0\)) so that \(x_{-}<x_{+}<x_{c}\) (see Fig. 1). Therefore, we can express \(\Delta_{r}\) in terms of these quantities in the following form
\[M^{-2}\Delta_{r}=\lambda\left(x-x_{0}\right)\left(x-x_{-}\right)\left(x-x_{+} \right)\left(x_{c}-x\right). \tag{12}\]
Now, by equating the equations for \(\Delta_{r}\) given in (11) and (12), the following relations between parameters are found
\[\alpha^{2}=\lambda x_{-}x_{+}x_{c}\left(x_{-}+x_{+}+x_{c}\right), \tag{13}\]
\[\lambda=\frac{2}{\left(x_{-}+x_{+}\right)\left(x_{-}+x_{c}\right)\left(x_{+} +x_{c}\right)}, \tag{14}\]
\[x_{0}=-\left(x_{-}+x_{+}+x_{c}\right), \tag{15}\]
where \(x_{-}\), \(x_{+}\), and \(x_{c}\) are considered as three fundamental parameters of the spacetime. Similar to the Kerr case, there should be a maximum value for the rotation parameter, say \(\alpha_{\rm max}\), such that we have black holes in the range \(0\leq\alpha^{2}\leq\alpha_{\rm max}^{2}\) and a naked singularity for \(\alpha^{2}>\alpha_{\rm max}^{2}\). By considering (13)-(14), and also the condition \(x_{-}<x_{+}<x_{c}\) between the roots, we find that the maximum value \(\alpha_{\rm max}\) is attained whenever \(x_{-}\lesssim x_{+}\lesssim x_{c}\), hence all the horizons are closely spaced for \(\alpha_{\rm max}\). Therefore, we should take into account the approximation \(x_{-}\approx x_{+}\approx x_{c}\) to obtain \(\alpha_{\rm max}\). On the other hand, \(0\leq\alpha^{2}\leq\alpha_{\rm max}^{2}\) and (13) show that the cosmological constant should obey the following interval as well
\[0\leq\lambda\leq\lambda_{crit},\quad\lambda_{crit}=\frac{\alpha_{\rm max}^{2} }{x_{-}x_{+}x_{c}\left(x_{-}+x_{+}+x_{c}\right)}. \tag{16}\]
Note that in order to have black holes, there is always a maximum cosmological constant, say \(\lambda_{\rm max}\), corresponding to an arbitrary rotation parameter \(\alpha\) in the range \(0\leq\alpha\leq\alpha_{\rm max}\). The maximum value of \(\lambda\), in the general case, corresponds to \(\alpha_{\rm max}\) and is denoted as the critical cosmological constant \(\lambda_{crit}\) in the aforementioned inequality. Therefore, generally speaking, there is an interval for \(\lambda_{\rm max}\) that depends on the rotation parameter \(\alpha\), namely \(\lambda_{\rm max}^{(SdS)}(\alpha=0)\leq\lambda_{\rm max}\leq\lambda_{crit}\left(\alpha=\alpha_{\rm max}\right)\), so that its lower bound represents the maximum value of \(\lambda\) for the SdS black hole and \(\lambda\approx\lambda_{\rm max}^{(SdS)}\) characterizes the near-extremal SdS solution. In other words, there is a maximum value of \(\lambda\) for an arbitrary rotation parameter \(\alpha\), and similarly, there is a maximum value of \(\alpha\) for an arbitrary cosmological constant \(\lambda\) (for example, for the SdS black hole with \(\alpha=0\), the cosmological constant ranges within \(0\leq\lambda\leq\lambda_{\rm max}^{(SdS)}\)).
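As a numerical illustration of this horizon structure, the short sketch below (our own helper, with illustrative parameter values) finds the four roots of Eq. (11) for a given pair \((\alpha,\lambda)\) and checks the relations (13)-(15):

```python
import numpy as np

def kds_horizons(alpha, lam):
    """
    Roots of Eq. (11): M^-2 Delta_r = -lam x^4 + (1 - lam*alpha^2) x^2 - 2 x + alpha^2,
    sorted so that (x0, x_minus, x_plus, x_c) follow the ordering discussed in the text.
    """
    coeffs = [-lam, 0.0, 1.0 - lam * alpha**2, -2.0, alpha**2]
    roots = np.sort(np.roots(coeffs).real)   # all real for black-hole parameters
    x0, x_minus, x_plus, x_c = roots
    return x0, x_minus, x_plus, x_c

# Example well inside the black-hole region, and a check of Eqs. (13)-(15).
alpha, lam = 0.5, 0.01
x0, xm, xp, xc = kds_horizons(alpha, lam)
print("x0, x-, x+, xc =", x0, xm, xp, xc)
print("Eq.(13):", np.isclose(alpha**2, lam * xm * xp * xc * (xm + xp + xc)))
print("Eq.(14):", np.isclose(lam, 2.0 / ((xm + xp) * (xm + xc) * (xp + xc))))
print("Eq.(15):", np.isclose(x0, -(xm + xp + xc)))
```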
Figure 1: The general behavior of \(\Delta_{r}\) function given in (11) versus the radial coordinate \(x\) for four black hole types, namely Schwarzschild, SdS, Kerr, and KdS. \(\Delta_{r}\) is positive between \(x_{+}\) and \(x_{c}\) as well as before \(x_{-}\), whereas is negative otherwise. By increasing \(\lambda\) (\(\alpha\)), \(x_{c}\) (\(x_{-}\)) approaches \(x_{+}\) (not shown here).
Now, in order to obtain \(\alpha_{\rm max}\), and then \(\lambda_{\rm max}^{(SdS)}\) and \(\lambda_{crit}\), we need to take into account the condition \(x_{-}\lesssim x_{+}\lesssim x_{c}\), as we mentioned above. To do so, we first equate Eqs. (11) and (12) while considering \(x_{+}\) and \(x_{c}\) as two independent variables to obtain the following relations
\[\alpha=\sqrt{\frac{4x_{+}x_{c}+\Pi-\Upsilon}{2\left(x_{+}+x_{c}\right)}}, \tag{17}\]
\[x_{-} = \frac{1}{2\sqrt{\left(x_{c}+x_{+}-2\right)\left(x_{c}+x_{+}\right)}}\left\{x_{c}^{4}+x_{+}^{4}+4\Pi+2\Upsilon\right. \tag{18}\] \[\left.+2x_{+}x_{c}\left[2\left(x_{+}+x_{c}+1\right)-x_{c}x_{+}\right]\right\}^{\frac{1}{2}}\] \[-\frac{1}{2}\left(x_{+}+x_{c}\right),\]
\[x_{0}=-\left(x_{-}+x_{+}+x_{c}\right), \tag{19}\]
\[\lambda=\frac{-\Pi+\Upsilon}{2x_{+}^{2}x_{c}^{2}\left(x_{+}+x_{c}\right)}, \tag{20}\]
with \(\Pi\) and \(\Upsilon\) being
\[\Pi=\sqrt{\Upsilon^{2}-4x_{+}^{2}x_{c}^{2}\left(x_{+}+x_{c}-2\right)\left(x_{+ }+x_{c}\right)}, \tag{21}\]
\[\Upsilon=x_{+}^{3}+x_{+}x_{c}\left(x_{+}+2\right)+x_{c}^{2}\left(x_{+}+x_{c} \right). \tag{22}\]
Then, we take into account the near-extremal regime \(x_{c}\to x_{+}\) in the above-mentioned formulas, i.e. when the cosmological horizon \(x_{c}\) is very close to the black hole event horizon \(x_{+}\) (\(x_{c}-x_{+}<<x_{+}\)). Hence, we obtain the following relations for \(\alpha\), \(x_{-}\), \(x_{0}\), and \(\lambda\) in the nearly extreme regime
\[\alpha\approx\sqrt{\frac{x_{+}}{2}}\left(1-2x_{+}+\sqrt{1+8x_{+}}\right)^{ \frac{1}{2}}, \tag{23}\]
\[x_{-}\approx-x_{+}+\sqrt{\frac{x_{+}}{2\left(x_{+}-1\right)}}\left(1+2x_{+}+ \sqrt{1+8x_{+}}\right)^{\frac{1}{2}}, \tag{24}\]
\[x_{0}\approx-2x_{+}-x_{-}, \tag{25}\]
\[\lambda_{\rm max}\approx\frac{1+2x_{+}-\sqrt{1+8x_{+}}}{2x_{+}^{3}}, \tag{26}\]
where we replaced \(\lambda\) with \(\lambda_{\rm max}\) since \(x_{c}\approx x_{+}\). Now, the maximum value of the rotation parameter, \(\alpha_{\rm max}\), can be obtained by taking the limit \(x_{+}\to x_{-}\) in the aforementioned relations. By considering \(x_{-}\approx x_{+}\) in (24), we obtain a maximum value for the event horizon as below
\[x_{+}=\frac{3+2\sqrt{3}}{4}, \tag{27}\]
which is indeed the point where all three horizons coincide (\(x_{c}\approx x_{+}\approx x_{-}\approx\left(3+2\sqrt{3}\right)/4\)), hence it maximizes the rotation parameter in Eq. (23).
\[\alpha_{\rm max}=\frac{1}{4}\left(9+6\sqrt{3}\right)^{\frac{1}{2}}\approx 1.1 01, \tag{28}\]
which is slightly higher than the corresponding value for the standard Kerr black hole (\(\alpha_{\rm max}>\alpha_{\rm max}^{Kerr}=1\)). It is worthwhile to mention that this value corresponds to the maximum possible value of the cosmological constant, \(\lambda_{crit}\left(\alpha=\alpha_{\rm max}\right)\); the maximum allowed rotation is smaller for lower values of the cosmological constant. Thus, the rotation parameter ranges
\[0\leq\alpha\leq\alpha_{\rm max} \tag{29}\]
with \(\alpha_{\rm max}\) given by (28).
Now, we obtain bounds on maximum values of the cosmological constant \(\lambda_{\rm max}\), namely \(\lambda_{\rm max}^{(SdS)}\left(\alpha=0\right)\) and \(\lambda_{crit}\left(\alpha=\alpha_{\rm max}\right)\). The maximum value for the event horizon is given in Eq. (27) that corresponds to maximally rotating black holes with \(\alpha=\alpha_{\rm max}\). Therefore, by substituting Eq. (27) in Eq. (26), we obtain \(\lambda_{crit}\left(\alpha=\alpha_{\rm max}\right)=16/\left(3+2\sqrt{3}\right)^ {3}\).
On the other hand, since SdS is static \(\alpha=0\), we set \(x_{-}=0\) in (24) and obtain the maximum value of the event horizon as \(x_{+}=3\equiv x_{+\rm max}^{(SdS)}\) for this case (we set \(x_{-}=0\) because SdS black holes have no inner horizon). By replacing this value in Eq. (26), one can find the maximum value of the cosmological constant for the static case as \(\lambda_{\rm max}\left(\alpha=0\right)=1/27\equiv\lambda_{\rm max}^{(SdS)}\). We summarized
Figure 2: The valid area of KdS black holes in \(\alpha-\lambda\) parameter space. The shaded green region belongs to KdS black holes while the marginal points on the left indicate SdS black holes and on the bottom represent standard Kerr solutions. This figure also shows how a background cosmological constant extends the parameter space.
the results of this section on the bounds of KdS parameters as follows
\[\left\{\begin{array}{c}0\leq\alpha\leq\alpha_{\max},\\ \\ \lambda_{\max}^{(SdS)}\leq\lambda_{\max}\leq\lambda_{crit},\\ \\ x_{+\max}^{(SdS)}\leq x_{+\max}\leq x_{+crit},\end{array}\right. \tag{30}\]
with
\[\left\{\begin{array}{c}\alpha_{\max}=\frac{1}{4}\left(9+6\sqrt{3}\right)^{ \frac{1}{2}},\\ \\ \lambda_{\max}^{(SdS)}=\frac{1}{27},\quad\lambda_{crit}=\frac{16}{\left(3+2 \sqrt{3}\right)^{3}},\\ \\ x_{+\max}^{(SdS)}=3,\quad x_{+crit}=\frac{3+2\sqrt{3}}{4}.\end{array}\right. \tag{31}\]
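As a quick numerical cross-check of the limiting values collected in (31), the short script below (an illustrative sketch only) evaluates \(\alpha_{\rm max}\), \(\lambda_{\rm max}^{(SdS)}\), and \(\lambda_{crit}\) from Eqs. (23) and (26)-(28).

```python
import math

x_crit = (3 + 2*math.sqrt(3)) / 4        # Eq. (27): triple-coincidence radius
x_sds  = 3.0                             # maximal SdS event horizon

def lam_max(xp):
    # Eq. (26)
    return (1 + 2*xp - math.sqrt(1 + 8*xp)) / (2*xp**3)

def alpha_near_extremal(xp):
    # Eq. (23)
    return math.sqrt(xp/2) * math.sqrt(1 - 2*xp + math.sqrt(1 + 8*xp))

alpha_max = alpha_near_extremal(x_crit)  # Eq. (28): ~1.101
lam_crit  = lam_max(x_crit)              # 16/(3 + 2*sqrt(3))^3
lam_sds   = lam_max(x_sds)               # 1/27

print(alpha_max, lam_crit, lam_sds)
```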
Therefore, the maximum value of the cosmological constant in the Kerr geometry must lie in the aforementioned interval. For instance, the relation for \(\lambda\) given in (14) must obey \(0\leq\lambda\leq\lambda_{\max}^{(SdS)}\) for the static case \(\alpha=0\): \(\lambda=0\) represents the standard Schwarzschild solution and \(\lambda\approx\lambda_{\max}^{(SdS)}\) denotes nearly extreme SdS black holes. Intermediate values within this range correspond to generic SdS black holes, while we have a naked singularity for \(\lambda>\lambda_{\max}^{(SdS)}\).
On the other hand, one may note that for an arbitrary value of the rotation parameter in the range \(1<\bar{\alpha}\leq\alpha_{\max}\), \(\lambda\) acquires a minimum value as well and must obey \(\lambda_{\min}\leq\lambda\leq\lambda_{\max}\) for a given \(\bar{\alpha}\). Obtaining these bounds on the parameters of KdS black holes is important since they will help us to find valid values of the redshifted photons emitted by massive particles orbiting a KdS black hole.
Various regions of KdS black holes in the \(\alpha-\lambda\) plane are illustrated in Fig. 2. In this figure, the continuous vertical blue line \(\alpha=0\) represents SdS black holes, the continuous horizontal red line \(\lambda=0\) shows the standard Kerr black holes, and there are standard Schwarzschild solutions where they join \(\left\{\alpha=0,\lambda=0\right\}\). Extreme points \(\left\{\alpha=0,\lambda=\lambda_{\max}^{(SdS)}\right\}\) and \(\left\{\alpha=\alpha_{\max},\lambda=\lambda_{crit}\right\}\) that we have obtained analytically in (30) are shown on the top corners. To obtain the \(\lambda_{\max}\)-dashed line on the top of the shaded green area, we used Eqs. (23) and (26) while one can employ the relations (17)-(20) in order to find the \(\lambda_{\min}\)-dashed line on the right of the shaded green area assuming \(x_{-}=x_{+}\). Thus, we derived all the marginal points of KdS black holes in the parameter space \(\alpha-\lambda\)_analytically_ that are presented in Eqs. (17)-(20), (23)-(26), and (30). Note that some of these bounds have been found in [22] from a different approach.
It is worthwhile to mention that we have a naked singularity for \(\lambda>\lambda_{\max}\), whereas for \(\alpha>\alpha_{\max}\), the inner and outer horizons vanish and there is just a cosmological horizon (assuming \(\lambda\neq 0\)). One can see that the non-vanishing cosmological constant introduces two important modifications to the standard Kerr geometry: (i) it leads to a new horizon, known as the cosmological horizon, and (ii) it allows higher values of the rotation parameter. As we shall see in Sec. III.1, the cosmological constant also modifies the particles' motion and leads to an upper bound on the radius of stable emitters in circular motion.
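Whether a given pair \((\alpha,\lambda)\) lies inside the shaded region of Fig. 2 can also be decided directly from the horizon structure. The sketch below assumes the standard dimensionless KdS horizon function \(\Delta_{x}=(1-\lambda x^{2})(x^{2}+\alpha^{2})-2x\) (an assumed form, consistent with the root relations (19)-(20), cf. Eq. (11)) and simply counts the distinct positive real roots; three of them (\(x_{-}\), \(x_{+}\), \(x_{c}\)) signal a genuine KdS black hole.

```python
import numpy as np

def kds_horizons(alpha, lam):
    """Positive real roots of Delta_x = -lam x^4 + (1 - lam alpha^2) x^2 - 2x + alpha^2.
    Assumes the standard dimensionless KdS horizon function (cf. Eq. (11))."""
    coeffs = [-lam, 0.0, 1.0 - lam*alpha**2, -2.0, alpha**2]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real > 0])

def is_black_hole(alpha, lam):
    # x_-, x_+, and x_c must all be present
    return len(kds_horizons(alpha, lam)) == 3

print(is_black_hole(0.9, 0.02))   # inside the allowed region -> True
print(is_black_hole(0.5, 0.05))   # lambda too large for this alpha -> False
```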
### Geodesics of timelike and null particles in the Kerr-dS background
The equation of motion of test massless/massive particles in the rotating spacetimes is described by the geodesic equations. In this regard, the geodesic equations can be obtained by using the separation of variables of the Hamilton-Jacobi equation. The Hamilton-Jacobi equation, for a given background \(g_{\mu\nu}\left(x^{\rho}\right)\), leading to the geodesic equations can be written as [33]
\[2\frac{\partial S}{\partial\tau}=-g^{\mu\nu}\frac{\partial S}{\partial x^{\mu} }\frac{\partial S}{\partial x^{\nu}}, \tag{32}\]
where \(S\) represents the Hamilton principal function. In this relation, \(\tau\) is the proper time that parametrizes the particle worldline and is related to the affine parameter \(\sigma\) by \(\tau=\sigma m\), where \(m\) is the particle rest mass (\(\tau\) represents the affine parameter in the case of photons). For the KdS spacetime, the Hamilton principal function \(S\) can be separated as
\[S=\frac{1}{2}\eta\tau-\bar{E}t+\bar{L}\varphi+S_{r}\left(r\right)+S_{\theta} \left(\theta\right), \tag{33}\]
for both timelike (\(\eta=1\)) and null (\(\eta=0\)) particles, and \(S_{r}\left(r\right)\) is a function of \(r\) while \(S_{\theta}\left(\theta\right)\) is a function of \(\theta\) only. The constants of motion \(\bar{E}\) and \(\bar{L}\), respectively, correspond to conserved energy \(E_{0}\) and angular momentum \(L_{0}\) of massive particles obtained through the following equations
\[\bar{E}=\frac{E_{0}}{m}=-g_{\mu\nu}\xi^{\mu}U^{\nu}, \tag{34}\]
\[\bar{L}=\frac{L_{0}}{m}=g_{\mu\nu}\psi^{\mu}U^{\nu}, \tag{35}\]
where \(\xi^{\mu}=\delta_{t}^{\mu}\) is the timelike Killing vector field and \(\psi^{\mu}=\delta_{\varphi}^{\mu}\) is the rotational Killing vector field of the spacetime, and \(U^{\mu}\) is the 4-velocity of particles which is normalized to unity \(U^{\mu}U_{\mu}=-1\).
On the other hand, by substituting the decomposition (33) into the Hamilton-Jacobi equation (32), we get the
following equality
\[\eta a^{2}\cos^{2}\theta+\Delta_{\theta}\left(\frac{dS_{\theta} \left(\theta\right)}{d\theta}\right)^{2} \tag{36}\] \[+\frac{\Xi^{2}}{\Delta_{\theta}\sin^{2}\theta}\left(a\bar{E}\sin^{ 2}\theta-\bar{L}\right)^{2}\] \[= -\eta r^{2}-\Delta_{r}\left(\frac{dS_{r}\left(r\right)}{dr} \right)^{2}\] \[+\frac{\Xi^{2}}{\Delta_{r}}\left[\left(r^{2}+a^{2}\right)\bar{E}- a\bar{L}\right]^{2},\]
where the lhs is a function of \(\theta\) only and the right-hand side (rhs) just depends on the \(r\)-coordinate. Therefore, either side is equal to a constant of motion, known as the Carter constant \(\mathcal{C}\) with the following form [33]
\[\mathcal{C}=\mathcal{K}+\left(\bar{L}-a\bar{E}\right)^{2}\Xi^{2}, \tag{37}\]
where \(\mathcal{K}\) is a constant that arises from the contraction of the Killing tensor field \(K_{\mu\nu}\) of the Kerr-dS spacetime with the 4-velocity as \(\mathcal{K}=K_{\mu\nu}U^{\mu}U^{\nu}\). Now, by making use of (34)-(37) and the unity condition \(U^{\mu}U_{\mu}=-1\), we can obtain the 4-velocity components of massive particles (\(\eta=1\)) in terms of the constants of motion \(\bar{E}\), \(\bar{L}\), and \(\mathcal{K}\) as follows
\[U^{t} = \frac{\Xi^{2}}{\Sigma\Delta_{\theta}\Delta_{r}}\left\{a\left[ \Delta_{r}-\left(a^{2}+r^{2}\right)\Delta_{\theta}\right]\bar{L}\right. \tag{38}\] \[\left.+\left[\left(a^{2}+r^{2}\right)^{2}\Delta_{\theta}-a^{2} \sin^{2}\theta\Delta_{r}\right]\bar{E}\right\},\]
\[\Sigma^{2}\left(U^{r}\right)^{2} = \Xi^{2}\left[\left(a^{2}+r^{2}\right)\bar{E}-a\bar{L}\right]^{2} \tag{39}\] \[-\Delta_{r}\left[\mathcal{K}+r^{2}+\Xi^{2}(\bar{L}-a\bar{E})^{2}\right]\] \[\equiv V_{r}\left(r\right),\]
\[\Sigma^{2}\left(U^{\theta}\right)^{2} = \Delta_{\theta}\left(\mathcal{K}-a^{2}\cos^{2}\theta\right)-a^{2 }\Xi^{2}\left(\sin^{2}\theta-\Delta_{\theta}\right)\bar{E}^{2} \tag{40}\] \[-\Xi^{2}\left(\frac{1}{\sin^{2}\theta}-\Delta_{\theta}\right) \bar{L}^{2}+2a\Xi^{2}\left(1-\Delta_{\theta}\right)\bar{E}\bar{L}\] \[\equiv V_{\theta}\left(\theta\right),\]
\[U^{\varphi} = \frac{\Xi^{2}}{\Sigma\Delta_{\theta}\Delta_{r}\sin^{2}\theta} \left\{\left(\Delta_{r}-a^{2}\Delta_{\theta}\sin^{2}\theta\right)\bar{L}\right. \tag{41}\] \[\left.+a\sin^{2}\theta\left[\left(a^{2}+r^{2}\right)\Delta_{ \theta}-\Delta_{r}\right]\bar{E}\right\},\]
where the rhs of Eq. (39) is a function of \(r\) and the rhs of Eq. (40) is a function of \(\theta\) only. Note that Eqs. (34) and (35) have been used to obtain Eqs. (38) and (41), while Eqs. (36), (37), and unity condition \(U^{\mu}U_{\mu}=-1\) have been employed to get the relations (39) and (40).
From Eq. (40), it is clear that the constant of motion \(\mathcal{K}\) (that is related to the Carter constant by Eq. (37)) represents a measure of how much the geodesic of particles deviates from the equatorial plane \(\theta=\pi/2\), where this constant vanishes. Therefore, the test particles moving in the equatorial plane have zero \(\mathcal{K}\), whereas it is non-vanishing whenever particles cross the equatorial plane.
The first-order differential equations presented in Eqs. (38)-(41) show the geodesic equations of massive particles for every direction in the KdS background in terms of the constants of motion \(\bar{E}\), \(\bar{L}\), and \(\mathcal{K}\). These equations reduce to the corresponding relations given in [4] for Kerr geometry in the limit \(\Lambda\to 0\), as it should be. Therefore, the cosmological constant encodes deviations of KdS black holes from the standard Kerr background.
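For concreteness, the velocity components (38)-(41) can be transcribed into code. The helper functions for \(\Sigma\), \(\Delta_{r}\), \(\Delta_{\theta}\), and \(\Xi\) below use the standard KdS definitions and are assumptions made for this illustrative sketch (they are assumed to coincide with the metric functions introduced earlier, cf. Eqs. (3) and (6)); the rest follows Eqs. (38)-(41) literally, with the radial and polar potentials \(V_{r}\) and \(V_{\theta}\) returned instead of \(U^{r}\) and \(U^{\theta}\) themselves.

```python
import numpy as np

def kds_functions(r, theta, M, a, Lam):
    # Standard KdS metric functions (assumed forms, cf. Eqs. (3) and (6)).
    Sigma   = r**2 + a**2*np.cos(theta)**2
    Delta_r = (r**2 + a**2)*(1 - Lam*r**2/3) - 2*M*r
    Delta_t = 1 + Lam*a**2*np.cos(theta)**2/3
    Xi      = 1 + Lam*a**2/3
    return Sigma, Delta_r, Delta_t, Xi

def massive_geodesic(r, theta, M, a, Lam, Ebar, Lbar, K):
    """U^t, V_r, V_theta, and U^phi for a timelike geodesic, Eqs. (38)-(41)."""
    Sigma, Dr, Dt, Xi = kds_functions(r, theta, M, a, Lam)
    s2 = np.sin(theta)**2
    Ut = Xi**2/(Sigma*Dt*Dr) * (a*(Dr - (a**2 + r**2)*Dt)*Lbar
                                + ((a**2 + r**2)**2*Dt - a**2*s2*Dr)*Ebar)      # Eq. (38)
    Vr = (Xi**2*((a**2 + r**2)*Ebar - a*Lbar)**2
          - Dr*(K + r**2 + Xi**2*(Lbar - a*Ebar)**2))                           # Eq. (39)
    Vth = (Dt*(K - a**2*np.cos(theta)**2)
           - a**2*Xi**2*(s2 - Dt)*Ebar**2
           - Xi**2*(1/s2 - Dt)*Lbar**2 + 2*a*Xi**2*(1 - Dt)*Ebar*Lbar)          # Eq. (40)
    Uphi = Xi**2/(Sigma*Dt*Dr*s2) * ((Dr - a**2*Dt*s2)*Lbar
                                     + a*s2*((a**2 + r**2)*Dt - Dr)*Ebar)       # Eq. (41)
    return Ut, Vr, Vth, Uphi
```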
On the other hand, a similar strategy can be followed to obtain the null geodesics of photons with 4-momentum \(k^{\mu}\) moving between the event horizon and cosmological horizon of the KdS spacetime. For massless test particles, the conserved energy \(\bar{E}_{\gamma}\) and angular momentum \(\bar{L}_{\gamma}\) of particles can be found through the following relations
\[\bar{E}_{\gamma}=-g_{\mu\nu}\xi^{\mu}k^{\nu}, \tag{42}\]
\[\bar{L}_{\gamma}=g_{\mu\nu}\psi^{\mu}k^{\nu}, \tag{43}\]
where the 4-momentum \(k^{\mu}\) of null particles satisfies \(k^{\mu}k_{\mu}=0\). Besides, the equality (36) takes the form
\[\Delta_{\theta}\left(\frac{dS_{\theta}\left(\theta\right)}{d\theta }\right)^{2}+\frac{\Xi^{2}}{\Delta_{\theta}\sin^{2}\theta}\left(a\bar{E}_{ \gamma}\sin^{2}\theta-\bar{L}_{\gamma}\right)^{2}=\] \[-\Delta_{r}\left(\frac{dS_{r}\left(r\right)}{dr}\right)^{2}+ \frac{\Xi^{2}}{\Delta_{r}}\left[\left(r^{2}+a^{2}\right)\bar{E}_{\gamma}-a \bar{L}_{\gamma}\right]^{2}, \tag{44}\]
for the null particles (\(\eta=0\)). Now, by making use of Eqs. (42)-(44) and \(k^{\mu}k_{\mu}=0\), we obtain the various components of the 4-momentum in terms of the constants of motion \(\bar{E}_{\gamma}\), \(\bar{L}_{\gamma}\), and \(\mathcal{K}_{\gamma}\) as follows
\[k^{t} = \frac{\Xi^{2}}{\Sigma\Delta_{\theta}\Delta_{r}}\left\{a\left[ \Delta_{r}-\left(a^{2}+r^{2}\right)\Delta_{\theta}\right]\bar{L}_{\gamma}\right. \tag{45}\] \[\left.+\left[\left(a^{2}+r^{2}\right)^{2}\Delta_{\theta}-a^{2} \sin^{2}\theta\Delta_{r}\right]\bar{E}_{\gamma}\right\},\]
\[\Sigma^{2}\left(k^{r}\right)^{2} = \Xi^{2}\left[\left(a^{2}+r^{2}\right)\bar{E}_{\gamma}-a\bar{L}_{ \gamma}\right]^{2} \tag{46}\] \[-\Delta_{r}\left[\mathcal{K}_{\gamma}+\Xi^{2}(\bar{L}_{\gamma}-a \bar{E}_{\gamma})^{2}\right]\] \[\equiv \mathcal{V}_{r}\left(r\right),\]
\[\Sigma^{2}\left(k^{\theta}\right)^{2} = \Delta_{\theta}\mathcal{K}_{\gamma}-a^{2}\Xi^{2}\left(\sin^{2} \theta-\Delta_{\theta}\right)\bar{E}_{\gamma}^{2} \tag{47}\] \[-\Xi^{2}\left(\frac{1}{\sin^{2}\theta}-\Delta_{\theta}\right)\bar {L}_{\gamma}^{2}+2a\Xi^{2}\left(1-\Delta_{\theta}\right)\bar{E}_{\gamma}\bar{L}_{ \gamma}\] \[\equiv \mathcal{V}_{\theta}\left(\theta\right),\]
\[k^{\varphi} = \frac{\Xi^{2}}{\Sigma\Delta_{\theta}\Delta_{r}\sin^{2}\theta} \left\{\left(\Delta_{r}-a^{2}\Delta_{\theta}\sin^{2}\theta\right)\bar{L}_{ \gamma}\right. \tag{48}\] \[\left.+a\sin^{2}\theta\left[\left(a^{2}+r^{2}\right)\Delta_{ \theta}-\Delta_{r}\right]\bar{E}_{\gamma}\right\},\]
where the rhs of Eq. (46) is a function of \(r\), the rhs of Eq. (47) is a function of \(\theta\), and \(\mathcal{C}_{\gamma}=\mathcal{K}_{\gamma}+\left(\bar{L}_{\gamma}-a\bar{E}_{\gamma }\right)^{2}\Xi^{2}\) is the corresponding Carter constant for photons.
It is worthwhile to mention that the equations given in (38)-(41) and (45)-(48), respectively, fully describe any geodesic motion of massive and massless particles in the background of KdS black holes for given sets of constants of motion \(\{\bar{E},\,\bar{L},\mathcal{K}\}\) and \(\{\bar{E}_{\gamma},\bar{L}_{\gamma},\mathcal{K}_{\gamma}\}\). Hence, these relations govern the most general orbits of massive bodies, namely nonequatorial elliptic trajectories, and one can obtain arbitrary particular cases, such as nonequatorial circular orbits, elliptic equatorial paths, elliptic nonequatorial orbits, nonelliptic trajectories, and equatorial circular orbits by imposing some suitable boundary conditions.
### Frequency shift
In this section, we briefly review our previous results on the frequency shift of photons emitted by massive particles moving in an axially symmetric spacetime, a construction based on a general relativistic method [4; 5].
This formalism allows one to express the frequency shift of photons in terms of orbital parameters of radiant massive objects (stars, for instance), and the free parameters of the spacetime (the set of parameters \(\{M,\,a,\Lambda\}\) in our black hole case study). In this scenario, the probe particles feel the curvature of spacetime produced by the black hole and encode the properties of spacetime, characterized by black hole parameters, in the frequency shift of emitted photons. This capability allows us to estimate the black hole parameters through measuring the shift in the frequency of photons and solving an inverse problem.
The orbiting massive particles can emit electromagnetic waves towards us such that the corresponding photons travel along null geodesics from emission till detection while the information of the geometry is encoded in their frequency shift. The frequency of this photon at some position \(x_{p}^{\mu}=\left(x^{t},x^{r},x^{\theta},x^{\varphi}\right)\mid_{p}\) reads
\[\omega_{p}=-\left(k_{\mu}U^{\mu}\right)\mid_{p}, \tag{49}\]
where the index \(p\) refers to either the point of emission \(x_{e}^{\mu}\) or detection \(x_{d}^{\mu}\) of the photon.
One can see that, in contrast to the commonly used radial velocities in Newtonian gravity which are coordinate-dependent observables, \(\omega_{p}\) is a general relativistic invariant quantity that keeps memory of photons from emission at \(x_{e}^{\mu}\) till detection at \(x_{d}^{\mu}\). Therefore, in the transition from Newtonian gravity to general relativity, it is logical to take advantage of shifts in the frequency (49) rather than redshift due to changes in speed. This is because, in addition to the redshift due to speed changes, the frequency shift due to curvature of spacetime is also encoded in the observable quantity \(\omega_{p}\).
The most general expression for shifts in the frequency \(\omega_{p}\) in axially symmetric backgrounds of the KdS form (2) can be written as [4]
\[1 + z_{{}_{KdS}}=\frac{\omega_{e}}{\omega_{d}} \tag{50}\] \[= \frac{\left(E_{\gamma}U^{t}-L_{\gamma}U^{\varphi}-g_{rr}U^{r}k^{r }-g_{\theta\theta}U^{\theta}k^{\theta}\right)\mid_{e}}{\left(E_{\gamma}U^{t}- L_{\gamma}U^{\varphi}-g_{rr}U^{r}k^{r}-g_{\theta\theta}U^{\theta}k^{\theta} \right)\mid_{d}},\]
where the 4-velocity \(U^{\mu}\) (of emitter/detector) and the 4-momentum \(k^{\mu}\) (at emitter/detector position) are given in Eqs. (38)-(41) and Eqs. (45)-(48), respectively. Hence, \(z_{{}_{KdS}}\) is the frequency shift that light signals emitted by massive particles orbiting a KdS black hole experience in their path along null geodesics towards a detecting observer. Since we have general forms of \(U^{\mu}\) and \(k^{\mu}\), the KdS shift (50) includes arbitrary stable orbits, such as circular, elliptic, irregular, equatorial, non-equatorial, etc. Therefore, the frequency shifts of these photons, that are directly measured observational quantities, along with the orbital parameters of the emitter and the observer can be used to determine the black hole parameters [5].
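Schematically, Eq. (50) is just the ratio of the invariant \(\omega_{p}=-k_{\mu}U^{\mu}\) of Eq. (49) evaluated at the emission and detection events. A minimal sketch (illustrative only; the metric components and 4-vectors must be supplied by the user, e.g. from Eqs. (38)-(41) and (45)-(48)) reads:

```python
def photon_frequency(g, U, k):
    """omega = -k_mu U^mu for an axially symmetric metric, Eq. (49).
    g, U, k are dicts keyed by 't', 'r', 'theta', 'phi' ('tphi' for g_{t phi})."""
    return -(g['tt']*k['t']*U['t'] + g['rr']*k['r']*U['r']
             + g['thth']*k['theta']*U['theta'] + g['phph']*k['phi']*U['phi']
             + g['tphi']*(k['t']*U['phi'] + k['phi']*U['t']))

def frequency_shift(g_e, U_e, k_e, g_d, U_d, k_d):
    """1 + z = omega_e / omega_d, the contraction form equivalent to Eq. (50)."""
    return photon_frequency(g_e, U_e, k_e) / photon_frequency(g_d, U_d, k_d)
```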
In the rest of the paper, we shall focus on equatorial circular orbits for emitters (an important situation describing accretion disks orbiting supermassive black holes at the core of AGNs and circularly orbiting binary compact stars) and on radial motion of detectors due to the accelerated expansion of the Universe produced by the cosmological constant.
## III Frequency shift in terms of black hole parameters
In Sec. III.1, we obtain the 4-velocity of emitters in equatorial circular motion in terms of the black hole parameters. Then, in Sec. III.2, we express the 4-velocity of the detector in radial motion with respect to the emitter-black hole system versus the KdS parameters. We shall use these results in Sec. III.3 to obtain the redshift of the KdS black holes \(z_{{}_{KdS}}\) in terms of the parameters of spacetime for this special configuration and extract the Hubble law from some physically motivated approximations. Finally, we express the Schwarzschild mass and the Hubble constant in terms of observational redshift in Sec. III.4.
### Emitters in circular and equatorial orbits
Usually, the accretion disks orbiting black holes can be well described by the equatorial circular motion of massive test particles around the rotating black holes and even any tilted disk should be driven to the equatorial plane of the rotating background [34]. Hence, in what follows, we concentrate our attention on the equatorial circular orbits of emitters characterized by \(\theta=\pi/2\) and \(U^{r}=0=U^{\theta}\), to find the relations between KdS black hole parameters \(\{M,a,\Lambda\}\) and measured redshifts/blueshifts of light signals detected by an observer
located far away from their source. We also assume that the observer detects the photons in the equatorial plane \(\theta=\pi/2\) since accretion disks can be detected mostly in an edge-on view from Earth [25; 35], and therefore \(k^{\theta}=0\) identically. With this assumption, we place the detector in the equatorial plane as well.
At this stage, we express the 4-velocity \(U^{\mu}\) for the equatorial circular orbits in terms of the KdS black hole parameters \(\left\{M,\,a,\Lambda\right\}\) in order to substitute in the frequency shift relation (50), hence find a connection between observational redshift/blueshift and KdS parameters. Therefore, by considering Eqs. (38)-(41), the non-vanishing components \(U_{e}^{t}\) and \(U_{e}^{\varphi}\) of the emitter read
\[U_{e}^{t}=\frac{a\left(\Delta_{e}-a^{2}-r_{e}^{2}\right)L_{e}+\left[\left(a^{2 }+r_{e}^{2}\right)^{2}-a^{2}\Delta_{e}\right]E_{e}}{r_{e}^{2}\Delta_{e}}\Xi, \tag{51}\]
\[U_{e}^{\varphi}=\frac{\left(\Delta_{e}-a^{2}\right)L_{e}+a\left(a^{2}+r_{e}^{ 2}-\Delta_{e}\right)E_{e}}{r_{e}^{2}\Delta_{e}}\Xi, \tag{52}\]
where \(L_{e}=\Xi\bar{L}_{e}\), \(E_{e}=\Xi\bar{E}_{e}\), \(\Delta_{e}=\Delta_{r}(r=r_{e})\), and \(r_{e}\) is the radius of the emitter. In this case, the Carter constant vanishes whereas the constants of motion \(E_{e}\) and \(L_{e}\) can be obtained by taking into account the conditions
\[V_{r}\left(r\right)=0,\quad\frac{dV_{r}\left(r\right)}{dr}=0, \tag{53}\]
simultaneously for having circular orbits while \(V_{r}\left(r\right)\) is given in Eq. (39). Therefore, one can solve these conditions to get [22]
\[E_{e}=\frac{r_{e}^{\frac{3}{2}}\left[1-\frac{\Lambda}{3}\left(a^{2}+r_{e}^{2}\right)\right]-2Mr_{e}^{\frac{1}{2}}\pm a\left(M-\frac{\Lambda r_{e}^{3}}{3}\right)^{\frac{1}{2}}}{r_{e}^{\frac{3}{4}}\sqrt{r_{e}^{\frac{3}{2}}\left(1-\frac{\Lambda a^{2}}{3}\right)-3Mr_{e}^{\frac{1}{2}}\pm 2a\left(M-\frac{\Lambda r_{e}^{3}}{3}\right)^{\frac{1}{2}}}}, \tag{54}\]
\[L_{e}=\frac{\left(a^{2}+r_{e}^{2}\right)\left[\pm\left(M-\frac{\Lambda r_{e}^{3}}{3}\right)^{\frac{1}{2}}-\frac{a\Lambda}{3}r_{e}^{\frac{3}{2}}\right]-2aMr_{e}^{\frac{1}{2}}}{r_{e}^{\frac{3}{4}}\sqrt{r_{e}^{\frac{3}{2}}\left(1-\frac{\Lambda a^{2}}{3}\right)-3Mr_{e}^{\frac{1}{2}}\pm 2a\left(M-\frac{\Lambda r_{e}^{3}}{3}\right)^{\frac{1}{2}}}}, \tag{55}\]
in terms of the KdS black hole parameters while the upper sign corresponds to a co-rotating object and the lower sign refers to a counter-rotating object with respect to the angular velocity of the black hole, and we shall use this convention in the upcoming equations.
Now, by substituting relations (54) and (55) into Eqs. (51) and (52), we can obtain rather simple equations as follows
\[U_{e}^{t}\left(r_{e},\pi/2\right)=\frac{r_{e}^{\frac{3}{2}}\pm a\left(M-\frac {\Lambda r_{e}^{3}}{3}\right)^{\frac{1}{2}}}{\mathcal{X}_{\pm}}\Xi, \tag{56}\]
\[U_{e}^{\varphi}\left(r_{e},\pi/2\right)=\pm\frac{\left(M-\frac{\Lambda r_{e} ^{3}}{3}\right)^{\frac{1}{2}}}{\mathcal{X}_{\pm}}\Xi, \tag{57}\]
with
\[\mathcal{X}_{\pm}=r_{e}^{\frac{3}{4}}\sqrt{r_{e}^{\frac{3}{2}}\left(1-\frac{\Lambda a^{2}}{3}\right)-3Mr_{e}^{\frac{1}{2}}\pm 2a\left(M-\frac{\Lambda r_{e}^{3}}{3}\right)^{\frac{1}{2}}}. \tag{58}\]
From Eqs. (56)-(58), it is clear that the conditions \(3M-\Lambda r_{e}^{3}\geq 0\) and \(\mathcal{X}_{\pm}^{2}>0\) must hold in order to have equatorial circular orbits. The former puts an upper bound on the emitter radius, \(r_{e}^{3}\leq 3M/\Lambda\), a radius that must be located within the cosmological horizon. We call this special distance \(\bar{r}=\left(3M/\Lambda\right)^{1/3}\) the _zero gravity radius_ (ZGR), the radius at which the effective gravity vanishes, as we shall show below. With the quantities given in (56)-(58) at hand, we can also obtain the angular velocity of an emitter orbiting the KdS black hole on a circular and equatorial orbit as below
\[\Omega_{\pm}=\frac{U_{e}^{\varphi}}{U_{e}^{t}}=\pm\frac{\left(M-\frac{\Lambda r _{e}^{3}}{3}\right)^{\frac{1}{2}}}{r_{e}^{\frac{3}{2}}\pm a\left(M-\frac{ \Lambda r_{e}^{3}}{3}\right)^{\frac{1}{2}}}. \tag{59}\]
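The circular-orbit kinematics (56)-(59) translate directly into code; the sketch below (illustrative only) returns \(U_{e}^{t}\), \(U_{e}^{\varphi}\), and \(\Omega_{\pm}\) for a co- or counter-rotating emitter, with \(\Xi=1+\Lambda a^{2}/3\) assumed for the metric factor.

```python
import math

def circular_emitter(r_e, M, a, Lam, corotating=True):
    """U_e^t, U_e^phi and Omega for an equatorial circular orbit, Eqs. (56)-(59)."""
    sgn = +1 if corotating else -1
    Xi  = 1 + Lam*a**2/3                      # assumed metric factor
    root = math.sqrt(M - Lam*r_e**3/3)        # requires 3M - Lam r_e^3 >= 0
    X = r_e**0.75 * math.sqrt(r_e**1.5*(1 - Lam*a**2/3)
                              - 3*M*math.sqrt(r_e) + sgn*2*a*root)   # Eq. (58)
    Ut   = (r_e**1.5 + sgn*a*root)/X * Xi     # Eq. (56)
    Uphi = sgn*root/X * Xi                    # Eq. (57)
    return Ut, Uphi, Uphi/Ut                  # Omega_pm = U^phi/U^t, Eq. (59)
```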
Besides, the non-vanishing components of the 4-momentum of photons \(k^{\mu}\), given in equations (45)-(48), in the equatorial plane reduce to
\[k^{t}=\frac{a\left(\Delta_{r}-a^{2}-r^{2}\right)L_{\gamma}+\left[\left(a^{2}+r ^{2}\right)^{2}-a^{2}\Delta_{r}\right]E_{\gamma}}{r^{2}\Delta_{r}}\Xi, \tag{60}\]
\[r^{2}\left(k^{r}\right)^{2}=\left[\left(a^{2}+r^{2}\right)E_{\gamma}-aL_{ \gamma}\right]^{2}-\Delta_{r}(L_{\gamma}-aE_{\gamma})^{2}, \tag{61}\]
\[k^{\varphi}=\frac{\left(\Delta_{r}-a^{2}\right)L_{\gamma}+a\left(a^{2}+r^{2}- \Delta_{r}\right)E_{\gamma}}{r^{2}\Delta_{r}}\Xi, \tag{62}\]
with \(L_{\gamma}=\Xi\bar{L}_{\gamma}\) and \(E_{\gamma}=\Xi\bar{E}_{\gamma}\).
On the other hand, the condition for having stable orbits in KdS geometry is given by
\[V_{r}^{\prime\prime} \equiv \frac{d^{2}V_{r}\left(r\right)}{dr^{2}}=-\left[r^{2}+(L-aE)^{2} \right]\Delta_{r}^{\prime\prime}-4r\Delta_{r}^{\prime} \tag{63}\] \[-2\Delta_{r}+4\left(3r^{2}+a^{2}\right)E^{2}-4aLE\leq 0,\]
where the prime denotes \(\partial_{r}\), and one can use either the upper or lower sign of Eqs. (54) and (55) to obtain the radii of stable circular orbits of co/counter-rotating stars. However, the polynomial expression (63) is of 10th order in \(r\) and cannot be solved analytically, unlike the standard Kerr case.
The general behavior of \(V_{r}^{\prime\prime}\) is illustrated in Fig. 3 for various values of \(\Lambda\) and \(a\) as well as for the co/counter-rotating classes. As one can see from this figure, the roots of relation (63) characterize the innermost stable circular orbit (ISCO) \(r_{ISCO}\) and the outermost stable circular orbit (OSCO) \(r_{OSCO}\), describing, respectively, the inner edge of the orbiting accretion disk and its outer edge. Therefore, we expect that the lower constraint on the
emitter radius as \(r_{e}\geq r_{ISCO}\) leads to an upper bound on the redshift/blueshift of orbiting objects, whereas the upper constraint \(r_{e}\leq r_{OSCO}\) puts a lower bound on the frequency shift.
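In practice, \(r_{ISCO}\) and \(r_{OSCO}\) can be located by scanning Eq. (63) for sign changes. The sketch below is only illustrative: the constants of motion are supplied by a user-provided function (e.g. implementing Eqs. (54)-(55)), and \(\Delta_{r}\) is taken in its assumed standard form.

```python
import numpy as np

def vr_second_derivative(r, E, L, M, a, Lam):
    """V_r'' of Eq. (63), with Delta_r = (r^2+a^2)(1 - Lam r^2/3) - 2Mr (assumed form)."""
    Dr  = (r**2 + a**2)*(1 - Lam*r**2/3) - 2*M*r
    Dr1 = 2*r*(1 - Lam*r**2/3) - 2*Lam*r*(r**2 + a**2)/3 - 2*M      # dDelta_r/dr
    Dr2 = 2*(1 - Lam*r**2/3) - 8*Lam*r**2/3 - 2*Lam*(r**2 + a**2)/3  # d^2Delta_r/dr^2
    return (-(r**2 + (L - a*E)**2)*Dr2 - 4*r*Dr1 - 2*Dr
            + 4*(3*r**2 + a**2)*E**2 - 4*a*L*E)

def stable_range(EL_of_r, M, a, Lam, r_grid):
    """Bracket r_ISCO and r_OSCO as the sign changes of V_r'' on a radial grid.
    EL_of_r(r) must return the constants of motion (E, L) for a circular orbit at r."""
    vals = np.array([vr_second_derivative(r, *EL_of_r(r), M, a, Lam) for r in r_grid])
    idx  = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    return [(r_grid[i], r_grid[i + 1]) for i in idx]   # brackets to refine, e.g. by bisection
```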
Now, it is worthwhile to summarize the most important radii in the KdS geometry as follows
\[r_{0}<0<r_{-}<r_{+}<r_{ISCO}<r_{OSCO}<\bar{r}<r_{c}, \tag{64}\]
where (i) the inner horizon \(r_{-}\), the event horizon \(r_{+}\), and the cosmological horizon \(r_{c}\) are the solutions of \(\Delta_{r}\) in Eq. (6), (ii) \(r_{ISCO}\) and \(r_{OSCO}\) are the solutions of the stability condition in Eq. (63), and (iii) the ZGR \(\bar{r}=\left(3M/\Lambda\right)^{1/3}\) is a maximum radius for having the equatorial circular orbits obtained through (56)-(58).
In this study, we are interested in stable orbits satisfying \(r_{ISCO}\leq r_{e}\leq r_{OSCO}\) for the emitter and far away detectors with the condition
\[\bar{r}<r_{d}<r_{c}, \tag{65}\]
describing black hole systems in the Hubble flow. However, note that some of the important radii (64) may change/vanish under certain circumstances, as we discussed in Sec. II.1 (for \(r_{-}\), \(r_{+}\), and \(r_{c}\)) and in Fig. 3 (for \(r_{ISCO}\) and \(r_{OSCO}\)). In addition, we restrict our calculations to \(\Lambda\leq 10^{-4}\) in order to have stable orbits, in consistency with Fig. 3.
### Detectors in radial motion
Here, we should note that the situation for the KdS geometry differs from the cases studied previously (see [4; 5; 9; 11] and references therein), and we cannot consider circularly orbiting or static detectors, since the behavior of \(z_{{}_{KdS}}\) in Eq. (50) versus \(\Lambda\) turns out to be unphysical once we take into account circular orbits beyond \(\bar{r}\). Indeed, because of the accelerated expansion of the Universe due to the positive cosmological constant at large scales, the detector should move away from the black hole in the case of the far-away detectors that we are interested in. Therefore, in this case, we consider a detector that moves radially away from the KdS black hole instead of the usual circularly orbiting or static detectors. This implies that \(U_{d}^{\varphi}=0=U_{d}^{\theta}\), hence the non-vanishing components of the 4-velocity of the detector read (see Eqs. (38)-(41))
\[U_{d}^{t}=\frac{\left(a^{2}+r_{d}^{2}\right)^{2}-a^{2}\Delta_{d}}{r_{d}^{2} \Delta_{d}}E_{d}\Xi, \tag{66}\]
\[(U_{d}^{r})^{2}=\frac{\left(a^{2}+r_{d}^{2}\right)^{2}E_{d}^{2}-\Delta_{d} \left(r_{d}^{2}+a^{2}E_{d}^{2}\right)}{r_{d}^{4}}, \tag{67}\]
where we have set \(L_{d}=0\) due to the radial motion of the detector, \(E_{d}=\Xi\bar{E}_{d}\), \(\Delta_{d}=\Delta_{r}(r=r_{d})\), and \(r_{d}\) is the distance between the black hole and the detector.
Figure 3: The general behavior of \(V_{r}^{\prime\prime}\) versus the radial coordinate for the co-rotating branch (upper panels) and counter-rotating branch (lower panels). \(V_{r}^{\prime\prime}\) is negative between \(r_{ISCO}\) and \(r_{OSCO}\), indicating the region of stable orbits. \(r_{OSCO}\) approaches \(r_{ISCO}\) as the cosmological constant increases, and finally, there will be no stable equatorial circular orbits for sufficiently large \(\Lambda\).
Note that \(U_{d}^{\varphi}=0\) is just valid for far enough detectors, otherwise the rotation nature of the spacetime drags the detector, as it can be seen from Eq. (41).
As the next step, we need to obtain \(E_{d}\) in terms of the parameters of the spacetime \(\{M,a,\Lambda\}\). One may note that there exists a radius \(r_{d}=R\) at which the gravitational attraction generated by the black hole mass is completely balanced by the expansion of the Universe produced by the cosmological constant, such that \(M=M_{\Lambda}\), with \(M_{\Lambda}\) being an effective mass related to the cosmological constant. Thus, the radial velocity \(U_{d}^{r}\) (67) vanishes at \(r_{d}=R\) because the repulsive nature of the cosmological constant is exactly cancelled by the gravitational attraction. We obtain the effective mass \(M_{\Lambda}\) through the following integral
\[M_{\Lambda}=\int_{0}^{R}4\pi\rho_{\Lambda}r^{2}dr, \tag{68}\]
where the energy density associated with the cosmological constant, \(\rho_{\Lambda}\), is related to it via \(\rho_{\Lambda}=\Lambda/\left(8\pi G\right)\)[20]. By performing this integral and equating \(M=M_{\Lambda}\), we find the vanishing-velocity radius \(R=\left(3M/\Lambda\right)^{1/3}\), which is exactly equal to the ZGR \(\bar{r}\). Therefore, this is the radius at which the cosmological constant compensates the gravitational attraction of the black hole, and hence the effective gravity vanishes, as we discussed after Eq. (58). Note that the azimuthal 4-velocity of the emitter (57), and hence its angular velocity (59), also vanishes for \(r_{e}=\left(3M/\Lambda\right)^{1/3}\), which means the emitter is static at this point as well.
Now, by replacing \(r_{d}=R=\left(3M/\Lambda\right)^{1/3}\) in Eq. (67) and solving \(U_{d}^{r}\left(r_{d}=R\right)=0\), one can find the energy of the detector as below
\[E_{d}=\left(\frac{3a^{2}\left[\left(\frac{3M}{\Lambda}\right)^{1/3}-M\right]-9 M\left[\left(\frac{3M}{\Lambda}\right)^{2/3}-\frac{1}{\Lambda}\right]}{a^{2} \left(\frac{3M}{\Lambda}\right)^{1/3}\left(3+\Lambda a^{2}\right)+9M\left(a^ {2}+\frac{1}{\Lambda}\right)}\right)^{\frac{1}{2}}. \tag{69}\]
In this way, the 4-velocity components (66)-(67) can be written as
\[U_{d}^{t} = \left(\frac{3a^{2}\left[\left(\frac{3M}{\Lambda}\right)^{1/3}-M \right]-9M\left[\left(\frac{3M}{\Lambda}\right)^{2/3}-\frac{1}{\Lambda}\right] }{a^{2}\left(\frac{3M}{\Lambda}\right)^{1/3}\left(3+\Lambda a^{2}\right)+9M \left(a^{2}+\frac{1}{\Lambda}\right)}\right)^{\frac{1}{2}}\times \tag{70}\] \[\frac{\left(a^{2}+r_{d}^{2}\right)^{2}-a^{2}\Delta_{d}}{r_{d}^{2} \Delta_{d}}\Xi,\]
\[\left(U_{d}^{r}\right)^{2} = \left(\frac{3a^{2}\left[\left(\frac{3M}{\Lambda}\right)^{1/3}-M \right]-9M\left[\left(\frac{3M}{\Lambda}\right)^{2/3}-\frac{1}{\Lambda}\right] }{a^{2}\left(\frac{3M}{\Lambda}\right)^{1/3}\left(3+\Lambda a^{2}\right)+9M \left(a^{2}+\frac{1}{\Lambda}\right)}\right)\times \tag{71}\] \[\frac{\left(a^{2}+r_{d}^{2}\right)^{2}-\Delta_{d}a^{2}}{r_{d}^{4} }-\frac{\Delta_{d}}{r_{d}^{2}},\]
in terms of the black hole parameters.
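The detector quantities can be evaluated with a few lines of code; the sketch below (illustrative only) computes the zero gravity radius and the conserved energy (69) for given \(\{M,a,\Lambda\}\), and checks the Schwarzschild limit.

```python
def zero_gravity_radius(M, Lam):
    # ZGR: r_bar = (3M/Lam)^(1/3)
    return (3*M/Lam)**(1/3)

def detector_energy(M, a, Lam):
    """E_d of Eq. (69): the detector is momentarily at rest at r_d = R = ZGR."""
    R = zero_gravity_radius(M, Lam)
    num = 3*a**2*(R - M) - 9*M*(R**2 - 1/Lam)
    den = a**2*R*(3 + Lam*a**2) + 9*M*(a**2 + 1/Lam)
    return (num/den)**0.5

# Schwarzschild check (a = 0): E_d reduces to sqrt(1 - 3M/R)
M, Lam = 1.0, 1e-6
print(detector_energy(M, 0.0, Lam), (1 - 3*M/zero_gravity_radius(M, Lam))**0.5)
```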
### Frequency shift versus parameters of spacetime and the Hubble law
For this configuration, i.e. circularly orbiting emitters and radial motion of detectors, the general expression for the frequency shift of photons (50) reduces to
\[1+z_{{}_{KdS_{1,2}}}=\frac{\left(E_{\gamma}U^{t}-L_{\gamma}U^{\varphi}\right) \left|{}_{e}\right.}{\left(E_{\gamma}U^{t}-g_{rr}U^{r}k^{r}\right)\left|{}_{d} \right.}=\frac{U_{e}^{t}-b_{e_{(\mp)}}\,U_{e}^{\varphi}}{U_{d}^{t}-g_{d}U_{d}^ {r}\left(\frac{k_{d}^{r}}{E_{\gamma}}\right)}\,, \tag{72}\]
where we defined the light bending parameter \(b\) as \(b\equiv L_{\gamma}/E_{\gamma}\) that represents the deflection of light due to gravitational field in the vicinity of the KdS black hole. Besides, \(g_{d}=g_{rr}\left(r=r_{d}\right)\) is given in (3) and the ratio \(k_{d}^{r}/E_{\gamma}\) can be written as (from Eq. (61))
\[\left(\frac{k_{d}^{r}}{E_{\gamma}}\right)^{2}=\frac{\left[\left(a^{2}+r_{d}^{2 }\right)-ab_{d_{(\mp)}}\right]^{2}-\Delta_{d}(b_{d_{(\mp)}}-a)^{2}}{r_{d}^{2}}. \tag{73}\]
Note that \(b\), presented in Eqs. (72) and (73), is preserved along the whole light path followed by photons from their emission till their detection due to the fact that \(E_{\gamma}\) and \(L_{\gamma}\) are constants of motion. Therefore, one can set \(b_{e}=b_{d}\) without loss of generality. Moreover, the subscript \({}_{(\mp)}\) signs refer to the deflection of light \(b\) at either side of the line of sight, whereas the subindices \({}_{1}\) and \({}_{2}\) in Eq. (72) correspond to the \({}_{(-)}\) and \({}_{(+)}\) signs, respectively.
On the other hand, the maximum value of the light bending parameter is given by the condition \(k^{r}=0\), where the position vector of orbiting stars with respect to the black hole center is approximately orthogonal to the line of sight. Thus, we substitute \(k^{t}\) and \(k^{\varphi}\) from Eqs. (42)-(43) as well as the condition \(k^{r}=0\) in the photons' equation of motion
\[k^{\mu}k_{\mu}=0=g_{tt}k^{t}k^{t}+g_{rr}k^{r}k^{r}+g_{\varphi\varphi}k^{\varphi }k^{\varphi}+2g_{t\varphi}k^{t}k^{\varphi}, \tag{74}\]
to find the maximum value of the light bending parameter for the rotating metric (2) as follows
\[b_{(\pm)}=-\frac{g_{t\varphi}(\pm)\sqrt{g_{t\varphi}^{2}-g_{tt}g_{\varphi\varphi }}}{g_{tt}}, \tag{75}\]
where in terms of the KdS black hole parameters, we have
\[b_{(\pm)} = \frac{1}{r\left[1-\frac{\Lambda}{3}\left(r^{2}+a^{2}\right) \right]-2M}\times \tag{76}\] \[\left[-2Ma-\frac{\Lambda}{3}ar\left(r^{2}+a^{2}\right)\right.\] \[\left.\left(\pm\right)r\sqrt{r^{2}+a^{2}-2Mr-\frac{\Lambda r^{2} }{3}\left(r^{2}+a^{2}\right)}\right]\]
for equatorial circular orbits. In this formula, the sign of \(b\) denotes the redshifted and blueshifted photons when their source is co-rotating with respect to the black
hole angular momentum, and vice versa if it is counter-rotating. In other words, in the frequency shift formulas (like Eq. (72)), the minus sign enclosed in parentheses corresponds to the redshifted photons, whereas the plus sign indicates blueshifted ones.
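Equation (76) for the maximal light bending parameter is straightforward to code; the sketch below (illustrative only) returns \(b_{(\pm)}\) at a given equatorial radius, and in the Schwarzschild limit \(a=\Lambda=0\) it reduces to the familiar \(b=r^{3/2}/\sqrt{r-2M}\).

```python
import math

def bending_parameter(r, M, a, Lam, sign=+1):
    """Maximal light bending parameter b_(pm) of Eq. (76), equatorial plane."""
    den = r*(1 - Lam*(r**2 + a**2)/3) - 2*M
    num = (-2*M*a - Lam*a*r*(r**2 + a**2)/3
           + sign*r*math.sqrt(r**2 + a**2 - 2*M*r - Lam*r**2*(r**2 + a**2)/3))
    return num/den

# Schwarzschild check at r = 10, M = 1: b = 10**1.5/sqrt(8)
print(bending_parameter(10.0, 1.0, 0.0, 0.0), 10**1.5/math.sqrt(8))
```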
Now, we can substitute \(g_{d}\), \(U_{e}^{t}\), \(U_{e}^{\varphi}\), \(U_{d}^{t}\), \(U_{d}^{r}\), \(k_{d}^{r}/E_{\gamma}\), and \(b_{e_{(\mp)}}\), respectively, from Eqs. (3), (56), (57), (70), (71), (73), and (76) into (72) to find the final expressions for the redshift \(z_{{}_{KdS_{1}}}\) and blueshift \(z_{{}_{KdS_{2}}}\) of the KdS spacetime as follows
\[1+z_{{}_{KdS_{1}}} = \frac{r_{e}^{\frac{3}{2}}\pm a\left(M-\frac{\Lambda r_{e}^{3}}{3} \right)^{\frac{1}{2}}\pm b_{e_{-}}\,\left(M-\frac{\Lambda r_{e}^{3}}{3}\right) ^{\frac{1}{2}}}{\Gamma\mathcal{X}_{\pm}}\,, \tag{77}\] \[1+z_{{}_{KdS_{2}}} = 1+z_{{}_{KdS_{1}}}\left\{b_{e_{-}}\to b_{e_{+}}\right\}, \tag{78}\]
with
\[\Gamma = \frac{1}{r_{d}^{2}\Delta_{d}\Xi}\left\{\left[\left(a^{2}+r_{d}^{ 2}\right)^{2}-a^{2}\Delta_{d}\right]E_{d}\Xi\right. \tag{79}\] \[\left.-\sqrt{\left(a^{2}+r_{d}^{2}\right)^{2}E_{d}^{2}-\Delta_{d }\left(r_{d}^{2}+a^{2}E_{d}^{2}\right)}\times\right.\] \[\left.\sqrt{\left[\left(a^{2}+r_{d}^{2}\right)-ab_{e_{-}}\right]^{2 }-\Delta_{d}(b_{e_{-}}-a)^{2}}\right\},\]
where \(b_{e_{-}}=\left.b_{(-)}\right|_{r=r_{e}}\), \(b_{e_{+}}=\left.b_{(+)}\right|_{r=r_{e}}\), and the upper (lower) sign refers to a co- (counter-) rotating emitter. Note that the explicit expressions of \(z_{{}_{KdS_{1,2}}}\) in terms of the black hole parameters have cumbersome forms, but are not hard to be found since it is a matter of substituting \(\mathcal{X}_{\pm}\), \(\Delta_{d}\), \(\Xi\), \(b_{e_{-}}\), and \(E_{d}\) in the above-mentioned equations.
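For completeness, \(\Gamma\) of Eq. (79) and the redshift (77) can be assembled as in the following sketch, which is an illustrative transcription only: \(\Delta_{d}\) and \(\Xi\) are again taken in their assumed standard forms, while \(E_{d}\), \(b_{e}\), and \(\mathcal{X}_{\pm}\) are supplied from the pieces defined above.

```python
import math

def Delta_r(r, M, a, Lam):
    # assumed standard form, cf. Eq. (6)
    return (r**2 + a**2)*(1 - Lam*r**2/3) - 2*M*r

def Gamma(r_d, M, a, Lam, E_d, b_e):
    """Denominator factor of Eqs. (77)-(79)."""
    Dd, Xi = Delta_r(r_d, M, a, Lam), 1 + Lam*a**2/3
    term1 = ((a**2 + r_d**2)**2 - a**2*Dd)*E_d*Xi
    term2 = math.sqrt((a**2 + r_d**2)**2*E_d**2 - Dd*(r_d**2 + a**2*E_d**2))
    term3 = math.sqrt(((a**2 + r_d**2) - a*b_e)**2 - Dd*(b_e - a)**2)
    return (term1 - term2*term3)/(r_d**2*Dd*Xi)

def one_plus_z_kds(r_e, r_d, M, a, Lam, E_d, b_e, X, corotating=True):
    """Eq. (77), with X = X_pm from Eq. (58) and b_e = b_(-) or b_(+) evaluated at r_e."""
    sgn  = +1 if corotating else -1
    root = math.sqrt(M - Lam*r_e**3/3)
    return (r_e**1.5 + sgn*a*root + sgn*b_e*root)/(Gamma(r_d, M, a, Lam, E_d, b_e)*X)
```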
The general behavior of \(z_{{}_{KdS_{1,2}}}\) versus the detector radius \(r_{d}\) for different values of the cosmological constant is illustrated in Fig. 4. Interestingly, one can see that the shift in frequency of photons increases as the cosmological constant \(\Lambda\) increases. On the other hand, the farther the detector is from the source, the higher shift in frequency it observes. This is because the amount of dark energy between the emitter and detector increases by increasing the distance leading to larger changes in the frequency. This is what we expected from the repulsive nature of the cosmological constant, and in this study, the change in the frequency shift due to this effect is quantified through Eqs. (77)-(78). This fact leads one to extract the Hubble law in its original form from \(z_{{}_{KdS_{1,2}}}\) by taking into account some physically motivated approximations as we shall show below.
It is worth mentioning that for real astrophysical systems consisting of supermassive black holes orbited by an accretion disk containing photon sources in the form of water vapor clouds (the so-called megamasers), usually, the emitter radius is at sub-parsec scale (\(r_{e}<1pc\)) while the detector radius is at tens of mega-parsec scale to be within the Hubble flow (\(r_{d}>30Mpc\)). On the other hand, the mass and angular momentum of the black holes are of the event horizon radius order \(M,a\sim r_{+}\), and the cosmological constant is of the order \(\Lambda\sim 10^{-52}m^{-2}\)[20]. Therefore, for a configuration including a supermassive black hole of the order of \(10^{6}\) solar masses in the Hubble flow, we have \(r_{+}\sim 10^{10}m\), \(r_{e}<10^{6}r_{+}\), \(r_{d}>10^{13}r_{+}\), and \(\Lambda\sim 10^{-32}r_{+}^{-2}\) which leads to the following facts
\[\Lambda a^{2} \sim 10^{-32};\ \Lambda r_{e}^{2}<10^{-20};\ \frac{M}{r_{d}}<10^{-13}, \tag{80}\] \[\frac{M}{r_{e}} > 10^{-6};\ \Lambda r_{d}^{2}>10^{-6}, \tag{81}\]
hence, we can ignore the negligible terms \(\left\{\Lambda a^{2},\Lambda r_{e}^{2},M/r_{d}\right\}\) and keep dominant terms \(\left\{M/r_{e},\Lambda r_{d}^{2}\right\}\) for tracking the general relativistic effects. As the next stage, we expand Eqs. (77)-(78) for \(\left\{\Lambda r_{e}^{2}\to 0,\Lambda a^{2}\to 0,M/r_{d}\to 0,\Lambda r_{d}^{2}\to 0\right\}\) and keep the first dominant term in \(\Lambda r_{d}^{2}\), to get
\[1+z_{{}_{KdS_{1,2}}}\approx\left(1+z_{{}_{Kerr_{1,2}}}\right)\left(1+z_{\Lambda }\right), \tag{82}\]
where \(z_{\Lambda}=\sqrt{\Lambda/3}r_{d}\) is the contribution of the cosmological constant in the redshift, and the factors \(1+z_{{}_{Kerr_{1,2}}}\) have the following explicit forms
\[1+z_{{}_{Kerr_{1}}} = \frac{\left(1-2\tilde{M}\right)\pm\tilde{M}^{1/2}\left(\tilde{a} +\sqrt{\tilde{\Delta}_{Kerr}}\right)}{\left(1-2\tilde{M}\right)\sqrt{1-3 \tilde{M}\pm 2\,\tilde{a}\,\tilde{M}^{1/2}}}, \tag{83}\]
\[1+z_{{}_{Kerr_{2}}} = \frac{\left(1-2\tilde{M}\right)\pm\tilde{M}^{1/2}\left(\tilde{a} -\sqrt{\tilde{\Delta}_{Kerr}}\right)}{\left(1-2\tilde{M}\right)\sqrt{1-3 \tilde{M}\pm 2\,\tilde{a}\,\tilde{M}^{1/2}}}, \tag{84}\]
that are the frequency shifts in the standard Kerr spacetime found in [5] with \(\tilde{M}=M/r_{e}\), \(\tilde{a}=a/r_{e}\), and \(\tilde{\Delta}_{Kerr}=1+\tilde{a}^{2}-2\tilde{M}\).
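The Kerr factors (83)-(84) depend only on the two dimensionless ratios \(\tilde{M}\) and \(\tilde{a}\); a direct transcription (illustrative only) reads:

```python
import math

def one_plus_z_kerr(Mt, at, blue=False, corotating=True):
    """Eqs. (83)-(84): 1+z_Kerr1 (redshifted side) or 1+z_Kerr2 (blueshifted side)."""
    sgn   = +1 if corotating else -1
    DKerr = 1 + at**2 - 2*Mt
    side  = -1 if blue else +1            # b_(-) vs b_(+) side of the line of sight
    num   = (1 - 2*Mt) + sgn*math.sqrt(Mt)*(at + side*math.sqrt(DKerr))
    den   = (1 - 2*Mt)*math.sqrt(1 - 3*Mt + sgn*2*at*math.sqrt(Mt))
    return num/den

# Schwarzschild check against Eq. (89): at = 0
Mt = 0.05
print(one_plus_z_kerr(Mt, 0.0), (1 + math.sqrt(Mt/(1 - 2*Mt)))/math.sqrt(1 - 3*Mt))
```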
The Hubble constant \(H_{0}\) is related to the cosmological constant with [20]
\[H_{0}=\sqrt{\frac{\Lambda}{3\Omega_{\Lambda}}}, \tag{85}\]
where \(\Omega_{\Lambda}\) is the cosmological constant density parameter. For the special case of \(\Omega_{\Lambda}=1\) (the Universe filled with dark energy, i.e. in the absence of matter), we recover the Hubble law from \(z_{\Lambda}=\sqrt{\Lambda/3}\,r_{d}\) as
\[z_{\Lambda}=H_{0}\,r_{d}, \tag{86}\]
in which \(z_{\Lambda}\) represents the velocity of the host galaxy going away from the detector and \(r_{d}\) is the distance between the black hole and the observer. By introducing the relation (85) in Eq. (82), we can obtain the frequency shift in the KdS background in terms of the Kerr black hole parameters and the Hubble constant as below
\[1+z_{{}_{KdS_{1,2}}}\approx\left(1+z_{{}_{Kerr_{1,2}}}\right)\left(1+\sqrt{ \Omega_{\Lambda}}\,H_{0}\,r_{d}\right), \tag{87}\]
an expression that can be employed to obtain \(H_{0}\) as well as black hole parameters. Therefore, we extracted the Hubble law by considering the frequency shift of stars orbiting around Kerr black holes in asymptotically dS spacetime detected by a far-away observer.
Note that the factorized form (87) does not constitute a simple multiplication of \(1+z_{{}_{Kerr_{1,2}}}\) and \(1+z_{\Lambda}\) factors introduced by hand: it follows directly from expanding the full general relativistic expressions (77)-(78) under the physically motivated approximations summarized in (80).
For the static Schwarzschild black holes, the redshift formula (87) reduces to (for \(\Omega_{\Lambda}=1\))
\[1+z_{{}_{SdS_{1,2}}}\approx\left(1+z_{{}_{Schw_{1,2}}}\right)\left(1+H_{0}r_{d} \right), \tag{88}\]
where \(z_{{}_{SdS_{1,2}}}\) is the frequency shift of SdS black holes and \(z_{{}_{Schw_{1,2}}}\) is the frequency shift of the Schwarzschild black holes that can be determined by taking the limit \(\tilde{a}\to 0\) in (83)-(84) as follows
\[1+z_{{}_{Schw_{1,2}}}=\frac{1}{\sqrt{1-3\tilde{M}}}\left(1\pm\sqrt{\frac{ \tilde{M}}{1-2\tilde{M}}}\right), \tag{89}\]
in which the upper (lower) sign refers to redshifted (blueshifted) particles [36]. Now, one can use Eqs. (88) and (89) to get
\[RB=\frac{\left(1+H_{0}r_{d}\right)^{2}}{1-2\tilde{M}}, \tag{90}\]
\[\frac{R}{B}=\frac{1-\tilde{M}+2\sqrt{\tilde{M}\left(1-2\tilde{M}\right)}}{1-3 \tilde{M}}, \tag{91}\]
where \(R=1+z_{{}_{SdS_{1}}}\) and \(B=1+z_{{}_{SdS_{2}}}\). As the next step, we solve the first equation (90) to obtain the Schwarzschild black hole mass as below
\[\tilde{M}=\frac{RB-\left(1+H_{0}r_{d}\right)^{2}}{2RB}, \tag{92}\]
in terms of the frequency shifts and the \(H_{0}r_{d}\) product. It is worth noticing that when the \(H_{0}\) constant vanishes, we recover the mass formula as a function of \(R\) and \(B\) obtained in [5]. In order to find the dependence of the \(H_{0}r_{d}\) term on the frequency shifts, we replace (92) in Eq. (91) and solve for \(H_{0}\) as
\[H_{0}=\frac{1}{r_{d}}\left(-1+\frac{\left(R+B\right)\sqrt{RB}}{\sqrt{3R^{2}+3 B^{2}-2RB}}\right), \tag{93}\]
which gives the Hubble constant in terms of the frequency shifts \(R\) and \(B\) of the massive geodesic particles on either side of the Schwarzschild black hole, as well as the detector distance to the black hole \(r_{d}\). Therefore, the \(H_{0}\,r_{d}\) product appearing in (93) can be used to express the mass relation (92) in terms of the redshift and blueshift only. Alternatively, from Eq. (91) it is straightforward to obtain the following expression
\[\tilde{M}=\frac{\left(R-B\right)^{2}}{3R^{2}+3B^{2}-2RB}, \tag{94}\]
for the mass parameter defined by purely observational quantities \(R\) and \(B\).
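Given a pair of measured shifts \(R\) and \(B\) and the distance \(r_{d}\), Eqs. (92)-(94) give the mass parameter and the Hubble constant directly; the minimal sketch below (with placeholder input values, not real data) also verifies that (92) and (94) agree once \(H_{0}r_{d}\) is taken from (93).

```python
import math

def mass_from_shifts(R, B):
    """M/r_e from Eq. (94), purely from the red- and blueshift factors."""
    return (R - B)**2 / (3*R**2 + 3*B**2 - 2*R*B)

def hubble_from_shifts(R, B, r_d):
    """H_0 from Eq. (93); r_d in the same (geometrized) units as 1/H_0."""
    return (1/r_d)*(-1 + (R + B)*math.sqrt(R*B)/math.sqrt(3*R**2 + 3*B**2 - 2*R*B))

def mass_from_shifts_and_H0(R, B, H0rd):
    """M/r_e from Eq. (92), using the product H_0 r_d."""
    return (R*B - (1 + H0rd)**2)/(2*R*B)

# consistency check with placeholder shifts
R, B, r_d = 1.30, 1.20, 1.0e4
H0rd = hubble_from_shifts(R, B, r_d)*r_d
print(mass_from_shifts(R, B), mass_from_shifts_and_H0(R, B, H0rd))
```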
For the special case \(\tilde{M}\ll 1\), we obtain \(R\approx B\) (see Eq. (91)). In this situation, the redshift \(z_{{}_{SdS_{1}}}\) and blueshift \(z_{{}_{SdS_{2}}}\) are almost equal, \(z_{{}_{SdS_{1}}}\approx z_{{}_{SdS_{2}}}\equiv z\). Thus, we recover the Hubble law for this case from the analytic expression (93) by taking the limit \(R\to B\) as follows
\[z=H_{0}\,r_{d}. \tag{95}\]
It is worthwhile to mention that finding a relation of the form (88) (or Eq. (87) for the rotating case) also has significant practical importance. For instance, in the case of accretion disks circularly orbiting supermassive black holes in the centers of AGNs, the recessional redshift of galaxies can be decomposed as [37]
\[1+z_{rec}=\left(1+z_{Cosm}\right)\left(1+z_{Boost}\right), \tag{96}\]
where \(z_{Cosm}\) is the cosmological redshift due to the accelerated expansion of the Universe and \(z_{Boost}\) is the peculiar redshift produced by local gravity effects (see [15; 16; 17; 18] for the case in which the geometry of the central object was described by the Schwarzschild line element). In this relation, since \(z_{Cosm}\) and \(z_{Boost}\) do not depend on the metric, the cosmological redshift and the peculiar redshift become degenerate and we can only obtain \(z_{rec}\), but not \(z_{Cosm}\) and \(z_{Boost}\) separately. On the contrary, since the dependence of \(z_{Cosm}\) on the metric is explicit in our approach, with \(z_{Cosm}=H_{0}r_{d}\) as derived in (88) (or, more completely, in Eqs. (77)-(78)), this fact can help to break the degeneracy between \(z_{Cosm}\) and \(z_{Boost}\), allowing us to estimate both of these frequency shifts separately.
## IV Discussion and final remarks
In this paper, we have considered the KdS solutions and analytically obtained the valid parameter space for KdS black holes. We have then expressed the frequency shift of photons emitted by massive geodesic particles, stars for instance, circularly orbiting KdS black holes in terms of the parameters of the spacetime, namely the black hole mass, angular momentum, and the cosmological constant. For this purpose, we have considered detectors in radial motion with respect to the emitter-black hole system and employed a general relativistic formalism that was briefly described in the text.
In addition, we have seen that the shift in frequency of photons increases with an increase in the cosmological constant as well as the detector distance to the emitter-black hole system that was compatible with the repulsive nature of the cosmological constant. Hence, this observation led us to extract the Hubble law from the original redshift formulas by taking into account some physically motivated approximations.
Moreover, we have found analytic expressions for the Schwarzschild black hole mass and the Hubble constant in terms of the observational frequency shifts of massive particles orbiting circularly this static spherically symmetric black hole. Interestingly, we have also shown that
the Hubble law arose naturally from the exact formula of the Hubble constant (93). The concise and elegant formulas that we have found allow us to extract the properties of spacetime characterized by the black hole mass and spin as well as the cosmological constant through measuring shifts in the frequency of photons.
Now, we finish our paper with a couple of suggestions for future work. It would be interesting to employ and generalize this work in some other directions. For instance, in this study, we were interested in emitters in the range \(r_{e}\in[r_{ISCO},r_{OSCO}]\) and far-away detectors within \(r_{d}\in(\bar{r},r_{c})\), describing black hole systems in the Hubble flow. However, this formalism can be generalized to circularly orbiting (or static) detectors as well, for possible local tests of the accelerated expansion of the Universe. On the other hand, the formula (88) can be employed to estimate the Schwarzschild black hole mass \(M\), the distance \(r_{d}\) to the black hole, and the Hubble constant \(H_{0}\) (or the cosmological constant \(\Lambda\)) by using accretion discs circularly orbiting supermassive black holes hosted at the core of AGNs with the help of Bayesian fitting methods. Our preliminary estimates of \(H_{0}\) based on observational data of galaxies within the Hubble flow show that this approach could be a powerful tool to obtain the Hubble constant alongside the black hole parameters. This investigation is currently under consideration.
Finally, we would like to stress that the \(H_{0}\) expression, that we obtained in (93) with the help of the KdS metric, represents a first step towards a more realistic parameterization of \(H_{0}\) in terms of observable quantities that also considers the matter content of the Universe, in consistency with the \(\Lambda\)-cold dark matter cosmological standard model. We are currently studying this problem and hope to report on it in the near future.
## Acknowledgements
All authors are grateful to FORDECYT-PRONACES-CONACYT for support under grant No. CF-MG-2558591; M.M. also acknowledges CONACYT for providing financial assistance through the postdoctoral grant No. 31155. A.H.-A. and U.N. thank SNI and PROMEP-SEP and were supported by grants VIEP-BUAP No. 122 and CIC-UMSNH, respectively. U.N. also acknowledges support under grant CF-140630.
|
2308.04677 | The dusty red supergiant progenitor and the local environment of the
Type II SN 2023ixf in M101 | As one of the closest supernovae (SNe) in the last decade, SN 2023ixf is an
unprecedented target to investigate the progenitor star that exploded. However,
there is still significant uncertainty in the reported progenitor properties.
In this work, we present a detailed study of the progenitor of SN 2023ixf with
two independent analyses. We first modelled its spectral energy distribution
(SED) based on Hubble Space Telescope optical, Spitzer mid-infrared (IR), and
ground-based near-IR data. We find that stellar pulsation and circumstellar
extinction have great impacts on SED fitting, and the result suggests a
relatively massive red supergiant (RSG) surrounded by C-rich dust with an
initial mass of 16.2--17.4 Msun. The corresponding rate of mass-loss occurring
at least 3 years before the SN explosion is about $2 \times 10^{-4}
M_\odot$yr$^{-1}$. We also derived the star formation history of the SN
environment based on resolved stellar populations, and the most recent
star-forming epoch corresponds to a progenitor initial mass of 17--19 Msun, in
agreement with that from our SED fitting. Therefore, we conclude that the
progenitor of SN 2023ixf is close to the high-mass end for Type II SN
progenitors. | Ze-Xi Niu, Ning-Chen Sun, Justyn R. Maund, Yu Zhang, Rui-Ning Zhao, Ji-Feng Liu | 2023-08-09T03:13:34Z | http://arxiv.org/abs/2308.04677v2 | # The dusty red supergiant progenitor and the local environment of the Type II SN 2023ixf in M101
###### Abstract
As one of the closest supernovae (SNe) in the last decade, SN 2023ixf is an unprecedented target to investigate the progenitor star that exploded. However, there is still significant uncertainty in the reported progenitor properties. In this work, we present a detailed study of SN 2023ixf's progenitor with two independent analyses. We first modelled its spectral energy distribution (SED) based on Hubble Space Telescope optical, Spitzer mid-infrared (IR), and ground-based near-IR data. We find that stellar pulsation and circumstellar extinction have great impacts on SED fitting, and the result suggests a relatively massive red supergiant (RSG) surrounded by C-rich dust with an initial mass of 16.2-17.4 \(M_{\odot}\). The corresponding rate of mass-loss occurring at least 3 years before the SN explosion is about \(2\times 10^{-4}M_{\odot}\)yr\({}^{-1}\). We also derived the star formation history of the SN environment based on resolved stellar populations, and the most recent star-forming epoch corresponds to a progenitor initial mass of 17-19 \(M_{\odot}\), in agreement with that from our SED fitting. Therefore, we conclude that the progenitor of SN 2023ixf is close to the high-mass end for Type II SN progenitors.
## 1 Introduction
Core-collapse supernovae (SNe) are the spectacular explosions of dying massive (\(>\)8 \(M_{\odot}\)) stars. It is a major goal, and currently a major difficulty, to determine the progenitor stars of different SN types. It has been confirmed that the Type II-P SNe arise from the explosion of red supergiants (RSGs) with almost 20 directly probed progenitors (e.g. SN 2003gd, Smartt et al., 2004; Maund and Smartt, 2009; SN 2005cs, Maund et al., 2005; SN 2017eaw, Van Dyk et al., 2019; Rui et al., 2019). However, none of them appear more massive than \(\sim\)16-18 \(M_{\odot}\), which is significantly lower than the theoretically predicted upper mass limit of 25-30 \(M_{\odot}\)(i.e. the "RSG problem"; Smartt, 2009, 2015; although see also Davies and Beasor, 2018). This could be due to the very uncertain circumstellar extinction, which may lead to underestimation of the progenitor masses (Walmswell and Eldridge, 2012); it is also possible that the more massive stars may explode as other types of SNe (e.g. Type II-L or Type IIn; Groh et al., 2013) or even collapse directly into a black hole (Sukhbold et al., 2016).
SN 2023ixf is a Type II SN that recently exploded in the nearby galaxy M101 (i.e. the Pinwheel Galaxy; Itagaki, 2023). It serves as an unprecedented example for studying the properties of the progenitor. Soon after the explosion, the stellar variability of the progenitor in the infrared (IR) band was identified by Szalai and Dyk (2023). Simultaneously, several groups reported the detection of a progenitor candidate for SN 2023ixf on pre-explosion images (e.g. Pledger and Shara, 2023; Kilpatrick et al., 2023; Jencson et al., 2023; Soraisam et al., 2023). Their results are all consistent with a RSG progenitor enshrouded by a dusty envelope. The presence of dense circumstellar material (CSM) around the progenitor is also indicated by the fast rising luminosity of the SN and its early spectra with prominent and narrow nebular emission lines (Yamanaka et al., 2023; Smith et al., 2023; Vasylyev et al., 2023; Teja et al., 2023; Jacobson et al., 2023; Hiramatsu et al., 2023; Bostroem et al., 2023; Hosseinzadeh et al., 2023). However, the inferred
progenitor mass is still under debate and can range from 8-10 \(M_{\odot}\) up to \(\sim\)20 \(M_{\odot}\). This large uncertainty presents a significant challenge to further studies of SN 2023ixf.
In this paper, we carry out a detailed analysis of the progenitor of SN 2023ixf in order to derive its accurate properties. We use two different techniques, based on the direct progenitor detection and the SN environment, to assess the reliability of our results against possible systematic errors. Throughout this paper, we adopt a distance of 6.85 \(\pm\) 0.15 Mpc (Riess et al., 2022), a Milky Way extinction of \(E(B-V)_{\rm MW}=0.008\) mag (Schlafly and Finkbeiner, 2011), a host galaxy extinction of \(E(B-V)_{\rm host}=0.033\) mag (Lundquist et al., 2023; Smith et al., 2023; Jacobson-Galan et al., 2023), and a standard extinction law with \(R_{V}=3.1\) (Cardelli et al., 1989). All magnitudes are reported in the Vega system unless otherwise specified.
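For reference, the adopted distance and reddening translate into a distance modulus and a total broad-band extinction as in the short sketch below (a simple illustration using only the values quoted above; band-specific extinctions would additionally require the Cardelli-law coefficients for each filter).

```python
import math

D_MPC    = 6.85     # adopted distance (Riess et al. 2022)
EBV_MW   = 0.008    # Milky Way reddening (mag)
EBV_HOST = 0.033    # host-galaxy reddening (mag)
R_V      = 3.1      # standard extinction law

mu  = 5*math.log10(D_MPC*1e6) - 5      # distance modulus: ~29.18 mag
A_V = R_V*(EBV_MW + EBV_HOST)          # total V-band extinction: ~0.13 mag
print(mu, A_V)
```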
## 2 Data and Photometry
### HST optical data
The site of SN 2023ixf was observed by Hubble Space Telescope (HST) with the Wide Field and Planetary Camera 2 (WFPC2), the Wide Field Camera 3 (WFC3), and the Advanced Camera for Surveys (ACS) before explosion. We retrieved the calibrated images from the Mikulski Archive for Space Telescopes1 and reprocessed them with the astrodrizzle package (Gonzaga et al., 2012) for better image alignment and cosmic ray removal. On the ACS F814W image, there are two objects within 0.2 arcsec from the reported SN position and Kilpatrick et al. (2023) identified the relatively brighter one as the SN progenitor (Fig. 1) based on differential astrometry with post-explosion images. We performed point-spread-function (PSF) photometry with the dolphot package (Dolphin, 2000) and detected the progenitor significantly in the F658N, F675W, and F814W bands. The measured magnitudes, which are listed in Table 1, are slightly brighter than those reported by Kilpatrick et al. (2023) within 1-4\(\sigma\) uncertainties (note their magnitudes are in the AB system). The difference could be due to our different parameters in dolphot photometry, and this difference does not have any significant effect in the following analysis. The progenitor was not detected in any other bands, and we estimated the detection limits with artificial star tests. We also used the ACS F435W, F555W, and F814W photometry to perform an environmental analysis of the resolved stellar populations around SN 2023ixf (Section 4.2).
Footnote 1: [https://archive.stsci.edu](https://archive.stsci.edu)
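The detection limits quoted here and the 50 per cent limits used later (Section 4.2) come from such artificial star tests: synthetic PSFs of known magnitude are injected, the photometry is re-run, and the recovered fraction is tracked as a function of magnitude. The sketch below shows only that bookkeeping step; the injection and recovery are stand-ins for actual dolphot runs, and the bin width and threshold are illustrative choices rather than the exact values used in this work.

```python
import numpy as np

def completeness_limit(m_injected, recovered, frac=0.5, bin_width=0.25):
    """Magnitude at which the recovered fraction of artificial stars
    first drops below `frac` (e.g. 0.5 for a 50 per cent limit).

    m_injected : injected magnitudes of the artificial stars
    recovered  : boolean flags from re-running the PSF photometry
                 (stand-in for an actual dolphot recovery run)
    """
    m_injected = np.asarray(m_injected, float)
    recovered = np.asarray(recovered, float)
    edges = np.arange(m_injected.min(), m_injected.max() + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(m_injected, edges) - 1
    completeness = np.array([
        recovered[idx == i].mean() if np.any(idx == i) else np.nan
        for i in range(len(centers))
    ])
    below = np.where(completeness < frac)[0]
    return centers[below[0]] if len(below) else np.nan
```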
### Spitzer mid-IR data
The site of SN 2023ixf was observed by Spitzer/IRAC at a total of 31 epochs during its cold mission in 2004 and during the warm mission in 2012-2019. We retrieved the post-basic-calibrated data (PBCD) images from the Spitzer Heritage Archive2, and a pre-explosion source is clearly visible at the SN position in the [3.6] and [4.5] bands (Fig. 1). We used the dophot package (Schechter et al., 1993) to perform PSF photometry at individual epochs, the results of which are listed in Table 2 and displayed in Fig. 2. Prominent variability, with a semi-amplitude of \(\sim\)0.25 mag, can be seen in both the [3.6] and [4.5] bands without obvious color variation. This variability is significant compared with the photometric uncertainties and is not due to a possible zero-point shift between different epochs (using non-variable field stars as a reference, we found the zero-point shift, if any, to be negligible compared with the progenitor's variability). This variability was also reported in Kilpatrick et al. (2023), Jencson et al. (2023), and Soraisam et al. (2023), who all found a pulsational period of \(\sim\)1000 days. We derived phase-weighted average magnitudes of [3.6] = 17.78 \(\pm\) 0.19 mag and [4.5] = 17.50 \(\pm\) 0.20 mag. On the [5.8] and [8.0] images, however, no counterpart can be significantly detected at the SN position down to 3\(\sigma\) detection limits of 14.95 and 14.38 mag, respectively.
Footnote 2: [https://sha.ipac.caltech.edu](https://sha.ipac.caltech.edu)
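For reference, a phase-weighted average weights each epoch by the fraction of the pulsation cycle it samples, so that the sparse and uneven IRAC cadence does not bias the mean toward densely covered phases. The function below is a minimal sketch under two assumptions that are ours rather than a statement of the actual pipeline: the averaging is done in flux space, and a single fixed period (here the \(\sim\)1000-day pulsation period) is adopted.

```python
import numpy as np

def phase_weighted_mean_mag(mjd, mag, period, t0=0.0):
    """Phase-weighted average magnitude of an unevenly sampled light curve.

    Each epoch is weighted by half the phase gap to its neighbours (wrapping
    around the cycle), and the mean is taken in flux before converting back.
    """
    mjd, mag = np.asarray(mjd, float), np.asarray(mag, float)
    order = np.argsort(((mjd - t0) / period) % 1.0)
    phase = (((mjd - t0) / period) % 1.0)[order]
    flux = 10.0 ** (-0.4 * mag[order])
    prev_gap = np.diff(np.r_[phase[-1] - 1.0, phase])
    next_gap = np.diff(np.r_[phase, phase[0] + 1.0])
    weights = 0.5 * (prev_gap + next_gap)          # sums to 1 over the cycle
    return -2.5 * np.log10(np.sum(weights * flux) / np.sum(weights))

# e.g. phase_weighted_mean_mag(mjd_36, mag_36, period=1000.0) for the [3.6] band,
# where mjd_36 and mag_36 are the epochs and magnitudes from Table 2.
```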
### Ground-based near-IR data
During the writing of this paper, Soraisam et al. (2023) reported \(JHK\)-band light curves acquired with the Gemini Near-IR Imager (NIRI) and with the Wide Field Camera (WFCAM) on the United Kingdom Infrared Telescope (UKIRT). Their photometry is roughly consistent with that derived by Jencson et al.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Epoch & Program & Instrument & Magnitude \\ (MJD) & ID & and filter & \\ \hline
52593.99 & 9490\({}^{\rm a}\) & ACS/F435W & \(>27.3\) \\
52594.01 & 9490 & ACS/F555W & \(>27.0\) \\
53045.01 & 9720\({}^{\rm b}\) & ACS/F658N & 24.75 (0.20) \\
51261.04 & 6829\({}^{\rm c}\) & WFPC2/F675W & 25.36 (0.19) \\
52594.02 & 9490 & ACS/F814W & 24.34 (0.05) \\ \hline \multicolumn{4}{c}{PIs: (a) K. Kuntz; (b) P. Chandar; (c) Y.-H. Chu.} \\ \end{tabular}
\end{table}
Table 1: HST observations used in this work and photometry of SN 2023ixf’s progenitor. The other HST observations are not listed since their detection limits are not very constraining for the progenitor’s SED (Section 3) or because they are not used in the environmental analysis (Section 4.2).
Figure 1: (a, b) HST and (c, d) Spitzer images of the site of SN 2023ixf. The SN position is shown by the cross-hair and the images have a dimension of 25 arcsec \(\times\) 25 arcsec. In the F658N narrow-band image (a), a few H ii regions are visible to the west and to the north of SN 2023ixf, and the grey-shaded stripe shows the slit position in one of our long-slit spectroscopic observations of the SN (Zhang et al. in preparation); we extracted a spectrum from the red-outlined area in order to estimate the metallicity of the ionized gas (Section 4.1). In (a) and (b), the circle is centered at the SN position with a radius of 150 pc, within which we selected stars for an environmental analysis (Section 4.2). (e-h) Stamps of HST images in the F435W, F555W, F675W, and F814W bands, all with a dimension of 3 arcsec \(\times\) 3 arcsec. All panels are aligned North up and East to the left.
(2023) and Kilpatrick et al. (2023) based on observations with NIRI, the NEWFIRM infrared camera, and/or the MMT and Magellan Infrared Spectrograph (MMIRS). In this paper, we adopted the \(JHK\)-band photometry of Soraisam et al. (2023) and computed phase-weighted averages of \(J\) = 20.67 \(\pm\) 0.19 mag, \(H\) = 19.55 \(\pm\) 0.13 mag, and \(K\) = 18.69 \(\pm\) 0.08 mag using their period and amplitudes, since their data have better phase coverage.
## 3 SED Analysis
### Method
RSGs could experience significant mass loss and enshroud themselves within dusty envelopes (de Jager et al., 1988; van Loon et al., 2005; Massey et al., 2005; Beasor et al., 2020). The uncertain circumstellar extinction often leads to an underestimation of the progenitor mass based on detections in only one or a few filters (e.g. Van Dyk et al., 2019). For the progenitor of SN 2023ixf, however, the extensive optical, near-IR and mid-IR data allow us to perform a detailed modeling of its spectral energy distribution (SED).
The intrinsic SED of the SN progenitor was synthesized by the marcs model atmospheres (Gustafsson et al., 2008), assuming a microturbulent velocity of 5 km s\({}^{-1}\), a surface gravity of log(\(g\)) = 0 dex, and a metallicity of [Fe/H] = \(-\)0.25 dex (close to our estimation in Section 4.1). For the effective temperature, we tried three different values of \(T_{\rm eff}\) = 3400, 3700, and 4000 K, which are typical of RSGs when they explode (e.g. Smartt, 2015, although see Davies et al., 2013). The radiation transfer through the dusty envelope was then solved with the dusty code. We used a \(\rho\propto r^{-2}\) wind-like radial density profile and the default relative shell thickness of 1000 times the inner radius. The standard Mathis, Rumpl & Nordsieck (MRN; Mathis et al., 1977) grain size distribution was adopted, and we considered two different types of dust grains made of either pure graphite or pure silicate (Draine & Lee, 1984), representing a C-rich or O-rich chemical composition, respectively. The progenitor's bolometric luminosity, \(L_{\rm bol}\), the dust temperature at the inner shell boundary, \(T_{\rm in}\)
\begin{table}
\begin{tabular}{c c c} \hline \hline Epoch & [3.6] & [4.5] \\ (MJD) & (mag) & (mag) \\ \hline
[MISSING_PAGE_POST]
\hline phase-weighted & & \\ average & 17.78 (0.19) & 17.50 (0.20) \\ \hline \end{tabular}
\end{table}
Table 2: Spitzer/IRAC photometry of the progenitor of SN 2023ixf in the [3.6] and [4.5] bands.
Figure 2: Light and color curves of the progenitor of SN 2023ixf. The dotted horizontal lines are the phase-weighted averages and the light-shaded regions reflect the typical photometric uncertainties.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \(T_{\rm eff}\) & \(\log(L/L_{\odot})\) & \(T_{\rm in}\) & \(\tau_{V}\) & \(E(B-V)_{\rm CSM}\) & Comment \\ & (K) & & (K) & & (mag) & \\ \hline C-rich & 3400 & \(4.97^{+0.09}_{-0.08}\) & \(514^{+96}_{-70}\) & \(5.37^{+0.64}_{-0.59}\) & 1.39 & \\ & 3700 & \(5.11^{+0.08}_{-0.08}\) & \(433^{+55}_{-50}\) & \(6.39^{+0.58}_{-0.59}\) & 1.64 & (a) \\ & 4000 & \(5.22^{+0.07}_{-0.08}\) & \(387^{+46}_{-39}\) & \(7.16^{+0.54}_{-0.53}\) & 1.85 & \\ \hline O-rich & 3400 & \(4.95^{+0.06}_{-0.05}\) & \(1025^{+317}_{-283}\) & \(12.13^{+0.86}_{-0.93}\) & 1.88 & \\ & 3700 & \(5.07^{+0.08}_{-0.06}\) & \(715^{+327}_{-205}\) & \(12.79^{+0.94}_{-0.93}\) & 1.98 & (b) \\ & 4000 & \(5.18^{+0.09}_{-0.05}\) & \(553^{+272}_{-298}\) & \(13.15^{+1.01}_{-0.91}\) & 2.04 & (b) \\ \hline \end{tabular} Comments: (a) This is the most favored effective temperature since the other temperatures are either too cold or too warm compared with that of a RSG just before the explosion (Fig. 3c). (b) These two models have prominent silicate bumps that almost exceed the [8.0] detection limits.
\end{table}
Table 3: Best-fitting and derived parameters of the SED for the progenitor of SN 2023ixf.
Figure 3: (a) Observed SED (black data points) and the best-fitting model with C-rich dust (colored solid lines) for the progenitor of SN 2023ixf. The vertical error bars reflect their 3\(\sigma\) photometric uncertainties and the horizontal error bars are the root-mean-square bandwidths. Only 3\(\sigma\) detection limits in Spitzer/IRAC [5.8] and [8.0] bands are displayed (inverted triangles), and those in the other bands are not very constraining on the progenitor’s SED. The 2\(\sigma\) detection limit in [8.0] band is shown in gray. The dashed, dotted, and dot-dashed lines correspond to the attenuated RSG radiation, dust emission, and dust-scattered radiation, respectively. The yellow, red, and blue colors correspond to effective temperatures of \(T_{\rm eff}\) = 3400, 3700, and 4000 K for the RSG progenitor. Detailed model parameters are listed in Table 3. (b) Same as (a) but for the O-rich dust model. Notice that the prominent silicate bumps of \(T_{\rm eff}\) = 3700 and 4000 K models almost exceed the [8.0] detection limit. (c) The progenitor of SN 2023ixf on the Hertzsprung-Russell diagram. The diamonds and stars demonstrate models with O-rich and C-rich dust, respectively. They are colored in the same way as (a-b). Overlaid black/grey lines are the parsec stellar evolutionary tracks for different initial masses.
(which is required to be lower than 1500 K such that the dust can survive in the envelope; Pozzo et al., 2004; Sarangi et al., 2018), and the V-band optical depth of the envelope, \(\tau_{V}\), were left as free parameters to be fitted from the data.
We used the Markov Chain Monte Carlo method to search for models that match the progenitor detections in the F675W, F814W, J, H, K, [3.6], and [4.5] filters. We used the phase-weighted average magnitudes for the near- and mid-IR filters; for the optical F675W and F814W filters, however, data at only one epoch are available and it is difficult to estimate their phase-weighted average magnitudes. Therefore, in these two filters, we conservatively allowed the model magnitudes to vary within 0.5 mag from the observed ones, accounting for their possible variability due to stellar pulsation (see Section 5 for a more detailed discussion).
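A minimal sketch of this fitting setup is given below. The Gaussian terms for the phase-weighted near- and mid-IR magnitudes, the \(\pm\)0.5 mag tolerance window for F675W and F814W, and the requirement \(T_{\rm in}<1500\) K follow the description above, while `model_mags` is a hypothetical wrapper around the marcs+dusty grid, the prior ranges are illustrative, and the sampler settings are not those actually used.

```python
import numpy as np
import emcee

IR_BANDS = ["J", "H", "K", "[3.6]", "[4.5]"]   # phase-weighted averages
OPT_BANDS = ["F675W", "F814W"]                 # single-epoch HST detections
TOL = 0.5                                      # allowed pulsation variability (mag)

def log_prob(theta, obs, err, model_mags):
    log_lbol, t_in, tau_v = theta
    # illustrative flat priors; T_in must stay below the dust-survival limit
    if not (4.0 < log_lbol < 6.0 and 100.0 < t_in < 1500.0 and 0.0 < tau_v < 30.0):
        return -np.inf
    model = model_mags(theta)                  # hypothetical marcs+dusty wrapper
    lnl = 0.0
    for band in IR_BANDS:                      # Gaussian likelihood terms
        lnl += -0.5 * ((obs[band] - model[band]) / err[band]) ** 2
    for band in OPT_BANDS:                     # flat +/-0.5 mag tolerance window
        if abs(obs[band] - model[band]) > TOL:
            return -np.inf
    return lnl

def run_fit(obs, err, model_mags, nwalkers=32, nsteps=5000):
    p0 = np.array([5.0, 600.0, 6.0]) + 1e-3 * np.random.randn(nwalkers, 3)
    sampler = emcee.EnsembleSampler(nwalkers, 3, log_prob,
                                    args=(obs, err, model_mags))
    sampler.run_mcmc(p0, nsteps, progress=True)
    return sampler.get_chain(discard=nsteps // 2, flat=True)
```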
### Results
The best-fitting model SEDs with C-rich/O-rich dust and with different progenitor effective temperatures are displayed in Fig. 3 (a, b), and the detailed parameters are listed in Table 3. The circumstellar extinction \(E(B-V)_{\rm CSM}\) is also computed according to the equation in Kochanek et al. (2012) and convolved with the passbands. Note, however, that the O-rich models with \(T_{\rm eff}\) = 3700 and 4000 K have prominent silicate bumps that almost exceed the 3\(\sigma\) [8.0] detection limit and significantly exceed the limit at the 2\(\sigma\) level (15.0 mag). These two models are therefore disfavored by the observations at the \(\sim\)95% confidence level.
The position of SN 2023ixf's progenitor on the Hertzsprung-Russell diagram is shown in Fig. 3c in comparison with the parsec v1.2S single-star evolutionary tracks (Bressan et al., 2012). Although we tried three different effective temperatures typical of RSGs, only the intermediate value (i.e. \(T_{\rm eff}\) = 3700 K) is consistent with the end points of the tracks, while the other two values (3400 and 4000 K) are either too cold or too warm for a RSG just before the explosion (we note, however, that there could be uncertainties in the estimates and model predictions of the effective temperature of RSGs; e.g. Davies et al., 2013). Therefore, assuming single-star evolution, SN 2023ixf is most likely to have a relatively massive progenitor with \(M_{\rm ini}\) = 16.2-17.4 \(M_{\odot}\) enshrouded by a C-rich dusty envelope.
### Mass-loss rate
Given the CSM parameters derived from SED fitting, the mass-loss rate (\(\dot{M}\)) can be calculated with \(\dot{M}=\frac{16}{3}\pi R_{\rm in}\tau_{V}\rho_{d}\,a\,V_{\rm exp}\,r_{\rm gd}\,Q_{\lambda}^{-1}\) (Beasor and Davies, 2016), where \(R_{\rm in}\) is the inner radius of the dust shell and \(r_{\rm gd}\) is the gas-to-dust ratio. For the expansion velocity, we use \(V_{\rm exp}=115\) km s\({}^{-1}\) as measured by Smith et al. (2023) based on high-resolution SN spectra. For the dust grains, we assume an effective extinction efficiency of \(Q_{V}=0.4\), a bulk density of \(\rho_{d}\) = 2.26 g cm\({}^{-3}\) (typical of graphite; Draine and Lee, 1984), and a grain size of \(a=\sqrt{0.005\times 0.25}\)\(\mu\)m (similar to that adopted in Humphreys et al., 2020). Considering the half-solar metallicity of the SN environment (see Section 4.1), we use a gas-to-dust ratio of \(r_{\rm gd}\) = 400 (van Loon et al., 2005). With these values, we infer a mass-loss rate of \(2\times 10^{-4}M_{\odot}\)yr\({}^{-1}\) for the CSM of the SN progenitor.
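As a sanity check on the quoted value, the snippet below evaluates the expression above with the numbers given in the text, taking \(\tau_{V}\approx 6.4\) from the favored C-rich, \(T_{\rm eff}\) = 3700 K model (Table 3) and the inner CSM radius of 1.5\(\times 10^{15}\) cm quoted below; the unit bookkeeping is ours and the snippet is meant only to illustrate the scaling of the formula.

```python
import numpy as np

MSUN_G, YR_S = 1.989e33, 3.156e7               # solar mass in g, year in s

def mass_loss_rate(R_in, tau_V, rho_d, a, V_exp, r_gd, Q_V):
    """Mdot = (16 pi / 3) R_in tau_V rho_d a V_exp r_gd / Q_V in cgs units,
    converted to Msun/yr at the end."""
    mdot_g_per_s = (16.0 * np.pi / 3.0) * R_in * tau_V * rho_d * a * V_exp * r_gd / Q_V
    return mdot_g_per_s * YR_S / MSUN_G

mdot = mass_loss_rate(
    R_in=1.5e15,                     # cm, inner CSM radius quoted in the text
    tau_V=6.39,                      # favored C-rich, T_eff = 3700 K model
    rho_d=2.26,                      # g cm^-3, graphite bulk density
    a=np.sqrt(0.005 * 0.25) * 1e-4,  # grain size in cm (~0.035 micron)
    V_exp=115e5,                     # 115 km/s in cm/s
    r_gd=400.0,                      # gas-to-dust ratio at half-solar metallicity
    Q_V=0.4,                         # effective extinction efficiency
)
print(f"{mdot:.1e} Msun/yr")         # ~2e-4, matching the rate quoted above
```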
On the other hand, Beasor et al. (2020; see also their 2023 erratum) established an empirical mass-dependent \(\dot{M}\)-\(L_{\rm bol}\) relation based on Galactic and LMC RSGs. Our derived mass-loss rate is larger than that expected from this relation by almost 2 orders of magnitude. This suggests that the mass-loss rate is significantly enhanced for a RSG shortly before its explosion.
Meanwhile, the mass-loss rates inferred from both flash spectroscopy and the early light curve of SN 2023ixf are \(>10^{-3}M_{\odot}\)yr\({}^{-1}\) (Jacobson-Galan et al., 2023; Hiramatsu et al., 2023; Bostroem et al., 2023), significantly higher than that from our SED modelling. This tension was also noted in the previous work of Jencson et al. (2023). We note that the dense CSM probed by the SN flash spectroscopy and early light curve was distributed within (3-7)\(\times 10^{14}\) cm, while the CSM probed by the direct progenitor detection in this paper has a much larger inner radius of 1.5\(\times 10^{15}\) cm. Also note that the pre-explosion photometry we used was taken 3-19 years before the SN explosion. Therefore, it is possible that the progenitor experienced a relatively enhanced mass loss (\(\dot{M}\approx 2\times 10^{-4}M_{\odot}\)yr\({}^{-1}\)) until several years before the explosion and a more extreme mass loss (\(\dot{M}=10^{-3}-10^{0}M_{\odot}\)yr\({}^{-1}\)) during the final 1-3 years before the explosion. It is also worth mentioning that the mass-loss rate discrepancy may originate from an inhomogeneous CSM (Smith et al., 2023; Beasor et al., 2023; Soker, 2023).
In addition to continuous mass loss, mass loss through eruptions is also constrained: a precursor outburst like that observed in the normal Type II SN 2020tlf is excluded by Dong et al. (2023), and Neustadt et al. (2023) found that no transient with peak luminosity \(>2\times 10^{39}\) erg s\({}^{-1}\) occurred during the 1-15 years before the SN explosion. Non-detection limits in ultraviolet bands are also reported by Flinner et al. (2023).
## 4 Environmental analysis
### Metallicity
As shown in Fig. 1, there are a few H ii regions at distances of 150-300 pc to the north and west of SN 2023ixf.
Figure 4: (a) Continuum-subtracted spectrum of the nearby H ii region to the north of SN 2023ixf (Fig. 1). The grey-shaded wavelength ranges are enlarged in panels (b) and (c), where the red and blue lines display the total and single Gaussian fits to the nebular emission lines, respectively.
Figure 5: (a, b) Color-magnitude diagrams of resolved stellar populations within 150 pc from SN 2023ixf (orange data points). The error bars reflect their 1\(\sigma\) photometric uncertainties. The dashed lines show the 50 per cent detection limits, and the grey-shaded regions show where \(\leq\)68 per cent of the artificial stars can be successfully recovered. Model stellar populations with three age components are fitted to the data, and the (blue, green, and red) thin lines are stellar isochrones from 100 random realizations according to the stellar log-age and extinction distributions of the model populations. The arrows in the upper-left corners are total (Galactic + internal) reddening vectors for a standard extinction law with \(R_{V}\) = 3.1. (c) The star formation history in the vicinity of SN 2023ixf from 100 random realizations; the heights of the peaks are scaled to correspond to the weighting of each star formation epoch.
Part of the northern H ii region is also covered by our long-slit follow-up spectroscopy of SN 2023ixf with the Xinglong 2.16-m telescope (Fig. 1; Zhang et al. in preparation). Here we use one of the acquired spectra to investigate the ionized gas in the SN environment. This spectrum was observed on June 1st with the Beijing Faint Object Spectrograph and Camera (BFOSC) and the G4+385LP grism (Fan et al., 2016). The wavelength range is from 3700 to 8800 Å and the exposure time is 1200 s. The spectrum of the H ii region was extracted from the red-outlined area in Fig. 1, and data reduction was performed with the astro-plpy3 and astro-wcpy packages (Zhao, 2023).
Footnote 3: [https://pypi.org/project/astro-plpy](https://pypi.org/project/astro-plpy)
Figure 4 shows the extracted spectrum, from which a stellar continuum has been fit (with the ppxf package; Cappellari and Emsellem, 2004; Cappellari, 2017) and removed. Prominent nebular emission lines are apparent (such as H\(\alpha\), H\(\beta\), [N ii] \(\lambda\lambda\) 6548, 6583, [O i] \(\lambda\lambda\) 6300, 6363, [O iii] \(\lambda\lambda\) 4959, 5007, and [S ii] \(\lambda\lambda\) 6716, 6731), and we measured their fluxes by fitting Gaussian profiles (Fig. 4b, c). With the O3N2 calibration of the strong-line diagnostics (Marino et al., 2013), we derived an oxygen abundance of 12 + log(O/H) = 8.37 \(\pm\) 0.18 dex, which is lower than the solar value (8.69 dex; Asplund et al., 2009) by 0.32 dex (i.e. half-solar metallicity). We note that the strong-line method could suffer from systematic uncertainties. For example, the oxygen abundance would be 8.49 dex if adopting the empirical calibration of Pettini and Pagel (2004), which is consistent with the above value within the uncertainties.
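The conversion from the measured line fluxes to the quoted abundance can be written compactly, as sketched below; the coefficients are the O3N2 calibrations of Marino et al. (2013) and Pettini & Pagel (2004) as we recall them, and the fluxes in the usage line are placeholders rather than our measurements.

```python
import numpy as np

def o3n2_abundance(f_oiii5007, f_hbeta, f_nii6583, f_halpha, calib="M13"):
    """Oxygen abundance 12 + log(O/H) from the O3N2 strong-line index,
    O3N2 = log10( ([OIII]5007 / Hbeta) * (Halpha / [NII]6583) )."""
    o3n2 = np.log10((f_oiii5007 / f_hbeta) * (f_halpha / f_nii6583))
    if calib == "M13":                       # Marino et al. (2013)
        return 8.533 - 0.214 * o3n2
    if calib == "PP04":                      # Pettini & Pagel (2004)
        return 8.73 - 0.32 * o3n2
    raise ValueError(f"unknown calibration: {calib}")

# placeholder fluxes, only to illustrate the call signature
print(o3n2_abundance(1.2, 1.0, 0.35, 2.86))
```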
### Star formation history
In the immediate SN vicinity, there are no obvious signs of ongoing star formation on the HST or IRAC images (e.g. young stellar complexes, H\(\alpha\) emission or dust IR emission; Fig. 1). In this area, the most recent star-forming activity may have occurred some time ago, after which star formation ceased or declined to very low levels. In order to recover the past star formation history, we analyzed the resolved stellar populations within \(\sim\)150 pc (typical scale of star-forming complexes; see Efremov, 1995) from SN 2023ixf based on their HST/ACS F435W/F555W/F814W photometry. In each band, we used only detections with signal-to-noise ratios larger than 5, and required their dolphot sharpness parameter to be in the range of \(-\)0.5 \(<\) SHARP \(<\) 0.5, so that the selected sources have point-like morphologies. We also used randomly positioned artificial stars to estimate the detection limit and the additional photometric uncertainties induced by source crowding and imperfect sky subtraction. A total of 362 stars are detected in the local environment of SN 2023ixf and their color-magnitude diagrams are displayed in Fig. 5.
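The selection reduces to a few catalogue cuts, sketched below; the column names are assumptions about how the dolphot output is organised (they are not the raw dolphot column labels), and the 150 pc aperture is converted to an angular radius using the adopted distance of 6.85 Mpc.

```python
import numpy as np

D_PC = 6.85e6                        # adopted distance to M101 in pc
R_ARCSEC = 150.0 / D_PC * 206265.0   # 150 pc corresponds to ~4.5 arcsec

def select_environment(cat, sn_ra, sn_dec, bands=("F435W", "F555W", "F814W")):
    """Select point-like stars within 150 pc of the SN position.

    `cat` is assumed to be a table (e.g. an astropy Table or DataFrame) with
    columns ra, dec in degrees plus snr_<band> and sharp_<band> per filter --
    illustrative names only.
    """
    dra = (cat["ra"] - sn_ra) * np.cos(np.radians(sn_dec))
    sep = np.hypot(dra, cat["dec"] - sn_dec) * 3600.0   # arcsec
    good = sep < R_ARCSEC
    for band in bands:
        good &= cat[f"snr_{band}"] > 5.0                 # S/N > 5 in each band
        good &= np.abs(cat[f"sharp_{band}"]) < 0.5       # point-like morphology
    return cat[good]
```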
We then fitted model stellar populations (based on the parsec v1.2S stellar evolutionary models; Bressan et al., 2012) to the data with a hierarchical Bayesian method, which was detailed in Maund and Ramirez-Ruiz (2016) and Sun et al. (2021) (see also Maund, 2017, 2018; Sun et al., 2020, 2022, 2023a, 2023b). In brief, each model population has a Salpeter (1955) initial mass function, a 50 per cent (non-interacting) binary fraction, and a flat distribution of the secondary-to-primary mass ratio. Stars of each population have Gaussian log-age and extinction distributions, and we assumed a small log-age dispersion of 0.05 dex and a small extinction dispersion of 0.05 mag. Prolonged star formation can be considered as the superposition of multiple stellar populations with different mean log-ages.
We used three model populations to fit the data and solve for their parameters with the dynamic nested sampling package dynesty (Speagle, 2020). We derived a mean (host galaxy) extinction of \(A_{V}^{\rm host}\) = 0.4 mag, and mean log-ages of 7.07, 7.84 and 8.16 dex for the three age components (i.e. 12, 69, and 144 Myr). The derived star formation history and stellar isochrones corresponding to the three model populations are displayed in Fig. 5. Using more model populations may improve the accuracy of the derived star formation history but will not change the conclusion reached in this section. As discussed later, the SN progenitor corresponds to the youngest population, and this population is already well fitted.
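A compact sketch of this step is given below, assuming a hypothetical `cmd_loglike(stars, logages, weights, av)` that scores the observed colour-magnitude diagrams against synthetic populations built from the parsec isochrones; the three-component parameterisation mirrors the text, while the prior ranges and the likelihood itself are placeholders for the full hierarchical model of Maund & Ramirez-Ruiz (2016).

```python
import numpy as np
import dynesty

NDIM = 6   # three mean log-ages, two mixture weights (third = 1 - w1 - w2), A_V

def prior_transform(u):
    """Map the unit cube to illustrative flat priors."""
    x = np.empty(NDIM)
    x[0:3] = 6.5 + 2.5 * np.sort(u[0:3])     # ordered mean log-ages in [6.5, 9.0]
    x[3:5] = 0.5 * u[3:5]                    # mixture weights in [0, 0.5]
    x[5] = 2.0 * u[5]                        # host A_V in [0, 2] mag
    return x

def log_like(theta, cmd_loglike, stars):
    logages, w, av = theta[0:3], theta[3:5], theta[5]
    weights = np.array([w[0], w[1], 1.0 - w.sum()])
    if weights[-1] < 0.0:
        return -np.inf
    return cmd_loglike(stars, logages, weights, av)   # hypothetical CMD scorer

def fit_populations(stars, cmd_loglike):
    sampler = dynesty.DynamicNestedSampler(log_like, prior_transform, NDIM,
                                           logl_args=(cmd_loglike, stars))
    sampler.run_nested()
    return sampler.results
```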
We note that the derived extinction is significantly larger than that for the SN itself (e.g. Smith et al., 2023). We argue that this is not unreasonable, since interstellar extinction often has significant spatial variation and the progenitor of SN 2023ixf could have expelled the nearby dust with its intense radiation and stellar wind. Liu et al. (private communication) performed an environmental analysis of SN 2023ixf based on integral-field-unit spectroscopy. Using the Balmer decrement, they found a total reddening of \(E(B-V)_{\rm tot}\) = 0.11 \(\pm\) 0.06 mag for the ionized gas within 3 arcsec from SN 2023ixf. This corresponds to a host-galaxy extinction of \(A_{V}^{\rm host}\) = 0.3 \(\pm\) 0.2 mag, consistent with our results.
Assuming single-star evolution, the most recent star-forming burst corresponds to a SN progenitor mass of \(M_{\rm ini}\) = 17-19 \(M_{\odot}\) (considering a conservative log-age uncertainty of 0.05 dex) and the earlier star formation epochs are too old to be consistent with a core-collapse SN. This result again suggests a relatively massive progenitor and is in agreement with the mass estimate derived from SED fitting (Section 3).
## 5 Comparison with Previous Studies
Pledger and Shara (2023) identified the progenitor of SN 2023ixf from the HST images and suggested it to be within the relatively low initial mass range of 8-10 \(M_{\odot}\). They noted the star may be subject to significant extinction, which would be difficult to estimate without the near- and mid-IR data.
Kilpatrick et al. (2023) also reported the detection of the progenitor of SN 2023ixf. With SED fitting, they derived a progenitor mass of only \(M_{\rm ini}\) = 11 \(M_{\odot}\), which is significantly smaller than our result. This difference could be partly due to the stellar brightness variability. The HST observations were conducted at only one epoch, and it is difficult to estimate the phase-weighted average magnitudes in the optical bands. In order to account for this effect, we allowed the F675W and F814W model magnitudes to vary within 0.5 mag from the observed values, i.e. over a much larger range than the photometric uncertainties (Section 3). In fact, our best-fitting model SEDs predict brighter magnitudes in these two optical bands than the observed ones. In a test, we found that the derived bolometric luminosity would be much smaller if we strictly required the model SEDs to match the observed F675W and F814W magnitudes within photometric uncertainties. This is consistent with the finding of Jencson et al. (2023) that the HST observations were timed near the bottom of the pulsation cycle. In addition, we have included the \(JHK\)-band photometry reported by Soraisam et al. (2023), while in Kilpatrick et al. (2023) only \(JH\)-band detection limits were available. In our analysis, we found that the progenitor's SED peaks near the \(J\) and \(H\) bands (Fig. 3), while their best-fitting model SED peaks at a slightly longer wavelength.
Jencson et al. (2023) derived a bolometric luminosity of log(\(L/L_{\odot}\)) = 5.1 \(\pm\) 0.2 dex and an initial mass of \(M_{\rm ini}\) = 17 \(\pm\) 4 \(M_{\odot}\) for the progenitor of SN 2023ixf, which are roughly consistent with our results. Their analysis was performed by fitting the near-IR and mid-IR phase-weighted average magnitudes based on the grams models with O-rich silicate dust. As we pointed out in Section 3, however, the prominent silicate bump is barely under the Spitzer/IRAC [8.0] detection limit. When a stricter limit (e.g. 2\(\sigma\)) is applied, the O-rich model is incompatible with the observations. We prefer a C-rich dust chemical composition. We note that Jencson et al. (2023) derived a detection limit of [8.0] \(>\) 11.8 mag, significantly brighter than ours (14.95 mag). For comparison, the estimate of Kilpatrick et al. (2023) (their Table 2) corresponds to a stricter detection limit of [8.0] \(>\) 16.1 mag, assuming a nominal zero-point flux of Vega of 64.9 Jy. The different values may arise from our different photometry techniques.
Soraisam et al. (2023) accurately measured the progenitor's pulsational period and estimated its \(K\)-band absolute magnitude with the period-luminosity relation. They converted the absolute magnitude to a bolometric luminosity of log(\(L/L_{\odot}\)) = 5.2-5.4 dex and inferred an initial mass of 20 \(\pm\) 4 \(M_{\odot}\), both of which are slightly larger than ours. The period-luminosity relation they used was calibrated based on 255 RSGs in M31 (Soraisam et al., 2018). For the progenitor of SN 2023ixf, however, it is still unclear whether a RSG just before SN explosion would still follow the same period-luminosity relation as those at earlier evolutionary stages.
In summary, it is very challenging to derive accurate parameters for the progenitor of SN 2023ixf, due to its brightness variability, the uncertain circumstellar dust, and the poor understanding of the stellar evolutionary stage shortly before the explosion. The environmental analysis (Section 4) avoids these obstacles (although it has its own difficulties; see the discussion of Sun et al., 2021) and, as an independent analysis, has yielded results consistent with those from our SED fitting (Section 3). We therefore believe our conclusion to be reliable: SN 2023ixf has a relatively massive progenitor with an initial mass of \(M_{\rm ini}\) = 16.2-17.4 \(M_{\odot}\) (from SED fitting) or 17-19 \(M_{\odot}\) (from the SN environment).
## 6 Summary and Conclusions
In this paper, we report a detailed analysis of the progenitor of the nearby Type II SN 2023ixf. Two independent analyses, based on direct progenitor detection in pre-explosion observations and an analysis of the SN environment, are used and they reach consistent results.
The progenitor of SN 2023ixf is significantly detected on the pre-explosion images acquired by HST in the F658N, F675W, and F814W bands and by Spitzer in the [3.6] and [4.5] bands. In agreement with previous studies, the mid-IR light curves exhibit significant variability without obvious color changes.
The progenitor's SED is consistent with a RSG enshrouded by a dusty envelope. We modelled the SED by calculating the radiative transfer through dust; two different dust compositions were considered, i.e. C-rich pure graphite and O-rich pure silicate. Only the C-rich model seems consistent with observations and, assuming an effective temperature of \(T_{\rm eff}\)\(\sim\) 3700 K, the progenitor star has a bolometric luminosity of log(\(L/L_{\odot}\)) = 5.11 dex, corresponding to an initial mass of \(M_{\rm ini}\) = 16.2-17.4 \(M_{\odot}\). The inferred mass-loss rate is about 2 \(\times 10^{-4}\)\(M_{\odot}\) yr\({}^{-1}\) (Section 3.3).
We also analyzed the environment of SN 2023ixf as another approach to understand its progenitor. A few H ii regions are located at distances of 150-300 pc from the SN, and we derived a half-solar metallicity from strong nebular emission lines.
In the immediate SN vicinity (\(<\)150 pc) there are no obvious signs of ongoing star formation. We derived the star formation history based on the resolved stellar populations. While most star-forming bursts are too old to be consistent with a core-collapse SN, the most recent one occurred 12 Myr ago, corresponding to an initial mass of \(M_{\rm{ini}}\) = 17-19 \(M_{\odot}\) for the progenitor of SN 2023ixf, assuming single-star evolution.
In summary, the progenitor of SN 2023ixf is among the most massive ones that have been directly probed for Type II SNe. For such a massive progenitor, the powerful stellar wind likely drives significant mass loss and results in a low-mass H envelope, which could explain the relatively steep slope of the light curve out to 50 days after explosion (Bianciardi et al., 2023). It remains to be explored whether binary evolution plays any role for the progenitor of SN 2023ixf, although currently no obvious signs for a companion star have been discovered.
## 7 Acknowledgments
We acknowledge the referee for his/her valuable comments and suggestions that improved the quality of the paper significantly. ZXN acknowledges the helpful discussions with Dr. Shu Wang, and NCS is grateful to Dr. Chenxu Liu for providing the extinction value estimated with integral-field-unit spectroscopic data. NCS's research is funded by the NSFC grant No. 12261141690. The research of JRM is supported by the STFC consolidated grant ST/V000853/1. JFL acknowledges support from the NSFC through grant Nos. 11988101 and 11933004, and support from the New Cornerstone Science Foundation through the New Cornerstone Investigator Program and the XPLORER PRIZE.
This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program(s) 9490, 9720, and 6829.
Data used in this work are all publicly available from the NASA/IPAC Infrared Science Archive ([https://sha.ipac.caltech.edu/applications/Spitzer/SHA/](https://sha.ipac.caltech.edu/applications/Spitzer/SHA/)) and the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute via 10.17909/bhr8-jp04.
|
2301.10309 | Interactive-Chain-Prompting: Ambiguity Resolution for Crosslingual
Conditional Generation with Interaction | Crosslingual conditional generation (e.g., machine translation) has long
enjoyed the benefits of scaling. Nonetheless, there are still issues that scale
alone may not overcome. A source query in one language, for instance, may yield
several translation options in another language without any extra context. Only
one translation could be acceptable however, depending on the translator's
preferences and goals. Choosing the incorrect option might significantly affect
translation usefulness and quality. We propose a novel method interactive-chain
prompting -- a series of question, answering and generation intermediate steps
between a Translator model and a User model -- that reduces translations into a
list of subproblems addressing ambiguities and then resolving such subproblems
before producing the final text to be translated. To check ambiguity resolution
capabilities and evaluate translation quality, we create a dataset exhibiting
different linguistic phenomena which leads to ambiguities at inference for four
languages. To encourage further exploration in this direction, we release all
datasets. We note that interactive-chain prompting, using eight interactions as
exemplars, consistently surpasses prompt-based methods with direct access to
background information to resolve ambiguities. | Jonathan Pilault, Xavier Garcia, Arthur Bražinskas, Orhan Firat | 2023-01-24T21:08:13Z | http://arxiv.org/abs/2301.10309v1 | Interactive-Chain-Prompting: Ambiguity Resolution for Crosslingual Conditional Generation with Interaction
###### Abstract
Crosslingual conditional generation (e.g., machine translation) has long enjoyed the benefits of scaling. Nonetheless, there are still issues that scale alone may not overcome. A source query in one language, for instance, may yield several translation options in another language without any extra context. Only one translation could be acceptable however, depending on the translator's preferences and goals. Choosing the incorrect option might significantly affect translation usefulness and quality. We propose a novel method _interactive-chain prompting_ -- a series of question, answering and generation intermediate steps between a _Translator_ model and a _User_ model -- that reduces translations into a list of subproblems addressing ambiguities and then resolving such subproblems before producing the final text to be translated. To check ambiguity resolution capabilities and evaluate translation quality, we create a dataset exhibiting different linguistic phenomena which leads to ambiguities at inference for four languages. To encourage further exploration in this direction, we release all datasets. We note that _interactive-chain prompting_, using eight interactions as exemplars, consistently surpasses prompt-based methods with direct access to background information to resolve ambiguities.
Machine Learning, ICML
## 1 Introduction
Transformer Language Models (LM; Vaswani et al., 2017) pretrained on large corpora have achieved outstanding results in a variety of NLP benchmarks (Devlin et al., 2019; Brown et al., 2020). Scaling the number of parameters, the size of the pretraining dataset, and the amount of computing budget gives Language Models better sample efficiency and the ability to generalize for many tasks (Kaplan et al., 2020; Brown et al., 2020; Henighan et al., 2020; Hernandez et al., 2021; Lepikhin et al., 2021; Wei et al., 2022). However, for tasks such as commonsense and symbolic reasoning, where the solution requires multistep computation, or crosslingual conditional generation such as Neural Machine Translation (NMT), where there could be more than one plausible prediction for a given source sequence, scale alone may not be sufficient to achieve high accuracy (Rae et al., 2021; Ghorbani et al., 2022).
Chain-of-thought (Wei et al., 2022) and least-to-most (Zhou et al., 2022) methods have demonstrated, by prompting a (large-)LM such as PaLM (Chowdhery et al., 2022), that breaking down a task into subproblems that are solved sequentially greatly improves the quality of the final prediction. Such methods demonstrate that producing intermediate sub-results that address specific aspects of a bigger problem significantly improves performance on tasks like arithmetic, math word problems, and symbolic manipulation. While studies have investigated the translation capabilities of PaLM with various prompting strategies (Vilar et al., 2022; Zhang et al., 2023), prompting large, general-purpose LMs such as PaLM to identify and solve subproblems
Figure 1: Interactive-Chain-Prompting (InterCPt).
in crosslingual conditional generation tasks such as NMT has not yet been fully explored.
Our approach, _Interactive-Chain-Prompting_ (InterCPt), sequentially solves translation subproblems before generating a final translation prediction. As shown in Figure 1, we first detect ambiguities in translation queries, then we resolve these ambiguities via question-answer interactions, and finally we generate translations. InterCPt departs from other prompt-based techniques that sequentially solve subproblems in two fundamental ways: (1) the subproblems are related but considerably different from the main task and (2) the solutions to subproblems require interaction with another LLM. In this paper, we will look at how intermediate computation steps and interaction might help overcome a typical problem in automated systems, where a user's ambiguous query leads to a large number of viable and potentially inaccurate answers. In translation, for example, selecting the incorrect prediction has a significant impact on translation quality, as illustrated in Fig. 2.
InterCPt has several advantages. First, the LM is able to identify and ask questions about translation query ambiguities with only a few in-context exemplars and no fine-tuning. This is crucial since large corpora with specific target ambiguities, labels to classify each ambiguity subtype (e.g. feminine/masculine for gender or formal/informal for formality) and context are not common and are typically low-resource. Then, without readily available context, we rely on the _User_ to disambiguate translation queries. In the absence of additional background information or context, there are limited options to solve ambiguities. Interaction with the _User_ stands as a logical way to collect clarifying information. This interaction also benefits from multiple computation steps where ambiguity resolution leads to a more precise final prediction. Finally, the question-answer-translation interaction improves transparency and makes it easier to debug translation systems since we can assess the reasoning chain that led to an error (Wu et al., 2022). For NMT, there are two main questions to consider to make the most out of intermediate computation steps:
**A) What subproblem are we trying to solve?** Multistep reasoning tasks can often be explicitly decomposed into subproblems: ambiguity detection, disambiguation via Q&A and translation. For NMT, decomposing the translation task is not trivial. We assume in this work that our subproblems are ambiguities which arise when translating. As seen in Fig. 1, the first step in InterCPt is to discover and resolve the translation ambiguity subproblem. We study five types of ambiguities: polysemous words, pronoun resolution, formality, gender-neutral names and neutral professions. Since datasets that cover multiple translation ambiguities and language pairs while providing context are rare, we create our own datasets (see Table 5 in Section A for an overview of other publicly available datasets).
**B) Where do answers to subquestions come from?** When we apply least-to-most prompting to math word problems, for example, the answers to subquestions can often be derived from the problem's text. This is not necessarily the case for NMT, where the query may not contain enough context to resolve ambiguities. As seen in Fig. 2, the English sentence 'S' does not contain enough information about "you" and "it". An incorrect prediction made by the model leads to large variations in translation quality scores. With more context, the model may have the necessary information to narrow down possible predictions. However, in industrial applications, translation queries are often too short (Badeka, 2016) or additional context is nonexistent. In this work, we automate interaction between a _PaLM Translator_ model, which detects ambiguities, asks clarifying questions and translates, and a _PaLM User_ model, which has access to context and answers questions. Both models engage in a multiturn dialog to zero in on a narrower set of predictions. We argue that a type of question-answer interaction with a "user" is necessary to resolve ambiguous queries, especially when a user (1) is unfamiliar with the main task and may not possess the skills to choose from many model prediction options; (2) knows how to answer simple pointed questions about a query but may not be able or willing to decide and add appropriate context on the fly.
This work highlights the potential of large LMs to learn, from only a few in-context examples, how to use natural language answers to deliver results closer to a user's intent. Our contributions are the following:
1. We propose InterCPt, a new way to design crosslingual conditional generation systems that disambiguate queries
Figure 2: Translation queries with multiple possible predictions. Correctly solving subproblems around ambiguities with you and it greatly affects the bleu(Papineni et al., 2002) translation metric.
via interaction (Section 2).
2. We release AmbigMT, a new dataset with five specific types of ambiguities covering four languages (Section 3).
3. We show that InterCPt achieves better translation performance and ambiguity resolution (Section 5) and improved generalization on zero-shot ambiguities (Section 6) over strong baselines.
4. We provide analysis on interactions and evidence that InterCPt abilities emerge with scale (Section 6).
## 2 Interactive-Chain-Prompting (InterCPt)
When interacting with a model, a user may have some well-conceived query in mind that is inadvertently under-specified. For example, a monolingual English speaker may be unaware that the pronoun "you" in a sentence can lead to formal or informal constructs in other languages and may therefore not provide additional information on the level of formality needed to adequately translate the text.
A human translator, when asked to translate queries with "you", may want to first probe the user's latent context about the query by asking clarifying questions. In doing so, the human translator can use the answers to better align the translation to a User's request and context. Our method endows language models (LMs) with the ability to generate a similar chain of interactions between a Translator LM and a User LM as seen in Fig. 1. In real applications, it is expected that a human replaces the User LM. InterCPt uses in-context exemplars to resolve ambiguities before completing the crosslingual conditional generation task that the model is originally asked to do.
It consists of a three-step reasoning chain (see Fig. 1; a minimal code sketch is given after the list below):
1. **The first step is for identifying ambiguities.** The prompt in this step always contains the same constant exemplars, showing multiple queries to translate and questions about each query's ambiguities. During inference, the _Translator_ LM uses the prompt to generate a pointed question that identifies the specific ambiguity.
2. **The second step is for resolving ambiguities.** The prompt in this step contains exemplars answering the question to the ambiguity subproblems in step one. The _User_ LM answers each question using additional information from the provided context. In real life applications, we assume that a real user has similar background information about the text to be translated.
3. **The third step is for translating.** Generated questions and answers are appended to the prompt in step 1 before the final translation is produced. Constant prompts in this step demonstrate how to translate in the specified target language using only details provided by the _User_ LM and no-context. During inference, the _Translator_ LM uses the prompt to generate the translation.
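The three steps above can be driven by a short loop over two prompted models, as in the sketch below; `lm_generate` is a stand-in for whatever decoding interface serves the Translator and User LMs, and the prompt-assembly details are simplified relative to the full templates in Appendix B.

```python
def intercpt_translate(query, user_context, lm_generate,
                       q_prompt, a_prompt, t_prompt, target_lang="French"):
    """Interactive-Chain-Prompting: ambiguity question -> answer -> translation.

    lm_generate(prompt) stands in for sampling from the Translator or User LM;
    q_prompt / a_prompt / t_prompt hold the constant few-shot exemplars for
    each of the three steps.
    """
    # Step 1: the Translator LM asks a pointed question about the ambiguity.
    question = lm_generate(f"{q_prompt}\nS: {query}\nQ:").strip()

    # Step 2: the User LM answers using its private context (in deployment,
    # a human user plays this role).
    answer = lm_generate(
        f"{a_prompt}\nS: {query}\nC: {user_context}\nQ: {question}\nA:").strip()

    # Step 3: the Translator LM translates, conditioned on the query and the
    # question/answer exchange only -- never on the raw context.
    translation = lm_generate(
        f"{t_prompt}\nTranslate S from English to {target_lang}.\n"
        f"S: {query}\nQ: {question}\nA: {answer}\nTranslation:").strip()
    return question, answer, translation
```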
## 3 AmbigMT Datasets (AmbigMT)
In this section, we introduce AmbigMT, a dataset that covers four language pairs, for translations from English into French (en-fr), German (en-de), Spanish (en-es) or Japanese (en-ja) -- 18 sub-tasks in total. The code and datasets are released here. The parallel translation corpora contain five types of ambiguities: "it" resolution, formality, polysemy, gender1 neutral names, neutral professions. Unless otherwise specified, all datasets include 1000 diverse samples for each {en-fr, en-de, en-es, en-ja} language pair extracted from Opensubtitles corpora (Lison & Tiedemann, 2016). In Section A of the Appendix, we provide more details on datasets and describe the heuristics to identify ambiguities in each language.
Footnote 1: Please note that due to the lack of large translation corpora with various genders and the complexity in creating non-binary gender datasets, our data is limited to feminine and masculine.

"it" resolution data contains English sentences where the pronoun "it" does not clearly refer to a noun within the query. In English, the pronoun "it" is a singular, neuter and impersonal pronoun. In other languages, "it" may translate into gender-specific pronouns (either feminine or masculine) or get dropped entirely from the sentence. The choice depends on what the pronoun refers to. To correctly translate, the model must first determine what "it" is. In the first example of Table 1, where the target language \(x\) is Spanish, knowing that "it" is a postcard, or _una tarjeta postal_ in Spanish, disambiguates the gender in the translation. While the gender affects two words in the target sentence, the wrong gender choice is not only qualitatively inappropriate but also decreases quality metrics (44 bleu score drop from 100).
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline \hline
**Dataset** & _en Query_ & **Context** & **\(\mathbf{x}\) Target** & \(\Delta\) B \\ \hline
**“it” resolution** & He has read it to one so many times that _T_ve & - I remember when the _postcard_ came, Ernesto & **Me la sé de** \\ & learnt **it** by & - I have so pleaseed. - He said; & -ato leerla. & -44 \\ & heart. & - I have written to me’. & - \\ \hline
**Polysemy** & head & If you don’t feel well, & \% & -100 \\ \hline
**Formality** & The closer you can get to him, the better. & - I’m aware of the risks, & **Plus vows serez** & -58 \\ \hline
**Gender** & Blair should be wrapping up _I_per breakfast & - I have her doorman on _retainer_. - There’s a fine line between surveillance and stalking. & Blair solile lir & -40 \\ & with Bcarlage. & - and stalking. & - and stalking. & - \\ \hline
**Neutral professions** & **[pr]** worked previously as a & **Margaret** & **Mhango** & **Previamente, tra-** \\ & businesswoman, & **Mwankare is a Zam** & **bjo com** & **presaria, conta-** \\ & accountant, and & was the director for & **for** **door y elecurlage** & **bancaria. \\ & bank executive. & business development [...] & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: AmbigMT data examples for each ambiguity for target language \(x\). \(\Delta\) B is the bleu performance drop from 100 if the highlighted ambiguity is resolved incorrectly.
Polysemy is a dataset that contains words that have multiple meanings, where the query is insufficiently informative to zero in on a specific sense. The context uses the word within a sentence to provide the necessary background information. In the second example of Table 1, where the target language \(x\) is Japanese, the context shows that "head" is a verb. In conjunction with the noun "home", we disambiguate "head" as "to move in the direction of". In the absence of such context, "head" has various senses such as "upper part of the body", "side of a coin", "end of a hammer or tool", "a toilet on a boat", "to hit the ball with the head", "to lead".
Formality is a dataset where English queries contain the pronoun "you". In the target languages studied, "you" can be formal or informal. As seen in the third example of Table 1, where the target language \(x\) is French, the speaker addresses the listener "you" as "Master Jedi" in the context, a title implying a formal style of politeness. The formality is ambiguous without the context and may impact the generated translation quality. Indeed, an incorrect choice in formality level changes "vous serez" to "tu seras" and "cela" to "ça", decreasing bleu scores by 58 points from 100.
Gender Neutral Names data includes queries where the name is gender neutral and ambiguous. The fourth example in Table 1 shows a query where the name "Blair" is gender neutral. In this dataset, we replace gendered pronouns in the English query by the token _[pr]_ to remove hints about gender type. From the context, the speaker employs "her" and we can infer that a feminine pronoun "ihr" should be used in the translated German text.
Neutral Professions has 600 unique samples for two language pairs. This dataset is derived from the Translated Wikipedia Biographies dataset2 that covers {en-de, en-es}. In this dataset, the gender of typically gender-neutral professional designations is not clear from the English query alone. In the fifth example of Table 1, the context provides additional hints that the query is talking about "Margaret", also designated by the feminine pronoun "she". Resolving the gender allows the model to correctly translate the list of professions in the query, potentially avoiding the 70-point drop in bleu scores from 100.
Footnote 2: [https://ai.googleblog.com/2021/06/a-dataset-for-studying-gender-bias-in.html](https://ai.googleblog.com/2021/06/a-dataset-for-studying-gender-bias-in.html)
## 4 Related Works
Prompting for Cross-Lingual Generation using Large LMs is a technique that has garnered increasing attention of late. Works on GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022) show competitive \(n\)-shot bleu translation results on WMT. The prompt demonstrations are populated with \(n\) random sentence pairs taken from the WMT training corpora and evaluated on the test corpora at inference. Orthogonal to our work, POMP (Vilar et al., 2022) improves upon this PaLM-based prompting technique by explicitly optimizing for the selection of \(n\) demonstration sentence pairs and obtaining results competitive with the state-of-the-art. More recent work (Garcia and Firat, 2022) using mT5 (Xue et al., 2021) investigated adding prompt-based natural language specifications to influence translated text properties such as formality level or dialect type. Experiments show that prepending textual artifacts such as "your majesty" to the English query conditions mT5 to generate translations in a formal tone. Our work prompts PaLM with \(n\) random translation pair exemplars as well. Different from previous research, we prompt with exemplars to interactively discover background knowledge or clarify ambiguities before translating.
Interactive Machine Learning (Ware et al., 2001; Fails and Olsen, 2003; Amershi et al., 2014) is an approach where information is interactively and iteratively supplied to a learning system. In prior interactive translation work, machine interactivity has assisted translators in writing translations by displaying automated word suggestions that update incrementally (Green et al., 2014; Santy et al., 2019). The approach, however, is limited by drop-down menu options and requires a certain level of sophistication from the user in the _target language_. Our approach discovers preferences and background knowledge about an input query in the _source language_ and more flexibly adapts translations according to a user's natural language response. The interaction is similar to Conversational AI systems where user utterances influence generated outputs. Task- or goal-oriented conversational AI systems (Konstantinova and Orasan, 2013; Gao et al., 2018; Hussain et al., 2019) are typically deployed to answer knowledge-based questions, seek information or solve basic queries (e.g. making reservations, purchasing an item). To our knowledge, our work is the first to explore conversational interaction in cross-lingual generation.
Resolving ambiguities by asking for clarifications has been a recent topic of research for QA and conversational search systems (Lee et al., 2019; Aliannejadi et al., 2019; Zamani et al., 2020; Dhole, 2020; Wang and Li, 2021; Wu et al., 2022). Departing from such methods, InterCPt does not produce questions from a preset list but generates them freely with a large LM. Concurrently with our work, Krasheninnikov et al. (2022) explored fine-tuning GPT-3 to generate clarifying questions and provide answers using human-generated data from AmbigQA (Min et al., 2020) for open-domain QA. Another GPT-3 model simulates the user and generates answers while conditioned on ground-truth clarification questions. In contrast, our prompt-based method only needs few-shot demonstrations. Further, our simulated user does not rely on ground-truth clarification questions to provide an answer, which could be
more realistic for a number of applications (including QA, text simplification, code generation).
## 5 Experimental Setup and Results
In this section, we present the main cross-lingual generation results of InterCPt for the formality, "it" resolution and polysemy ambiguity resolution subtasks. We use PaLM (Chowdhery et al., 2022), a 540B-parameter decoder-only LM pre-trained on primarily English-centric data with \(\sim\)20% of the data obtained from non-parallel multilingual corpora. The _generalist_ prompt template is composed of two formality, three polysemy and three "it" resolution exemplars. All prompt-based methods are \(8\)-shot with the same source sentences \(S\) to translate and corresponding translated sentences \(A\) in the target language. Each target language has its own prompt template since \(A\) differs with every language. The simulated LM user is based on a single English-only \(8\)-shot prompt template for all target languages. Example 5.1 shows the structure of the LM user prompt exemplar for polysemy. A complete overview of all prompts and exemplars used in experiments can be found in Section B.1 for the User LM and Section B.2 for the generalist Translator LM.
**Example 5.1**.: _Given a Context (C), provide an Answer (A) to the Question (Q): **S:** about **C:** About 2% of the households are enumerated using the canvasser method. **Q:** Is "about" an adverb that means approximately, near or a preposition that means regarding, over, surrounding? **A:** "about" means approximately._
Baselines. Our main baselines were chosen to compare the cross-lingual generation abilities of large multipurpose LMs given interaction, context, or no additional information. We compare our results against two different types of prompting techniques and a commercially available multilingual baseline with the Google Cloud Translation v2 model3. Please note that we do not add other baselines since contextual NMT systems are not common and since we introduce a new dataset. Our strongest baseline, _PaLM-with-context_, is the only method that benefits from having **all of the background information required** to resolve ambiguities. PaLM-with-context has a prompt with exemplars formulated as the one in Example 5.2. In the example, references to **you** and **it** are directly accessible in context \(C\).
Footnote 3: [https://translate.google.ca/](https://translate.google.ca/)
**Example 5.2**.: _Given context (C), Translate (S) from English to French: **S:** Are **you** sure that **it** is pretty?_
**C:** _She was trying on a new hat. Looking at herself in the mirror, she asked her friend Isabelle._ **A:** _Es-tu certaine qu'il est beau?_
To evaluate the impact of context or interaction, we also run _PaLM-no-extras_, prompting without any additional information. The structure of a PaLM-no-extras exemplar is similar to Example 5.2, but without the context \(C\). The model must translate the source sentence \(S\) into the target language without knowing details about "it" or the level of formality to employ for "you". This baseline is of interest not only for performance comparison and for evaluating model bias, but also because it provides insights into the usefulness of additional background information for disambiguating queries. Finally, we test our datasets with a multilingual, general-purpose neural translation model using the Google Translate API. This baseline allows us to set performance expectations that our PaLM-no-extras model should reach.
Metrics. Our evaluation includes the standard bleu and bleurt (Sellam et al., 2020) automatic translation quality metrics as well as additional measures that assess specific ambiguity resolution capabilities. For formality, we use a rule-based classifier to quantify generated sentence formality levels (F-Acc) in the target language. We discuss details of the heuristics in Appendix C. Note that the formality classifier is based on the formality data creation scripts that allowed us to automatically identify formal and informal sentences in the source corpus. For "it" resolution, we found that the PaLM 62B-parameter model was surprisingly accurate at identifying translated sentence genders (G-Acc). As seen in Table 7 of Appendix C, PaLM 62B achieves 97% and 93% accuracy in classifying samples of generated translations for Spanish and French, respectively. For polysemy, we found that exact match metrics did not fully describe the performance of models. Whenever the model generated a synonym of the ground truth, the exact match metric would not consider the prediction correct. The PaLM-no-extras polysemy exemplars are a comma-separated list of synonyms. Our hit@\(n\) measures whether the ground truth exists in the first \(n\) generated words. For example, if the model outputs the list of Spanish words ["aproximadamente", "cerca de", "alrededor de", "casi", "mas o menos"], for \(n=3\), hit@\(3\) would return a match for a ground truth target "cerca de" and no-match for a ground truth target "casi". To supplement the hit@\(n\) metric, we also report results of a new metric that we call bleurt@\(n\) (B@\(n\)), which returns the highest bleurt score of the first \(n\) generated word phrases. Since bleurt captures the non-trivial semantic similarities between words using its contextual representations from BERT, we found that the metric better measures whether correct synonyms were generated by the model. Note that we did not report the Google Translate hit@\(n\) or B@\(n\) numbers since the API only provides single word outputs.
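The two list-aware metrics can be implemented in a few lines, as sketched below; splitting the generated candidate list on commas and the `bleurt_score(candidate, reference)` callable are assumptions about the surrounding tooling rather than a description of our exact evaluation code.

```python
def parse_candidates(generated, n):
    """First n phrases from a comma-separated candidate list."""
    return [c.strip() for c in generated.split(",") if c.strip()][:n]

def hit_at_n(generated, reference, n):
    """1 if the ground-truth phrase appears among the first n candidates."""
    return int(any(reference.lower() == c.lower()
                   for c in parse_candidates(generated, n)))

def bleurt_at_n(generated, reference, n, bleurt_score):
    """Highest BLEURT score of the first n candidates against the reference;
    bleurt_score(candidate, reference) is a stand-in for a BLEURT scorer."""
    cands = parse_candidates(generated, n)
    return max(bleurt_score(c, reference) for c in cands) if cands else 0.0

# e.g. hit_at_n("aproximadamente, cerca de, alrededor de, casi", "cerca de", 3) -> 1
```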
Discussion. Our test results for en-es, en-fr, en-de and en-ja are summarized in Table 2. We first notice that InterCPt surpasses all other baselines. Surprisingly, PaLM-with-context, even with all the necessary background to resolve ambiguities, significantly lags behind InterCPt on F-Acc for formality, G-Acc for "it" resolution, and both hit@\(3\) and B@\(3\) for polysemy. This result suggests that the multistep computation approach of first resolving the ambiguity subproblems and then generating text has an advantage over the other baselines. bleu scores are also 2-3 points higher while bleurt scores are only slightly higher. This suggests that InterCPt generates sentences syntactically much closer to the ground truth while preserving the correct semantics.
## 6 Analysis
In this section, we analyse interesting properties of our approach: ambiguity generalization in Subsection 6.1, the importance of ambiguity resolution specialization in Subsection 6.2, the effects of scale for both the Translator LM in Subsection 6.3 and the User LM in Subsection 6.4, bias in generated outputs in Subsection 6.5, and an error analysis in Subsection 6.6.
### How does interaction generalize?
In Table 3, we provide translation test results on two held-out datasets that are described in Section 3: (1) Gender Neutral Names and (2) Neutral Professions. We use the same _generalist_ prompt template as in Section 5 with exemplars that cover only formality, "it" resolution and polysemy. Specifically, our exemplars for both the Translator LM and the User LM do not contain exemplars to resolve the gender of a person's name or profession. We observe that on the Gender Neutral Names dataset InterCPt performs best on bleu and bleurt and is much better able to resolve ambiguities, with 6- to 10-point G-Acc improvements over PaLM-with-context. On the Neutral Professions data, where
\begin{table}
\begin{tabular}{l|l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Lang. Pairs} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Formality} & \multicolumn{3}{c|}{“it” resolution} & \multicolumn{3}{c}{Polysemy} \\ & & **bleu** & **bleurt** & **F-Acc.** & **bleu** & **bleurt** & **G-Acc.** & **Hit@3** & **Hit@10** & **B@3** & **B@10** \\ \hline \multirow{3}{*}{**en\(\rightarrow\)es**} & InterCPt & **36.3\({}^{\dagger}\)** & **77.9\({}^{\dagger}\)** & **67\%** & **33.6\({}^{\dagger}\)** & **78.9\({}^{\dagger}\)** & **77\%** & **46\%** & **48\%** & **54.6\({}^{\dagger}\)** & **56.8\({}^{\dagger}\)** \\ & PaLM-with-context & 34.7 & 77.1 & 64\% & 30.8 & 77.2 & 68\% & 40\% & 46\% & 46.9 & 55.1 \\ & PaLM-no-extras & 34.6 & 77.0 & 62\% & 29.6 & 75.9 & 63\% & 33\% & 40\% & 44.9 & 51.0 \\ & Google Translate & 31.4 & 75.3 & 50\% & 27.5 & 73.0 & 54\% & — & — & — & — \\ \hline \multirow{3}{*}{**en\(\rightarrow\)fr**} & InterCPt & **39.1\({}^{\dagger}\)** & **70.6** & **72\%** & **35.3\({}^{\dagger}\)** & **71.7\({}^{\dagger}\)** & **73\%** & **46\%** & **48\%** & **46.9\({}^{\dagger}\)** & **48.5\({}^{\dagger}\)** \\ & PaLM-with-context & 36.4 & 69.9 & 65\% & 33.5 & 68.4 & 68\% & 36\% & 40\% & 40.1 & 44.7 \\ & PaLM-no-extras & 35.7 & 69.2 & 63\% & 32.3 & 66.7 & 66\% & 33\% & 37\% & 38.1 & 41.8 \\ & Google Translate & 30.7 & 67.4 & 58\% & 29.1 & 65.4 & 61\% & — & — & — & — \\ \hline \multirow{3}{*}{**en\(\rightarrow\)de**} & InterCPt & **35.8\({}^{\dagger}\)** & **75.0** & **69\%** & **24.0\({}^{\dagger}\)** & **76.0** & **75\%** & **43\%** & **45\%** & **45.1\({}^{\dagger}\)** & **47.6\({}^{\dagger}\)** \\ & PaLM-with-context & 33.6 & 74.6 & 61\% & 22.4 & 75.0 & 69\% & 35\% & 39\% & 36.1 & 44.9 \\ & PaLM-no-extras & 32.5 & 74.4 & 62\% & 22.8 & 73.2 & 63\% & 32\% & 35\% & 36.7 & 41.3 \\ & Google Translate & 27.5 & 72.3 & 53\% & 22.1 & 73.0 & 59\% & — & — & — & — \\ \hline \multirow{3}{*}{**en\(\rightarrow\)ja**} & InterCPt & **28.6\({}^{\dagger}\)** & **69.7\({}^{\dagger}\)** & **67\%** & **23.1\({}^{\dagger}\)** & **72.4\({}^{\dagger}\)** & **74\%** & **41\%** & **44\%** & **44.7\({}^{\dagger}\)** & **47.0\({}^{\dagger}\)** \\ & PaLM-with-context & 26.3 & 68.0 & 60\% & 21.4 & 70.8 & 67\% & 34\% & 38\% & 35.8 & 43.8 \\ \cline{1-1} & PaLM-no-extras & 25.9 & 67.4 & 61\% & 21.2 & 70.3 & 61\% & 30\% & 33\% & 34.6 & 37.0 \\ \cline{1-1} & Google Translate & 23.5 & 66.7 & 50\% & 19.9 & 68.6 & 52\% & — & — & — \\ \hline \hline \end{tabular}
\end{table}
Table 2: Translation results using an 8-shot generalist template that contains exemplars for formality, “it” resolution and polysemy ambiguity types. F-Acc = formality accuracy, G-Acc = gender accuracy, B@n = bleurt@n. bleu and bleurt results for InterCPt labelled with † are significantly better than all other systems based on pair-wise significance testing (Koehn, 2004) with p = 0.05.
\begin{table}
\begin{tabular}{l|l|c c c} \hline \hline Pair & Method & **bleu** & **bleurt** & **G-Acc.** \\ \hline \multicolumn{5}{c}{**Gender Neutral Names = unseen ambiguities**} \\ \hline & InterCPt & **31.8\({}^{\dagger}\)** & **74.1\({}^{\dagger}\)** & **76\%** \\
**en\(\rightarrow\)es** & PaLM-with-context & 29.9 & 72.4 & 66\% \\ & PaLM-no-extras & 30.9 & 71.6 & 59\% \\ & Google Translate & 27.8 & 66.1 & 56\% \\ \hline \multirow{3}{*}{**en\(\rightarrow\)fr**} & InterCPt & **31.0** & **63.5\({}^{\dagger}\)** & **71\%** \\ & PaLM-with-context & 29.5 & 62.6 & 64\% \\ & PaLM-no-extras & 30.0 & 60.9 & 63\% \\ & Google Translate & 24.5 & 57.7 & 56\% \\ \hline \multirow{3}{*}{**en\(\rightarrow\)de**} & InterCPt & **17.9\({}^{\dagger}\)** & **72.2** & **73\%** \\
**en\(\rightarrow\)de** & PaLM-with-context & 15.6 & 71.5 & 67\% \\ & PaLM-no-extras & 15.2 & 70.8 & 61\% \\ & Google Translate & 17.1 & 67.1 & 55\% \\ \hline \multirow{3}{*}{**en\(\rightarrow\)ja**} & InterCPt & **16.1\({}^{\dagger}\)** & **70.3\({}^{\dagger}\)** & **71.7\({}^{\dagger}\)** \\ & PaLM-with-context & 14.7 & 69.1 & 65\% \\ & PaLM-no-extras & 14.4 & 68.3 & 60\% \\ & Google Translate & 14.1 & 66.0 & 54\% \\ \hline \multicolumn{5}{c}{**Neutral Professions = unseen ambiguities + unseen domain**} \\ \hline & InterCPt & **37.3** & 75.8 & **70\%** \\
**en\(\rightarrow\)es** & PaLM-with-context & 37.1 & **76.1** & 69\% \\ & PaLM-no-extras & 35.5 & 75.7 & 59\% \\ & Google Translate & 37.0 & 72.7 & 56\% \\ \hline \multirow{3}{*}{**en\(\rightarrow\)de**} & InterCPt & **14.3** & 70.0 & **68\%** \\
**en\(\rightarrow\)de** & PaLM-with-context & 14.0 & **71.9** & 66\% \\ & PaLM-no-extras & 12.2 & 70.0 & 62\% \\ & Google Translate & 13.8 & 67.2 & 54\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Translation results on unseen ambiguity subproblems using the Gender Neutral Names data and with added unseen domain using the Neutral Professions data. InterCPt results labelled with † are significantly better with p = 0.05.
test samples are taken from a different domain (Wikipedia biographies instead of movie scripts), PaLM-with-context and InterCPt have similar performance. It is possible that PaLM-with-context benefits from additional sentences in the context to better determine the style of the output. Nonetheless, InterCPt provides a 1-2 point increase in G-Acc.
### Are specialist prompts better than generalist prompts?
So far, we have studied a _generalist_\(8\)-shot template covering three different types of ambiguities with at most three exemplars per ambiguity. In Fig. 4, we present results for a _specialist_ template that covers only one type of ambiguity at a time (either all formality or all polysemy). Interestingly, specialization does not seem to provide much additional benefit in resolving ambiguities, as evidenced by F-Acc, Hit@\(3\) and B@\(3\) results that are on par with, and often lower than, those of the _generalist_ approach. However, the _specialist_ template does have a higher bleu score, implying greater syntactic alignment with the target translation when more ambiguity-specific exemplars are added.
### Are interactive generation abilities emergent?
We show in Fig. 3, for each prompt template, the effects of scaling PaLM parameters on the performance for formality, "it" resolution and polysemy for Spanish (ES), French (FR), German (DE) and Japanese (JA) target languages. Please note that while we vary the parameter count (8B, 62B and 540B) of the Translator LM, the User LM is a 540B-parameter PaLM model for all experiments. The plots provide interesting insights. First, at the 8B parameter scale, PaLM-no-extras performs best for formality and "it" resolution across all language pairs. Neither context nor interaction seems to provide benefits to translation. Second, at the 62B parameter scale, the PaLM-with-context and InterCPt methods have on-par performance. Context or interaction in this case are only clearly beneficial for polysemy. Third, at the 540B parameter scale, InterCPt outpaces the other prompt-based methods across language pairs and ambiguity subproblems. At this stage, the baselines' scaling trend decelerates, with _scaling curves flattening_, compared to InterCPt. This shows that InterCPt is an emergent ability of model scale (Wei et al., 2022). We conjecture that the emergent behavior of InterCPt is due to a better ability to ask questions and incorporate answers before generating the final prediction.
### How important is User LM parameter scale?
While the User LM allows us to automate the evaluation of interactivity for cross-lingual generation, it is not clear whether the quality of the answers to the Translator LM's questions impacts performance. We hypothesize that a larger User LM model
Figure 4: Generalist vs Specialist prompt templates for Spanish (ES), French (FR), German (DE) and Japanese (JA) targets.
Figure 3: InterCPt enables large LMs to solve ambiguity subproblems in cross-lingual generation. The multistep disambiguate-translate capability is an emergent ability that is reached at higher parameter scales. Note that interactive = InterCPt.
would provide higher-quality answers and allow the Translator LM to better generate translated text. Fig. 5 shows that, when the Translator LM is a 62B PaLM model, a higher-parameter User LM improves overall performance. It is therefore possible that answer quality has a significant impact on translation quality and that human-generated answers could further improve overall performance.
### Can interaction help solve NLG bias issues?
Gender bias is a common phenomenon in automated NMT systems (Borkan et al., 2019; Stanovsky et al., 2019; Saunders and Byrne, 2020). Even when there are explicit gender pronouns in the input query or in the context, the text generated by NMT systems tends to default to masculine forms when translating into languages with grammatical gender (Stanovsky et al., 2019; Saunders and Byrne, 2020; Stafanovics et al., 2020; Wang et al., 2022).
To measure gender bias, all generated translations are passed through the gender classifier for the balanced "it" resolution dataset. Similarly, to measure formality bias, generated translations are passed through the formality classifier for the balanced formality dataset, since NMT systems can also suffer from formality bias (Rippeth et al., 2022). We notice that InterCPt is much closer to evenly producing masculine and feminine sentences. Our results show that interactive ambiguity resolution via multistep computation better addresses gender and formality biases.
### When is context better than interaction?
In this section, we provide an analysis that describes common areas of improvement for _generalist interactive-chain prompting_. We first isolated test samples for French and Spanish for four ambiguities (formality, "it" resolution, neutral professions and gender neutral names) where the bleurt scores were less than or equal to the PaLM-with-context scores. We then randomly sampled 50 interactions and manually analysed the interaction chains (query, question, context, answer, translation). This led us to five types of errors: (1) wrong question, when the Translator LM asked a question not related to the ambiguity; (2) wrong answer, when the User LM did not correctly disambiguate; (3) many ambiguities, when the query had multiple unresolved ambiguities or the User LM answer also contained ambiguities; (4) limited context, when the context was not sufficiently informative to resolve ambiguities; (5) style or other, when the generated translated text had discernible differences from the ground truth. Fig. 7 shows that the majority of errors are from wrong User LM answers for formality and "it" resolution. This partially confirms our hypothesis in Subsection 6.4. For tasks involving unseen ambiguities, the majority of errors come from the Translator LM, with
\begin{table}
\begin{tabular}{p{42.7pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}} \hline \hline **Error** & **en Query (S) and Question (Q)** & **User Context (C)** & **Observation** \\ \hline \multicolumn{4}{c}{[Table body garbled in extraction; it contained one example interaction chain per error type.]} \\ \hline \hline \end{tabular}
\end{table}
Table 4: Examples of interaction chains for each type of error.
68% to 78% of sample chains having the wrong question or noticeable differences in the style or form of the generated translation. We provide examples of interaction chains for each type of error in Table 4.
## 7 Conclusion
We propose _interactive-chain prompting_ (InterCPt), a prompt-based interactive multistep computation technique that first resolves cross-lingual ambiguities in the input queries and then performs conditional text generation. We have created and released new datasets that cover five ambiguities: formality, "it" resolution, polysemy, gender neutral names and neutral professions for four different language pairs. Empirical results show that InterCPt outperforms other prompt-based techniques that have access to all background information and context to directly resolve ambiguities. We find that InterCPt is an emergent property of parameter scale that allows large LMs to perform interactive generation tasks while other prompt-based techniques exhibit flattening scaling curves. InterCPt can be considered a step toward more efficient interaction with machine learning systems.
## Acknowledgements
For all the useful discussions and comments, we thank George Foster, Colin Cherry, Rick Genter, Patrick Fernandes and Jason Wei. For feedback on the German and Japanese templates and the translation exemplars used, we thank Julia Kreutzer, Anja Austermann and Mikio Hirabayashi.
2302.09308 | Consistency Tests for Comparing Astrophysical Models and Observations | In astronomy, there is an opportunity to enhance the practice of validating
models through statistical techniques, specifically to account for measurement
error uncertainties. While models are commonly used to describe observations,
there are instances where there is a lack of agreement between the two. This
can occur when models are derived from incomplete theories, when a
better-fitting model is not available or when measurement uncertainties are not
correctly considered. However, with the application of specific tests that
assess the consistency between observations and astrophysical models in a
model-independent way, it is possible to address this issue. The consistency
tests (ConTESTs) developed in this paper use a combination of non-parametric
methods and distance measures to obtain a test statistic that evaluates the
closeness of the astrophysical model to the observations. To draw conclusions
on the consistency hypothesis, a simulation-based methodology is performed. In
particular, we built two tests for density models and two for regression models
to be used depending on the case at hand and the power of the test needed. We
used ConTEST to examine synthetic examples in order to determine the
effectiveness of the tests and provide guidance on using them while building a
model. We also applied ConTEST to various astronomy cases, identifying which
models were consistent and, if not, identifying the probable causes of
rejection. | Fiorenzo Stoppa, Eric Cator, Gijs Nelemans | 2023-02-18T11:50:25Z | http://arxiv.org/abs/2302.09308v1 | # Consistency Tests for Comparing Astrophysical Models and Observations
###### Abstract
In astronomy, there is an opportunity to enhance the practice of validating models through statistical techniques, specifically to account for measurement error uncertainties. While models are commonly used to describe observations, there are instances where there is a lack of agreement between the two. This can occur when models are derived from incomplete theories, when a better-fitting model is not available or when measurement uncertainties are not correctly considered. However, with the application of specific tests that assess the consistency between observations and astrophysical models in a model-independent way, it is possible to address this issue. The consistency tests (ConTESTs) developed in this paper use a combination of non-parametric methods and distance measures to obtain a test statistic that evaluates the closeness of the astrophysical model to the observations. To draw conclusions on the consistency hypothesis, a simulation-based methodology is performed. In particular, we built two tests for density models and two for regression models to be used depending on the case at hand and the power of the test needed. We used ConTEST to examine synthetic examples in order to determine the effectiveness of the tests and provide guidance on using them while building a model. We also applied ConTEST to various astronomy cases, identifying which models were consistent and, if not, identifying the probable causes of rejection.
## 1 Introduction
In astrophysics and astronomy, most models depend on a complex combination of physical assumptions and mathematical parameters, making model validation, i.e. testing whether the fitted model is a good representation of the observed sample, difficult. In statistics, model validation methods such as the Chi-square test (Snedecor and Cochran, 1989), the Kolmogorov-Smirnov test (Chakravarti et al., 1967), the Anderson-Darling test (Stephens, 1974), etc., are well defined and perform well if the preliminary statistical assumptions hold. However, most well-known statistical tests have preliminary assumptions that are often not satisfied in astronomical settings due to complex multidimensional problems, small sample sizes and observational biases. Model validation is also strictly dependent on the data used to build the model and the data used to check its goodness of fit. Validation methods based only on the data used to construct the model are often inadequate; in astrophysics, however, the two sets of data often coincide due to the lack of observations for many exotic objects. In astronomy, it is common to assume that the best-fitting model among those under consideration is an accurate representation of the observations. However, it is worth noting that, once measurement uncertainties are correctly accounted for, the chosen model may still be too far from the observations to be statistically consistent.
Model validation is an essential step in the process of building and assessing the performance of a model. There are two main model validation approaches: Bayesian and non-parametric methods. In Bayesian model validation, the principles of Bayesian statistics are employed; in particular, Bayes' theorem is applied to define a parametric description of the problem at hand. Bayesian methods are flexible and can handle complex models. Methods like posterior predictive assessment of model fitness (Gelman et al., 1996, 2013; Lucy, 2018) are widely applied by the astronomy community (Feeney et al., 2019; Kiziltan et al., 2013). Non-parametric methods, on the contrary, focus on comparing the observed data with the model predictions using a variety of statistical tests and metrics. They do not make any assumptions about the underlying distribution of the data and are often easy to understand and implement. However, goodness-of-fit tests like Chi-square and Kolmogorov-Smirnov, although used in hundreds of astronomical papers every year, have been shown to have limitations and biases that are often not fully recognized among astronomers (Babu and Feigelson, 2006).
In practical applications, if the data distribution is known (Gaussian, Poisson, etc.), a parametric Bayesian method will be more powerful than a non-parametric one; however, this is often not the case in astronomy. In scenarios where the parametric form is unknown, whether due to the complexity of the problem, observational biases, small sample size, etc., a wrong assumption about the parametric form will yield flawed results, while a non-parametric method will capture the problem without needing any additional information.
Statisticians often advise using methods from both branches of statistics, Bayesian and Frequentist, to capture a problem entirely. Since a non-parametric set of tests
that could deal with both regression and density models, in both low- and high-dimensional settings, with no initial assumptions whatsoever, was not available in the astronomical literature, we developed ConTEST. This paper presents a set of new statistical tests to answer the question, "Is the model consistent with the observations?". Although less sensitive than goodness-of-fit methods, it can be used in virtually any model-validation situation.
ConTEST can efficiently assess, in a model-independent way, the consistency between observations and an astrophysical model in two scenarios: regression and density models. The non-parametric component ensures that no assumption on the parametric form of the model is needed, while the test framework gives the model-independent component. A hypothesis-testing process is set to evaluate the agreement between the model and the observations. It uses a combination of non-parametric methods and distance measures to obtain a test statistic followed by a simulation-based methodology. In the latter, the model is assumed to be the ground truth, and a high number of samples are generated from it; comparing the test statistics of the simulation samples with the original test statistic of the observations allows us to interpret whether the assumption of consistency holds or whether the assumption has been violated.
There are two different formulations of ConTEST depending on the type of data and model under consideration. In Section 2 we will discuss the structure of ConTEST for regression models. As shown in Fig. 1, a typical scenario where this test could be applied is testing the consistency between the modelled radial velocity curve of a White Dwarf and its observations with associated uncertainties.
In Section 3, we will discuss the structure of ConTEST for density models. An example of these scenarios is testing the consistency between the luminosity distribution of globular clusters and a parametric model, as shown in Fig. 2.
Testing the consistency in a model-independent way makes ConTEST useful both during the construction of a model and to ensure its validity for successive use for inference purposes. The ideal moment to apply ConTEST is when a set of fitted models have been selected, and we want to assess which subset of these models is consistent with the data.
In the following sections, we will present the consistency tests, explore why a model may be rejected, and investigate the primary component that leads to that conclusion. The three most common reasons found are a faulty model, a biased observed sample, and wrongly estimated uncertainties. In Section 4, we apply ConTEST to multiple real astronomical examples.
## 2 Consistency test for regression models
In astronomy and astrophysics, regression modelling is used both to quantify the relationships in the observed data, like the linear relationship in the Fundamental Plane of elliptical galaxies (Djorgovski and Davis, 1987) or the power-law index of solar flare energies (Hudson, 1991), and to create physically driven models based on the understanding of a specific phenomenon, giving insights into quantities like mass, temperature, composition, etc. (Feigelson and Babu, 2012). ConTEST can assess the agreement between the observations and the suggested model for both these types of regression.
### ConTEST for regression
We first introduce the main version of the consistency test, ConTEST for regression models. It is applicable for checking the consistency of an observed sample, its associated measurement uncertainties, and a regression model with one dependent variable \(Y\) and one or more independent variables \(X\). This test is powerful but strict,
Figure 1: The radial velocity vs phase of WD 0326-273 and relative uncertainties (black dots and bars), and the proposed model to evaluate (blue line).
Figure 2: The apparent magnitude of 360 GCs in the Andromeda Galaxy (black ticks) and their density (orange line), and the model density (blue dashed line).
often rejecting the null hypothesis when outliers, bias, or wrong uncertainty estimation come into play.
We use the root mean square of the standardised residuals as a distance measure; this test statistic does not assume any parametric form for its distribution, nor is any assumption on the data distribution needed. Furthermore, it is applicable for any homoscedastic or heteroscedastic measurement uncertainties, the only requirement being that samples can be generated from their distribution.
ConTEST follows these four steps:
1. Calculate the distance between the observations, \(y_{i}\) for \(i=1,...,n\), and the astrophysical model evaluated at the observations, \(\hat{y}_{i}\), weighted by the observed uncertainties: \[D=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_{i}-\hat{y}_{i}}{\sigma_{i}} \right)^{2}},\] where \(\sigma_{i}\) is the uncertainty of each observation.
2. Simulate \(K\) datasets assuming the astrophysical model \(\hat{y}\) as ground truth and adding Gaussian noise based on the uncertainties of the observations: \[y_{ik}=\hat{y}_{i}+\epsilon_{i}\] where \(\epsilon_{i}\sim N(0,\sigma_{i})\).
3. Calculate the distance between the \(K\) simulated datasets and the model \(\hat{y}\), weighted by the observed uncertainties: \[D_{k}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_{ik}-\hat{y}_{i}}{\sigma_{i }}\right)^{2}}\]
4. Compare the distribution of the simulations' distances with the test statistic. Reject/not reject the null hypothesis with a significance level \(\alpha=0.05\).
ConTEST for regression is a two-tailed test. It does not reject the consistency hypothesis when the test statistic falls between the two critical values, which, for a significance level \(\alpha=0.05\), are the 2.5th and 97.5th percentiles of the distribution of simulated distances. Due to the dependence of the test statistic on the observed uncertainties, this test is sensitive to any outlier or wrongly estimated uncertainty. In the steps above, we assume the observed uncertainties to be Gaussian; however, any other distribution would work, provided we can generate samples from it.
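For illustration, a minimal sketch of the four steps above is given below, assuming Gaussian measurement uncertainties; the function and variable names are our own choices and this is not the authors' implementation.

```python
# A minimal sketch of ConTEST for regression (steps 1-4 above), assuming
# Gaussian measurement uncertainties; names and defaults are our own choices.
import numpy as np


def contest_regression(y_obs, y_model, sigma, n_sim=10000, alpha=0.05, rng=None):
    """Return the observed distance, the simulated distances and a rejection flag."""
    rng = np.random.default_rng(rng)
    y_obs, y_model, sigma = map(np.asarray, (y_obs, y_model, sigma))

    # Step 1: distance between observations and model, weighted by uncertainties.
    d_obs = np.sqrt(np.mean(((y_obs - y_model) / sigma) ** 2))

    # Steps 2-3: simulate datasets from the model and recompute the distance;
    # since y_ik - y_model_i is just the added noise, only the noise is needed.
    noise = rng.normal(0.0, sigma, size=(n_sim, y_model.size))
    d_sim = np.sqrt(np.mean((noise / sigma) ** 2, axis=1))

    # Step 4: two-tailed comparison at significance level alpha.
    lo, hi = np.quantile(d_sim, [alpha / 2, 1 - alpha / 2])
    return d_obs, d_sim, (d_obs < lo) or (d_obs > hi)
```

The same structure is reused by the variants below; only the distance measure and the smoothing change.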
To evaluate the effectiveness of the test, we built three synthetic examples for which we know everything: model, observations, and their uncertainties distribution. For the true model, we choose an arbitrary function:
\[f(x)=\exp(\beta_{1}x)\ sin(\beta_{2}x)+m \tag{1}\]
and for the uncertainties model, \(\epsilon(x)\), we simply model the standard deviation at \(x\) as a constant fraction of \(f(x)\). The examples simulate three main scenarios:
1. Both the astrophysical model and observations come from the true model and uncertainties, \(f(x)\) and \(\epsilon(x)\).
2. The astrophysical model is biased, i.e. Eqn. 1 is fitted without the exponential term. While the observations and relative uncertainties come from the true model and uncertainties, \(f(x)\) and \(\epsilon(x)\).
3. The astrophysical model and the observations come from the true model and uncertainties, \(f(x)\) and \(\epsilon(x)\). But the estimate of the uncertainties of the observations is underestimated by a factor of 2.
As shown in Fig. 3, ConTEST correctly does not reject the null hypothesis when the model, observations and uncertainties come from the truth. It also correctly rejects scenarios (2) and (3), where bias and underestimated uncertainties are present. However, ConTEST shows strict behaviour, mainly triggered by scenario (3), where the uncertainties are underestimated. This behaviour is expected to be even more pronounced in real cases, where outliers and imperfect uncertainty estimation almost always come into play.
To evaluate the power of the test as a function of the number of observed samples, we repeatedly apply ConTEST to the same three examples for three different sample sizes. Being able to generate observed samples and applying ConTEST a high number of times for each example gives us an estimate of its rejection rate. The results can be found in Table 1.
For scenario (1), the rejection rate is, as expected, close to the significance level \(\alpha=0.05\) chosen for the hypothesis testing. This value would converge to 0.05 as the number of repetitions used to calculate the rejection rate increases. For scenarios (2) and (3), the rejection rate improves as a function of the sample size, reaching 100% in both cases with 100 observations.
### Smoothed ConTEST for regression
Built specifically to be used in real scenarios where the presence of outliers, bias and problems with the estimated uncertainties come into play, Smoothed ConTEST includes a non-parametric smoothing process to reduce their effects. The non-parametric method used, Local Linear Regression (LLR), is part of the more general family of local polynomial regressors (Muller, 1998; Nadaraya, 1964).
\begin{table}
\begin{tabular}{c c c c} \hline \hline Example & \(n=10\) & \(n=100\) & \(n=1000\) \\ \hline (1) & 5.5\% & 4.7\% & 5.3\% \\ (2) & 68.3\% & 100\% & 100\% \\ (3) & 88.1\% & 100\% & 100\% \\ \hline \end{tabular}
\end{table}
Table 1: Rejection rate of ConTEST for the three examples as a function of the number of observations.
Local linear regression is a non-parametric method used to estimate the relationship between a dependent variable \(Y\) and one or more independent variables \(X\). It is particularly useful when the relationship between \(X\) and \(Y\) is not linear or when the data has high variability. LLR fits a linear regression model to a subset of the data rather than the entire dataset; this subset, also known as a local neighbourhood, is defined by a kernel function. The latter assigns a weight only to the points in the proximity of the prediction location so that distant points do not contribute to the estimate. A specific description of the kernel concept can be found in Section 3.1.
The main advantage of this non-parametric method is the complete absence of a parametric form, making it an excellent tool for Smoothed ConTEST, where we want to smooth out the observations without any a priori knowledge. The only parameter choice we allow in Smoothed ConTEST is between a fixed and an adaptive kernel bandwidth. An adaptive bandwidth is especially advisable in scenarios where the range of values of the covariates \(X\) has gaps; however, for big datasets, the computation time will increase significantly with respect to a fixed one. In both cases, fixed or adaptive, the bandwidth values themselves are estimated automatically.
In Smoothed ConTEST, the LLR not only mitigates the effect of outliers but also provides a confidence interval that is used in the denominator of the new test statistic, effectively removing the dependency of the test on the estimated uncertainties of the observations. However, the measurement uncertainties are still used in the simulation process to perform hypothesis testing.
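As a rough illustration of the estimator described above, the sketch below implements a local linear fit with a Gaussian kernel and a fixed, user-supplied bandwidth; the automatic bandwidth selection and the confidence intervals used by Smoothed ConTEST are not reproduced here.

```python
# A minimal local linear regression (LLR) sketch with a Gaussian kernel and a
# fixed bandwidth; an illustration only, not the estimator used in the paper.
import numpy as np


def local_linear_regression(x_obs, y_obs, x_eval, bandwidth):
    """Evaluate the local linear estimate of E[Y | X = x] at each point of x_eval."""
    x_obs, y_obs, x_eval = map(np.asarray, (x_obs, y_obs, x_eval))
    y_hat = np.empty(x_eval.shape, dtype=float)
    for j, x0 in enumerate(x_eval):
        # Gaussian kernel weights: distant points contribute almost nothing.
        w = np.exp(-0.5 * ((x_obs - x0) / bandwidth) ** 2)
        # Weighted least-squares fit of a local line y = a + b (x - x0).
        X = np.column_stack([np.ones_like(x_obs, dtype=float), x_obs - x0])
        W = np.diag(w)
        a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_obs)
        y_hat[j] = a  # the intercept is the fitted value at x0
    return y_hat
```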
Smoothed ConTEST follows the same structure as ConTEST presented in Section 2.1; a minimal sketch is given after the list of steps below. The main difference is that the distance is calculated between the model and the LLR instead of directly from the observations:
1. Apply the LLR to the observations \(y_{i}\), for \(i=1,...,n\) and obtain \(\hat{y}_{LLR,\,i}\) and its uncertainties \(\hat{\sigma}_{LLR,\,i}\).
2. Calculate the distance between the LLR and the astrophysical model \(\hat{y}\): \[D=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\hat{y}_{LLR,\,i}-\hat{y}_{i}}{ \hat{\sigma}_{i,\,LLR}}\right)^{2}}\]
3. Simulate \(K\) datasets assuming the astrophysical model
Figure 3: Results of ConTEST for regression models; each column represents a scenario. In the first, the model, observations, and uncertainties come from the true model. The test statistic (orange dashed line) lies within the simulations’ distances density; the null hypothesis is not rejected. In the second, the model is biased, while the observations and their uncertainties come from the true model. The test statistic (orange dashed line) lies outside the simulations’ distances density; the null hypothesis is rejected. In the third, the model and the observations come from the true model, but the uncertainties of the observations are underestimated. The test statistic (orange dashed line) lies outside the simulations’ distances density; the null hypothesis is rejected.
as ground truth and adding Gaussian noise based on the uncertainties of the observations: \[y_{ik}=\hat{y}_{i}+\epsilon_{i}\] where \(\epsilon_{i}\sim N(0,\sigma_{i})\).
4. Apply the LLR to each simulated dataset \(y_{ik}\) and obtain \(\hat{y}_{LLR,\,ik}\) and their uncertainties \(\hat{\sigma}_{LLR,\,ik}\).
5. Calculate the distance between each LLR and the astrophysical model \(\hat{y}\): \[D_{k}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\hat{y}_{LLR,\,ik}-\hat{y}_{i}}{\hat{\sigma}_{ik,\,LLR}}\right)^{2}}\]
6. Compare the distribution of the simulations' distances with the test statistic. Reject/not reject the null hypothesis with a significance level \(\alpha=0.05\).
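The sketch below illustrates this structure; it assumes a generic `smoother` callable (for instance, an LLR fit such as the one sketched above) that returns both the smoothed estimate and its standard error at the observation points. The interface and names are our own assumptions.

```python
# Sketch of Smoothed ConTEST (steps 1-6 above). `smoother(y)` must return the
# smoothed estimate and its standard error at the observation points; this
# interface is an assumption made for illustration.
import numpy as np


def smoothed_contest(y_obs, y_model, sigma, smoother, n_sim=1000, alpha=0.05, rng=None):
    rng = np.random.default_rng(rng)
    y_obs, y_model, sigma = map(np.asarray, (y_obs, y_model, sigma))

    # Steps 1-2: smooth the observations and compare the smooth to the model.
    y_llr, s_llr = smoother(y_obs)
    d_obs = np.sqrt(np.mean(((y_llr - y_model) / s_llr) ** 2))

    # Steps 3-5: simulate from the model, smooth each simulated dataset and
    # recompute the distance against the model.
    d_sim = np.empty(n_sim)
    for k in range(n_sim):
        y_k = y_model + rng.normal(0.0, sigma)
        y_llr_k, s_llr_k = smoother(y_k)
        d_sim[k] = np.sqrt(np.mean(((y_llr_k - y_model) / s_llr_k) ** 2))

    # Step 6: two-tailed comparison, as in ConTEST for regression.
    lo, hi = np.quantile(d_sim, [alpha / 2, 1 - alpha / 2])
    return d_obs, d_sim, (d_obs < lo) or (d_obs > hi)
```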
We use the same three examples introduced in Section 2.1 to evaluate the power of this new non-parametric consistency test.
As shown in Fig. 4, Smoothed ConTEST correctly does not reject the consistency hypothesis for scenario (1), where both observations and model come from the same ground truth. The biased model of scenario (2) is also correctly rejected as not consistent with the observed data; however, for scenario (3), Smoothed ConTEST is not as strict as ConTEST and does not reject the consistency hypothesis when the uncertainties are underestimated by a factor of 2. Smoothed ConTEST is thus a more relaxed test: less sensitive to the uncertainties of the observations, but a more appealing method for testing the biases of an astrophysical model.
We evaluate the power of the test as a function of the number of observed samples, repeatedly applying Smoothed ConTEST to the three examples for three different sample sizes. The results can be found in Table 2.
For scenario (1), the rejection rate is, as expected, close to the significance level \(\alpha=0.05\). For scenario (2), the rejection rate improves as a function of the sample size, correctly reaching 100% rejection rate with less than 100 observations. For scenario (3), however, the test cannot reject the null hypothesis independently of the number of observations.
Applying both ConTEST and Smoothed ConTEST on an astrophysical model allows for identifying its weaknesses in terms of biases and under/over-estimation of the observed uncertainties.
### ConTEST for families of models
As briefly introduced in Section 2, in astronomy and astrophysics, regression modelling is used either to quantify the relationships in the observed data or to create physically driven models based on the understanding of a specific phenomenon.
ConTEST can be used to assess both the consistency of the specific fit of a model to the given data and the consistency of the overall model family being used. This method involves re-estimating the model parameters for every simulated dataset during hypothesis testing, allowing a more general model family to be tested while accounting for the variations in fit due to measurement uncertainties.
We use ConTEST for regression to explain the framework; however, this variant of the test can also be applied with Smoothed ConTEST:
1. Calculate the distance between the observations, \(y_{i}\) for \(i=1,...,n\), and the astrophysical model evaluated at the observations, \(\hat{y}_{i}\), weighted by the observed uncertainties: \[D=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_{i}-\hat{y}_{i}}{\sigma_{i}} \right)^{2}},\] where \(\sigma_{i}\) is the uncertainty of each observation.
2. Simulate \(K\) datasets assuming the astrophysical model \(\hat{y}\) as ground truth and adding Gaussian noise based on the uncertainties of the observations: \[y_{ik}=\hat{y}_{i}+\epsilon_{i}\] where \(\epsilon_{i}\sim N(0,\sigma_{i})\).
3. Fit the model under consideration to the \(K\) simulated datasets, obtaining a set of fitted models \(\hat{y}_{ik}\)
4. Calculate the distance between the \(K\) simulated datasets and their associated model \(\hat{y}_{k}\), weighted by the observed uncertainties: \[D_{k}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_{ik}-\hat{y}_{ik}}{\sigma_{ i}}\right)^{2}}\]
5. Compare the distribution of the simulations' distances with the test statistic. Reject/not reject the null hypothesis with a significance level \(\alpha=0.05\).
We show its application in the white dwarf example of Section 4.3.
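As an illustration of this variant, the sketch below re-fits a parametric model to every simulated dataset with `scipy.optimize.curve_fit`; the choice of fitting routine and the interface are our own assumptions, not the authors' implementation.

```python
# Sketch of ConTEST for a family of models: the parameters are re-estimated on
# every simulated dataset before the distance is recomputed. curve_fit is an
# illustrative choice of fitting routine.
import numpy as np
from scipy.optimize import curve_fit


def contest_model_family(x, y_obs, sigma, model, p0, n_sim=1000, alpha=0.05, rng=None):
    rng = np.random.default_rng(rng)
    x, y_obs, sigma = map(np.asarray, (x, y_obs, sigma))

    # Step 1: fit the model to the real observations and compute the statistic.
    popt, _ = curve_fit(model, x, y_obs, p0=p0, sigma=sigma)
    y_fit = model(x, *popt)
    d_obs = np.sqrt(np.mean(((y_obs - y_fit) / sigma) ** 2))

    # Steps 2-4: simulate from the fitted model, refit on each simulated
    # dataset and recompute the distance with the refitted curve.
    d_sim = np.empty(n_sim)
    for k in range(n_sim):
        y_k = y_fit + rng.normal(0.0, sigma)
        popt_k, _ = curve_fit(model, x, y_k, p0=popt, sigma=sigma)
        d_sim[k] = np.sqrt(np.mean(((y_k - model(x, *popt_k)) / sigma) ** 2))

    # Step 5: two-tailed comparison at significance level alpha.
    lo, hi = np.quantile(d_sim, [alpha / 2, 1 - alpha / 2])
    return d_obs, d_sim, (d_obs < lo) or (d_obs > hi)
```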
## 3 Consistency test for density models
Models formulated as densities, or as clouds of points from a simulation, are common in many astrophysical studies; codes like Modules for Experiments in Stellar Astrophysics (MESA, Paxton et al., 2010), Compact Object Mergers: Population Astrophysics and Statistics (COMPAS, Riley
\begin{table}
\begin{tabular}{c c c c} \hline \hline Example & \(n=10\) & \(n=100\) & \(n=1000\) \\ \hline (1) & 5.7\% & 4.1\% & 5.2\% \\ (2) & 53.7\% & 100\% & 100\% \\ (3) & 29.8\% & 11.2\% & 18.1\% \\ \hline \end{tabular}
\end{table}
Table 2: _Rejection rate of Smoothed ConTEST for the three examples as a function of the number of observations._
et al., 2022), The Gravitational Wave Universe Toolbox (GWToolbox, Yi, Shu-Xu et al., 2022), and many more, often provide as output simulated observations based on a set of input parameters. A way to compare these samples with real observations is not well defined in astronomy, and with ConTEST for densities we hope to provide an intuitive framework.
As for the regression case, we want to build a statistical test to evaluate the consistency between the observations and the model in a non-parametric way, without specifying any form for either observations or models. To do so, we use a non-parametric method called Kernel Density Estimation (KDE, Parzen, 1962; Rosenblatt, 1956), one of the primary data-smoothing methods available in statistics. KDE is commonly used to summarize a cloud of points into a continuous distribution, allowing us to infer the properties of a population. In astronomical settings, it has been used, for instance, in the study of star clusters (Seleznev, 2016) and of binary black hole mergers (Yi, Shu-Xu et al., 2022), and often as a data-representation method preferable to histograms (e.g. Weglarczyk, 2018).
In Section 3.1, we briefly explain the idea behind the KDE, and in Sections 3.2 and 3.3, we present two tests: ConTEST for outliers and ConTEST for densities.
### Kernel density estimation
Here we present the kernel density estimation and its use in our consistency test. For \((X_{1},X_{2},\ldots,X_{n})\) independent and identically distributed variables, KDE is defined as
\[\hat{f}(x,h)=\frac{1}{n}\sum_{i=1}^{n}K_{h}(x-X_{i})=\frac{1}{nh}\sum_{i=1}^{ n}K\left(\frac{x-X_{i}}{h}\right), \tag{2}\]
where \(K\) is the kernel and \(h>0\) is the smoothing parameter called bandwidth. Due to its convenient mathematical properties, we chose a Gaussian kernel, \(K(x)=\phi(x)\), for ConTEST; however, a range of kernel functions is available, such as triangular, biweight, triweight, Epanechnikov, etc. The choice of the kernel does not affect the density estimate much if the number of observations is high enough; however, the value of the bandwidth \(h\) does. The most common methods to estimate the bandwidth are rules of thumb like Scott's rule (Scott, 1992; Scott and Terrell, 1987)
Figure 4: Results of Smoothed ConTEST for regression models; each column represents a scenario. In the first, the model, observations, and uncertainties come from the true model. The test statistic (orange dashed line) lies within the simulations’ distances density; the null hypothesis is not rejected. In the second, the model is biased, while the observations and their uncertainties come from the true model. The test statistic (orange dashed line) lies outside the simulations’ distances density; the null hypothesis is rejected. In the third, the model and the observations come from the true model, but the uncertainties of the observations are underestimated. The test statistic (orange dashed line) lies within the simulations’ distances density; the null hypothesis is not rejected.
\(H=n^{-2/(m+4)}\,\Sigma\) and Silverman's rule (Lauter, 1988) \(H=\left(\frac{4}{n(m+2)}\right)^{2/(m+4)}\Sigma\), where \(m\) is the number of variables, \(n\) is the sample size and \(\Sigma\) is the empirical covariance matrix; for our application, we use Scott's rule, which is on average good for most scenarios.
KDE is easily extendable to the multivariate case, making it an excellent tool for our non-parametric tests. To estimate densities \(f\) in \(\mathbb{R}^{p}\), it simply performs an average of multivariate kernels centred at the data points. One disadvantage of KDE is that it is not accurate when estimating a density near the finite endpoints of the support, e.g. near 0 for a strictly positive variable. A set of solutions for boundary-corrected KDE is available and has been explored in Jones (1993) and Karunamuni and Alberts (2005); however, an easy solution to the problem is transforming the data such that the hard boundaries are removed, e.g. applying a logarithmic transformation to both model and observations.
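As a brief illustration of the ingredients above, the sketch below builds a Gaussian KDE with `scipy.stats.gaussian_kde` (which uses Scott's rule by default) and applies the logarithmic transformation mentioned for a strictly positive variable; the synthetic lognormal sample is an assumption made for the example.

```python
# Gaussian KDE with Scott's rule via scipy, plus the log-transform trick for a
# strictly positive variable; the lognormal sample is an illustrative choice.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # strictly positive data

# Estimating the density directly near the boundary at 0 can be inaccurate;
# working in log space removes the hard boundary.
kde_log = gaussian_kde(np.log(sample))                  # bandwidth: Scott's rule

x = np.linspace(0.5, 15.0, 200)
density = kde_log(np.log(x)) / x                        # change of variables back to x
```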
### ConTEST for outliers
In astronomy, a natural approach often found in the literature is calculating the likelihood of an observed sample with respect to the density model under consideration and using it as a test statistic. The test introduced below is based on the same principle; however, we argue that, due to the formulation of its test statistic, it is not a consistency test but an outlier detection method. ConTEST for outliers follows these steps:
1. If the model is given as a simulated cloud of points: apply KDE to the model sample and obtain its density \(g\).
2. Calculate the loglikelihood of the observations \(y_{i}\), for \(i=1,...,n\), with respect to the density \(g\): \[D=-\frac{1}{n}\sum_{i=1}^{n}log(\,g(y_{i})\,)\] (3)
3. Simulate \(K\) datasets \(y_{ik}\) of size \(n\) from the model's density \(g\).
4. Calculate the loglikelihood for each simulated dataset with respect to the density \(g\): \[D_{k}=-\frac{1}{n}\sum_{i=1}^{n}log(\,g(y_{ik})\,)\] (4)
5. Compare the distribution of simulations' distances with the test statistic. Reject/not reject the null hypothesis with a significance level \(\alpha=0.05\).
The resulting test is simple and can be used with a small number of observations. However, it is not a consistency test but an outlier detection method. The reason is that there exist distributions \(f\neq g\) such that if \(Y\sim g\) and \(X\sim f\), we have \(E\left[-\log g(Y)\right]\approx E\left[-\log g(X)\right]\). This means that even with a considerable amount of data, we would not be able to reject \(g\) when, in fact, \(f\) is the true distribution. Of course, it is rather unlikely that the true distribution has this property with respect to \(g\). Furthermore, an outlier would have a very low likelihood and would therefore be easily detected with this method.
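A minimal sketch of the procedure is given below, assuming the model is supplied as a simulated cloud of points and that both the model density and the simulations are handled with `scipy.stats.gaussian_kde`; the names and the two-tailed comparison follow our reading of the steps above rather than the authors' code.

```python
# Sketch of ConTEST for outliers: the statistic is the mean negative
# log-likelihood of the observations under the KDE of the model sample.
# gaussian_kde expects arrays of shape (d, n); a 1-D array is treated as d = 1.
import numpy as np
from scipy.stats import gaussian_kde


def contest_outliers(obs, model_sample, n_sim=1000, alpha=0.05):
    obs = np.atleast_2d(obs)
    g = gaussian_kde(model_sample)        # step 1: density of the model sample
    d_obs = -np.mean(np.log(g(obs)))      # step 2: test statistic

    # Steps 3-4: simulate n-point datasets from g and recompute the statistic.
    n = obs.shape[1]
    d_sim = np.array([-np.mean(np.log(g(g.resample(n)))) for _ in range(n_sim)])

    # Step 5: compare against the simulated distribution (two-tailed here, so
    # that an unlikely concentration in high-density regions is also flagged).
    lo, hi = np.quantile(d_sim, [alpha / 2, 1 - alpha / 2])
    return d_obs, d_sim, (d_obs < lo) or (d_obs > hi)
```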
We evaluate the effectiveness of this test with synthetic examples. For the true model, we chose an arbitrary bivariate Gaussian distribution \(\mathcal{N}(\boldsymbol{\mu}_{true},\,\boldsymbol{\Sigma}_{true})\) with mean
\[\boldsymbol{\mu}_{true}=\left[\begin{array}{c}5\\ 5\end{array}\right]\]
and covariance matrix
\[\boldsymbol{\Sigma}_{true}=\left[\begin{array}{cc}1.5&0.8\\ 0.8&2.5\end{array}\right].\]
The examples simulate three main scenarios:
1. Both the astrophysical model and observations come from the true model.
2. The astrophysical model is biased, \[\boldsymbol{\mu}_{mod}=\left[\begin{array}{c}4\\ 4\end{array}\right].\] While the observations come from the true model.
3. The astrophysical model has an overestimated covariance matrix, \[\boldsymbol{\Sigma}_{mod}=\left[\begin{array}{cc}2.5&0.8\\ 0.8&3.5\end{array}\right].\] While the observations come from the true model.
As can be seen in Fig. 5, the test correctly does not reject scenario (1), while it does reject scenarios (2) and (3).
Although we call it an outlier detection method, this test can also reject in the opposite scenario, where an unlikely high number of observations lie in a high-density region of the model.
As for the regression tests, we repeatedly apply the test to the three examples for three different sample sizes to evaluate the power of ConTEST for outliers as a function of the number of observations. Repeating the test a thousand times for each scenario gives the rejection rates in Table 3.
For scenario (1), the rejection rate is, as expected, approximately equal to the significance level \(\alpha=0.05\). For
\begin{table}
\begin{tabular}{c c c c} \hline \hline Example & \(n=10\) & \(n=100\) & \(n=1000\) \\ \hline (1) & 5.3\% & 5.4\% & 4.9\% \\ (2) & 18.4\% & 90.7\% & 100\% \\ (3) & 25.1\% & 99.4\% & 100\% \\ \hline \end{tabular}
\end{table}
Table 3: _Rejection rate of ConTEST for outliers for the three examples as a function of the number of observations._
scenarios (2) and (3), the rejection rate improves as a function of the sample size and both reach more than 90% rejection rate with 100 observations.
### ConTEST for densities
Here we introduce the framework to test the consistency between observations and astrophysical models with ConTEST for densities. The test uses only non-parametric methods and a distance measure to reject/not reject the null hypothesis. ConTEST follows these steps:
1. If the model is given as a simulated cloud of points: Apply KDE to the model sample and obtain its density \(g\).
2. Apply KDE to the observed sample and obtain its density \(f\).
3. Calculate the distance between \(f\) and \(g\) with: \[D=\int_{\rm R}\left|f(y)-g(y)\right|dy\] (5)
4. Simulate \(K\) datasets of size \(n\) from the model's density \(g\).
5. Apply KDE to each of the K datasets and obtain their densities \(f_{k}\).
6. Calculate the distance for each simulated dataset: \[D_{k}=\int_{\rm R}\left|f_{k}(y)-g(y)\right|dy\] (6)
7. Compare the distribution of simulations' distances with the test statistic. Reject/not reject the null hypothesis with a significance level \(\alpha=0.05\).
The test statistic is approximated with an MCMC method, simulating a high number of observations from \(g\) and then calculating the distance as \(D\approx\frac{1}{N}\sum_{i=1}^{N}\left|\frac{f(x_{i})}{g(x_{i})}-1\right|\), where the \(x_{i}\) are drawn from \(g\). Due to the formulation of its test statistic, ConTEST for densities is a one-tailed test; it rejects the consistency hypothesis only when the test statistic is greater than the critical value. We use the same three synthetic examples introduced in Section 3.2 to test the power of ConTEST for density models.
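The sketch below shows this sampling-based approximation of the L1 distance, assuming that \(f\) and \(g\) are `gaussian_kde` objects built from the observed and model samples respectively; the helper name is our own and the same function can be reused for the simulated datasets.

```python
# Sampling-based approximation of D = integral of |f - g| = E_g[|f(Y)/g(Y) - 1|],
# with f and g supplied as scipy gaussian_kde objects (an assumption here).
import numpy as np


def l1_distance(f, g, n_mc=100_000):
    """Approximate the integrated absolute difference between densities f and g."""
    y = g.resample(n_mc)                     # draw samples from the model density g
    return np.mean(np.abs(f(y) / g(y) - 1.0))


# Example usage (assumed 1-D samples):
#   f, g = gaussian_kde(obs_sample), gaussian_kde(model_sample)
#   d = l1_distance(f, g)
```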
As shown in Fig. 6, for scenario (1), where both model and observations come from the truth, ConTEST for densities does not reject the consistency hypothesis. For scenario (2), it correctly rejects the consistency hypothesis, as the model is biased. And in scenario (3), it can assess that the
Figure 5: Result of ConTEST for outliers; each column represents a scenario. In the first, the model and observations come from the true model. The test statistic (orange dashed line) lies within the simulations’ distances density; the null hypothesis is not rejected. In the second, the model is biased, while the observations come from the true model. The test statistic (orange dashed line) lies outside the simulations’ distances density; the null hypothesis is rejected. In the third, the astrophysical model has an overestimated covariance matrix, while the observations come from the true model. The test statistic (orange dashed line) lies outside the simulations’ distances density; the null hypothesis is rejected.
model has an overestimated covariance matrix, making it inconsistent with the observed sample.
Also for ConTEST for densities, we evaluate how the power of the test depends on the number of observations; the results can be found in Table 4.
For scenario (1), the rejection rate is, as expected, approximately equal to the significance level \(\alpha=0.05\). For scenario (2), the rejection rate improves as a function of the sample size and reaches 100% rejection rate with 100 observations. For scenario (3), the test shows low sensitivity to overestimating the covariance and only rejects the null hypothesis 31.1% of the time. However, with a bigger sample size, it reaches a 100% rejection rate.
## 4 Examples
We now present a series of examples to show the applicability of ConTEST to real scenarios in astronomy and, at the same time, evaluate some well-known astrophysical models.
### Cosmic Microwave background
The first example concerns the Cosmic Microwave Background (CMB) radiation, from which essential properties of the Universe as a whole and its evolutionary history can be determined. The many sky surveys of the CMB, such as the Wilkinson Microwave Anisotropy Probe (WMAP, Bennett et al., 2013; Hinshaw et al., 2013), Planck (Adam et al., 2014), the South Pole Telescope (SPT, Story et al., 2013) and the Atacama Cosmology Telescope (ACT, Sievers et al., 2013), although agreeing on a standard \(\Lambda\)CDM model with six cosmological parameters, derive different parameter estimates. This is a clear case where testing the consistency of the models with the observations is necessary. The relevant data is the power spectrum of the temperature fluctuations measured over the sky, which shows several peaks. We apply both ConTEST methods for regression to test the
\begin{table}
\begin{tabular}{c c c c} \hline \hline Example & \(n=10\) & \(n=100\) & \(n=1000\) \\ \hline (1) & 4.9\% & 4.6\% & 5.2\% \\ (2) & 40.8\% & 100\% & 100\% \\ (3) & 12.0\% & 31.1\% & 100\% \\ \hline \end{tabular}
\end{table}
Table 4: _Rejection rate of ConTEST for densities for the three examples as a function of the number of observations._
Figure 6: _Result of ConTEST for densities; each column represents a scenario. In the first, the model and observations come from the true model. The test statistic (orange dashed line) lies within the simulations’ distances density; the null hypothesis is not rejected. In the second, the model is biased, while the observations come from the true model. The test statistic (orange dashed line) lies outside the simulations’ distances density; the null hypothesis is rejected. In the third, the astrophysical model has an overestimated covariance matrix, while the observations come from the true model. The test statistic (orange dashed line) lies outside the simulations’ distances density, and the null hypothesis is rejected._
consistency of the Planck 2018 temperature power spectrum data (Aghanim et al., 2020) with the best-fitting model made available to the public from the Planck collaboration.
In Fig. 7, we show the data and the best-fitting model on the left; on the right, the ConTEST result shows the consistency of the model with the data. In Fig. 8, we show the results of the LLR approach, which are very close to the best-fit model and also lead to the conclusion that the model is consistent with the data. As expected, Planck's \(\Lambda\)CDM model is consistent with the observations and does not suffer from either bias or under/over-estimation of uncertainties.
### Spectral analysis
A similar type of data set occurs in spectroscopy, where the intensity of light as a function of wavelength provides essential information about the temperature and chemical composition of the observed object, and where shifts in velocity due, e.g., to Doppler motion can be detected. It is common practice to develop different synthetic spectra based on an object's temperature, composition and other properties and compare them to the observed data to select the best-fit model. This is a clear example where a consistency test helps validate the theoretical model.
Here we investigate a typical spectral analysis and fit from Martocchia et al. (2021), which determines C and N abundances in stars of star clusters in the Magellanic Clouds. We test its consistency with Smoothed ConTEST.
In Fig. 9, we show a part of the normalized spectrum around 430 nm, where absorption from CH is present. The best-fit model is shown with the data and the LLR result. Although the best fit matches the data for a significant part of the spectral range, ConTEST rejects the best fit. In this case, as often in theoretical modelling, the model does not fully capture the complexity of the data. Astronomers can use ConTEST to test the consistency and comment on the outcome, making the limitations of the theoretical model explicit.
### Binary white dwarf radial velocity
In the following example, we test the consistency between the follow-up observations of a double white dwarf binary detected in the ESO SN Ia Progenitor SurveY (SPY) and its orbital solution model from Nelemans et al. (2005). In particular, we apply both ConTEST methods for regression to the binary white dwarf (WD) \(0326-273\), consisting of a close double white dwarf with a possible outer M star third companion in a very wide orbit.
This example differs from the ones introduced before; here, the model comes from simple Newtonian laws and thus is very unlikely to be wrong or incomplete, while the observations, which often combine data from different telescopes and have uncertainties estimated a posteriori, are the main suspects for a possible rejection.
First, in Fig. 10, we show the data and the best-fit model from the paper. ConTEST for regression rejects the consistency hypothesis. This is likely due to the observations' tiny uncertainties and possible systematic errors and offsets, not to the model itself. Indeed, when testing the model again with Smoothed ConTEST, which is less sensitive to the effect of the observed uncertainties, the consistency hypothesis is not rejected, further confirming that the cause of the rejection is indeed the uncertainties. This result can be found in Fig. 11.
For this example, we also apply Smoothed ConTEST for the family of models coming from Newtonian laws. Since the model is estimated directly from the observations, we can re-estimate a new model for every simulated dataset. The result for this test is shown in Fig. 12, and it strongly hints that the family of models is indeed correct, not rejecting the consistency hypothesis between the model and the observations of the binary WD \(0326-273\).
We also used this specific example to test the adaptive bandwidth; since the support is unevenly sampled, the LLR with an adaptive bandwidth can produce a smoother estimate of the regression function.
### Globular cluster luminosity
We now apply ConTEST on density models. As a first example, we look at the Globular Cluster (GC) luminosity function. GCs are large collections of stars gravitationally bound in a compact configuration, fundamentally distinct from the field population of stars in the same galaxy. GCs can offer clues regarding different aspects of their hosting galaxies, such as the galactic star formation history, the cannibalism of smaller merging galaxies, and the galactic structure (Ashman and Zepf, 1998; Michael Fall and Rees, 1988; West et al., 2004).
The distribution of GC luminosities, known as the globular cluster luminosity function (GCLF), can provide essential insights for inferring cosmological distances (Hanes, 1977; Racine, 1968). Early analyses of the GCLFs of the Milky Way and the Andromeda Galaxy (M 31) showed that a Gaussian distribution was an excellent analytical fit to the observed distribution of luminosities (Harris, 1991; Racine and Shara, 1979). Further analyses indicated many more possible analytical formulas that better fit the observed luminosities; however, this problem is beyond the purpose of this example and will not be discussed here. We test the initial Gaussian fit proposed for the GCLF of 360 GCs in the Andromeda Galaxy, made available in Nantais et al. (2006).
In Fig. 13, we show the distribution of GC luminosities and the best fit Gaussian model. As shown in the figure, ConTEST for densities rejects the Gaussian fit. This is an excellent example of a model that comes from the best-fitting set of parameters estimated from the observations but is still too incomplete to be considered consistent.
### Double neutron star population synthesis
Finally, we give an example of a two-dimensional distribution. Since the discovery of the Hulse-Taylor double neutron star (DNS) binary, the number of known DNS in the Milky Way has increased, opening the way to more accurate population analyses (Tauris et al., 2017). In particular, the properties of the systems are often analysed in the period-eccentricity plane. According to Andrews and Mandel (2019), the current sample hints at the presence of three distinct sub-populations based on their orbital characteristics: (i) short-period, low-eccentricity binaries; (ii) wide binaries; and (iii) short-period, high-eccentricity binaries.
We have produced several synthetic populations based on different assumptions for the direct progenitors of DNS and specific properties of the supernova in which the second neutron star is formed (Fontein et al. in preparation). We use ConTEST to test the consistency of three alternative models for the first sub-population. Because the number of observations is low, we only apply ConTEST for outliers.
As can be seen in Figs. 14, 15, and 16, ConTEST for outliers does not reject any of the three possible sub-population models.
## 5 Discussion
Astronomical data often comes with an estimate of the so-called measurement uncertainties. However, the use of estimated uncertainties in model validation is not the same for regression and density models. In this paper, for regression models, the estimated uncertainties of the observations are a fundamental part of the two tests allowing the creation of simulated samples from the model under consideration. The same cannot be done in validation methods for density models. However, two possible ways to include uncertainties in our density consistency test exist: correct the observed density accounting for the measurement uncertainties or modify the model to account for the uncertainties. The first method is a more accurate approach, although its practical application is still under discussion in the statistical literature with the concept of Deconvolution Kernel Density Estimation (Delaigle and Meister, 2008; Stefanski and Carroll, 1990). The second approach is certainly easier to apply once an estimate of the measurement error distribution is obtained from theory or observations. It can be used in combination with ConTEST,
Figure 8: Results of Smoothed ConTEST. On the left, the Planck 2018 temperature power spectrum data and relative uncertainties (black dots and bars), the publicly available best-fitting model (blue line) and the LLR model (orange line). On the right, the test statistics (orange dashed line) lie within the simulations’ distances density; the consistency hypothesis is not rejected.
Figure 7: Results of ConTEST. On the left are the Planck 2018 temperature power spectrum data, relative uncertainties (black dots and bars), and the publicly available best-fitting model (blue line). On the right, the test statistics (orange dashed line) lie within the simulations’ distances density; the consistency hypothesis is not rejected.
by simply convolving the model sample with the estimated error distribution, yielding a broader model that incorporates the measurement uncertainties. Either solution enables a fairer comparison between models and observations and is advised whenever measurement uncertainties are significant.
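As an illustration of this second approach, the following minimal Python sketch (ours, not part of the ConTEST package) broadens a sample drawn from a density model by convolving it with assumed Gaussian measurement errors; the error model, the function name and the toy numbers are our own assumptions.

```python
import numpy as np

def broaden_model_sample(model_sample, sigma_meas, rng=None):
    """Convolve a sample drawn from a density model with assumed Gaussian
    measurement errors, yielding a 'broadened' model sample that
    incorporates the measurement uncertainties."""
    rng = np.random.default_rng() if rng is None else rng
    model_sample = np.asarray(model_sample, dtype=float)
    noise = rng.normal(0.0, sigma_meas, size=model_sample.shape)
    return model_sample + noise

# Example: a 1D Gaussian model broadened by a measurement error of 0.3.
rng = np.random.default_rng(0)
model = rng.normal(loc=0.0, scale=1.0, size=(5000, 1))
broadened = broaden_model_sample(model, sigma_meas=0.3, rng=rng)
print(model.std(), broadened.std())  # ~1.0 vs ~sqrt(1 + 0.3**2)
```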
We also want to point out that if a set of possible models is being tested, the consistency tests proposed in this paper can create a confidence region for the parameter space under consideration. Testing a grid of the model's parameters and looking at the subset that is not rejected will automatically create a multidimensional ellipse that can be used as confidence intervals for the parameters.
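The grid-inversion idea can be sketched as follows; the consistency test itself is represented here by a placeholder callable `consistency_pvalue`, a hypothetical stand-in for any of the ConTEST variants and not part of their published interface.

```python
import numpy as np

def confidence_region(param_grid, observations, consistency_pvalue, alpha=0.05):
    """Invert a consistency test into a confidence region: keep every
    parameter vector whose model is NOT rejected at level alpha.
    `consistency_pvalue(theta, observations)` is a placeholder callable."""
    accepted = [theta for theta in param_grid
                if consistency_pvalue(theta, observations) >= alpha]
    return np.array(accepted)

# Toy usage with a dummy p-value function (illustrative only).
grid = [np.array([a, b])
        for a in np.linspace(0.0, 1.0, 21) for b in np.linspace(0.0, 1.0, 21)]
dummy_pvalue = lambda theta, obs: float(np.exp(-10.0 * np.sum((theta - 0.5) ** 2)))
region = confidence_region(grid, observations=None, consistency_pvalue=dummy_pvalue)
print(len(region), "grid points retained")
```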
## 6 Conclusions
In this paper, we propose multiple frameworks for assessing the consistency between observations and astrophysical models in a non-parametric and model-independent way. We developed four tests, two for regression and two for density models.
For regression models, we tested both methods on three synthetic examples exploring possible problems that can affect an astrophysical model and the observations under consideration. The first method, ConTEST for regression, is powerful but often strict, rejecting the null hypothesis in real cases, especially when under/overestimation of the uncertainties is present; the second method, Smoothed ConTEST for regression, was instead developed specifically for real applications and is less sensitive to the uncertainties of the observations, enabling it to concentrate on testing the presence of biases between model and observations. Applying both methods allows evaluation of the three most common problems that affect regression models in astronomy: bias in the model, outliers in the observations, and incoherence of the uncertainties of the observations.
For density models, the first method, ConTEST for outliers, allows for assessing the presence of outliers in the observations with respect to the model under consideration. The second test, ConTEST for densities, is an excellent tool for evaluating the consistency between density models, either parametric or simulated clouds of points, and observations. Applying both tests allows first to assess the presence of outliers, eventually remove them, and then draw conclusions on the consistency of the model.
Figure 10: Result of ConTEST. On the left is the radial velocity vs. phase of WD \(0326-273\), relative uncertainties (black dots and bars), and the proposed model to evaluate (blue line). On the right, the test statistics (orange dashed line) lie outside the simulations’ distances density, and the consistency hypothesis is rejected.
Figure 9: Result of Smoothed ConTEST. On the left, the normalized flux spectrum and relative uncertainties (black dots and bars), the proposed model to evaluate (blue line) and the LLR model (orange line). On the right, the test statistics (orange dashed line) lie outside the simulations’ distances density, and the consistency hypothesis is rejected.
All four tests were applied extensively on real cases in Section 4, and comments on the results were given as guidelines for possible users of this set of non-parametric tests.
In the future, we will concentrate on expanding the use of observational uncertainties when testing models. We will specifically explore the use of uncertainties in testing the consistency of density models and the possibility of including the model's uncertainties in the framework when available.
## Acknowledgements
F. S. and G. N. acknowledge support from the Dutch Science Foundation NWO.
## Data Availability
All four ConTEST methods are developed in the statistical software R and available to the public in Python at [https://github.com/FiorenSt/ConTEST](https://github.com/FiorenSt/ConTEST) (Stoppa, 2022).
|
2303.09021 | Encoding acyclic orientation of complete multipartite graphs | In this work we study the acyclic orientations of complete multipartite
graphs. We obtain an encoding of the acyclic orientations of the complete
$p$-partite graph with size of its parts $n:=n_1,n_2,\ldots,n_p$ via a vector
with $p$ symbols and length $n_1+n_2+\ldots+n_p$ when the parts are fixed but
not the vertices in each part. We also give a recursive way to construct all
acyclic orientations of a complete multipartite graph, this construction can be
done by computer easily in order $\mathcal{O}(n)$. Besides, obtained
codification of the acyclic orientations allows us to count the number of
non-isomorphic acyclic orientations of the complete multipartite graphs.
Furthermore, we obtain a closed formula for non-isomorphic acyclic orientations
of the complete multipartite graphs with a directed spanning tree. In addition,
we obtain a closed formula for the ordinary generating functions for the number
of strings in the alphabet $\{s_1,s_2,\ldots,s_p\}$ with $k_1$ characters
$s_1$, $k_2$ characters $s_2$, and so on with $k_p$ characters $s_p$ such that
no two consecutive characters are the same. Finally, we obtain a closed formula
for the number of acyclic orientation of a complete multipartite graph
$K_{n_1,\ldots,n_p}$ with labelled vertices. | Walter Carballosa, Jessica Khera, Francisco Reyes | 2023-03-16T01:24:27Z | http://arxiv.org/abs/2303.09021v1 | # Encoding acyclic orientation of complete multipartite graphs
###### Abstract
In this work we study the acyclic orientations of complete multipartite graphs. We obtain an encoding of the acyclic orientations of the complete \(p\)-partite graph with sizes of its parts \(n:=n_{1},n_{2},\ldots,n_{p}\) via a vector with \(p\) symbols and length \(n_{1}+n_{2}+\ldots+n_{p}\) when the parts are fixed but not the vertices in each part. We also give a recursive way to construct all acyclic orientations of a complete multipartite graph; this construction can be done easily by computer in order \(\mathcal{O}(n)\). Besides, the obtained codification of the acyclic orientations allows us to count the number of non-isomorphic acyclic orientations of the complete multipartite graphs. Furthermore, we obtain a closed formula for non-isomorphic acyclic orientations of the complete multipartite graphs with a directed spanning tree. In addition, we obtain a closed formula for the ordinary generating functions for the number of strings in the alphabet \(\{s_{1},s_{2},\ldots,s_{p}\}\) with \(k_{1}\) characters \(s_{1}\), \(k_{2}\) characters \(s_{2}\), and so on with \(k_{p}\) characters \(s_{p}\), such that no two consecutive characters are the same. Finally, we obtain a closed formula for the number of acyclic orientations of a complete multipartite graph \(K_{n_{1},\ldots,n_{p}}\) with labelled vertices.
keywords: Acyclic graphs, encoding graphs, Complete multipartite graphs, enumerating graphs Msc: [2020] 05C20, 05C30
## 1 Introduction
Throughout this paper, we will focus our attention on finite simple graphs \(G\), with \(V(G)\) denoting the set of vertices of the graph and \(E(G)\) denoting the set of edges. A multipartite graph is a graph whose vertices can be partitioned into distinct sets \(V_{i}\) so that every edge of the graph connects a vertex in \(V_{i}\) to a vertex in \(V_{j}\), with \(i\neq j\). A multipartite graph with \(k\) parts is referred to as a \(k\)-partite graph, and the complete \(k\)-partite graph is the graph with \(k\) parts which contains an edge between every pair of vertices from different parts. This is denoted \(K_{n_{1},n_{2},\ldots,n_{k}}\), where each \(n_{i}\) is the number of vertices within part \(V_{i}\).
An orientation of a graph is an assignment of a direction to each edge in some way. An orientation of a graph is said to be _acyclic_ if the assignment does not form any directed cycles. In an orientation of a graph a vertex is said to be a _source_ if its in-degree is zero, and similarly, a vertex is said to be a _sink_ if its out-degree is zero. We denote the multinomial coefficients by \(\binom{n}{a_{1},a_{2},\ldots,a_{k}}=\frac{n!}{a_{1}!\cdot a_{2}!\cdot\ldots \cdot a_{k}!}\) where \(n=\sum_{i=1}^{k}a_{i}\) and \(a_{1},a_{2},\ldots,a_{k}\) are nonnegative integers. Given a graph \(G\), we denote the chromatic number of \(G\), by \(\chi(G)\). The chromatic number of a graph \(G\) is the smallest number of colors needed to color the vertices of \(G\) so that no two adjacent vertices share the same color. Similarly, \(\chi_{G}\) denotes the chromatic polynomial of \(G\). The chromatic polynomial \(\chi_{G}\) at a positive integer \(\lambda\), \(\chi_{G}(\lambda)\), is defined by the number of \(\lambda\)-colorings of the graph \(G\).
Acyclic orientations of a graph have been widely studied since the last century. One of the well-known results about acyclic orientations of a graph, the Gallai-Hasse-Roy-Vitaver theorem [9, Theorem 3.13, p. 42], was discovered (several times) during the 1960s by the four authors it is named after. The theorem states that the chromatic number of a connected graph equals one more than the length of the longest path in an acyclic orientation chosen to minimize this path length. In 1973, Stanley [14] obtained that the number of acyclic orientations of a graph \(G\) with order \(n\) is \((-1)^{n}\chi_{G}(-1)\). In 1986, Linial [8] used this result to prove that counting the acyclic orientations of a graph is a #P-complete problem. In 1984, Johnson [6] investigated the relationship between certain acyclic orientations and network reliability. In that paper, Johnson describes how generating the acyclic orientations of a network with a unique source can be used in computing its reliability. Other constructive methods for listing the acyclic orientations of a graph \(G\) were investigated in [10, 11, 12, 13]. Squire's algorithm in [13] requires \(\mathcal{O}(n)\) time per acyclic orientation generated. Acyclic orientations of complete bipartite graphs were studied in [7] and [15], and here, in this work, we extend the study to complete multipartite graphs.
In this paper we study the acyclic orientations of complete multipartite graphs. In Section 2, we focus primarily on acyclic orientations with unlabelled vertices. We obtain a recursive way to construct all acyclic orientations of a complete multipartite graph with unlabelled vertices, and this construction can be done easily with a computer in order \(\mathcal{O}(n)\), supported on the encoding given by Theorem 2.5 that is shown in (2.2). Additionally, Theorem 2.7 counts the number of non-isomorphic acyclic orientations of the complete multipartite graphs; in particular, we obtain that \(\mathcal{B}(n_{1},\ldots,n_{p})=\frac{\binom{n_{1}+\ldots+n_{p}}{n_{1},\ldots,n_{p}}}{r_{1}!\ldots r_{s}!}\), where \(r_{1},r_{2},\ldots,r_{s}\) are the numbers of parts in \(K_{n_{1},\ldots,n_{p}}\) grouped by the same size. Theorem 2.8 gives a closed formula for the number of non-isomorphic acyclic orientations of the complete multipartite graphs with a directed spanning tree, \(\mathcal{C}(n_{1},\ldots,n_{p})=\frac{\binom{n_{1}+\ldots+n_{p}}{n_{1},\ldots,n_{p}}}{r_{1}!\ldots r_{s}!}\cdot\frac{(n_{1}+\ldots+n_{p})^{2}-(n_{1}^{2}+\ldots+n_{p}^{2})}{(n_{1}+\ldots+n_{p})^{2}-(n_{1}+\ldots+n_{p})}\). In Section 3, we deal with the acyclic orientations of complete multipartite graphs with labelled vertices. In this sense, we obtain a closed formula for the ordinary generating functions \(\mathcal{F}(x_{1},x_{2},\ldots,x_{p}):=\sum_{k_{1},\ldots,k_{p}\in\mathbb{N}} X_{k_{1},k_{2},\ldots,k_{p}}\,x_{1}^{k_{1}}x_{2}^{k_{2}}\ldots x_{p}^{k_{p}}\), where \(X_{k_{1},\ldots,k_{p}}\) is the number of strings in the alphabet \(\{s_{1},s_{2},\ldots,s_{p}\}\) with \(k_{1}\) characters \(s_{1}\), \(k_{2}\) characters \(s_{2}\), and so on, with \(k_{p}\) characters \(s_{p}\) such that no two consecutive characters are the same. In Theorem 3.4, we obtain a closed formula for the number of acyclic orientations of a complete multipartite graph \(K_{n_{1},\ldots,n_{p}}\) with labelled vertices. Finally, we discuss the longest paths in acyclic orientations of complete multipartite graphs. Proposition 3.5 gives that the length of the longest path in an acyclic orientation \(\mathcal{K}\) of a complete multipartite graph is the size of the partition induced by \(R_{\mathcal{K}}\) minus one. Furthermore, the number of longest paths in \(\mathcal{K}\) is given by the product of the sizes of the parts of the partition induced by \(R_{\mathcal{K}}\). The relation \(R_{\mathcal{K}}\) is defined on \(V(K_{n_{1},\ldots,n_{p}})\) so that two vertices are related by \(R_{\mathcal{K}}\) if the two vertices are represented in the code within a sub-string of consecutive and identical codes.
## 2 Encoding acyclic orientations of complete multipartite graphs
In this section, we deal with encoding the acyclic orientations of the complete multipartite graphs with labelled parts but unlabelled vertices. The encoding we obtain allows us to count the number of non-isomorphic acyclic orientations of the complete multipartite graphs and the number of non-isomorphic acyclic orientations of the complete multipartite graphs with a directed spanning tree (_i.e._, with a unique source).
**Lemma 2.1**.: _In every acyclic orientation of a complete multipartite graph there are both source and sink vertices._
_Furthermore, all sources (sinks, respectively) are vertices in the same part of the multipartite graph, whenever there are more than one source (sink, respectively)._
Proof.: Let us consider \(K_{n_{1},n_{2},\ldots,n_{p}}\), a complete \(p\)-partite graph, \(n_{i}\geq 1\) for every \(1\leq i\leq p\). We show here below a proof for the existence of a source. Note that the existence of a sink is analogous since by reversing
the orientations in any acyclic orientation of a graph, the resulting orientation is still acyclic and a source in the reversed orientation is a sink in the original orientation. First, we claim the result is true for \(p=2\).
**Claim 1:**_There is a source in every acyclic orientation of \(K_{n_{1},n_{2}}\) for every \(n_{1},n_{2}\geq 1\)._
We prove this by induction on \(n_{1}+n_{2}\geq 2\). Note that there is a source in an acyclic orientation of \(K_{1,1}\). Assume now that there is a source for every acyclic orientation of \(K_{n_{1},n_{2}}\) with \(n_{1}+n_{2}=n\geq 2\). Consider an acyclic orientation of \(G:=K_{n_{1}^{*},n_{2}^{*}}\) with order \(n_{1}^{*}+n_{2}^{*}=n+1\) and a vertex \(v\) in Part 1 (the part with size \(n_{1}^{*}\)). If one of the parts of \(G\) has only one vertex, then \(G\) is a star and, consequently, either the central vertex of the star is a source or one of the other vertices is. Assume \(n_{1}^{*},n_{2}^{*}>1\). Hence, there is a source by assumption in the orientation obtained by removing \(v\) from \(G\), _i.e._, in \(G^{\prime}:=G-\{v\}\). If one of its sources is a vertex in Part 1, the result is done. So, we can assume the source in \(G^{\prime}\) is a vertex in Part 2. Denote that vertex by \(w\). Note that if the edge \(vw\) is oriented from \(w\) to \(v\), then \(w\) is a source in \(G\), and the result follows. Assume the edge \(vw\) is oriented from \(v\) to \(w\). There is a source by assumption in the orientation obtained by removing \(w\) from \(G\), _i.e._, in \(G^{\prime\prime}:=G-\{w\}\). If one of the sources in \(G^{\prime\prime}\) is a vertex in Part 2, the result is done. We can assume the source in \(G^{\prime\prime}\) is a vertex in Part 1. Denote that vertex by \(v^{\prime}\). If \(v\) is a source of \(G^{\prime\prime}\), then \(v\) is a source in \(G\). Thus, assume \(v\) is not a source in \(G^{\prime\prime}\); then there exists a vertex \(w^{\prime}\) in Part 2 such that the edge \(vw^{\prime}\) is oriented from \(w^{\prime}\) to \(v\), creating a cycle \(v\to w\to v^{\prime}\to w^{\prime}\to v\), which is impossible. Therefore, either \(w\) or \(v\) is a source in \(G\). Note that there could be more than one source, but in that case all the sources are in the same part.
**Claim 2:**_There is a source in every acyclic orientation of \(K_{n_{1},n_{2},n_{3}}\) for every \(n_{1},n_{2},n_{3}\geq 1\)._
Let \(A,B,C\) be the three parts of \(K_{n_{1},n_{2},n_{3}}\) with corresponding sizes \(n_{1},n_{2},n_{3}\), respectively. Let \(\mathcal{K}\) be an acyclic orientation of \(K_{n_{1},n_{2},n_{3}}\). Consider \(A-B\) (\(B-C\) and \(C-A\), respectively) the acyclic orientation in \(K_{n_{1},n_{2}}\) (\(K_{n_{2},n_{3}}\) and \(K_{n_{3},n_{1}}\), respectively) considering only Parts \(A\cup B\) (\(B\cup C\) and \(C\cup A\), respectively). By Claim 1, there is a source in each of \(A-B\), \(B-C\) and \(C-A\). Since \(\mathcal{K}\) is an acyclic orientation, the orientation of the sources cannot be all three \(A\to B\), \(B\to C\) and \(C\to A\); nor its corresponding inverse orientations. However, two of its sources (out of the three sub-orientations) must have consecutive orientations by the Pigeonhole Principle. Without loss of generality we can assume that there is a source in \(A-B\) that is a vertex in Part \(A\) and a source in \(B-C\) that is a vertex in Part \(B\), see Figure 1. Thus, the source of \(A-B\) in Part \(A\) is also a source in \(C-A\); otherwise, there is a cycle in \(\mathcal{K}\) with length three.
**Claim 3:**_There is a source in every acyclic orientation of \(K_{n_{1},n_{2},\ldots,n_{p}}\) for every \(n_{1},n_{2},\ldots,n_{p}\geq 1\) and \(p\geq 2\)._
Figure 1: Two sources in consecutive orientations \(A\to B\) and \(B\to C\) for visualizing the argument in Claim 2.
Claims 1 and 2 give the result for \(p=2,3\). Assume now that Claim 3 is true for some \(p\geq 3\). Consider \(G:=K_{n_{1},\ldots,n_{p},n_{p+1}}\) and let \(\mathcal{K}\) be an acyclic orientation of \(G\). Let \(P_{i}\) be the part of \(K_{n_{1},\ldots,n_{p},n_{p+1}}\) with corresponding size \(n_{i}\), for every \(1\leq i\leq p+1\). Hence, there is a source by assumption in the orientation \(\mathcal{K}^{\prime}\) obtained by removing the part \(P_{p+1}\) from \(G\), _i.e._, in \(G^{\prime}:=G-P_{p+1}\). Without loss of generality we can assume that such sources are vertices in \(P_{1}\). Similarly, there is a source by assumption in the orientation \(\mathcal{K}^{\prime\prime}\) obtained by removing the part \(P_{1}\) from \(G\), _i.e._, in \(G^{\prime\prime}:=G-P_{1}\). So, without loss of generality we can assume that the sources in \(G^{\prime\prime}\) are vertices in \(P_{2}\). Consider now the acyclic orientation \(\mathcal{K}^{\prime\prime\prime}\) obtained from \(G\) by removing all edges with both endpoints in \(P_{3}\cup P_{4}\cup\ldots\cup P_{p+1}\). Note that this is an acyclic orientation of a complete tripartite graph \(K_{n_{1},n_{2},n_{3}+\ldots+n_{p}+n_{p+1}}\). Thus, by Claim 2 there is a source in \(\mathcal{K}^{\prime\prime\prime}\) that is a vertex in either \(P_{1}\) or \(P_{2}\); indeed, no vertex of \(P_{3}\cup\ldots\cup P_{p+1}\) can be a source of \(\mathcal{K}^{\prime\prime\prime}\), since a vertex of \(P_{j}\) with \(3\leq j\leq p\) receives an edge from a source of \(\mathcal{K}^{\prime}\) in \(P_{1}\), and a vertex of \(P_{p+1}\) receives an edge from a source of \(\mathcal{K}^{\prime\prime}\) in \(P_{2}\). Therefore, this is a source of \(\mathcal{K}\), too.
Finally, the arguments above give that if there is more than one source (sink, respectively), then all sources (sinks, respectively) are vertices in the same part of the complete multipartite graph.
Lemma 2.1 has the following direct consequences.
**Corollary 2.2**.: _In every acyclic orientation of a non-empty graph there are both source and sink vertices._
Proof.: Let \(\mathcal{K}\) be an acyclic orientation of a given graph \(G(V,E)\). It is well known that an acyclic orientation induces a partial ordering \(<\) on the vertices of \(G\), defined by: two vertices \(v_{1},v_{2}\in V\) satisfy \(v_{1}<v_{2}\) if and only if there is a directed path from \(v_{1}\) to \(v_{2}\) along the edges of the orientation. Let us consider a listing \(\mathcal{L}\) of the vertices of \(G\) that preserves the partial ordering induced by the acyclic orientation. Clearly, such a listing provides \(V\) with a total order. Hence, we can complete \(\mathcal{K}\) to an acyclic orientation \(\mathcal{K}^{\prime}\) of \(K_{n}\simeq K_{\underbrace{1,\ldots,1}_{n}}\) by adding new directed edges according to the ordering in \(\mathcal{L}\), _i.e._, adding \((v_{1},v_{2})\) to \(\mathcal{K}^{\prime}\) if \(v_{1}\) precedes \(v_{2}\) in \(\mathcal{L}\). Thus, by Lemma 2.1 there are both source and sink vertices in \(\mathcal{K}^{\prime}\), and they are also a source and a sink, respectively, in \(\mathcal{K}\) after removing the added directed edges.
**Corollary 2.3**.: _Let \(G\) be an undirected graph without isolated vertices and let \(\mathcal{G}\) be an acyclic orientation of \(G\). Then, \(\mathcal{G}\) has a directed spanning tree if and only if \(\mathcal{G}\) has a unique source._
Proof.: Note that if there is a directed spanning tree in \(\mathcal{G}\), then the root of the spanning tree is the only source. Assume now that there is only one source in \(\mathcal{G}\) and that there does not exist a directed spanning tree in \(\mathcal{G}\). Denote by \(v\) the source in \(\mathcal{G}\). Consider the largest directed tree in \(\mathcal{G}\) with root \(v\), denoted \(\mathcal{T}\). Since \(\mathcal{T}\) is not a spanning tree, there is a non-empty collection of vertices in \(G\) not included in \(\mathcal{T}\), _i.e._, \(V(G)\setminus V(\mathcal{T})\neq\emptyset\). Observe that no edge of \(\mathcal{G}\) is directed from a vertex of \(\mathcal{T}\) to a vertex outside \(\mathcal{T}\); otherwise \(\mathcal{T}\) could be extended by that edge, contradicting its maximality. Thus, since \(G\) has no isolated vertices, every isolated vertex in \(\mathcal{R}:=\langle V(G)\setminus V(\mathcal{T})\rangle\) is a source in \(\mathcal{G}\), but this is a contradiction. If \(\mathcal{R}\) has no isolated vertices, then by Corollary 2.2, \(\mathcal{R}\) has a source \(w\neq v\) that is also a source of \(\mathcal{G}\). This is the contradiction we were looking for, and it completes the proof.
For every collection of natural numbers \(n_{1},n_{2},\ldots,n_{p}\), with \(p\geq 2\), we define the number of acyclic orientations of a complete multipartite graph with fixed parts with respective sizes \(n_{1},n_{2},\ldots,n_{p}\) and unlabelled vertices, by \(\mathcal{A}(n_{1},n_{2},\ldots,n_{p})\). We define \(\mathcal{A}(n_{1},n_{2},\ldots,n_{p})\) such that one or more parts of the multipartite graph may be zero. This notation will be convenient. For instance, \(\mathcal{A}(0,1,2)\) actually represents the number of acyclic orientations of a complete bipartite graph \(K_{1,2}\) since the first part of the tripartite graph is empty. In the same way, \(\mathcal{A}(1,0,2)\), \(\mathcal{A}(0,1,0,0,2)\) and many others also represent \(\mathcal{A}(1,2)\). Note that \(\mathcal{A}(0,0,n)\) represents the number of acyclic orientations of a tripartite graph with at most one non-null (third) part with \(n\) vertices, _i.e._, the empty graph \(E_{n}\), and consequently, \(\mathcal{A}(0,0,n)=1\) for all natural numbers \(n\). For obvious reasons \(\mathcal{A}(n_{1},n_{2},\ldots,n_{p})\) is symmetric with respect to the permutation of their entries.
**Proposition 2.4**.: \(\mathcal{A}(n_{1},n_{2},\ldots,n_{p})\) _satisfies_
\[\mathcal{A}(n_{1},n_{2},\ldots,n_{p})=\mathcal{A}(n_{1}-1,n_{2},\ldots,n_{p})+ \mathcal{A}(n_{1},n_{2}-1,\ldots,n_{p})+\ldots+\mathcal{A}(n_{1},n_{2},\ldots, n_{p}-1). \tag{2.1}\]
_with initial values \(\mathcal{A}(n_{1},n_{2},\ldots,n_{p})=0\) if \(n_{i}=-1\) for some \(1\leq i\leq p\), and \(\mathcal{A}(0,\ldots,0)=1\)._
Proof.: Consider a complete multipartite graph \(K_{n_{1},\ldots,n_{p}}\) and let \(\mathcal{K}\) be one of its acyclic orientations. By Lemma 2.1 there is at least one source in \(\mathcal{K}\) and all its sources are vertices in the same part. Let \(v\) be one of the sources in \(\mathcal{K}\). Without loss of generality we can assume that \(v\) is a vertex in Part 1, \(n_{1}\geq 1\). Note that the acyclic orientation \(\mathcal{K}^{\prime}\) obtained by removing \(v\) and the edges started at \(v\) from \(\mathcal{K}\) we obtain an acyclic orientation of \(K_{n_{1}-1,n_{2},\ldots,n_{p}}\). Since the arbitrary choice of the source may run over all parts, one at the time, and the parts of \(K_{n_{1},\ldots,n_{p}}\) are fixed, we obtain
\[\mathcal{A}(n_{1},n_{2},\ldots,n_{p})=\mathcal{A}(n_{1}-1,n_{2},\ldots,n_{p})+ \mathcal{A}(n_{1},n_{2}-1,\ldots,n_{p})+\ldots+\mathcal{A}(n_{1},n_{2},\ldots,n_{p}-1).\]
The initial values of the recursive formula (2.1) follow from the natural meaning of \(\mathcal{A}(0,\ldots,0)\), _i.e._, the number of acyclic orientations of the multipartite graph with empty parts \(K_{0,\ldots,0}\). Clearly, from a part with no vertices there is no source to remove, so \(\mathcal{A}(n_{1},n_{2},\ldots,n_{p})=0\) if \(n_{i}=-1\) for some \(1\leq i\leq p\).
Figure 2 shows the sequence of sources in an acyclic orientation of the \(5\)-wheel graph, a complete tripartite graph \(K_{2,2,1}\), providing a general view valid for arbitrary acyclic orientations of a multipartite graph. Proposition 2.4 gives a direct procedure to generate all acyclic orientations of a multipartite graph, as well as to encode each orientation in an easy way. Define \([n]:=\{0,1,2,\ldots,n-1\}\) for every natural number \(n\).
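To make the source-removal recursion concrete, here is a small Python sketch (ours, not from the paper) that generates, for fixed part sizes, the vectors recording from which part each successive source is removed (the codes formalized in Theorem 2.5 below), and that evaluates \(\mathcal{A}\) directly by the recurrence (2.1).

```python
def orientation_codes(sizes):
    """Generate the codes (Theorem 2.5) of all acyclic orientations of the
    complete multipartite graph with fixed parts of the given sizes.
    Each code lists, digit by digit, the part from which the next source
    is removed, following the recursion of Proposition 2.4."""
    if all(s == 0 for s in sizes):
        yield ()
        return
    for i, s in enumerate(sizes):
        if s > 0:
            rest = sizes[:i] + (sizes[i] - 1,) + sizes[i + 1:]
            for tail in orientation_codes(rest):
                yield (i,) + tail

def A(sizes):
    """A(n_1,...,n_p) via the recurrence (2.1) of Proposition 2.4."""
    if all(s == 0 for s in sizes):
        return 1
    return sum(A(sizes[:i] + (sizes[i] - 1,) + sizes[i + 1:])
               for i, s in enumerate(sizes) if s > 0)

codes = list(orientation_codes((2, 2, 1)))
print(len(codes), A((2, 2, 1)))  # both 30 = 5!/(2! 2! 1!)
```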
**Theorem 2.5**.: _There is a one-to-one correspondence between the set of all acyclic orientations of a complete multipartite graph \(G\) with at most \(p\) fixed parts and a subset of \([p]^{N}\), where \(N\) is the order of \(G\). This correspondence assigns to each acyclic orientation an \(N\)-vector code in \([p]^{N}\)._
Proof.: Consider a complete multipartite graph \(G=K_{n_{0},n_{1},\ldots,n_{p-1}}\) with \(p\geq 2\) parts labelled \(0,1,2,\ldots,p-1\), where part \(i\) has exactly \(n_{i}\) unlabelled vertices for \(i\in[p]\). Let \(\mathcal{L}\) be the set of all non-null acyclic orientations of complete multipartite graphs that are subgraphs of \(G\), let \(\overline{\mathcal{L}}:=\mathcal{L}\cup\{K_{0}\}\), and let \(N\) be the order of \(G\) (_i.e._, \(N=n_{0}+n_{1}+\ldots+n_{p-1}\)). Conveniently, consider every vertex of an empty graph to be both a source and a sink. Define the function \(s:\mathcal{L}\to[p]\) by \(s(\mathcal{K})=i\) if the sources of \(\mathcal{K}\) are vertices in the part \(i\). Define the function \(f:\mathcal{L}\to\overline{\mathcal{L}}\) by letting \(f(\mathcal{K})\) be the acyclic orientation \(\mathcal{K}^{\prime}\) obtained from \(\mathcal{K}\) by removing one of the sources of \(\mathcal{K}\). Let us consider \(\mathcal{K}\), a non-null acyclic orientation of \(G\). Clearly, the functions \(s\) and \(f\) are well defined.
Now, we encode each acyclic orientation of \(G\) as follows. Thus, we recursively define the code function \(C:\overline{\mathcal{L}}\to[p]^{*}\) for \(\mathcal{K}\) as
\[C(\mathcal{K})=\begin{cases}s(\mathcal{K}).C\left(f(\mathcal{K})\right),\, \text{if $\mathcal{K}$ is non-null}\\ \qquad\qquad\qquad\lambda,\,\text{if $\mathcal{K}$ is null}\end{cases} \tag{2.2}\]
where \(\lambda\) is the empty string, \(.\) denotes the concatenation of strings, and \([p]^{*}\) denotes the set of all strings in the alphabet \(\{0,1,2,\ldots,p-1\}\). Note that the code function is well defined. Specifically, the code for an acyclic orientation of \(G\) is a string in \([p]^{*}\) with length \(N\), since it concatenates one (source) vertex at a time until the null graph appears.
Figure 2: An acyclic orientation of \(K_{2,2,1}\) (\(5\)-wheel graph) with consecutive sources: \(x_{1},y_{1},x_{2},y_{2}\) respectively
Figure 3 shows the encoding of an acyclic orientation of \(K_{2,3}\). Table 1 shows all acyclic orientations of \(K_{2,2}\) with their corresponding codes. The encoding given in Theorem 2.5 allows us to obtain a closed formula for \(\mathcal{A}(n_{1},n_{2},\ldots,n_{p})\), see Theorem 2.6.
**Theorem 2.6**.: \[\mathcal{A}(n_{1},n_{2},\ldots,n_{p})=\binom{n_{1}+n_{2}+\ldots+n_{p}}{n_{1},n_ {2},\ldots,n_{p}}:=\frac{(n_{1}+n_{2}+\ldots+n_{p})!}{n_{1}!n_{2}!\ldots n_{p}!}.\] (2.3)
Proof.: The result is a direct consequence of counting the number of codes as in Theorem 2.5, _i.e._, numbers in the numerical system of base \(p\) with length \(n_{1}+n_{2}+\ldots+n_{p}\) and with exactly \(n_{1}\) digits \(0\), \(n_{2}\) digits \(1\), \(\ldots\), \(n_{p}\) digits \(p-1\).
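As a quick sanity check of the closed formula, the following short Python snippet (ours, not the authors') compares the multinomial coefficient of Theorem 2.6 with a brute-force count of the codes of Theorem 2.5 for a few small part-size vectors.

```python
from math import factorial
from itertools import permutations

def multinomial(sizes):
    """Closed formula of Theorem 2.6: (n_1+...+n_p)!/(n_1!...n_p!)."""
    out = factorial(sum(sizes))
    for s in sizes:
        out //= factorial(s)
    return out

def count_codes(sizes):
    """Count the distinct strings with sizes[i] copies of digit i
    (the codes of Theorem 2.5) by brute-force enumeration."""
    digits = [i for i, s in enumerate(sizes) for _ in range(s)]
    return len(set(permutations(digits)))

for sizes in [(1, 1), (2, 2), (2, 3), (2, 2, 1)]:
    assert multinomial(sizes) == count_codes(sizes)
print(multinomial((2, 2, 1)))  # 30
```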
We can also obtain the result above by proving that (2.3) satisfies the recurrence given in Proposition 2.4. Let us now denote by \(\mathcal{B}(n_{1},n_{2},\ldots,n_{p})\) the number of non-isomorphic acyclic orientations of the complete
Table 1: All acyclic orientations of \(K_{2,2}\) and their corresponding codes.
multipartite graph \(K_{n_{1},n_{2},\ldots,n_{p}}\). The encoding given in Theorem 2.5 allows us to obtain the number of non-isomorphic acyclic orientations of complete multipartite graphs, see Theorem 2.7.
**Theorem 2.7**.: \[\mathcal{B}(n_{1},n_{2},\ldots,n_{p})=\frac{\binom{n_{1}+n_{2}+\ldots+n_{p}}{n_{ 1},n_{2},\ldots,n_{p}}}{r_{1}!r_{2}!\ldots r_{s}!}\]
_where \(\binom{n_{1}+n_{2}+\ldots+n_{p}}{n_{1},n_{2},\ldots,n_{p}}\) is the multinomial coefficient and \(r_{1},r_{2},\ldots,r_{s}\) are the number of parts in \(K_{n_{1},n_{2},\ldots,n_{p}}\) grouping by the same size (i.e. \(r_{i}\) for \(i\in\{1,...,s\}\) suggests there are \(r_{i}\) many parts that contain the same fixed number of vertices)._
Proof.: Let \(\mathcal{K}_{1},\mathcal{K}_{2}\) be two isomorphic acyclic orientations of a multipartite graph \(G\). Note that every isomorphism \(\sigma:V(\mathcal{K}_{1})\to V(\mathcal{K}_{2})\) matches all vertices of each part of \(\mathcal{K}_{1}\) with all vertices in a part of \(\mathcal{K}_{2}\); otherwise, \(\sigma\) does not preserve adjacency. Therefore, \(\mathcal{A}(n_{1},\ldots,n_{p})=\mathcal{B}(n_{1},\ldots,n_{p})\) if \(n_{i}\neq n_{j}\) when \(i\neq j\), since the vertices in each part are unlabelled. Then \(\sigma\) matches all sources of \(\mathcal{K}_{1}\) with the sources of \(\mathcal{K}_{2}\). Moreover, if we remove all sources from both \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\), obtaining \(\mathcal{K}_{1}^{\prime}\) and \(\mathcal{K}_{2}^{\prime}\), respectively, then \(\sigma\) matches the sources of \(\mathcal{K}_{1}^{\prime}\) and \(\mathcal{K}_{2}^{\prime}\) as well. We can repeat this removing procedure until a null-graph is obtained. Hence, the codes assigned to \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) by Theorem 2.5 match each other except for a possible permutation of the parts with the same size. Besides, if there are exactly \(r_{1}\) parts with the same size in \(G\), we have \(r_{1}!\) different codes associated to isomorphic acyclic orientations when the digit codes assigned to the \(r_{1}\) equal-size parts are permuted while the other digits keep their places. Therefore, by grouping the parts of \(G\) with the same size we obtain the result.
From Theorem 2.7, we have that \(\mathcal{B}(2,2)=3\). Table 2 shows all \(3\) non-isomorphic acyclic orientations of \(K_{2,2}\) with their corresponding codes. Note that by swapping the digits \(0\) and \(1\) we obtain isomorphic acyclic orientations, since both parts have the same size. Permuting vertices within each part also yields an isomorphic orientation. Let us denote by \(\mathcal{C}(n_{1},n_{2},\ldots,n_{p})\) the number of non-isomorphic acyclic orientations of the complete multipartite graph \(K_{n_{1},n_{2},\ldots,n_{p}}\) containing a directed spanning tree. Similarly, we can obtain the number of non-isomorphic acyclic orientations of complete multipartite graphs containing a directed spanning tree, see Theorem 2.8.
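The following short Python sketch (illustrative, not part of the paper) evaluates the formula of Theorem 2.7, grouping the parts by equal size; it reproduces \(\mathcal{B}(2,2)=3\) from Table 2.

```python
from math import factorial
from collections import Counter

def B(sizes):
    """Theorem 2.7: non-isomorphic acyclic orientations of K_{n_1,...,n_p},
    i.e. the multinomial coefficient divided by r_1!...r_s!, where the r_j
    are the numbers of parts grouped by equal size."""
    multinom = factorial(sum(sizes))
    for s in sizes:
        multinom //= factorial(s)
    sym = 1
    for r in Counter(sizes).values():
        sym *= factorial(r)
    return multinom // sym

print(B((2, 2)))     # 3, matching Table 2
print(B((2, 2, 1)))  # 15
```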
**Theorem 2.8**.: _Let \(n_{1},\ldots,n_{p}\) be \(p\) positive integer numbers and \(N=n_{1}+\ldots+n_{p}\). We have_
\[\mathcal{C}(n_{1},\ldots,n_{p})=\frac{\binom{N}{n_{1},\ldots,n_{p}}}{r_{1}! \ldots r_{s}!}\cdot\frac{N^{2}-\sum_{r=1}^{p}(n_{r})^{2}}{N(N-1)}.\]
_where \(r_{1},r_{2},\ldots,r_{s}\) are the number of parts in \(K_{n_{1},n_{2},\ldots,n_{p}}\) grouping by the same size._
Proof.: The result is a direct consequence of counting the number of codes as in Theorem 2.5, _i.e._, numbers in the numerical system of base \(p\) with length \(N:=n_{1}+n_{2}+\ldots+n_{p}\) and with exactly \(n_{1}\) digits \(0\), \(n_{2}\) digits
Table 2: All non-isomorphic acyclic orientations of \(K_{2,2}\) with unlabeled vertices and their corresponding codes
\(1\), \(\ldots\), \(n_{p}\) digits \(p-1\) such that the first two digits are distinct. That is
\[\sum_{1\leq i<j\leq p}2\,\mathcal{A}(\ldots,n_{i}-1,\ldots,n_{j}-1,\ldots)= \frac{\mathcal{A}(n_{1},\ldots,n_{p})}{N(N-1)}\sum_{1\leq i<j\leq p}\,2n_{i}n_{j}\]
Then, considering the isomorphic directed graphs obtained by permuting the parts with the same size, we obtain
\[\mathcal{C}(n_{1},\ldots,n_{p})=\frac{\mathcal{A}(n_{1},\ldots,n_{p})}{r_{1}! \ldots r_{s}!}\,\frac{N^{2}-\sum_{r=1}^{p}(n_{r})^{2}}{N(N-1)}.\]
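A similar sketch (again ours, for illustration) evaluates the closed formula of Theorem 2.8; exact rational arithmetic is used for the correction factor, and the result is always an integer.

```python
from math import factorial
from collections import Counter
from fractions import Fraction

def C(sizes):
    """Theorem 2.8: non-isomorphic acyclic orientations of K_{n_1,...,n_p}
    containing a directed spanning tree (equivalently, a unique source)."""
    N = sum(sizes)
    multinom = factorial(N)
    for s in sizes:
        multinom //= factorial(s)
    sym = 1
    for r in Counter(sizes).values():
        sym *= factorial(r)
    correction = Fraction(N * N - sum(s * s for s in sizes), N * (N - 1))
    return int(Fraction(multinom, sym) * correction)

print(C((2, 2)))     # 2: the orientations of Table 2 with a unique source
print(C((2, 2, 1)))  # 12
```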
## 3 Acyclic orientations of a complete multipartite graph with labelled vertices
In this section we deal with the number of acyclic orientations of a complete multipartite graph with labelled vertices. Note that using the coding in (2.2) we can also obtain the poly-Bernoulli numbers \(B_{n_{1},n_{2}}\), _i.e._, the number of acyclic orientations of a labelled complete bipartite graph \(K_{n_{1},n_{2}}\) with sizes of its parts \(n_{1}\) and \(n_{2}\), respectively, see Proposition 3.1. It is well known that poly-Bernoulli numbers also count, for instance, the lonesum matrices [4] and Callan permutations [3], among other things. The closed formula below was given by Arakawa and Kaneko in [1].
**Proposition 3.1**.: _The number of acyclic orientations of a complete bipartite graph \(K_{n_{1},n_{2}}\) with labelled vertices and parts of sizes \(n_{1}\) and \(n_{2}\), respectively, is_
\[B_{n_{1},n_{2}}=\sum_{m=0}^{\min\{n_{1},n_{2}\}}(m!)^{2}\,\genfrac{\{}{\}}{0.0pt}{}{n_{1}+1}{m+1}\,\genfrac{\{}{\}}{0.0pt}{}{n_{2}+1}{m+1}, \tag{3.4}\]
_where \(\genfrac{\{}{\}}{0.0pt}{}{r}{s}\) denotes the Stirling number of the second kind._
Proof.: Without loss of generality we can assume that \(n_{1}\leq n_{2}\). Notice that we may count the number of acyclic orientations of \(K_{n_{1},n_{2}}\) by grouping them according to \(m\geq 1\), the number of maximal groups of \(0\) codes (the code \(0\) being associated to the part with size \(n_{1}\)) separated by at least one code \(1\). In this case, the number of groups of \(1\) codes separated by at least one code \(0\) must be either \(m-1\), \(m\) or \(m+1\). We recall that \(\genfrac{\{}{\}}{0.0pt}{}{r}{s}\) is the number of partitions of a set with \(r\) elements into \(s\) parts, and the well-known identity
\[\genfrac{\{}{\}}{0.0pt}{}{r+1}{s}=s\genfrac{\{}{\}}{0.0pt}{}{r}{s}+\genfrac{\{}{\}}{0.0pt}{}{r}{s-1}\qquad\text{for all }r,s\geq 0.\]
Thus we count it by considering the number of partitions of the \(n_{1}\) elements in Part 1 and the number of partitions of the \(n_{2}\) elements in Part 2, followed by multiplying by the corresponding factorial given by the
permutation of the distinct groups in the partition of each part. Then, we have
\[\begin{split}&\sum_{m=1}^{n_{1}}\left(m!\,(m-1)!\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\genfrac{\{}{\}}{0.0pt}{}{n_{2}}{m-1}+2\,m!\,m!\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\genfrac{\{}{\}}{0.0pt}{}{n_{2}}{m}+m!\,(m+1)!\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\genfrac{\{}{\}}{0.0pt}{}{n_{2}}{m+1}\right)\\ &=\sum_{m=1}^{n_{1}}m!\,(m-1)!\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\left(\genfrac{\{}{\}}{0.0pt}{}{n_{2}}{m-1}+m\genfrac{\{}{\}}{0.0pt}{}{n_{2}}{m}\right)+\sum_{m=1}^{n_{1}}(m!)^{2}\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\left(\genfrac{\{}{\}}{0.0pt}{}{n_{2}}{m}+(m+1)\genfrac{\{}{\}}{0.0pt}{}{n_{2}}{m+1}\right)\\ &=\sum_{m=1}^{n_{1}}m!\,(m-1)!\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\genfrac{\{}{\}}{0.0pt}{}{n_{2}+1}{m}+\sum_{m=1}^{n_{1}}(m!)^{2}\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\genfrac{\{}{\}}{0.0pt}{}{n_{2}+1}{m+1}\\ &=\sum_{m=0}^{n_{1}}(m+1)!\,m!\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m+1}\genfrac{\{}{\}}{0.0pt}{}{n_{2}+1}{m+1}+\sum_{m=0}^{n_{1}}(m!)^{2}\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\genfrac{\{}{\}}{0.0pt}{}{n_{2}+1}{m+1}\\ &=\sum_{m=0}^{n_{1}}(m!)^{2}\left((m+1)\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m+1}+\genfrac{\{}{\}}{0.0pt}{}{n_{1}}{m}\right)\genfrac{\{}{\}}{0.0pt}{}{n_{2}+1}{m+1}\\ &=\sum_{m=0}^{n_{1}}(m!)^{2}\genfrac{\{}{\}}{0.0pt}{}{n_{1}+1}{m+1}\genfrac{\{}{\}}{0.0pt}{}{n_{2}+1}{m+1}\\ &=B_{n_{1},n_{2}}.\end{split}\]
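For completeness, here is a brief Python sketch (ours) of formula (3.4), computing the Stirling numbers of the second kind by their standard recurrence; the printed values agree with the poly-Bernoulli numbers \(B_{2,2}=14\) and \(B_{2,3}=46\).

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def poly_bernoulli(n1, n2):
    """Formula (3.4): acyclic orientations of K_{n1,n2} with labelled vertices."""
    return sum(factorial(m) ** 2
               * stirling2(n1 + 1, m + 1) * stirling2(n2 + 1, m + 1)
               for m in range(min(n1, n2) + 1))

print(poly_bernoulli(2, 2))  # 14
print(poly_bernoulli(2, 3))  # 46
```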
In order to obtain the result for the number of acyclic orientation of a labelled multipartite graph with \(p\geq 3\) parts in a similar way, we define \(X_{k_{1},k_{2},\ldots,k_{p}}\) by the number of strings in the alphabet \(\mathcal{S}:=\{s_{1},s_{2},\ldots,s_{p}\}\) with \(k_{1}\) characters \(s_{1}\), \(k_{2}\) characters \(s_{2}\), and so on with \(k_{p}\) characters \(s_{p}\) such that no two consecutive characters are the same. We define \(X^{(i)}_{k_{1},k_{2},\ldots,k_{p}}\) by the number of strings in \(\mathcal{S}\) with \(k_{1}\) characters \(s_{1}\), \(k_{2}\) characters \(s_{2}\), and so on with \(k_{p}\) characters \(s_{p}\) such that there are no two consecutive identical characters and the first character is \(s_{i}\) for \(1\leq i\leq p\). Clearly, we have
\[X_{k_{1},\ldots,k_{p}}=X^{(1)}_{k_{1},\ldots,k_{p}}+X^{(2)}_{k_{1},\ldots,k_{p }}+\ldots+X^{(p)}_{k_{1},\ldots,k_{p}}. \tag{3.5}\]
Note that for some \(p\)-tuples \((k_{1},\ldots,k_{p})\in\mathbb{N}^{p}\), we have \(X_{k_{1},\ldots,k_{p}}=0\), for instance, \(X_{2,0,\ldots,0}=0\) since we cannot alternate two characters \(s_{1}\) and no other characters without leaving two consecutive characters \(s_{1}\). Moreover, \(X_{k_{1},\ldots,k_{p}}>0\) if and only if \((k_{1},\ldots,k_{p})\in\mathbb{T}\) where
\[\mathbb{T}:=\left\{(k_{1},\ldots,k_{p})\in\mathbb{N}^{p}\,:\,\max\{k_{1},k_{2}, \ldots,k_{p}\}\leq\frac{1+\sum_{i=1}^{p}k_{i}}{2}\right\}.\]
Hence, we have
\[X^{(j)}_{k_{1},k_{2},\ldots,k_{p}}\,=\sum_{i=1}^{p}\,X^{(i)}_{k_{1},\ldots,k_{j }-1,\ldots,k_{p}}\,-\,X^{(j)}_{k_{1},\ldots,k_{j}-1,\ldots,k_{p}}\,+\chi_{\{e_{j }\}}(k_{1},k_{2},\ldots,k_{p}) \tag{3.6}\]
for all \(k_{1},k_{2},\ldots,k_{p}\in\mathbb{N}\) and \(1\leq j\leq p\) where \(e_{j}\) represents the \(j^{th}\) canonical vector of \(\mathbb{R}^{p}\) and \(\chi_{A}\) is the indicator of \(A\). For obvious reasons consider \(X_{k_{1},k_{2},\ldots,k_{p}}=X^{(j)}_{k_{1},k_{2},\ldots,k_{p}}=0\) if \(k_{i}<0\) for some \(1\leq i\leq p\) and for every \(1\leq j\leq p\). Note that, on the one hand, if \(X^{(1)}_{k_{1},k_{2},\ldots,k_{p}}>0\), by removing the first character (a \(s_{1}\)) from each string counted in \(X^{(1)}(k_{1},k_{2},\ldots,k_{p})\) we obtain a string counted in \(X^{(j)}_{k_{1}-1,k_{2},\ldots,k_{p}}\) for some
\(2\leq j\leq p\), except when \((k_{1},k_{2},\ldots,k_{p})=(1,0,\ldots,0)\) where we consider that the empty string \(\lambda\) did not count. Besides, by adding a character \(s_{1}\) to the front of each string counted in \(X_{k_{1}-1,k_{2},\ldots,k_{p}}^{(j)}\) for \(2\leq j\leq p\) we obtain distinct strings in \(X_{k_{1},k_{2},\ldots,k_{p}}^{(1)}\); analogously, we obtain the corresponding relations for \(X_{k_{1},k_{2},\ldots,k_{p}}^{(j)}\) for \(2\leq j\leq p\), respectively. On the other hand, if \(X_{k_{1},k_{2},\ldots,k_{p}}^{(1)}=0\), then we have either \(k_{1}>1+\sum_{i=2}^{p}k_{i}\) and consequently \(X_{k_{1}-1,k_{2},\ldots,k_{p}}^{(j)}=0\) for \(2\leq j\leq p\), or there are more characters \(s_{j}\) than the other characters for some \(2\leq j\leq p\) making \(X_{k_{1}-1,k_{2},\ldots,k_{p}}^{(i)}=0\) for every \(2\leq i\leq p\). We can obtain analogous equations to (3.6).
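The recurrence (3.6), together with (3.5), can be evaluated directly; the following Python sketch (ours, with memoization) does so and reproduces small values such as \(X_{1,1,1}=6\) and \(X_{2,2,0}=2\).

```python
from functools import lru_cache

def make_X(p):
    """Evaluate X_{k_1,...,k_p} via the recurrences (3.5)-(3.6):
    X^{(j)}_k counts the strings with k_i copies of symbol s_i,
    no two equal consecutive symbols, starting with s_j."""
    @lru_cache(maxsize=None)
    def Xj(j, k):                       # k is a tuple of length p
        if k[j] == 0:                   # cannot start with a missing symbol
            return 0
        km = k[:j] + (k[j] - 1,) + k[j + 1:]
        base = 1 if sum(k) == 1 else 0  # indicator of k = e_j
        return base + sum(Xj(i, km) for i in range(p) if i != j)
    return lambda k: sum(Xj(j, tuple(k)) for j in range(p))

X = make_X(3)
print(X((1, 1, 1)))  # 6: all permutations of three distinct symbols
print(X((2, 2, 0)))  # 2: s1 s2 s1 s2 and s2 s1 s2 s1
print(X((2, 1, 1)))  # 6
```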
Now we define a \(p\)-variables ordinary generating function
\[\mathcal{F}(x_{1},\ldots,x_{p}):=\sum_{k_{1},\ldots,k_{p}\in\mathbb{N}}X_{k_{ 1},\ldots,k_{p}}\,x_{1}^{k_{1}}x_{2}^{k_{2}}\cdot\ldots\cdot x_{p}^{k_{p}}.\]
Note that \(\mathcal{F}(x_{1},\ldots,x_{p})\) converges absolutely on \(|x_{1}|+\ldots+|x_{p}|<1\) since \(0\leq X_{k_{1},\ldots,k_{p}}\leq{k_{1}+\ldots+k_{p}\choose k_{1},\ldots,k_{p}}\) for all \(k_{1},\ldots,k_{p}\in\mathbb{N}\).
**Proposition 3.2**.: _We have_
\[\mathcal{F}(x_{1},\ldots,x_{p})=\frac{\frac{x_{1}}{x_{1}+1}+\ldots+\frac{x_{p }}{x_{p}+1}}{1-\left(\frac{x_{1}}{x_{1}+1}+\ldots+\frac{x_{p}}{x_{p}+1}\right)}.\]
Proof.: Since \(\mathcal{F}(x_{1},\ldots,x_{p})\) converges absolutely in a domain including \(\Omega:=\{(x_{1},\ldots,x_{p})\in\mathbb{R}^{p}\,:\,|x_{1}|+\ldots+|x_{p}|<1\}\), we may reorder its terms of summation without affecting the sum. Define now
\[\mathcal{F}^{(j)}(x_{1},\ldots,x_{p}):=\sum_{k_{1},\ldots,k_{p}\in\mathbb{N}} X_{k_{1},\ldots,k_{p}}^{(j)}\,x_{1}^{k_{1}}x_{2}^{k_{2}}\ldots x_{p}^{k_{p}} \quad\text{for every $1\leq j\leq p$}.\]
Now by performing the summation of (3.6) for every \((k_{1},\ldots,k_{p})\in\mathbb{N}^{p}\), we obtain
\[\mathcal{F}^{(j)}(x_{1},\ldots,x_{p})=x_{j}\,\sum_{i=1}^{p}\mathcal{F}^{(i)}(x _{1},\ldots,x_{p})-x_{j}\,\mathcal{F}^{(j)}(x_{1},\ldots,x_{p})+x_{j}\quad \text{for every $1\leq j\leq p$}. \tag{3.7}\]
Note that we can re-write (3.7) as follows
\[(x_{j}+1)\,\mathcal{F}^{(j)}(x_{1},\ldots,x_{p})=x_{j}\,\mathcal{F}(x_{1}, \ldots,x_{p})+x_{j}\quad\text{for every $1\leq j\leq p$}. \tag{3.8}\]
Note that \(x_{1},\ldots,x_{p}\neq-1\) since \(|x_{1}|+\ldots+|x_{p}|<1\). So, we can rewrite (3.8) as follows
\[\mathcal{F}^{(j)}(x_{1},\ldots,x_{p})=\frac{x_{j}}{x_{j}+1}\,\mathcal{F}(x_{1},\ldots,x_{p})+\frac{x_{j}}{x_{j}+1}\quad\text{for every $1\leq j\leq p$}. \tag{3.9}\]
Indeed, by adding the equations involved in (3.9) we obtain
\[\mathcal{F}(x_{1},\ldots,x_{p})=\frac{\frac{x_{1}}{x_{1}+1}+\ldots+\frac{x_{p }}{x_{p}+1}}{1-\frac{x_{1}}{x_{1}+1}-\ldots-\frac{x_{p}}{x_{p}+1}}=\sum_{n \geq 1}\left(\frac{x_{1}}{x_{1}+1}+\ldots+\frac{x_{p}}{x_{p}+1}\right)^{n} \tag{3.10}\]
Note that \(\mathcal{F}(x_{1},\ldots,x_{p})\) converges (absolutely) if and only if \(\left|\frac{x_{1}}{x_{1}+1}+\ldots+\frac{x_{p}}{x_{p}+1}\right|<1\). Indeed, we have that \(\mathcal{F}(x_{1},\ldots,x_{p})\) converges absolutely on
\[\Omega:=\left\{(x_{1},\ldots,x_{p})\in\mathbb{R}^{p}\,:\,|x_{1}|+\ldots+|x_{p}|< 1\,,\,\left|\frac{x_{1}}{x_{1}+1}+\ldots+\frac{x_{p}}{x_{p}+1}\right|<1\right\}.\]
Note that there is a \(p\)-dimensional ball centered at the origin included in \(\Omega\). Now, using the closed formula of \(\mathcal{F}\), we can obtain a closed formula for \(X_{k_{1},\ldots,k_{p}}\), see the following result.
**Theorem 3.3**.: _For every \(k_{1},\ldots,k_{p}\in\mathbb{N}^{+}\) we have_
\[X_{k_{1},\ldots,k_{p}}=(-1)^{k_{1}+\ldots+k_{p}}\sum_{r_{1}=1}^{k_{1}}\ldots \sum_{r_{p}=1}^{k_{p}}\binom{r_{1}+\ldots+r_{p}}{r_{1},\ldots,r_{p}}\prod_{1 \leq i\leq p}(-1)^{r_{i}}\binom{k_{i}-1}{r_{i}-1}. \tag{3.11}\]
Proof.: \(X_{k_{1},\ldots,k_{p}}\) is the coefficient of \(x_{1}^{k_{1}}x_{2}^{k_{2}}\ldots x_{p}^{k_{p}}\) in \(\mathcal{F}(x_{1},\ldots,x_{p})\). We also have
\[\frac{z}{z+1}=\sum_{n\geq 1}(-1)^{n-1}z^{n}\qquad\text{ for every }|z|<1.\]
Moreover, using the Taylor polynomial of \(\frac{z}{z+1}\) with Peano's form of remainder, we have
\[\frac{z}{z+1}=z-z^{2}+z^{3}-\ldots-(-z)^{k}\,+\,\mathcal{O}(z^{k+1})\quad \text{for every }k\geq 1.\]
Define \(D(k,r)\) as the coefficient of \(z^{k}\) in \(\left(\frac{z}{z+1}\right)^{r}\); note that \(D(k,r)\) is also the coefficient of \(z^{k}\) in \(\big{(}z-z^{2}+z^{3}-\ldots-(-z)^{k}\big{)}^{r}\). Thus, from (3.10) we have
\[\mathcal{F}(x_{1},\ldots,x_{p})=\sum_{n\geq 1}\big{(}x_{1}-\ldots-(-x_{1})^{k_{1} }+\mathcal{O}(x_{1}^{k_{1}+1})\,+\,\ldots\,+\,x_{p}-\ldots-(-x_{p})^{k_{p}}+ \mathcal{O}(x_{p}^{k_{p}+1})\big{)}^{n}\]
Thus, we have that \(X_{k_{1},\ldots,k_{p}}\) is the coefficient of \(x_{1}^{k_{1}}x_{2}^{k_{2}}\ldots x_{p}^{k_{p}}\) in
\[\sum_{n=p}^{k_{1}+\ldots+k_{p}}\big{(}x_{1}-\ldots-(-x_{1})^{k_{1 }}\,+\,x_{2}-\ldots-(-x_{2})^{k_{2}}\,+\,\ldots\,+\,x_{p}-\ldots-(-x_{p})^{k_{ p}}\big{)}^{n}\] \[=\sum_{n=p}^{k_{1}+\ldots+k_{p}}\sum_{r_{1}+\ldots+r_{p}=n}\binom {n}{r_{1},\ldots,r_{p}}\big{(}x_{1}-\ldots-(-x_{1})^{k_{1}}\big{)}^{r_{1}} \big{(}x_{2}-\ldots-(-x_{2})^{k_{2}}\big{)}^{r_{2}}\ldots\big{(}x_{p}-\ldots-( -x_{p})^{k_{p}}\big{)}^{r_{p}}\]
Then, by adding the coefficients of \(x_{1}^{k_{1}}x_{2}^{k_{2}}\ldots x_{p}^{k_{p}}\) for each \(n\) we obtain
\[X_{k_{1},\ldots,k_{p}}=\sum_{n=1}^{k_{1}+\ldots+k_{p}}\sum_{r_{1}+\ldots+r_{p} =n}\binom{n}{r_{1},\ldots,r_{p}}D(k_{1},r_{1})\,D(k_{2},r_{2})\,\ldots\,D(k_{p },r_{p})\]
Now, we can use combinatorial arguments to obtain a recurrence relation involving \(\{D(k,r)\}_{k\geq r}\) and initial conditions that allow us to solve for \(\{D(k,r)\}_{k\geq r}\). Note that we have trivial relations on this doubly indexed sequence that can serve as initial conditions
\[D(k,0)=0,\,\forall k\in\mathbb{N},\quad D(k,r)=0,\text{ if }k<r,\quad D(k,1)=(-1)^{k-1},\,\forall k>0,\quad D(k,k)=1,\,\forall k>0. \tag{3.12}\]
Using the fact that
\[\big{(}z-z^{2}+z^{3}-z^{4}+\ldots-(-z)^{k}\big{)}^{r+1}=\big{(}z-z^{2}+z^{3}-z ^{4}+\ldots-(-z)^{k}\big{)}^{r}\,\big{(}z-z^{2}+z^{3}-z^{4}+\ldots-(-z)^{k} \big{)},\]
we can also obtain the following recurrence relation that could be used to obtain \(D(k,r)\) for whatever pair \((k,r)\) whenever \(k\geq r\)
\[D(k,r+1)=D(k-1,r)-D(k-2,r)+D(k-3,r)-...(-1)^{r-1}D(k-r,r). \tag{3.13}\]
Hence, we have that (3.12) and (3.13) give an iterative way to solve \(\{D_{k,r}\}_{k\geq r}\). Moreover, subtracting (3.13) from the identity \(D(k,r)=D(k,r)\), we obtain
\[D(k+1,r+1)=D(k,r)-D(k,r+1) \tag{3.14}\]
Let \(E(a,b)\) be the sequence that verifies
\[D(k,r):=(-1)^{k+r}E(k-1,k-r)\]
Then, from (3.12) we have the following initial conditions for \(E(a,b)\)
\[E(a-1,a)=0,\quad E(a,-b)=0,\quad E(a,a)=1,\quad E(a,0)=1,\qquad\forall a\in \mathbb{N},\forall b>0. \tag{3.15}\]
Besides, we obtain the following recurrence relation for \(E(a,b)\) from (3.14)
\[E(a+1,b)=E(a,b)+E(a,b-1)\quad\text{for every }a,b\in\mathbb{N}. \tag{3.16}\]
Then, uniqueness of \(E(a,b)\) satisfying (3.15) and (3.16) gives
\[E(a,b)=\begin{pmatrix}a\\ b\end{pmatrix}\quad\text{for every }a,b\in\mathbb{N}.\]
Therefore, \(D(k,r)=(-1)^{k+r}E(k-1,k-r)=(-1)^{k+r}\binom{k-1}{r-1}\), and substituting this into the expression for \(X_{k_{1},\ldots,k_{p}}\) above yields (3.11).
Indeed, we have some identities involving \(\big{\{}X_{k_{1},\ldots,k_{p}}\big{\}}\) due to combinatorial arguments. For instance, since \(X_{k_{1},\ldots,k_{p}}=0\) if \((k_{1},\ldots,k_{p})\notin\mathbb{T}\), we have, _e.g._,
\[X_{k+2,k}=\sum_{n=1}^{2k+2}(-1)^{n}\sum_{r=0}^{n}\begin{pmatrix}n\\ r\end{pmatrix}\begin{pmatrix}k+1\\ k+2-r\end{pmatrix}\begin{pmatrix}k-1\\ k-n+r\end{pmatrix}=0\qquad\text{for every }k\in\mathbb{N}.\]
and since \(X_{k,k}=2\) for \(k>0\) and \(X_{k,k+1}=1\) for \(k\geq 0\) we also have
\[\sum_{n=1}^{2k+1}(-1)^{n+1}\sum_{r=0}^{n}\begin{pmatrix}n\\ r\end{pmatrix}\begin{pmatrix}k\\ k+1-r\end{pmatrix}\begin{pmatrix}k-1\\ k-n+r\end{pmatrix}=1\quad\text{for every }k\in\mathbb{N}\]
and
\[\sum_{n=1}^{2k}(-1)^{n}\sum_{r=0}^{n}\begin{pmatrix}n\\ r\end{pmatrix}\begin{pmatrix}k-1\\ k-r\end{pmatrix}\begin{pmatrix}k-1\\ k-n+r\end{pmatrix}=2\quad\text{for every }k\in\mathbb{N}^{+}.\]
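As an independent check of Theorem 3.3, the following Python snippet (ours) evaluates the closed formula (3.11) for positive \(k_{1},\ldots,k_{p}\) and compares it with a brute-force enumeration of the admissible strings.

```python
from math import comb, factorial, prod
from itertools import permutations, product

def X_closed(k):
    """Closed formula (3.11) for X_{k_1,...,k_p} (all k_i >= 1)."""
    total = 0
    for r in product(*[range(1, ki + 1) for ki in k]):
        multinom = factorial(sum(r)) // prod(factorial(ri) for ri in r)
        total += multinom * prod((-1) ** ri * comb(ki - 1, ri - 1)
                                 for ki, ri in zip(k, r))
    return (-1) ** sum(k) * total

def X_brute(k):
    """Brute-force count of strings with k_i copies of symbol i
    and no two equal consecutive symbols."""
    symbols = [i for i, ki in enumerate(k) for _ in range(ki)]
    return sum(1 for w in set(permutations(symbols))
               if all(a != b for a, b in zip(w, w[1:])))

for k in [(1, 1), (2, 1), (1, 1, 1), (2, 2, 1), (2, 2, 2), (3, 2, 2)]:
    assert X_closed(k) == X_brute(k)
print(X_closed((3, 2, 2)))  # matches X_brute((3, 2, 2))
```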
The following result gives a closed formula for the number of acyclic orientations of a complete multipartite graph with labelled vertices, see Theorem 3.4.
**Theorem 3.4**.: _The number of acyclic orientations of a complete multipartite graph \(K_{n_{1},n_{2},\ldots,n_{p}}\) with labelled vertices and \(p\) parts of sizes \(n_{1},n_{2},\ldots,n_{p}\), respectively, is_
\[B_{n_{1},n_{2},\ldots,n_{p}}=\sum_{k_{1}\leq n_{1}}\sum_{k_{2}\leq n_{2}} \ldots\sum_{k_{p}\leq n_{p}}k_{1}!k_{2}!\ldots k_{p}!\genfrac{\{}{\}}{0.0pt}{}{ n_{1}}{k_{1}}\genfrac{\{}{\}}{0.0pt}{}{n_{2}}{k_{2}}\cdots\genfrac{\{}{\}}{0.0pt}{}{ n_{p}}{k_{p}}X_{k_{1},\ldots,k_{p}} \tag{3.17}\]
_where \(\genfrac{\{}{\}}{0.0pt}{}{r}{s}\) denotes the Stirling number of the second kind._
Proof.: Notice that we can use the same argument as in Proposition 3.1. We may count the number of acyclic orientations of \(K_{n_{1},n_{2},\ldots,n_{p}}\) by grouping them according to \(k_{i}\), the number of maximal groups of codes \(i-1\) (the code \(i-1\) being associated to the part of size \(n_{i}\)) separated by at least one code different from \(i-1\). Then, consider the distinct ways to obtain \(k_{i}\) groups out of the \(n_{i}\) codes \(i-1\), _i.e._, \(\genfrac{\{}{\}}{0.0pt}{}{n_{i}}{k_{i}}\), and the corresponding permutations of the \(k_{i}\) groups, _i.e._, \(k_{i}!\). Finally, the number of ways to interleave the resulting groups so that no two consecutive groups carry the same code is \(X_{k_{1},\ldots,k_{p}}\), which gives (3.17).
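The following Python sketch (ours, for illustration) evaluates formula (3.17); for simplicity it computes \(X_{k_{1},\ldots,k_{p}}\) by brute force, which suffices for small part sizes, and reproduces \(B_{2,2}=14\) and \(B_{2,3}=46\).

```python
from functools import lru_cache
from math import factorial
from itertools import permutations, product

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def X(k):
    """X_{k_1,...,k_p}: strings with k_i copies of symbol i and no two equal
    consecutive symbols (brute force, adequate for small k)."""
    symbols = [i for i, ki in enumerate(k) for _ in range(ki)]
    return sum(1 for w in set(permutations(symbols))
               if all(a != b for a, b in zip(w, w[1:])))

def labelled_count(sizes):
    """Formula (3.17): acyclic orientations of K_{n_1,...,n_p}, labelled vertices."""
    total = 0
    for k in product(*[range(n + 1) for n in sizes]):
        term = X(k)
        for n, ki in zip(sizes, k):
            term *= factorial(ki) * stirling2(n, ki)
        total += term
    return total

print(labelled_count((2, 2)))     # 14 = B_{2,2}
print(labelled_count((2, 3)))     # 46 = B_{2,3}
print(labelled_count((2, 2, 1)))  # 78: labelled 5-wheel K_{2,2,1}
```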
The length of the longest path in an acyclic orientation of a labelled complete multipartite graph was discussed in [7]. In this direction, we have the following result below. In an acyclic orientation of a complete multipartite graph, the longest directed paths always start from the part that includes the sources, and end at the part that includes the sinks. Note that a longest path cannot start from another part of the multipartite graph since a source makes it one edge longer. Analogously, a sink could make a path one edge longer if it doesn't end in the part that includes the sinks. Moreover, the codification (2.2), given in Theorem 2.5, gives a partition on the vertices of \(\mathcal{K}\) induced by the equivalence relation \(R_{\mathcal{K}}\) defined by:
_Two vertices are related by \(R_{\mathcal{K}}\) if they are sources in some subsequent acyclic orientation obtained during the sources removing decomposition, i.e., if the two vertices are represented in the code within a sub-string of consecutive and identical codes._
Notice that the code assigned to \(\mathcal{K}\) is unique once the code assigned to each part is fixed, _i.e._, unique up to permutations of the parts. One may then verify that \(R_{\mathcal{K}}\) is an equivalence relation on the set of vertices of the complete multipartite graph. Indeed, the code of \(\mathcal{K}\) induces a total order \(\prec_{\mathcal{K}}\) on the partition \(V(K_{n_{1},n_{2},\ldots,n_{p}})/R_{\mathcal{K}}\) given by the order of appearance in the code of \(\mathcal{K}\). Note that \(|V(K_{n_{1},n_{2},\ldots,n_{p}})/R_{\mathcal{K}}|=k_{1}+k_{2}+\ldots+k_{p}\) whenever one of the codifications of \(\mathcal{K}\) given by (2.2) is counted in \(X_{k_{1},k_{2},\ldots,k_{p}}\).
**Proposition 3.5**.: _The length of the longest path in an acyclic orientation \(\mathcal{K}\) of a complete multipartite graph \(K_{n_{1},n_{2},\ldots,n_{p}}\) is the size of the partition induced by \(R_{\mathcal{K}}\) minus one, i.e., \(|V(K_{n_{1},n_{2},\ldots,n_{p}})/R_{\mathcal{K}}|-1\)._
_Furthermore, the number of longest paths in \(\mathcal{K}\) is given by the product of the sizes of the parts of the partition induced by \(R_{\mathcal{K}}\)._
Proof.: Consider a directed path \(\mathcal{P}\) in \(\mathcal{K}\). Notice that \(\mathcal{P}\) contains at most one vertex from each part of the partition; otherwise, two of its vertices would lie in the same part and there would be no directed path joining them. Moreover, a path containing a vertex from each part of the partition \(V(K_{n_{1},n_{2},\ldots,n_{p}})/R_{\mathcal{K}}\) contains the maximum possible number of vertices and is therefore one of the longest directed paths in \(\mathcal{K}\). Hence a longest path includes \(k_{1}+\ldots+k_{p}\) vertices and, consequently, its length is \(k_{1}+\ldots+k_{p}-1\).
Finally, the number of longest paths in \(\mathcal{K}\) is given by the product of the sizes of the parts of the partition induced by \(R_{\mathcal{K}}\), since we may choose any vertex from each element of \(V(K_{n_{1},n_{2},\ldots,n_{p}})/R_{\mathcal{K}}\); see Table 3.
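Proposition 3.5 can also be checked directly on small examples: given an acyclic orientation, the length of the longest directed path and the number of such paths follow from dynamic programming over a topological order, as sketched below (the particular orientation of \(K_{2,2}\) used in the example is ours, chosen only for illustration).

```python
def longest_paths(n, arcs):
    # Longest directed path length (in edges) and the number of such paths in a
    # DAG, via dynamic programming over a topological (source-removal) order.
    out, indeg = [[] for _ in range(n)], [0] * n
    for u, v in arcs:
        out[u].append(v)
        indeg[v] += 1
    order, stack = [], [v for v in range(n) if indeg[v] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for w in out[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    length, count = [0] * n, [1] * n   # longest path ending at v, and how many
    for u in order:
        for w in out[u]:
            if length[u] + 1 > length[w]:
                length[w], count[w] = length[u] + 1, count[u]
            elif length[u] + 1 == length[w]:
                count[w] += count[u]
    best = max(length)
    return best, sum(c for l, c in zip(length, count) if l == best)

# Orientation of K_{2,2} with arcs 0->2, 0->3, 1->2, 1->3: the classes of R_K are
# {0,1} and {2,3}, so the longest path has length 2 - 1 = 1 and there are 2*2 = 4 of them.
print(longest_paths(4, [(0, 2), (0, 3), (1, 2), (1, 3)]))   # (1, 4)
```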
Notice that we may use Proposition 3.5 to investigate the distribution of the longest paths in the acyclic orientations of a complete multipartite graph. Proposition 3.5 and the Gallai-Hasse-Roy-Vitaver theorem give the trivial result
\[\chi(K_{n_{1},n_{2},\ldots,n_{p}})=\min_{\mathcal{K}}|V(K_{n_{1},n_{2},\ldots,n_{p}})/R_{\mathcal{K}}|=p.\]
Note that for any acyclic orientation \(\mathcal{K}\) of a general graph \(G\) we can similarly define \(R_{\mathcal{K}}\) as
_Two vertices are related by \(R_{\mathcal{K}}\) if they are sources in some subsequent acyclic orientation obtained during the sources removing decomposition._
Moreover, we may define a unique code for each acyclic orientation \(\mathcal{K}\) of a graph \(G\) if we establish a certain priority among the parts of \(G\). Another recommendation is to select a partition of \(G\) associated with the chromatic number of \(G\).
**Acknowledgements**
The first author was supported by a grant from Agencia Estatal de Investigacion (PID2019-106433GB-I00 /AEI/10.13039/501100011033), Spain. |
2310.00883 | Charged spinning fermionic configurations and a mass gap | We consider a self-consistent axially symmetric system supported by a
classical nonlinear spinor field minimally coupled to electric and magnetic
Maxwell fields. The presence of the nonlinearity of the spinor field ensures
the existence of a minimum positive energy of the system (a mass gap), of a
minimum charge (a charge gap), and of a minimum magnetic moment. In turn, the
presence of the electric charge results in qualitative changes in the behavior
of physical characteristics of the systems under consideration as compared with
the case of an electrically neutral spinor field. It is shown that, with a
suitable choice of free system parameters, there exists a regular finite-energy
particlelike solution describing a localized spinning object whose physical
parameters correspond to the main characteristics of an electron/positron
(including the spin equal to $1/2$), but with the characteristic size
comparable to the corresponding Compton wavelength. Also, we show that four
local Dirac equations are equivalent to two nonlocal equations. | Vladimir Dzhunushaliev, Vladimir Folomeev | 2023-10-02T03:57:07Z | http://arxiv.org/abs/2310.00883v1 | # Charged spinning fermionic configurations and a mass gap
###### Abstract
We consider a self-consistent axially symmetric system supported by a classical nonlinear spinor field minimally coupled to electric and magnetic Maxwell fields. The presence of the nonlinearity of the spinor field ensures the existence of a minimum positive energy of the system (a mass gap), of a minimum charge (a charge gap), and of a minimum magnetic moment. In turn, the presence of the electric charge results in qualitative changes in the behavior of physical characteristics of the systems under consideration as compared with the case of an electrically neutral spinor field. It is shown that, with a suitable choice of free system parameters, there exists a regular finite-energy particlelike solution describing a localized spinning object whose physical parameters correspond to the main characteristics of an electron/positron (including the spin equal to 1/2), but with the characteristic size comparable to the corresponding Compton wavelength. Also, we show that four local Dirac equations are equivalent to two nonlocal equations.
Keywords: nonlinear spinor field; spinning particlelike solutions; mass, charge, and magnetic moment gaps; total angular momentum. PACS: 11.90.+t, 11.15.-q
## I Introduction
Nonlinear equations describing various physical systems have been the object of numerous investigations in different aspects. The bulk of such studies has been mainly focused on the nonlinear Schrödinger and Klein-Gordon equations involving different potentials. The first of these equations permits one to describe various phenomena and processes within condensed matter physics, nonlinear optics, atomic and mathematical physics. In turn, the Klein-Gordon equation is widely used in modeling various particlelike objects in condensed matter and mathematical physics, including strongly gravitating systems.
Much less attention was paid to investigations of the nonlinear Dirac equation. Such an equation was initially introduced by D. Ivanenko [1]. Subsequently, it was analyzed in the works [2; 3], where possible forms of nonlinear terms were suggested. Following these ideas, W. Heisenberg tried to employ this equation as a fundamental equation suitable for describing the properties of an electron [4]. Later, one of the forms of the nonlinear Dirac equation was employed for an approximate description of the properties of hadrons (this approach is called the Nambu-Jona-Lasinio model [5]; for a review, see Ref. [6]). In turn, bearing in mind that systems with a nonlinear spinor field contain a mass gap [2; 3; 7; 8], the authors of Ref. [9] tried to describe extended particles (hadrons) possessing the smallest possible energy. On the other hand, in the case of fermions with zero bare mass, Ref. [10] suggests a toy model of quark confinement in quantum chromodynamics. In addition, the nonlinear Dirac equations may be used as effective theories in various fields of atomic, nuclear, particle, and gravitational physics [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23].
In quantum chromodynamics, there is a well-known problem to prove the existence of a minimum value of the mass, i.e., of a mass gap, in non-Abelian quantum Yang-Mills theory. This problem is very nontrivial and has not been solved yet. One possible approach towards solving this problem might be to consider simpler problems in which quantum systems are replaced by some approximate classical systems. In this case, if one could show that for such classical configurations a mass gap might occur, this could be treated as a possible indication for the existence of the mass gap in quantum systems. As shown in our previous investigations [24; 25; 26; 27], in the systems supported by
classical non-Abelian fields coupled to nonlinear spinor fields, there is the possibility of obtaining a mass gap. From this point of view, such classical systems may be thought of as approximately describing realistic quantum systems.
The systems considered by us earlier in Refs. [24; 25; 26; 27] are spherically symmetric. Obviously, in the more general case the spherical symmetry can already be violated, for example, because of the presence of magnetic Maxwell and/or color fields. Correspondingly, this requires a generalization of the above models. As a first step in this direction, one can consider a simplified situation where the system with a nonlinear spinor field contains only Abelian fields. Consistent with this, the present paper studies a system supported by a classical nonlinear spinor field minimally coupled to Maxwell electric and magnetic (dipole) fields. Due to the presence of the dipole magnetic field, the system is inevitably axisymmetric, and therefore a consideration will be carried out in a general form without any simplifying assumptions as to the smallness of the electromagnetic fields, as was done earlier, for example, in Refs. [8; 9]. This will enable us to study the cases where the contribution to the energy-momentum tensor coming from the electromagnetic fields can be comparable to that of the spinor field. It will be shown that such a contribution results in qualitative changes in the physical characteristics of the systems under consideration.
Notice here that in the present paper we consider a system supported by a classical spinor field. Following Ref. [28], such a field is meant to be a set of four complex-valued spacetime functions transforming according to the spinor representation of the Lorentz group. In turn, realistic spin-1/2 particles must evidently be described by quantum spinor fields, and it is furthermore believed that there is no classical limit for quantum spinor fields. However, classical spinors can be thought of as arising from some effective description of more complex quantum systems (for arguments in favor of the possibility of the existence of classical spinors, see Ref. [28]).
## II The model
We consider localized configurations consisting of a spinor field \(\psi\) minimally coupled to Maxwell fields. The corresponding total Lagrangian for such a system can be represented in the form (we use natural units with \(c=\hbar=1\) throughout)
\[L_{\rm tot}=\frac{\imath}{2}\left(\bar{\psi}\gamma^{\mu}\psi_{;\mu}-\bar{\psi }_{;\mu}\gamma^{\mu}\psi\right)-m\bar{\psi}\psi-F(S)-\frac{1}{4}F_{\mu\nu}F^{ \mu\nu}, \tag{1}\]
where \(m\) is a bare mass of the fermion, \(F(S)\) is in general an arbitrary nonlinear term with the invariant \(S\) (see below), and the electromagnetic field tensor \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). The semicolon denotes the covariant derivative defined as \(\psi_{;\mu}=[\partial_{\mu}+1/8\,\omega_{ab\mu}\left(\gamma^{a}\gamma^{b}- \gamma^{b}\gamma^{a}\right)+\imath\,QA_{\mu}]\psi\) with \(\gamma^{a}\) being the Dirac matrices in flat space; the term \(\imath\,QA_{\mu}\psi\) describes the interaction between the spinor and Maxwell fields with the coupling constant \(Q\). In turn, the Dirac matrices in curvilinear coordinates, \(\gamma^{\mu}=e_{a}^{\ \mu}\gamma^{a}\), are obtained using the tetrad \(e_{a}^{\ \mu}\), and \(\omega_{ab\mu}\) is the spin connection [for its definition, see Ref. [29], Eq. (7.135)]. In the above expressions, \(\mu,\nu=0,1,2,3\) are spacetime indices and \(a,b=0,1,2,3\) are tetrad indices. In what follows, we use the Weyl representation of the Dirac matrices,
\[\gamma^{0}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad\gamma^{k}=\begin{pmatrix}0&\sigma^{k}\\ -\sigma^{k}&0\end{pmatrix},\]
where \(k=1,2,3\) and \(\sigma^{k}\) are the Pauli matrices.
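For reference, the Weyl-representation matrices written above can be assembled explicitly and checked against the Clifford algebra \(\{\gamma^{a},\gamma^{b}\}=2\eta^{ab}\) with signature \((+,-,-,-)\); a minimal numerical check is sketched below.

```python
import numpy as np

# Weyl-representation Dirac matrices built from the Pauli matrices, together
# with a check of the Clifford algebra {gamma^a, gamma^b} = 2 eta^{ab},
# metric signature (+,-,-,-).
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
I2, Z2 = np.eye(2), np.zeros((2, 2))

gamma = [np.block([[Z2, I2], [I2, Z2]])]                 # gamma^0
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]  # gamma^1, gamma^2, gamma^3

eta = np.diag([1.0, -1.0, -1.0, -1.0])
for a in range(4):
    for b in range(4):
        anticomm = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anticomm, 2.0 * eta[a, b] * np.eye(4))
print("Clifford algebra verified")
```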
Varying the action with the Lagrangian (1) with respect to the spinor field and to the vector potential \(A_{\mu}\), we derive the corresponding Dirac and Maxwell field equations
\[\imath\gamma^{\mu}\psi_{;\mu}-m\psi-\frac{\partial F}{\partial \bar{\psi}} = 0, \tag{2}\] \[\frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^{\nu}}\left(\sqrt{- g}F^{\mu\nu}\right) = -Q\bar{\psi}\gamma^{\mu}\psi. \tag{3}\]
In the present paper, we take the following simplest self-interaction term
\[F(S)=-\frac{\lambda}{2}\left(\bar{\psi}\psi\right)^{2},\]
where \(\lambda\) is some free nonlinearity parameter. Classical spinor fields with such a nonlinearity have been considered, for instance, in Refs. [2; 3; 7; 8; 9; 20; 21; 22].
From the Lagrangian (1), one can also obtain the corresponding energy-momentum tensor of the system under consideration (already in a symmetric form)
\[T^{\nu}_{\mu}=\frac{\imath}{4}g^{\nu\rho}\left[\bar{\psi}\gamma_{\mu}\psi_{; \rho}+\bar{\psi}\gamma_{\rho}\psi_{;\mu}-\bar{\psi}_{;\mu}\gamma_{\rho}\psi- \bar{\psi}_{;\rho}\gamma_{\mu}\psi\right]-\delta^{\nu}_{\mu}L_{\rm sp}-F^{\nu \rho}F_{\mu\rho}+\frac{1}{4}\delta^{\nu}_{\mu}F_{\alpha\beta}F^{\alpha\beta}. \tag{4}\]
Taking into account the Dirac equation (2) and the corresponding adjoint equation for \(\bar{\psi}\), the Lagrangian for the spinor field appearing in Eq. (4) becomes
\[L_{\rm sp}=-F(S)+\frac{1}{2}\left(\bar{\psi}\frac{\partial F}{\partial\bar{\psi}} +\frac{\partial F}{\partial\psi}\psi\right).\]
In the present paper, we take the stationary _Ansatz_ for the spinor field in the form similar to that of Ref. [30],
\[\psi=e^{\imath\left(M\varphi-\Omega t\right)}\begin{pmatrix}\psi_{1}\\ \psi_{2}\\ \psi_{2}^{*}\\ \psi_{1}^{*}\end{pmatrix}, \tag{5}\]
where \(\Omega\) is the spinor frequency and \(M\) is a half-integer parameter (the azimuthal number). For our purposes, it is convenient to represent the components of the spinor appearing in (5) in the following form:
\[\psi_{1}=\frac{1}{2}\left[X+Y+\imath\left(V+W\right)\right],\quad\psi_{2}= \frac{1}{2}\left[X-Y+\imath\left(V-W\right)\right],\]
where the functions \(X,Y,V\), and \(W\) depend only on the spherical coordinates \(r\) and \(\theta\), and the line element is
\[ds^{2}=dt^{2}-dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right).\]
The _Ansatz_ for the Maxwell field is taken to be
\[A_{\mu}=\{\phi(r,\theta),0,0,\sigma(r,\theta)\}, \tag{6}\]
i.e., it contains an electric and a magnetic potentials. This _Ansatz_ implies the presence of the following nonzero components of the electric and magnetic fields:
\[E_{r}=-\frac{\partial\phi}{\partial r},\quad E_{\theta}=-\frac{\partial\phi}{ \partial\theta},\quad H_{r}=-\frac{\csc\theta}{r^{2}}\frac{\partial\sigma}{ \partial\theta},\quad H_{\theta}=\csc\theta\frac{\partial\sigma}{\partial r}. \tag{7}\]
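Given numerical solutions for the potentials on an \((r,\theta)\) grid, the field components (7) can be evaluated by finite differences. The sketch below is illustrative only: it assumes the potentials are supplied as two-dimensional arrays and should be evaluated away from the symmetry axis \(\theta=0,\pi\) and the origin, where the \(\csc\theta\) and \(1/r^{2}\) factors are singular.

```python
import numpy as np

def field_strengths(r, theta, phi, sigma):
    # Electric and magnetic field components of Eq. (7), obtained by finite
    # differences from the potentials phi(r, theta) and sigma(r, theta),
    # supplied as arrays of shape (len(r), len(theta)).  Evaluate away from
    # r = 0 and from the axis theta = 0, pi, where 1/r^2 and csc(theta) blow up.
    dphi_dr, dphi_dth = np.gradient(phi, r, theta)
    dsig_dr, dsig_dth = np.gradient(sigma, r, theta)
    csc, rg = 1.0 / np.sin(theta)[None, :], r[:, None]
    E_r, E_th = -dphi_dr, -dphi_dth
    H_r = -csc / rg**2 * dsig_dth
    H_th = csc * dsig_dr
    return E_r, E_th, H_r, H_th
```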
## III Equations and solutions
Substituting the _Ansatze_ (5) and (6) in the field equations (2) and (3), one can obtain the following set of six partial differential equations:
\[\tilde{X}_{,x}+\frac{\tilde{X}}{x}-\frac{\tilde{W}_{,\theta}}{x }-\frac{\cot\frac{\theta}{2}}{2x}\tilde{W}+\tilde{Q}\left(-\frac{\csc\theta} {x}\tilde{W}\tilde{\sigma}+\tilde{V}\tilde{\phi}\right)-\left(1+\tilde{ \Omega}\right)\tilde{V}+U_{2}\tilde{V}=0, \tag{8}\] \[\tilde{Y}_{,x}+\frac{\tilde{Y}}{x}-\frac{\tilde{V}_{,\theta}}{x }+\frac{\tan\frac{\theta}{2}}{2x}\tilde{V}+\tilde{Q}\left(\frac{\csc\theta} {x}\tilde{V}\tilde{\sigma}-\tilde{W}\tilde{\phi}\right)-\left(1-\tilde{ \Omega}\right)\tilde{W}+U_{2}\tilde{W}=0,\] (9) \[\tilde{V}_{,x}+\frac{\tilde{V}}{x}+\frac{\tilde{Y}_{,\theta}}{x }+\frac{\cot\frac{\theta}{2}}{2x}\tilde{Y}+\tilde{Q}\left(\frac{\csc\theta} {x}\tilde{Y}\tilde{\sigma}-\tilde{X}\tilde{\phi}\right)-\left(1-\tilde{\Omega }\right)\tilde{X}+U_{2}\tilde{X}=0,\] (10) \[\tilde{W}_{,x}+\frac{\tilde{W}}{x}+\frac{\tilde{X}_{,\theta}}{x }-\frac{\tan\frac{\theta}{2}}{2x}\tilde{X}+\tilde{Q}\left(-\frac{\csc\theta} {x}\tilde{X}\tilde{\sigma}+\tilde{Y}\tilde{\phi}\right)-\left(1+\tilde{ \Omega}\right)\tilde{Y}+U_{2}\tilde{Y}=0,\] (11) \[\tilde{\phi}_{,xx}+\frac{2}{x}\tilde{\phi}_{,x}+\frac{1}{x^{2}} \tilde{\phi}_{,\theta\theta}+\frac{\cot\theta}{x^{2}}\tilde{\phi}_{,\theta}+ \tilde{Q}\,U_{1}=0,\] (12) \[\tilde{\sigma}_{,xx}+\frac{1}{x^{2}}\tilde{\sigma}_{,\theta \theta}-\frac{\cot\theta}{x^{2}}\tilde{\sigma}_{,\theta}+2\,\tilde{Q}\,x\sin \theta\,U_{3}=0, \tag{13}\]
where
\[U_{1}=\tilde{X}^{2}+\tilde{Y}^{2}+\tilde{V}^{2}+\tilde{W}^{2},\quad U_{2}= \tilde{X}^{2}-\tilde{Y}^{2}-\tilde{V}^{2}+\tilde{W}^{2},\quad U_{3}=\tilde{X} \tilde{Y}+\tilde{V}\tilde{W}\]
and the azimuthal number is taken to be \(M=1/2\) throughout the paper. These equations are written in terms of the following dimensionless variables: \(x=mr\), \(\tilde{\Omega}=\Omega/m\), \(\tilde{Q}=Q/\left(m\sqrt{\lambda}\right)\), \(\tilde{X},\tilde{Y},\tilde{V},\tilde{W}=\sqrt{\lambda/m}\,X,Y,V,W\), \(\tilde{\phi}=\sqrt{\lambda}\,\phi\), \(\tilde{\sigma}=m\sqrt{\lambda}\,\sigma\). The lower indices denote differentiation with respect to the corresponding coordinate. Notice that these equations do not explicitly contain the nonlinearity parameter \(\lambda\) and are invariant with respect to multiplying the spinor functions by \(-1\). That is, the system contains only two free parameters, \(\tilde{\Omega}\) and \(\tilde{Q}\), whose values will be varied to obtain solutions describing configurations with different physical characteristics.
### Physical quantities
Let us now write down expressions for some physically interesting parameters of the systems under consideration. The total dimensionless mass of the system can be found in the form
\[\tilde{M}_{\text{tot}}\equiv m\lambda M_{\text{tot}}=2\pi\int_{0}^{\infty}dx\int _{0}^{\pi}d\theta\,\tilde{T}_{t}^{t}x^{2}\sin\theta, \tag{14}\]
where the dimensionless \({}^{t}_{t}\)-component of the energy-momentum tensor (4) is
\[\tilde{T}_{t}^{t}=\frac{1}{2}\left[\tilde{\phi}_{,x}^{2}+\frac{\csc^{2}\theta }{x^{2}}\tilde{\sigma}_{,x}^{2}+\frac{1}{x^{2}}\tilde{\phi}_{,\theta}^{2}+ \frac{\csc^{2}\theta}{x^{4}}\tilde{\sigma}_{,\theta}^{2}+2\left(\tilde{\Omega} -\tilde{Q}\tilde{\phi}\right)U_{1}+U_{2}^{2}\right]. \tag{15}\]
The total dimensionless angular momentum is
\[\tilde{J}_{\text{tot}}\equiv m^{2}\lambda J_{\text{tot}}=-2\pi\int_{0}^{ \infty}dx\int_{0}^{\pi}d\theta\,\tilde{T}_{\varphi}^{t}x^{2}\sin\theta, \tag{16}\]
where the dimensionless \({}^{t}_{\varphi}\)-component of the energy-momentum tensor (4) is
\[\begin{split}\tilde{T}_{\varphi}^{t}=&\tilde{\phi} _{,x}\tilde{\sigma}_{,x}+\frac{1}{x^{2}}\tilde{\phi}_{,\theta}\tilde{\sigma}_ {,\theta}-\frac{1}{4}\left(1+2\,\tilde{Q}\,\tilde{\sigma}\right)U_{1}+x\sin \theta\left(\tilde{\Omega}-\tilde{Q}\tilde{\phi}\right)U_{3}\\ &+\frac{1}{2}\sin\theta\left(\tilde{W}\tilde{X}-\tilde{V}\tilde {Y}\right)+\frac{1}{4}\cos\theta\left(\tilde{X}^{2}-\tilde{Y}^{2}+\tilde{V}^{2 }-\tilde{W}^{2}\right).\end{split} \tag{17}\]
The occurrence of a nonzero angular momentum is due to the presence in the system of (i) a single fermion possessing an intrinsic angular momentum; and (ii) the crossed electric and magnetic fields. For this reason, in analogy to quantum particles possessing the quantum-mechanical spin, such configurations can be treated as spinning ones.
The total dimensionless Noether charge is
\[\tilde{Q}_{\text{tot}}\equiv m^{2}\lambda\,Q_{\text{tot}}=2\pi\int_{0}^{ \infty}dx\int_{0}^{\pi}d\theta\,\tilde{j}^{t}x^{2}\sin\theta, \tag{18}\]
where the dimensionless temporal component of the current density \(\tilde{j}^{t}=U_{1}\). Note here that the normalization condition \(Q_{\text{tot}}=1\) corresponds to one-particle solutions; in this case the coupling constant \(Q\) will correspond to an electric charge of the system, and below we will be interested mostly in such configurations.
The magnetic moment of the system under investigation can be calculated in the standard way by considering the electric current flowing perpendicular to a meridional plane (see, e.g., the textbook [31]). As a result, one can obtain the following expression for the dimensionless magnetic dipole moment:
\[\tilde{\mu}_{m}\equiv m^{2}\sqrt{\lambda}\,\mu_{m}=-2\pi\tilde{Q}\int_{0}^{ \infty}dx\int_{0}^{\pi}d\theta\,U_{3}x^{3}\sin^{2}\theta. \tag{19}\]
Finally, the dimensionless gyromagnetic ratio \(g\), expressed in units of \(q_{e}/\left(2M_{\text{tot}}\right)\) [where \(q_{e}\) is the electric charge, see Eq. (22) below], is defined from the relation
\[\mu_{m}=g\frac{q_{e}}{2}\,\frac{J_{\text{tot}}}{M_{\text{tot}}}\quad\Rightarrow \quad g=2\frac{\tilde{\mu}_{m}\tilde{M}_{\text{tot}}}{\tilde{q}_{e}\tilde{J} _{\text{tot}}}, \tag{20}\]
where \(\tilde{q}_{e}\equiv m\sqrt{\lambda}\,q_{e}\) is the dimensionless electric charge.
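Once the spinor amplitudes are known numerically on an \((x,\theta)\) grid, the simplest of the above integrals, the Noether charge (18) and the magnetic moment (19), can be evaluated by quadrature. The sketch below is a minimal illustration assuming the dimensionless amplitudes are available as two-dimensional arrays; the mass (14) and angular momentum (16) additionally require the field derivatives and are omitted.

```python
import numpy as np

def noether_charge_and_magnetic_moment(x, theta, X, Y, V, W, Q_tilde):
    # Evaluate Eqs. (18) and (19) by the trapezoidal rule, given the
    # dimensionless spinor amplitudes as arrays of shape (len(x), len(theta)).
    U1 = X**2 + Y**2 + V**2 + W**2        # charge density j^t entering Eq. (18)
    U3 = X * Y + V * W                    # enters the magnetic-moment integrand, Eq. (19)
    st, xg = np.sin(theta)[None, :], x[:, None]

    Q_tot = 2 * np.pi * np.trapz(np.trapz(U1 * xg**2 * st, theta, axis=1), x)
    mu_m = -2 * np.pi * Q_tilde * np.trapz(np.trapz(U3 * xg**3 * st**2, theta, axis=1), x)
    return Q_tot, mu_m
```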
### Boundary conditions and a numerical approach
We will seek globally regular finite-energy nodeless solutions of the set of six partial differential equations (8)-(13). To do this, it is necessary to impose appropriate boundary conditions for the spinor and Maxwell fields. The behavior of solutions of Eqs. (8)-(13) in the vicinity of the boundaries of the domain of integration implies the following
boundary conditions:
\[\left.\frac{\partial\tilde{X}}{\partial x}\right|_{x=0}=\left.\frac{ \partial\tilde{W}}{\partial x}\right|_{x=0}=\left.\frac{\partial\tilde{\phi}}{ \partial x}\right|_{x=0}=0,\,\tilde{Y}\Big{|}_{x=0}=\left.\tilde{V}\right|_{x =0}=\left.\tilde{\sigma}\right|_{x=0}=0;\] \[\left.\frac{\partial\tilde{X}}{\partial\theta}\right|_{\theta=0}= \left.\frac{\partial\tilde{V}}{\partial\theta}\right|_{\theta=0}=\left.\frac{ \partial\tilde{\phi}}{\partial\theta}\right|_{\theta=0}=0,\,\tilde{Y}\Big{|}_ {\theta=0}=\left.\tilde{W}\right|_{\theta=0}=\left.\tilde{\sigma}\right|_{ \theta=0}=0;\] \[\left.\frac{\partial\tilde{Y}}{\partial\theta}\right|_{\theta= \pi}=\left.\frac{\partial\tilde{W}}{\partial\theta}\right|_{\theta=\pi}=\left. \frac{\partial\tilde{\phi}}{\partial\theta}\right|_{\theta=\pi}=0,\,\tilde{X} \Big{|}_{\theta=\pi}=\left.\tilde{V}\right|_{\theta=\pi}=\left.\tilde{\sigma }\right|_{\theta=\pi}=0.\]
For numerical computations, it is convenient to introduce the compactified radial coordinate
\[\bar{x}=\frac{x}{1+x} \tag{21}\]
in order to map the infinite interval \([0,\infty)\) to the finite region \([0,1]\). The results of numerical computations for axisymmetric systems presented below have been obtained using the Intel MKL PARDISO sparse direct solver and the CESDSOL library, and also verified for some particular cases using the package FIDISOL [32]. These packages provide an iterative procedure for obtaining an exact solution starting from some approximate solution (an initial guess). As the initial guess, it is possible to use the solutions describing configurations in the absence of electric and magnetic fields [7]. The equations (8)-(13) have been solved on a grid of \(200\times 100\) points which covers the integration region \(0\leq\bar{x}\leq 1\) [given by the compactified radial coordinate (21)] and \(0\leq\theta\leq\pi\).
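A minimal sketch of the compactification (21), together with the chain-rule factor needed to recast radial derivatives on the compactified grid, is given below; the grid sizes follow the text, while everything else is illustrative.

```python
import numpy as np

# Compactified radial grid, Eq. (21): x_bar = x / (1 + x) maps x in [0, inf)
# to x_bar in [0, 1].  Grid sizes follow the text (200 x 100 points).
x_bar = np.linspace(0.0, 1.0, 200)
theta = np.linspace(0.0, np.pi, 100)
with np.errstate(divide="ignore"):
    x = x_bar / (1.0 - x_bar)          # x -> infinity at x_bar = 1
# Chain rule for radial derivatives on the compactified grid:
# d/dx = (d x_bar / d x) d/d x_bar = (1 - x_bar)**2 d/d x_bar.
jacobian = (1.0 - x_bar)**2
```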
### Numerical solutions
In contrast to the case of a linear spinor field, in the nonlinear case, there is a family of solutions, depending continuously on two parameters - the frequency \(\tilde{\Omega}\) and the coupling constant \(\tilde{Q}\), whose values completely determine all physical characteristics of the configurations under consideration. To illustrate this, Fig. 1 shows the spectrum of the total mass (14) of the systems under investigation as a function of \(\tilde{\Omega}\) for some fixed values of \(\tilde{Q}\). It is seen from these graphs that the behavior of the dependence \(\tilde{M}_{\rm tot}(\tilde{\Omega})\) is largely determined by the value of the coupling constant \(\tilde{Q}\). Namely, the numerical calculations indicate that:
Figure 1: The total mass of the system \(\tilde{M}_{\rm tot}\) as a function of the spinor frequency \(\tilde{\Omega}\) for different values of the coupling constant \(\tilde{Q}\). The inset shows the dependence of the total mass on \(\tilde{Q}\) for different values of \(\tilde{\Omega}\) (shown by the numbers near the points) corresponding to the location of the mass gap.
1. When \(\tilde{Q}=0\), the total mass diverges as \(\tilde{\Omega}\to 1\). In turn, for small \(\tilde{\Omega}\), a rapid increase of \(\tilde{M}_{\rm tot}\) also occurs, and one may expect that in the limit \(\tilde{\Omega}\to 0\) the total mass will tend to infinity as well. But in this limit a calculation of the mass using the integral (14) is a difficult technical problem, and we cannot verify this assumption by direct calculation.
2. When \(\tilde{Q}\neq 0\), the total mass in the limit \(\tilde{\Omega}\to 1\) is already finite. In turn, there is some nonzero value \(\tilde{\Omega}<1\) for which one can still perform numerical calculations. In doing so, one can observe that the mass demonstrates a rapid increase (\(|\partial\tilde{M}_{\rm tot}/\partial\tilde{\Omega}|\gg 1\)); this can be regarded as an indication that there is some critical value \(\tilde{\Omega}_{\rm crit}\) for which the mass will eventually tend to infinity.
3. A distinctive feature of the configurations from (i) and (ii) is the presence of a minimum of the mass for all values of \(\tilde{Q}\) lying in the interval \(-0.27\lesssim\tilde{Q}\leq 0\). This minimum corresponds to the presence in the system of a mass gap where \(\partial\tilde{M}_{\rm tot}/\partial\tilde{\Omega}=0\). In turn, for \(\tilde{Q}\lesssim-0.27\), such a mass gap is already absent: the total mass demonstrates a gradual decrease as \(\tilde{\Omega}\) increases, and eventually \(\tilde{M}_{\rm tot}\) reaches some minimum value as \(\tilde{\Omega}\to 1\).
4. The dependence of the total mass on the coupling constant \(\tilde{Q}\) for different values of \(\tilde{\Omega}\) corresponding to the location of the mass gap is shown in the inset of Fig. 1. It is seen from this inset that for the system with \(\tilde{Q}=0\) (and correspondingly without the magnetic field), \(\tilde{\Omega}\approx 0.936\) (cf. Ref. [8]). On the other hand, there exists a maximum possible value of the coupling constant \(|\tilde{Q}|\approx 0.27\) for which the frequency \(\tilde{\Omega}\to 1\). For larger (modulus) values of \(\tilde{Q}\) the curve \(\tilde{M}_{\rm tot}(\tilde{\Omega})\) has no minimum already, that is, the derivative \(\partial\tilde{M}_{\rm tot}/\partial\tilde{\Omega}\) is nowhere equal to zero, and correspondingly the mass gap is absent.
5. There is some critical value of the coupling constant \(\tilde{Q}_{\rm crit}\): as it is approached, the interval \(\tilde{\Omega}_{\rm crit}\leq\tilde{\Omega}\leq 1\) (where the solutions do exist) becomes narrower, and eventually \(\tilde{\Omega}_{\rm crit}\to 1\) and the complete set of solutions with different \(\tilde{\Omega}\) degenerates to the only solution with \(\tilde{\Omega}=1\). Numerical calculations show that \(\tilde{Q}_{\rm crit}\approx-0.3119\), and in this limit the total mass \(\tilde{M}_{\rm tot}\approx 97.13\).
6. For all these spinning systems, a straightforward computation shows that the total angular momentum \(J_{\rm tot}\) from Eq. (16) and the total Noether charge \(Q_{\rm tot}\) from Eq. (18) are related by \(J_{\rm tot}=\frac{1}{2}Q_{\rm tot}\), although the angular momentum density and the Noether charge density are not proportional.
7. From the form of Eqs. (8)-(13), it is evident that the solutions under consideration are invariant with respect to a change in the sign \(\tilde{Q}\to-\tilde{Q},\tilde{\phi}\to-\tilde{\phi},\tilde{\sigma}\to-\tilde{\sigma}\), that is, they may describe systems with the coupling constant opposite in sign and the same physical characteristics.
Note here that, in order to apply the results obtained above for a description of one particle, it is necessary to normalize the solutions so that the Noether charge \(Q_{\rm tot}\) from Eq. (18) is equal to 1. In this case the coupling constant \(Q\) will correspond to an electric charge, i.e., \(Q=q_{e}\). This one-particle condition can be fulfilled by a suitable choice of the free system parameters \(\lambda\) and \(m\). However, in doing so, one should bear in mind that in this case there will be a particular set of the parameters \(\lambda\) and \(m\) for every point in the \(\left[\tilde{M}_{\rm tot}-\tilde{\Omega}\right]\)-plane, i.e., different points of the plane will correspond to different models.
We conclude this subsection with the expression for the effective radial pressure \(p_{r}\equiv-T_{r}^{r}\). Using the energy-momentum tensor (4) and the Dirac equations (8)-(11), it can be shown that the radial pressure contains the terms
\[p_{r}\sim\frac{\lambda}{2}\left(X^{2}-Y^{2}-V^{2}+W^{2}\right)^{2}-Q\phi\left( X^{2}+Y^{2}+V^{2}+W^{2}\right)\ldots\]
This implies the following physical meaning of the nonlinearity parameter: the case of \(\lambda>0\) corresponds to the attraction and the case of \(\lambda<0\) to the repulsion. Correspondingly, for the configurations considered above, the attraction of the spinor field related to the choice of positive values of the nonlinearity parameter provides a counter-balance to the effective repulsion due to the presence of the electric charge.
### Asymptotic behavior
For completeness of analysis of the numerical solutions obtained above, let us write down analytical expressions for asymptotic solutions. The Maxwell equations (12) and (13) have the following asymptotic (\(x\to\infty\)) behavior of the electric and magnetic fields:
\[\tilde{\phi}\approx\frac{1}{4\pi}\frac{\tilde{q}_{e}}{x},\quad\tilde{\sigma} \approx-\frac{1}{4\pi}\frac{\tilde{\mu}_{m}}{x}\sin^{2}\theta. \tag{22}\]
Using these expressions, the numerical values of the electric charge \(\tilde{q}_{e}\) and of the magnetic moment \(\tilde{\mu}_{m}\) can be found in the form
\[\tilde{q}_{e}=-4\pi\lim_{x\to\infty}x^{2}\frac{\partial\tilde{\phi}}{\partial x }=-4\pi\lim_{\bar{x}\to 1}\bar{x}^{2}\frac{\partial\tilde{\phi}}{\partial\bar{x}}, \quad\tilde{\mu}_{m}=4\pi\lim_{x\to\infty}\frac{x^{2}}{\sin^{2}\theta}\frac{ \partial\tilde{\sigma}}{\partial x}=4\pi\lim_{\bar{x}\to 1}\frac{\bar{x}^{2}}{\sin^{2} \theta}\frac{\partial\tilde{\sigma}}{\partial\bar{x}}. \tag{23}\]
Note that the value of \(\tilde{\mu}_{m}\) calculated using the above formula coincides with that of obtained using Eq. (19).
In turn, the asymptotic behavior of the spinor fields follows from the Dirac equations (8)-(11),
\[\tilde{X}\approx-2\cos\frac{\theta}{2}\,g(x),\quad\tilde{Y}\approx 2\sin\frac{ \theta}{2}\,f(x),\quad\tilde{V}\approx 2\cos\frac{\theta}{2}\,f(x),\quad\tilde{W} \approx-2\sin\frac{\theta}{2}\,g(x).\]
The form of the functions \(f(x)\) and \(g(x)\) appearing here depends on the value of \(\tilde{\Omega}\). Namely, for \(0<\tilde{\Omega}<1\), we have
\[f(x)\approx f_{\infty}\frac{e^{-\sqrt{1-\tilde{\Omega}^{2}}\,x}}{x}^{-\frac{ 1}{\sqrt{1-\tilde{\Omega}^{2}}}\frac{\tilde{Q}\tilde{q}_{e}}{4\pi}},\quad g(x )\approx f_{\infty}\sqrt{\frac{1+\tilde{\Omega}}{1-\tilde{\Omega}}}\,\frac{e^ {-\sqrt{1-\tilde{\Omega}^{2}}\,x}}{x}^{-\frac{1}{\sqrt{1-\tilde{\Omega}^{2}} }\,\frac{\tilde{Q}\tilde{q}_{e}}{4\pi}}, \tag{24}\]
where \(f_{\infty}\) is an integration constant. In the case of \(\tilde{\Omega}=1\), we have
\[f(x)\approx f_{\infty}\frac{e^{-\sqrt{\frac{2}{2}\tilde{Q}\tilde{q}_{e}\,x}} }{x^{5/4}},\quad g(x)\approx f_{\infty}\sqrt{\frac{8\pi}{\tilde{Q}\tilde{q}_ {e}}}\,\frac{e^{-\sqrt{\frac{2}{2}\tilde{Q}\tilde{q}_{e}\,x}}}{x^{3/4}}.\]
Notice here that regular solutions with \(\tilde{\Omega}=1\) are only possible in the presence of the charge.
### Particular example: an "electron"
Figure 2: The dependence of the physical quantities on the spinor frequency \(\Omega\) for a fixed \(\tilde{Q}=-0.04452\). Left panel: the graphs for the total mass \(\tilde{M}_{\rm tot}\) from Eq. (14), for the angular momentum \(\tilde{J}_{\rm tot}\) from Eq. (16), and for the charge \(\tilde{q}_{e}\) from Eq. (23). Middle panel: the same quantities, but in the dimensional form and with \(Q_{\rm tot}=1\) (normalized values). Right panel: the gyromagnetic ratio (20). The vertical dashed lines correspond to the minimum of the mass (the mass gap) located at the point \(\tilde{\Omega}\approx 0.937\).
Figure 3: The dimensionless spinor functions \(P_{+}\) and \(P_{-}\) from (10) and (11), energy density (15), angular momentum density (17), charge density \(\tilde{j}^{t}=U_{1}\), physical component of the current density \(\tilde{j}^{\varphi}=-2\,\tilde{Q}\,U_{3}\), electric, \(\tilde{\vec{E}}\equiv\sqrt{\lambda}/m\vec{E}\), and magnetic, \(\tilde{\vec{H}}\equiv\sqrt{\lambda}/m\vec{H}\), field strengths for the system with \(\tilde{Q}=-0.04452\) located at the mass gap (cf. Fig. 2). The plots are made in a meridional plane \(\varphi=\) const. spanned by the coordinates \(\rho=x\sin\theta\) and \(z=x\cos\theta\). Since the system is \(\mathbb{Z}_{2}\)-symmetric with respect to the equatorial plane \(z=0\), the electromagnetic field strength distributions are shown only for \(z>0\).
The freedom in the choice of values of the parameters \(\lambda\) and \(m\) enables us to model various objects. In doing so, a choice of a specific value of \(\tilde{\Omega}\) can be made on the basis of a physically reasonable assumption that an energetically stable system must possess a minimum energy (or, equivalently, a minimum mass \(\tilde{M}_{\rm tot}\)). Consistent with this, consider, for example, the case where the coupling constant \(\tilde{Q}\) is understood to be so chosen that at a minimum of the curve \(\tilde{M}_{\rm tot}(\tilde{\Omega})\) the electric charge of the system \(q_{e}\) would be equal to the charge of an electron. The corresponding dependencies \(\tilde{M}_{\rm tot}(\tilde{\Omega})\) and \(\tilde{q}_{e}(\tilde{\Omega})\) are shown in the left panel of Fig. 2. It is seen that the minimum of the mass curve (the maximum of the charge curve) is located at \(\tilde{\Omega}\approx 0.937\), and the total mass and Noether charge are
\[M_{\rm tot}=\frac{47.512}{m\lambda},\quad Q_{\rm tot}=\frac{46.255}{m^{2} \lambda}.\]
This yields the dimensional mass \(M_{\rm tot}=1.027\,m\,Q_{\rm tot}\) (see the middle panel of Fig. 2). For a normalized solution, \(Q_{\rm tot}=1\); correspondingly, there is a mass renormalization of \(2.7\%\). Then, in order to make the total mass of the system \(M_{\rm tot}\) equal to the electron mass \(m_{e}\), it is necessary to take \(m=m_{e}/1.027\). This in turn leads to the corresponding renormalization of the magnetic moment and change in the value of the gyromagnetic ratio \(g\), whose graph is shown in the right panel of Fig. 2. For the case under consideration, \(g\approx 2.083\). Thus we have a configuration with the mass \(M_{\rm tot}=0.511\,{\rm MeV}\) and charge \(q_{e}=-0.3028\) equal to the mass and charge of an electron, but with \(g>2\).
Also, for a normalized solution, i.e., when \(Q_{\rm tot}=1\), the quantum-mechanical angular momentum, which is determined using the operator of the total angular momentum,
\[\hat{M}_{z}=\hat{L}_{z}+\hat{S}_{z}, \tag{25}\]
(here \(\hat{L}_{z}=-\imath\partial_{\varphi}\) is the operator which projects the orbital angular momentum on the \(z\)-axis and \(\hat{S}_{z}\) is the operator which projects the spin on the \(z\)-axis), defines the value of the total angular momentum as an eigenvalue \(M_{z}\),
\[\hat{M}_{z}\psi=M_{z}\psi.\]
For the spinor (5), this eigenvalue is \(M_{z}=1/2\), and it coincides with that calculated using the integral formula (16) (cf. the value of \(J_{\rm tot}\) from the middle panel of Fig. 2). It is worth noting that for the solutions that are not normalized to unity this coincidence no longer occurs.
The characteristic size of such a charged configuration supported by the spinor field can be estimated from the asymptotic behavior of the field (24) as
\[r_{\rm ch}\sim\frac{1}{\sqrt{1-\tilde{\Omega}^{2}}\,m_{e}}.\]
For \(\tilde{\Omega}\approx 0.937\), this yields \(r_{\rm ch}\sim 10^{-10}\,{\rm cm}\), a value that is comparable in order of magnitude to the electron Compton wavelength.
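The order-of-magnitude estimate quoted here is easy to reproduce: with \(\tilde{\Omega}\approx 0.937\) and the reduced electron Compton wavelength \(\hbar/(m_{e}c)\approx 3.86\times 10^{-11}\,\mathrm{cm}\), one finds \(r_{\rm ch}\approx 1.1\times 10^{-10}\,\mathrm{cm}\). As an aside, the quoted charge \(|q_{e}|=0.3028\) coincides with \(\sqrt{4\pi\alpha}\) in Heaviside-Lorentz natural units (our reading of the units, not stated explicitly in the text).

```python
import numpy as np

# Reproduce the order-of-magnitude estimate of the characteristic size,
# using the reduced electron Compton wavelength hbar/(m_e c) ~ 3.86e-11 cm.
Omega = 0.937
lambda_C_cm = 3.8616e-11
print("r_ch ~ %.2e cm" % (lambda_C_cm / np.sqrt(1.0 - Omega**2)))   # ~ 1.1e-10 cm

# The quoted |q_e| = 0.3028 coincides with sqrt(4*pi*alpha), alpha ~ 1/137.036.
print("sqrt(4 pi alpha) = %.4f" % np.sqrt(4.0 * np.pi / 137.036))   # ~ 0.3028
```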
Note that the spinor functions \(\tilde{X},\tilde{Y},\tilde{V}\), and \(\tilde{W}\) appearing in the Dirac equations (8)-(11) are neither even nor odd functions with respect to the equatorial plane \(\theta=\pi/2\). Nevertheless, the system possesses a \(\mathbb{Z}_{2}\) symmetry with respect to this plane; this can be shown by considering the corresponding combinations of the spinor functions (see Appendix A). To demonstrate this fact in a pictorial way, the upper row of Fig. 3 shows the graphs of the functions \(P_{+}\) and \(P_{-}\) from Eqs. (10) and (11), respectively. Also, this figure shows the corresponding \(\mathbb{Z}_{2}\)-symmetric distributions of the components of the energy-momentum tensor (4) and current density, as well as the electric and magnetic field strengths defined by the expressions (7). The structure of the magnetic field strength corresponds to an axially symmetric dipole field sourced by the current associated with the spinor field given on the right-hand side of Eq. (3). The radial distribution of the current and the magnitude of the magnetic field are determined by the value of the coupling constant \(Q\). In turn, the structure of the electric field strength corresponds to a negative charge with force lines directed toward the center of the configuration.
In conclusion, note that by choosing \(m=m_{\mu}/1.027\), where \(m_{\mu}\) is the muon mass, we get characteristics typical for a muon/antimuon.
## IV Conclusions
The main purpose of the present paper is to study self-consistently the influence that an electromagnetic field has on a system supported by a nonlinear spinor field. To this end, we generalized the configurations considered in Refs. [7; 8] by including nonperturbatively electric and magnetic (dipole) Maxwell fields to take account of their backreaction on the physical characteristics of the system. In such a generalized case, the presence in the system of the dipole magnetic field requires a consideration of an axisymmetric problem.
In the absence of electromagnetic fields, an important distinctive feature of the systems supported by nonlinear spinor fields is the presence of a mass gap. For such systems, all solutions are parameterized by one parameter - the spinor frequency \(\tilde{\Omega}\), and regular stationary solutions describing configurations with finite values of various physical
parameters (for instance, of a total mass) do exist only for the values of \(\tilde{\Omega}\) lying in the range \(0<\tilde{\Omega}<1\), whereas for \(\tilde{\Omega}\to 0\) and \(\tilde{\Omega}\to 1\) the total mass diverges. The inclusion of the Maxwell fields results in the appearance of one more free parameter - the coupling constant \(\tilde{Q}\). For such a two-parametric system, we have considered all permissible values of the parameters \(\tilde{\Omega}\) and \(\tilde{Q}\) for which regular spinning solutions do exist. The studies of the present work indicate that there are the following qualitative changes in the characteristics of the configurations as compared with the electrically neutral (\(\tilde{Q}=0\)) case:
* Apart from the mass and angular momentum gaps, the system also contains the charge and magnetic moment gaps located at the same values of \(\tilde{\Omega}\) as the mass gap (see Fig. 2 and cf. Ref. [27]).
* For a nonzero coupling constant \(\tilde{Q}\), there is some critical value of the spinor frequency \(\tilde{\Omega}_{\rm crit}>0\) that restricts the range of permissible values of \(\tilde{\Omega}\) from the left. As in the case without Maxwell fields, at this boundary, the total masses of the system diverge for all permissible values of \(\tilde{Q}\). As \(\tilde{Q}\) increases (modulus), the value of \(\tilde{\Omega}_{\rm crit}\) increases as well, and there is a finite critical value \(|\tilde{Q}_{\rm crit}|\approx 0.3119\) for which \(\tilde{\Omega}_{\rm crit}\to 1\), i.e., the set of solutions with different \(\tilde{\Omega}\) degenerates to the only solution with \(\tilde{\Omega}=1\).
* For \(0<|\tilde{Q}|<|\tilde{Q}_{\rm crit}|\) and as \(\tilde{\Omega}\to 1\), the total mass of the system, in contrast to the case without Maxwell fields, remains always finite. That is, regular solutions exist in the frequency range of \(\tilde{\Omega}_{\rm crit}<\tilde{\Omega}\leq 1\), and the magnitude of \(\tilde{\Omega}_{\rm crit}\) is completely determined only by the value of the coupling constant \(\tilde{Q}\).
* There is a maximum possible value of \(|\tilde{Q}|\approx 0.27\) above which the aforementioned gaps are already absent in the system.
As a possible application of the above results, we have considered the case where the coupling constant \(\tilde{Q}\) is to be so chosen that the electric charge of the system located at the mass/charge gap would be equal to the charge of an electron (or of a positron when \(-\tilde{Q}\) is changed into \(\tilde{Q}\)) (see Sec. III.5). Then, by choosing an appropriate value of the bare mass \(m\) in the Dirac equation (2), one can also make the total mass of the system equal to the mass of an electron. In turn, the total angular momentum \(J_{\rm tot}\) and the total Noether charge \(Q_{\rm tot}\) are related by \(J_{\rm tot}=\frac{1}{2}Q_{\rm tot}\) (as is also the case for all other spinning systems with permissible values of \(\tilde{Q}\) and \(\tilde{\Omega}\) considered in the present paper), and when \(Q_{\rm tot}=1\) (i.e., for a normalized solution), \(J_{\rm tot}\) coincides with the eigenvalue \(M_{z}=1/2\) of the operator of the total angular momentum (25). Also, the gyromagnetic ratio for such a configuration is \(g\approx 2.083\) (cf. the electron for which \(g\approx 2\)) and the characteristic size is \(r_{\rm ch}\sim 10^{-10}\,\)cm.
###### Acknowledgements.
This research was funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. BR21881941).
## Appendix A Symmetry of the field equations
When \(\theta\) is replaced by \(\pi-\theta\), the equations (8)-(13) take the form
\[\tilde{X}_{,x}+\frac{\tilde{X}}{x}+\frac{\tilde{W}_{,\theta}}{x }-\frac{\tan\frac{\theta}{2}}{2x}\tilde{W}+\tilde{Q}\left(-\frac{\csc\theta} {x}\tilde{W}\tilde{\sigma}+\tilde{V}\tilde{\phi}\right)-\left(1+\tilde{ \Omega}\right)\tilde{V}+U_{2}\tilde{V}= 0, \tag{16}\] \[\tilde{Y}_{,x}+\frac{\tilde{Y}}{x}+\frac{\tilde{V}_{,\theta}}{x }+\frac{\cot\frac{\theta}{2}}{2x}\tilde{V}+\tilde{Q}\left(\frac{\csc\theta} {x}\tilde{V}\tilde{\sigma}-\tilde{W}\tilde{\phi}\right)-\left(1-\tilde{ \Omega}\right)\tilde{W}+U_{2}\tilde{W}= 0,\] (17) \[\tilde{V}_{,x}+\frac{\tilde{V}}{x}-\frac{\tilde{Y}_{,\theta}}{x}+ \frac{\tan\frac{\theta}{2}}{2x}\tilde{Y}+\tilde{Q}\left(\frac{\csc\theta} {x}\tilde{Y}\tilde{\sigma}-\tilde{X}\tilde{\phi}\right)-\left(1-\tilde{ \Omega}\right)\tilde{X}+U_{2}\tilde{X}= 0,\] (18) \[\tilde{W}_{,x}+\frac{\tilde{W}}{x}-\frac{\tilde{X}_{,\theta}}{x}- \frac{\cot\frac{\theta}{2}}{2x}\tilde{X}+\tilde{Q}\left(-\frac{\csc\theta} {x}\tilde{X}\tilde{\sigma}+\tilde{Y}\tilde{\phi}\right)-\left(1+\tilde{ \Omega}\right)\tilde{Y}+U_{2}\tilde{Y}= 0,\] (19) \[\tilde{\phi}_{,xx}+\frac{2}{x}\tilde{\phi}_{,x}+\frac{1}{x^{2}} \tilde{\phi}_{,\theta\theta}+\frac{\cot\theta}{x^{2}}\tilde{\phi}_{,\theta}+ \tilde{Q}\,U_{1}= 0,\] (20) \[\tilde{\sigma}_{,xx}+\frac{1}{x^{2}}\tilde{\sigma}_{,\theta \theta}-\frac{\cot\theta}{x^{2}}\tilde{\sigma}_{,\theta}+2\,\tilde{Q}\,x\sin \theta\,U_{3}= 0. \tag{21}\]
All the functions appearing here depend on \(\pi-\theta\): \(\tilde{X}(x,\pi-\theta),\tilde{Y}(x,\pi-\theta),\tilde{V}(x,\pi-\theta),\tilde{W}(x,\pi-\theta),\phi(x,\pi-\theta),\sigma(x,\pi-\theta)\), while the combinations \(U_{1},U_{2},\) and \(U_{3}\) remain unchanged. A comparison of the equations (11) and (16) enables one to conclude that \(\tilde{W}(x,\theta)=\tilde{X}(x,\pi-\theta)\). Similarly, one can show that such a symmetry is valid for all functions:
\[\tilde{X}(x,\theta)=\tilde{W}(x,\pi-\theta),\quad\tilde{Y}(x,\theta)=\tilde{V} (x,\pi-\theta),\quad\tilde{V}(x,\theta)=\tilde{Y}(x,\pi-\theta),\quad\tilde{W} (x,\theta)=\tilde{X}(x,\pi-\theta). \tag{124}\]
This enables us to rewrite four local Dirac equations (8)-(11) in the form of two nonlocal equations
\[\tilde{X}(x,\theta)_{,x}+\frac{\tilde{X}(x,\theta)}{x}-\frac{ \tilde{X}(x,\pi-\theta)_{,\theta}}{x}-\frac{\cot\frac{\theta}{2}}{2x}\tilde{X} (x,\pi-\theta)+\tilde{Q}\left[-\frac{\csc\theta}{x}\tilde{X}(x,\pi-\theta) \tilde{\sigma}+\tilde{Y}(x,\pi-\theta)\tilde{\phi}\right]\] \[-\left(1+\tilde{\Omega}\right)\tilde{Y}(x,\pi-\theta)+U_{2} \tilde{Y}(x,\pi-\theta)= 0,\] \[\tilde{Y}(x,\theta)_{,x}+\frac{\tilde{Y}(x,\theta)}{x}-\frac{ \tilde{Y}(x,\pi-\theta)_{,\theta}}{x}+\frac{\tan\frac{\theta}{2}}{2x}\tilde{Y} (x,\pi-\theta)+\tilde{Q}\left[\frac{\csc\theta}{x}\tilde{Y}(x,\pi-\theta) \tilde{\sigma}-\tilde{X}(x,\pi-\theta)\tilde{\phi}\right]\] \[-\left(1-\tilde{\Omega}\right)\tilde{X}(x,\pi-\theta)+U_{2} \tilde{X}(x,\pi-\theta)= 0.\]
In turn, taking into account the properties of the functions \(\tilde{X},\tilde{Y},\tilde{V},\) and \(\tilde{W}\) given in Eq. (124), one can see that there are the following symmetric and antisymmetric functions:
\[P_{+}(x,\theta)= \tilde{X}(x,\theta)+\tilde{W}(x,\theta)=\tilde{X}(x,\theta)+ \tilde{X}(x,\pi-\theta)=\tilde{W}(x,\theta)+\tilde{W}(x,\pi-\theta)=P_{+}(x, \pi-\theta), \tag{125}\] \[P_{-}(x,\theta)= \tilde{X}(x,\theta)-\tilde{W}(x,\theta)=\tilde{X}(x,\theta)- \tilde{X}(x,\pi-\theta)=\tilde{W}(x,\pi-\theta)-\tilde{W}(x,\theta)=-P_{-}(x, \pi-\theta),\] (126) \[Q_{+}(x,\theta)= \tilde{Y}(x,\theta)+\tilde{V}(x,\theta)=\tilde{Y}(x,\theta)+ \tilde{Y}(x,\pi-\theta)=\tilde{V}(x,\theta)+\tilde{V}(x,\pi-\theta)=Q_{+}(x, \pi-\theta),\] (127) \[Q_{-}(x,\theta)= \tilde{Y}(x,\theta)-\tilde{V}(x,\theta)=\tilde{Y}(x,\theta)- \tilde{Y}(x,\pi-\theta)=\tilde{V}(x,\pi-\theta)-\tilde{V}(x,\theta)=-Q_{-}(x, \pi-\theta). \tag{128}\]
|
2301.07675 | A stochastic search for intermittent gravitational-wave backgrounds | A likely source of a gravitational-wave background (GWB) in the frequency
band of the Advanced LIGO, Virgo and KAGRA detectors is the superposition of
signals from the population of unresolvable stellar-mass binary-black-hole
(BBH) mergers throughout the Universe. Since the duration of a BBH merger in
band ($\sim\!1~{\rm s}$) is much shorter than the expected separation between
neighboring mergers ($\sim\!10^3~{\rm s}$), the observed signal will be
"popcorn-like" or intermittent with duty cycles of order $10^{-3}$. However,
the standard cross-correlation search for stochastic GWBs currently performed
by the LIGO-Virgo-KAGRA collaboration is based on a continuous-Gaussian signal
model, which does not take into account the intermittent nature of the
background. The latter is better described by a Gaussian mixture-model, which
includes a duty cycle parameter that quantifies the degree of intermittence.
Building on an earlier paper by Drasco and Flanagan, we propose a
stochastic-signal-based search for intermittent GWBs. For such signals, this
search performs better than the standard continuous cross-correlation search.
We present results of our stochastic-signal-based approach for intermittent
GWBs applied to simulated data for some simple models, and compare its
performance to the other search methods, both in terms of detection and signal
characterization. Additional testing on more realistic simulated data sets,
e.g., consisting of astrophysically-motivated BBH merger signals injected into
colored detector noise containing noise transients, will be needed before this
method can be applied with confidence on real gravitational-wave data. | Jessica Lawrence, Kevin Turbang, Andrew Matas, Arianna I. Renzini, Nick van Remortel, Joseph D. Romano | 2023-01-18T17:52:33Z | http://arxiv.org/abs/2301.07675v1 | # A stochastic search for intermittent gravitational-wave backgrounds
###### Abstract
A likely source of a gravitational-wave background (GWB) in the frequency band of the Advanced LIGO, Virgo and KAGRA detectors is the superposition of signals from the population of unresolvable stellar-mass binary-black-hole (BBH) mergers throughout the Universe. Since the duration of a BBH merger in band (\(\sim\) 1 s) is much shorter than the expected separation between neighboring mergers (\(\sim\) 10\({}^{3}\) s), the observed signal will be "popcorn-like" or intermittent with duty cycles of order \(10^{-3}\). However, the standard cross-correlation search for stochastic GWBs currently performed by the LIGO-Virgo-KAGRA collaboration is based on a continuous-Gaussian signal model, which does not take into account the intermittent nature of the background. The latter is better described by a Gaussian mixture-model, which includes a duty cycle parameter that quantifies the degree of intermittence. Building on an earlier paper by Drasco and Flanagan [1], we propose a stochastic-signal-based search for intermittent GWBs. For such signals, this search performs better than the standard continuous cross-correlation search [2]. We present results of our stochastic-signal-based approach for intermittent GWBs applied to simulated data for some simple models, and compare its performance to the other search methods, both in terms of detection and signal characterization. Additional testing on more realistic simulated data sets, e.g., consisting of astrophysically-motivated BBH merger signals injected into colored detector noise containing noise transients, will be needed before this method can be applied with confidence on real gravitational-wave data.
Introduction
The Advanced LIGO [3], Virgo [4], and KAGRA [5] (LVK) detectors have completed their third observing run (O3), increasing the number of confident detections of gravitational-wave (GW) signals to 90 overall [6]. The detected signals are primarily associated with stellar-mass binary-black-hole (BBH) mergers, although a handful of binary-neutron-star (BNS) and neutron star-black hole (NSBH) coalescences have also been observed [7; 8]. All of these signals are relatively large signal-to-noise ratio (SNR) events, which stand out above the detector noise when searched for using matched-filtering techniques [9; 10].
In addition to these loud, individually-resolvable events, the LVK detectors are also being showered by GW signals produced by much weaker (e.g., more distant and/or less massive) sources, whose combined effect gives rise to a low-level background of gravitational radiation--a so-called gravitational-wave background (GWB) (see e.g., [11; 12] and references cited within). This background signal is expected to be stochastic (i.e., random) in the sense that there is no single deterministic waveform that we can use to perform a matched-filter search for this type of GW signal. Nonetheless, because this signal is present in all detectors, we can cross-correlate the data from multiple detectors to observe the GWB, despite its weakness relative to the noise [2; 13]. Although to date there has not been a direct detection of a GWB using a stochastic pipeline, we know from Advanced LIGO's and Virgo's detections of individual resolvable sources that a background arising from compact binary mergers must exist. Assuming our detectors are upgraded as planned in the coming years [14], and given current projections for the signal [15], detecting the GWB may just be a matter of time. On the other hand, we can improve our detection methods to measure this signal sooner. We assume the latter strategy in this paper.
### Motivation
A likely source of a GWB in the frequency band of the LVK detectors is the population of stellar-mass BBH mergers throughout the Universe. Rate estimates calculated from the BBH signals detected to date [15; 16] predict a BBH merger in the observable universe every \(\sim\) 5-10 minutes on average. Since the duration of a BBH merger in the LVK band is of order 1 s, the duty cycle \(\xi\) of such events (defined as the time in band for one merger signal divided by the average time between successive mergers) is of order \(10^{-3}\). Thus, the expected GWB signal is "popcorn-like" or _intermittent_, with the signal being "on" a small fraction of the total observation time. A similar calculation for the population of BNS mergers predicts (on average) roughly one event every 15 s, while the duration of a BNS signal in band is approximately 100 s. Thus, BNS merger signals overlap in time leading to a continuous (and possibly confusion-limited) background.
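The duty-cycle estimates quoted above amount to simple ratios of the in-band signal duration to the mean waiting time between events; the numbers below are illustrative values consistent with the text, not the output of a population-synthesis calculation.

```python
# Back-of-the-envelope duty-cycle estimates quoted in the text
# (illustrative numbers, not taken from a population model).
bbh_duration_s, bbh_separation_s = 1.0, 5.0 * 60.0      # ~1 s in band, one merger per ~5-10 min
print("BBH duty cycle ~", bbh_duration_s / bbh_separation_s)           # ~3e-3

bns_duration_s, bns_separation_s = 100.0, 15.0           # ~100 s in band, one merger per ~15 s
print("BNS duration/separation ~", bns_duration_s / bns_separation_s)  # > 1: signals overlap
```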
The total expected BBH signal is potentially detectable with the Advanced LIGO and Virgo detectors when observing at design sensitivity [15; 17]. Although the SNRs for the individual events are small, the combined SNR of the correlated data summed over all events grows like the square-root of the observation time, reaching a detectable level of \(3\sigma\) (corresponding to a false alarm probability of approximately \(10^{-3}\)) after \(\sim\)40 months of observation [17]. This estimate of the time-to-detection is based on the standard cross-correlation search [2], which looks for evidence of excess cross-correlated signal power, assuming that the amplitude of the GW signal component is drawn from a continuous-Gaussian distribution. This search assumes that the signal is "on" all the time, in conflict with the intermittent nature of the stellar-mass BBH background, which is expected to be the dominant signal. Thus, although the standard cross-correlation search is able to detect the time-averaged signal from an intermittent GWB [18], this search is sub-optimal in the sense that the time-to-detection will be longer than that for a search which properly takes into account the intermittent nature of the background.
### Purpose and outline
The purpose of this paper is to introduce a new stochastic-signal-based search that specifically targets intermittent GWBs, and hence can potentially reduce the time-to-detection of the BBH background signal. This new search is built on the seminal work of Drasco and Flanagan [1], who proposed a Gaussian mixture-model (GMM) likelihood function for analyzing intermittent GWBs (Sec. II.1). Our proposed search for intermittent GWBs looks for excess cross-correlated power in short stretches of data. Conversely, a deterministic-signal-based search for the intermittent BBH background was proposed by Smith and Thrane [19], which involves marginalizing over the signal parameters for deterministic BBH chirp waveforms in short (\(\sim 4\) s) stretches of data (Sec. II.2). By construction, our proposed search is adaptable to a generic intermittent GWB since it looks only for excess cross-correlated power. We also expect our proposed search to be computationally more efficient in detecting a signal than the deterministic-signal-based
approach of Smith and Thrane, since our search ignores the deterministic form of the GW signal waveforms and hence the need to marginalize over all the associated signal parameters.
A brief outline of this paper is as follows: first, we give an overview of the current searches for intermittent GWBs in Sec. II. We then proceed by introducing our proposed stochastic search for intermittent GWBs in Sec. III. To compare the performance of the various search methods mentioned above, we analyze a series of datasets which are tailored to highlight the merits and shortcomings of each style of search. We start in Sec. IV.1 by considering stationary-Gaussian white noise in two co-located and co-aligned detectors, and inject an intermittent GWB made up of white GW bursts with Gaussian signal amplitudes scaled by distances to the sources drawn from a uniform-in-volume distribution. We then consider a background made up of colored GW bursts1 in Sec. IV.2, which follow the expected spectral shape of BBH mergers. Finally, we analyze a set of deterministic BBH chirp waveforms in Sec. IV.3, where the chirp parameters are fixed except for the distance to the source, which is also drawn from a uniform-in-volume distribution. We conclude in Sec. V by discussing possible extensions of our method and additional tests that are needed on more realistic simulated data before it can be run on real LVK data.
Footnote 1: The term “burst” will be used throughout this paper as it is the most general, irrespective of the type of signal. In the context of compact binary mergers, these bursts of GWs are often referred to as “transients”.
## II Proposed Searches for Intermittent GWBs - Overview
The standard continuous cross-correlation search [2] aims to measure the fractional energy density of a GWB, defined as
\[\Omega_{\rm gw}(f)=\frac{1}{\rho_{c}}\frac{{\rm d}\rho_{\rm gw}}{{\rm d}\ln f}, \tag{1}\]
where the critical energy density of the Universe is \(\rho_{c}=3H_{0}^{2}c^{2}/(8\pi G)\), \(H_{0}\) is the Hubble constant, \(c\) is the speed of light, and \(G\) is Newton's constant. Alternatively, a GWB can be characterized by its power spectral density (PSD) \(P_{\rm gw}(f)\), which is related to \(\Omega_{\rm gw}(f)\) by [2]:
\[\Omega_{\rm gw}(f)=\frac{10\pi^{2}}{3H_{0}^{2}}f^{3}P_{\rm gw}(f). \tag{2}\]
For the target signal of a BBH GWB, it is well known that the fractional energy density spectrum is \(\Omega_{\rm gw}(f)\propto f^{2/3}\) to good approximation [20], in the frequency ranges probed by the LVK interferometers. This knowledge can be incorporated into the search, reducing it to the measurement of a single quantity \(\Omega_{\rm gw}(f_{\rm ref})\), where \(f_{\rm ref}\) is a reference frequency chosen where the sensitivity of the LVK detectors is best (typically 25 Hz) [21]. For the remainder of the paper, we will refer to \(\Omega_{\rm gw}(f_{\rm ref})\) simply as \(\Omega_{\rm gw}\) for brevity. For a set of data containing enough events to be statistically significant, \(\Omega_{\rm gw}\) is the amplitude of the time and population-averaged energy density. We will refer to this stochastic search for continuous backgrounds described above as SSC.
Since this search assumes a continuous-in-time signal in the data, it does not properly model an important feature of the BBH GWB signal--the intermittency. To remedy this improper modeling, several searches targeting intermittent GWBs specifically have been proposed. We start by giving a high-level overview of these different analysis methods. We refrain from giving details about the actual form of the likelihoods and refer to Appendix A for more information.
### Gaussian mixture-model likelihood function for intermittent GWBs
In 2003, Drasco and Flanagan [1] proposed a search for an intermittent GWB that makes use of a GMM likelihood function of the form
\[\mathscr{L}_{\rm tot}=\prod_{I}^{N_{\rm seg}}\left[\xi\mathscr{L}_{s,I}+(1- \xi)\mathscr{L}_{n,I}\right]\,, \tag{3}\]
where \(\xi\) is the probability that a particular segment contains a GW signal, and \(\mathscr{L}_{s,I}\) and \(\mathscr{L}_{n,I}\) are the likelihood functions for segment \(I\) in the presence and absence of a GW signal, i.e., the signal and noise likelihoods. For the simple toy model considered in their paper (i.e., single-sample GW "bursts", occurring with probability \(\xi\) drawn
from a fixed Gaussian distribution with variance \(\sigma_{b}^{2}\), and injected into uncorrelated white noise in two co-located and co-aligned detectors), the signal and noise parameters that enter the likelihood functions \(\mathscr{L}_{s,I}\) and \(\mathscr{L}_{n,I}\) are the variances \((\sigma_{b}^{2},\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2})\) and \((\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2})\), respectively. Single-sample bursts are bursts whose duration is less than the sample period \(\Delta t\). By maximizing \(\mathscr{L}_{\rm tot}\) with respect to all four parameters \((\xi,\sigma_{b}^{2},\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2})\), Drasco and Flanagan obtained a detection statistic (the maximum-likelihood statistic), which they could use to search for intermittent GWBs. Note that in the case \(\xi=1\), i.e., assuming the signal is always present, one recovers the standard continuous-Gaussian search introduced above.
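As a concrete illustration, here is a minimal Python sketch (with illustrative variable names; this is not the code used for the analyses in this paper) of the mixture-model likelihood (3) for this toy model, treating each segment as a single sample pair recorded by two co-located and co-aligned detectors.

```python
# A minimal sketch of the Drasco-Flanagan GMM likelihood for single-sample
# bursts in two co-located, co-aligned detectors; variable names are
# illustrative and the function is not taken from any existing package.
import numpy as np

def ln_gmm_likelihood(d1, d2, xi, var_b, var_n1, var_n2):
    """ln of Eq. (3): prod_I [ xi L_s,I + (1 - xi) L_n,I ].

    d1, d2 : arrays of single-sample data, one entry per segment
    xi     : duty cycle (probability that a segment contains a burst)
    var_b  : burst variance sigma_b^2
    var_n1, var_n2 : noise variances in the two detectors
    """
    # Signal model: a common Gaussian burst added to independent Gaussian
    # noise, i.e. a zero-mean bivariate Gaussian with cross-covariance var_b.
    c11, c22, c12 = var_n1 + var_b, var_n2 + var_b, var_b
    det_s = c11 * c22 - c12**2
    ln_ls = -0.5 * ((c22 * d1**2 - 2 * c12 * d1 * d2 + c11 * d2**2) / det_s
                    + np.log((2 * np.pi)**2 * det_s))

    # Noise model: independent zero-mean Gaussians in each detector.
    ln_ln = -0.5 * (d1**2 / var_n1 + d2**2 / var_n2
                    + np.log((2 * np.pi)**2 * var_n1 * var_n2))

    # Mixture over segments, summed in log space for numerical stability.
    return np.sum(np.logaddexp(np.log(xi) + ln_ls, np.log1p(-xi) + ln_ln))
```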
Although Drasco and Flanagan tested their proposed method with a test statistic within a frequentist framework, we have decided to work within a Bayesian framework in this paper. We define several concepts of importance within this framework before moving on to the discussion of the results of Drasco and Flanagan.
Given a likelihood function \(\mathscr{L}_{\rm tot}\) and priors \(\pi\), the joint posterior distribution for the duty cycle and the signal+noise parameters can be computed using Bayes' theorem:
\[p(\xi,\sigma_{b}^{2},\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2}|d)=\frac{\mathscr{ L}_{\rm tot}(d|\xi,\sigma_{b}^{2},\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2})\pi( \xi)\pi(\sigma_{b}^{2})\pi(\sigma_{n_{1}}^{2})\pi(\sigma_{n_{2}}^{2})}{ \mathcal{Z}(d)}\,, \tag{4}\]
where
\[\mathcal{Z}(d)\equiv\int\mathrm{d}\xi\int\mathrm{d}\sigma_{b}^{2}\int\mathrm{ d}\sigma_{n_{1}}^{2}\int\mathrm{d}\sigma_{n_{2}}^{2}\,\mathscr{L}_{\rm tot }(d|\xi,\sigma_{b}^{2},\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2})\pi(\xi)\pi( \sigma_{b}^{2})\pi(\sigma_{n_{1}}^{2})\pi(\sigma_{n_{2}}^{2}) \tag{5}\]
is the model evidence. Marginalized posterior distributions (for each parameter separately) are obtained by integrating the joint posterior distribution over all the other parameters, e.g.,
\[p(\xi)=\int\mathrm{d}\sigma_{b}^{2}\int\mathrm{d}\sigma_{n_{1}}^{2}\int \mathrm{d}\sigma_{n_{2}}^{2}\;p(\xi,\sigma_{b}^{2},\sigma_{n_{1}}^{2},\sigma _{n_{2}}^{2})\,. \tag{6}\]
Of course, likelihood functions, priors, etc., are all calculated in the context of a particular choice of analysis model \(\mathscr{M}_{\alpha}\) (e.g., a GMM likelihood search for intermittent GWBs or the standard continuous-Gaussian search), which we have not indicated in the above expressions. If we explicitly denote the dependence of the above distributions on the choice of analysis model, we can define the Bayes factor between models \(\mathscr{M}_{\alpha}\) and \(\mathscr{M}_{\beta}\) as
\[\mathcal{B}_{\alpha\beta}(d)\equiv\frac{\mathcal{Z}(d|\mathscr{M}_{\alpha})}{ \mathcal{Z}(d|\mathscr{M}_{\beta})}\,. \tag{7}\]
Assuming equal prior odds for the two models, the Bayes factor tells us how much more the data favors model \(\mathscr{M}_{\alpha}\) relative to \(\mathscr{M}_{\beta}\). Throughout this paper, we will make plots of the natural logarithm of the Bayes factor as a function of the duty cycle to compare the various search methods.
With these concepts in mind, we now move to the discussion of the results of the proposed GMM likelihood. Drasco and Flanagan showed that their detection statistic for intermittent GWBs performs better than the standard cross-correlation statistic for continuous-Gaussian backgrounds when the duty cycle \(\xi\) is sufficiently small. To illustrate this, we implement their proposed GMM likelihood in a Bayesian framework. Instead of using their proposed frequentist detection statistic, we use the Bayes factor as a measure of efficiency. To be able to study its behavior as a function of the duty cycle, we combine 100 data realizations for each \(\xi\) value. Each data realization consists of 40,000 segments, where a fraction of them contains single-sample bursts drawn from a Gaussian distribution with variance \(\sigma_{b}^{2}=1\).
We keep the total continuous-Gaussian signal-to-noise ratio fixed to 3, computed using (8) and (10), by adjusting the noise variances for each value of the duty cycle, rather than adjusting the signal parameters. So, as \(\xi\) decreases, the segment signal-to-noise ratios must increase, which means that the noise variances must decrease. This is illustrated in Fig. 1, where both the continuous-in-time, i.e., \(\xi=1\) in (3), and the intermittent GMM likelihood analysis methods are used. Each plotted point corresponds to the mean of the \(\ln\) Bayes factor over 100 realizations of data, while the error bars correspond to the standard deviation of the \(\ln\) Bayes factor.
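For concreteness, the sketch below (again illustrative, not the production code) generates one realization of this toy dataset for a single value of the duty cycle, with the common noise variance fixed by combining (8) and (10) so that the continuous-Gaussian total SNR stays at 3.

```python
# A minimal sketch of generating one toy-model data realization with the noise
# variance tuned so that the total SNR of Eq. (10) is fixed; xi here is one
# example value of the duty cycle, and all names are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_seg, rho_tot, var_b, xi = 40_000, 3.0, 1.0, 1e-2

# Eqs. (8) and (10): rho_seg = var_b / (sig_n1 sig_n2) = rho_tot / (xi sqrt(N_seg)).
rho_seg = rho_tot / (xi * np.sqrt(n_seg))
var_n = var_b / rho_seg          # equal noise variances in both detectors

# Bernoulli mask selecting which segments contain a burst.
has_burst = rng.random(n_seg) < xi
burst = np.where(has_burst, rng.normal(0.0, np.sqrt(var_b), n_seg), 0.0)

# The burst is common to both detectors; the noise is independent.
d1 = burst + rng.normal(0.0, np.sqrt(var_n), n_seg)
d2 = burst + rng.normal(0.0, np.sqrt(var_n), n_seg)
```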
While the continuous search performs equally well for all duty cycles (since it assumes \(\xi=1\)), the Bayes factor for the GMM likelihood increases as \(\xi\) decreases and eventually exceeds that of the continuous search, illustrating that the GMM likelihood performs better than the continuous likelihood for smaller values of \(\xi\). Equivalently, the relative performance of the Bayes factors shown in Fig. 1 can be expressed in terms of
\[\rho_{\rm seg}\equiv\frac{\sigma_{b}^{2}}{\sigma_{n_{1}}\sigma_{n_{2}}}\,, \tag{8}\]
which is the expected signal-to-noise ratio in an individual segment assuming the presence of a GW signal with burst variance \(\sigma_{b}^{2}\). In terms of \(\rho_{\rm seg}\), the condition for the GMM likelihood to perform better than the continuous likelihood is
\[\rho_{\rm seg}\sim 1\,. \tag{9}\]
In the limit where \(\rho_{\rm seg}\ll 1\), the GW signals in an individual segment are sufficiently weak that the GMM likelihood does not perform any better than the standard stochastic continuous likelihood. Conversely, when \(\rho_{\rm seg}\gg 1\), the GW signals in the individual segments are so strong that they are individually resolvable, with segment signal-to-noise ratios exceeding the threshold needed for detection with a single-detector burst statistic. In other words, a search for an intermittent GWB is the most sensitive search when the GW signals in the individual segments are marginally sub-threshold (\(\rho_{\rm seg}\sim 1\)).
Furthermore, we can determine an approximate value of \(\rho_{\rm seg}\) for the LVK detectors, for the population of stellar-mass BBH mergers throughout the Universe. As mentioned in Sec. I, it should take \(\sim 40\) months of observation using the standard continuous-Gaussian cross-correlation statistic to observe the BBH background with a total signal-to-noise ratio \(\rho_{\rm tot}=3\)[22]. Since the segment duration proposed by Smith and Thrane [19] for an intermittent search is of order \(T_{\rm seg}\sim 4\) s (see Sec. II.2 for more details), 40 months of observation corresponds to \(N_{\rm seg}\sim 2.5\times 10^{7}\) segments. The final input that we need to do the calculation is the expected duty cycle of the signal, which for stellar-mass BBH mergers throughout the Universe is \(\xi\sim 10^{-3}\). These values imply
\[\rho_{\rm seg}=\frac{\rho_{\rm tot}}{\xi\sqrt{N_{\rm seg}}}\sim 0.6\,, \tag{10}\]
which is in the regime where a search for an intermittent GWB should start to perform better than the standard continuous-Gaussian cross-correlation search. The value of \(\rho_{\rm seg}\) at which the intermittent search begins to outperform the continuous search in Fig. 1 matches this result.

Figure 1: ln Bayes factors of the signal+noise model to the noise-only model as a function of the duty cycle \(\xi\) for the intermittent search (blue) and the continuous search (orange) where the signal consists of single-sample bursts drawn from a Gaussian distribution of variance \(\sigma_{b}^{2}\).
### Deterministic-signal-based search for intermittent GWBs
In 2018, Smith and Thrane [19] extended the work of Drasco and Flanagan [1] by proposing an optimal fully-Bayesian deterministic-signal-based search for the intermittent GWB produced by the population of stellar-mass BBH mergers throughout the Universe. As in [1], Smith and Thrane [19] assume a mixture model for the intermittent GW signals. They chose a segment duration \(\sim\!4\) s, which is long enough to include a typical BBH chirp signal, yet short enough that the probability of two such signals occurring in a single segment is negligibly small (\(\sim 10^{-4}\)). However, instead of considering single-sample GW bursts drawn from a fixed Gaussian distribution, they considered finite-duration deterministic BBH chirp waveforms \(h=h_{\rm chirp}(t;\theta)\), where \(\theta\) are the chirp parameters (e.g., the component masses and spins of the two BHs, the inclination angle of the orbital plane relative to the line of sight, etc). Smith and Thrane then marginalized (instead of maximized) over the signal parameters for each segment of data, assuming prior probability distributions for these parameters, while replacing the noise parameters by measured estimates of these
quantities. If the signal priors are conditioned on segment-independent population parameters \(\theta_{\rm pop}\), which parameterize the distributions from which the individual masses, spins, etc., are drawn, then the final (marginalized) likelihood function \(\mathscr{L}_{\rm tot}\equiv\mathscr{L}_{\rm tot}(d|\xi,\theta_{\rm pop})\) depends only on the duty cycle \(\xi\) and the population parameters \(\theta_{\rm pop}\). Finally, doing Bayesian inference calculations given \(\mathscr{L}_{\rm tot}\) and a prior for \(\xi\) and \(\theta_{\rm pop}\), Smith and Thrane were able to construct joint posterior distributions for \(\xi\) and \(\theta_{\rm pop}\) as well as Bayes factors comparing the evidence for this intermittent signal model and e.g., that for the standard cross-correlation search for a continuous-Gaussian GWB.
The deterministic-signal-based search of Smith and Thrane is expected to decrease the time-to-detection of the intermittent GWB produced by stellar-mass BBH mergers by a factor of \(\sim\!1000\) relative to the standard continuous-Gaussian search [19], by taking into account both the intermittent nature of the signal as well as knowledge of the form of the individual waveforms, whose parameters are marginalized over. For this factor of \(\sim 1000\) determination, they did not consider any population parameters, so the only parameter that they needed to infer from the data was the duty cycle \(\xi\). A posterior distribution for \(\xi\) sufficiently bounded away from zero would be evidence of a confident detection of an intermittent GWB signal. The gain in time-to-detection comes at the computational cost of having to perform Bayesian marginalization over all the BBH chirp signal parameters for every 4 s segment of data. This search is currently in the testing phase, in preparation for running on real LVK data in the near future.
Within this paper, for comparative purposes, we will implement a much simpler version of this deterministic-signal-based search. We will use the acronym DSI throughout this work to refer to the deterministic-signal-based search for intermittent GWBs.
## III SSI: Stochastic search for intermittent GWBs
Building on the work of Drasco and Flanagan [1], we propose a new search based on a stochastic-signal model consisting of intermittent "bursts" of correlated stochastic GWs with unknown duty cycle \(\xi\), in otherwise uncorrelated noise in two detectors. We call this search SSI, for stochastic search for intermittent GWBs, referencing both the signal model the analysis assumes and the type of background for which it is designed. To make the connection with BBH mergers, we assume that these bursts of GWs last on the order of a few seconds, so the data are split into short stretches as in Smith and Thrane [19], and that the power spectrum in the LVK detectors goes like \(f^{-7/3}\), appropriate for binary inspiral. This corresponds to a fractional energy density spectrum \(\Omega_{\rm gw}(f)\propto f^{2/3}\), as introduced in (1).
Rather than marginalize over the parameters of deterministic BBH chirp waveforms as in the deterministic-signal-based approach, our search looks for excess cross-correlated power when the signal is assumed to be present, using a mixture-model likelihood function. Thus, we trade off optimality for computational efficiency and flexibility relative to the deterministic-signal-based approach, while still accounting for the intermittent nature of the BBH background, which is missing from the standard cross-correlation search for continuous-Gaussian GWBs.
We begin by dividing up the data into short segments such that the probability of a segment containing more than one signal is small. The total likelihood is given by a product over segments of the GMM likelihood function
\[\mathscr{L}_{\rm tot}(d|\xi,\theta_{s,{\rm pop}},\theta_{n})=\prod_{I}\left[ \xi\mathscr{L}_{s}(d_{I}|\theta_{s,{\rm pop}},\theta_{n})+(1-\xi)\mathscr{L} _{n}(d_{I}|\theta_{n})\right], \tag{11}\]
where \(\theta_{n}\) represents the noise parameters, \(\theta_{s,{\rm pop}}\) represents the signal population parameters, and \(d_{I}\) represents the data in segment \(I\).
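Schematically, and assuming per-segment likelihood routines are available, the total log-likelihood (11) can be accumulated as in the sketch below; `ln_signal_likelihood` and `ln_noise_likelihood` are placeholder callables, not functions from any existing package.

```python
# A schematic sketch of accumulating the mixture likelihood of Eq. (11) over
# segments; the two likelihood callables are placeholders for the expressions
# defined in the remainder of this section.
import numpy as np

def ln_L_tot(segments, xi, theta_pop, theta_n,
             ln_signal_likelihood, ln_noise_likelihood):
    total = 0.0
    for d_I in segments:
        ln_s = ln_signal_likelihood(d_I, theta_pop, theta_n)  # ln L_s(d_I | theta_pop, theta_n)
        ln_n = ln_noise_likelihood(d_I, theta_n)               # ln L_n(d_I | theta_n)
        # ln[ xi exp(ln_s) + (1 - xi) exp(ln_n) ], evaluated stably.
        total += np.logaddexp(np.log(xi) + ln_s, np.log1p(-xi) + ln_n)
    return total
```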
For our stochastic-signal-based search, the segment-dependent signal likelihood takes the form
\[\mathscr{L}_{s}(d_{I}|\theta_{s,{\rm pop}},\theta_{n})\equiv\int{\rm d}\theta_{s,I}\,\mathscr{L}_{s}(d_{I}|\theta_{s,I},\theta_{n})\pi(\theta_{s,I}|\theta_{s,{\rm pop}})\,, \tag{12}\]
where the segment-dependent signal parameters \(\theta_{s,I}\) are marginalized over. Marginalizing over the correct segment prior is an important and necessary step in order to recover correct and unbiased results.
We choose to write the likelihood for a specific set of parameters, \(\theta_{s,{\rm pop}}=\langle\Omega_{b}\rangle\), \(\theta_{s,I}=\Omega_{b,I}\), and \(\theta_{n}=\{\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2}\}\), where \(\langle\Omega_{b}\rangle\) is the population-averaged energy density amplitude of bursts of GW power and \(\Omega_{b,I}\) is the energy density amplitude in data segment \(I\). The population parameter \(\langle\Omega_{b}\rangle\) is related to \(\Omega_{\rm gw}\), introduced at the beginning of Sec. II, by:
\[\Omega_{\rm gw}=\xi\langle\Omega_{b}\rangle. \tag{13}\]
Recall that \(\Omega_{\rm gw}\) is what the standard cross-correlation search for a continuous-Gaussian GWB estimates. For the analyses included in this paper, we simulate stationary, white-Gaussian noise. This means that the power spectrum of the noise is independent of frequency and has the value
\[P_{n_{\mu}}=\frac{\sigma_{n_{\mu}}^{2}}{f_{\rm high}-f_{\rm low}} \tag{14}\]
where \(\mu=1,2\) is the detector index and \(f_{\rm low}\) and \(f_{\rm high}\) are the low- and high-frequency cutoffs for our search. We will take \(f_{\rm high}\) to equal the Nyquist critical frequency \(f_{\rm nyq}\equiv 1/(2\Delta t)\), where \(\Delta t\) is the sample period. Each segment of time-domain data of duration \(T\) is Fourier transformed and coarse-grained to frequencies \(f_{k}\) having frequency resolution \(M/T\). We then take our noise parameters to be the variance of the noise in each detector. Under these assumptions, the segment-dependent signal likelihood (12) becomes
\[\mathscr{L}_{s}(d_{I}|\langle\Omega_{b}\rangle,\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2}) =\int d\Omega_{b,I}\pi(\Omega_{b,I}|\langle\Omega_{b}\rangle)\prod _{k}\frac{1}{(\pi T/2)^{2M}(P_{1,I}(f_{k})P_{2,I}(f_{k})-P_{b,I}^{2}(f_{k}))^{ M}}\] \[\times\exp\left\{-\frac{M}{(P_{1,I}(f_{k})P_{2,I}(f_{k})-P_{b,I}^ {2}(f_{k}))}\left[\hat{P}_{1,Ik}P_{2,I}(f_{k})+\hat{P}_{2,Ik}P_{1,I}(f_{k})-2 \hat{P}_{b,Ik}P_{b,I}(f_{k})\right]\right\}\,, \tag{15}\]
where
\[P_{1,I}(f)\equiv\frac{\sigma_{n_{1}}^{2}}{f_{\rm high}-f_{\rm low}}+P_{b,I}(f )\,,\qquad P_{2,I}(f)\equiv\frac{\sigma_{n_{2}}^{2}}{f_{\rm high}-f_{\rm low} }+P_{b,I}(f)\,,\qquad P_{b,I}(f)\equiv\Omega_{b,I}H(f), \tag{16}\]
are the total auto-correlated power spectra in each detector and the power spectrum for a GW burst in segment \(I\), and \(k\) runs over the coarse-grained frequencies \(f_{k}\). The spectral shape \(H(f)\) is of the form
\[H(f)\equiv\frac{3H_{0}^{2}}{10\pi^{2}}\frac{1}{f_{\rm ref}^{3}}\left(\frac{f}{ f_{\rm ref}}\right)^{-7/3}\,. \tag{17}\]
The Fourier transformed data enter the evidence via the following quadratic combinations
\[\hat{P}_{1,Ik} \equiv\frac{2}{T}\,\frac{1}{M}\sum_{k^{\prime}=k-M/2T}^{k+M/2T-1} \left|\tilde{d}_{1,Ik^{\prime}}\right|^{2},\] \[\hat{P}_{2,Ik} \equiv\frac{2}{T}\,\frac{1}{M}\sum_{k^{\prime}=k-M/2T}^{k+M/2T-1} \left|\tilde{d}_{2,Ik^{\prime}}\right|^{2}, \tag{18}\] \[\hat{P}_{b,Ik} \equiv\frac{2}{T}\,\frac{1}{M}\sum_{k^{\prime}=k-M/2T}^{k+M/2T-1} \mathrm{Re}\left(\tilde{d}_{1,Ik^{\prime}}^{*}\tilde{d}_{2,Ik^{\prime}}\right)\,,\]
which are coarse-grained estimators (i.e., averaged over fine-grained frequencies labeled by \(k^{\prime}\)) of the total auto-correlated and cross-correlated power spectra in the two detectors.
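A minimal sketch of how these coarse-grained estimators can be formed from one segment of time-domain data is given below; the discrete-Fourier-transform normalization \(\tilde{d}=\Delta t\,\mathrm{FFT}(d)\) and one-sided PSD convention are assumptions of this sketch rather than a statement of the paper's code.

```python
# A minimal sketch of the coarse-grained auto- and cross-power estimators of
# Eq. (18) for one segment of duration T = N * dt in two detectors.
import numpy as np

def coarse_grained_power(d1, d2, dt, M=16):
    """Return (P1_hat, P2_hat, Pb_hat, f_coarse) for one data segment."""
    N = len(d1)
    T = N * dt

    # Discrete Fourier transforms, normalized so that (2/T)|d_tilde|^2
    # approximates a one-sided power spectral density.
    d1_t = dt * np.fft.rfft(d1)
    d2_t = dt * np.fft.rfft(d2)
    f = np.fft.rfftfreq(N, dt)

    # Fine-grained estimates at frequency resolution 1/T.
    p1 = (2.0 / T) * np.abs(d1_t) ** 2
    p2 = (2.0 / T) * np.abs(d2_t) ** 2
    pb = (2.0 / T) * np.real(np.conj(d1_t) * d2_t)

    # Average M adjacent fine-grained bins -> coarse resolution M/T.
    n_coarse = len(f) // M

    def coarse(x):
        return x[: n_coarse * M].reshape(n_coarse, M).mean(axis=1)

    return coarse(p1), coarse(p2), coarse(pb), coarse(f)
```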
The segment-dependent noise likelihood can similarly be written as
\[\mathscr{L}_{n}(d_{I}|\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2})=\prod_{k}\frac{1 }{(\pi T/2)^{2M}\left(P_{n_{1}}(f_{k})P_{n_{2}}(f_{k})\right)^{M}}\exp\left\{ -M\left[\frac{\hat{P}_{1,Ik}}{P_{n_{1}}}+\frac{\hat{P}_{2,Ik}}{P_{n_{2}}} \right]\right\}\,. \tag{19}\]
In principle, the noise parameters \(\theta_{n}=\{\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2}\}\) in the likelihood functions above should be inferred together with the signal population parameters \(\theta_{s,\rm pop}=\langle\Omega_{b}\rangle\), as part of the Bayesian inference procedure. Doing so defines the so-called _full_ version of the analyses. However, as LVK noise is stationary to good approximation, it is typically sufficient to use measured estimates of the noise parameters (denoted by \(\bar{\sigma}_{n_{1}}^{2}\) and \(\bar{\sigma}_{n_{2}}^{2}\), and computed directly from the data as described in Appendix A) in the likelihood function, thereby avoiding having to infer them in this analysis. We refer to this approach as the _reduced_ form of the analyses, which is computationally cheaper than the full form. The reduced version of the likelihood requires that the cross-correlation estimators be approximately Gaussian, which holds only if the number of samples per segment \(N\) is sufficiently large.
The reduced segment-dependent signal likelihood is given by [23]:
\[\mathscr{L}_{s}(d_{I}|\langle\Omega_{b}\rangle,\bar{\sigma}_{n_{1}}^{2},\bar{\sigma}_{n_{2}}^{2})=\int d\Omega_{b,I}\,\pi(\Omega_{b,I}|\langle\Omega_{b}\rangle)\frac{1}{\sqrt{2\pi\operatorname{var}(\hat{\Omega}_{b,I})}}\exp\left[-\frac{(\hat{\Omega}_{b,I}-\Omega_{b,I})^{2}}{2\operatorname{var}(\hat{\Omega}_{b,I})}\right]\,, \tag{20}\]
where
\[\hat{\Omega}_{b,I}\equiv\frac{\sum_{k}Q_{I}(f_{k})\hat{P}_{b,Ik}}{\sum_{k^{\prime}}Q_{I}(f_{k^{\prime}})H(f_{k^{\prime}})}\,,\qquad\operatorname{var}(\hat{\Omega}_{b,I})\equiv\left(2M\sum_{k}Q_{I}(f_{k})H(f_{k})\right)^{-1} \tag{21}\]
are the optimally-filtered cross-correlation estimators and corresponding variances, which are constructed from coarse-grained estimates of the cross-correlated power \(\hat{P}_{b,Ik}\) (given by (18)) and the segment-dependent optimal filter function
\[Q_{I}(f)\equiv\frac{H(f)}{\bar{P}_{1,I}(f)\bar{P}_{2,I}(f)}\,, \tag{22}\]
where
\[\bar{P}_{1,I}(f)\equiv\frac{\bar{\sigma}_{n_{1}}^{2}}{f_{\rm high}-f_{\rm low} }+\Omega_{b,I}H(f)\,,\qquad\bar{P}_{2,I}(f)\equiv\frac{\bar{\sigma}_{n_{2}}^{2 }}{f_{\rm high}-f_{\rm low}}+\Omega_{b,I}H(f)\,. \tag{23}\]
Note that \(Q_{I}(f)\) is a generalization of the standard optimal filter for an \(f^{-7/3}\) power spectrum (see e.g., [2; 24]), extended to include the segment-dependent burst contribution, i.e., dependent on the likelihood parameter \(\Omega_{b,I}\), to the total auto-correlated power estimates \(\bar{P}_{1,I}(f)\), \(\bar{P}_{2,I}(f)\).
The reduced segment-dependent noise likelihood \(\mathscr{L}_{n}(d_{I}|\bar{\sigma}_{n_{1}}^{2},\bar{\sigma}_{n_{2}}^{2})\) is given by
\[\mathscr{L}_{n}(d_{I}|\bar{\sigma}_{n_{1}}^{2},\bar{\sigma}_{n_{2}}^{2})=\frac{1}{\sqrt{2\pi\,{\rm var}(\hat{\Omega}_{b})}}\exp\left[-\frac{(\hat{\Omega}_{b,I})^{2}}{2\,{\rm var}(\hat{\Omega}_{b})}\right]\,, \tag{24}\]
where \(\hat{\Omega}_{b,I}\) and \({\rm var}(\hat{\Omega}_{b})\) are the same as for the segment-dependent signal likelihood, but with a segment-_independent_, noise-only optimal filter function
\[Q(f)\equiv\frac{H(f)}{\bar{P}_{n_{1}}\bar{P}_{n_{2}}}\,. \tag{25}\]
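Putting (17) and (21)-(25) together, the sketch below evaluates the optimally-filtered estimator and its variance for one segment from coarse-grained cross-power estimates; the value of \(H_{0}\), the frequency band, and the input format are illustrative assumptions, and setting `Omega_b = 0` reproduces the noise-only filter (25).

```python
# A minimal sketch of the optimally-filtered estimator and variance of
# Eq. (21), built from the spectral shape of Eq. (17) and the filters of
# Eqs. (22)-(25).  H0 is an illustrative value in 1/s.
import numpy as np

H0 = 2.2e-18        # Hubble constant in 1/s (illustrative)
f_ref = 25.0        # reference frequency in Hz

def spectral_shape(f):
    # Eq. (17): H(f) = (3 H0^2 / 10 pi^2) f_ref^-3 (f / f_ref)^(-7/3)
    return 3.0 * H0**2 / (10.0 * np.pi**2) / f_ref**3 * (f / f_ref) ** (-7.0 / 3.0)

def omega_hat_and_var(Pb_hat, f_coarse, var_n1, var_n2,
                      f_low=20.0, f_high=256.0, M=16, Omega_b=0.0):
    band = (f_coarse >= f_low) & (f_coarse <= f_high)
    f = f_coarse[band]
    H = spectral_shape(f)
    P1 = var_n1 / (f_high - f_low) + Omega_b * H      # Eq. (23)
    P2 = var_n2 / (f_high - f_low) + Omega_b * H
    Q = H / (P1 * P2)                                 # Eq. (22); Eq. (25) if Omega_b = 0
    omega_hat = np.sum(Q * Pb_hat[band]) / np.sum(Q * H)   # Eq. (21)
    var = 1.0 / (2.0 * M * np.sum(Q * H))                  # Eq. (21)
    return omega_hat, var
```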
## IV Analyses
In this section, we describe in detail a set of analyses, which we use to illustrate various aspects of the search methods described above. The tests that these analyses allow us to perform should be thought of as providing a "proof-of-principle" demonstration of our proposed stochastic-signal-based search for intermittent GWBs. A more rigorous test of this search on actual LVK noise and realistic injected BBH chirp signals is a topic for future investigation (see Sec. V for more details).
For all the analyses we consider, we assume white, stationary-Gaussian noise in two co-located and co-aligned detectors with variances \(\sigma_{n_{1}}^{2}\) and \(\sigma_{n_{2}}^{2}\), respectively. The assumption of co-located and co-aligned detectors means that we can ignore the so-called overlap reduction function [25; 26], which encodes the reduction in cross-correlated power that comes from correlating two physically separated and possibly misaligned detectors. To calculate the total SNR for each set of data, we use the average SNR per segment computed using formulas specified below for each data set and rearrange (10) to solve for \(\rho_{\rm tot}\). We note that this \(\rho_{\rm tot}\) is the total SNR of the continuous-in-time cross-correlation search, which assumes the signal exists in every segment of data. For our intermittent analyses, we use this definition of total SNR to quantify the strength of the GW signal.
### Extension of previous work
In Section II.1, the results of Drasco and Flanagan [1] are reproduced within a Bayesian framework (see Fig. 1). We remind the reader that the signals considered there are single-sample GW "bursts" drawn from a fixed Gaussian distribution with variance \(\sigma_{b}^{2}\). We now proceed to generalize the proposed GMM likelihood to allow for more realistic signals.
As a first step, we now allow multi-sample (\(N\gg 1\)) bursts of white stochastic GWs having duty cycle \(\xi\), with signal samples drawn from a probability distribution that depends on the distance \(r\) to an individual source. For a source at arbitrary reference distance \(r_{\rm ref}\), we draw the signal samples from a Gaussian distribution with fixed variance \(\sigma_{\rm ref}^{2}\). For a source at a general distance \(r\), we first draw the signal samples from a Gaussian distribution with variance \(\sigma_{\rm ref}^{2}\) as explained above, and then rescale the samples by a factor of \(r_{\rm ref}/r\), since GW signal amplitudes fall off as \(1/r\)[27]. Thus,
\[\sigma_{b}^{2}(r)\equiv\sigma_{\rm ref}^{2}\,\frac{r_{\rm ref}^{2}}{r^{2}} \tag{26}\]
is the burst variance for a source at distance \(r\).
For the population model, we will assume that the source distances are drawn from a _uniform-in-volume_ probability distribution
\[p(r|r_{\rm max})\equiv\frac{3r^{2}}{r_{\rm max}^{3}-r_{\rm min}^{3}}\,, \tag{27}\]
where \(r_{\rm max}\) is the maximum distance out to which the sources are formed (i.e., an unknown population parameter that will eventually be inferred from the data). The parameter \(r_{\rm min}\) is taken to be a fixed, known parameter, for simplicity. Note that choosing \(r_{\rm min}\neq 0\) in the simulation process limits the number of GW bursts that are so loud that they are individually detectable in a single segment of data. We also note that this choice of population model is a simplification as it does not take into account cosmology.
It follows from (26) and (27) that
\[p(\sigma_{b}^{2}(r)|r_{\rm max})=\frac{3r_{\rm ref}^{3}}{2(r_{\rm max}^{3}-r_{ \rm min}^{3})}(\sigma_{\rm ref}^{2})^{3/2}(\sigma_{b}^{2}(r))^{-5/2} \tag{28}\]
is the probability distribution for the signal variance \(\sigma_{b}^{2}(r)\) associated with a source at distance \(r\). We also define the population-averaged burst variance:
\[\langle\sigma_{b}^{2}\rangle\equiv\int_{r_{\rm min}}^{r_{\rm max}}{\rm d}r\,p (r|r_{\rm max})\sigma_{b}^{2}(r)=3\sigma_{\rm ref}^{2}\frac{r_{\rm ref}^{2}(r _{\rm max}-r_{\rm min})}{r_{\rm max}^{3}-r_{\rm min}^{3}}\,, \tag{29}\]
which is obtained by averaging \(\sigma_{b}^{2}(r)\) over the uniform-in-volume-distributed source distances \(r\). We define \(\sigma_{\rm gw}^{2}\equiv\xi\langle\sigma_{b}^{2}\rangle\), which has the interpretation of being the time and population-averaged variance of the signals. This quantity is what the standard cross-correlation search for a continuous-Gaussian GWB (SSC) estimates.
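A short numerical sketch of this population model: distances are drawn uniform in volume by inverse-CDF sampling of (27), the burst variances follow (26), and the Monte-Carlo average can be checked against the closed form (29). The parameter values are those of the 'Extension of previous work' row of Table 1.

```python
# A minimal sketch of drawing source distances uniform in volume and checking
# the population-averaged burst variance against Eq. (29).
import numpy as np

rng = np.random.default_rng(0)
r_min, r_max, r_ref, var_ref = 2.0, 5.0, 1.0, 1.0

# Inverse-CDF sampling of p(r) = 3 r^2 / (r_max^3 - r_min^3), Eq. (27).
u = rng.random(1_000_000)
r = (r_min**3 + u * (r_max**3 - r_min**3)) ** (1.0 / 3.0)

var_b = var_ref * r_ref**2 / r**2                    # Eq. (26)

mean_mc = var_b.mean()
mean_analytic = 3.0 * var_ref * r_ref**2 * (r_max - r_min) / (r_max**3 - r_min**3)
print(mean_mc, mean_analytic)                        # both ~0.0769, cf. Table 1
```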
Since the probability distribution for \(\sigma_{b}^{2}(r)\) depends on just one free parameter, i.e., \(r_{\rm max}\) in (28), we can equally well use the population-averaged variance \(\langle\sigma_{b}^{2}\rangle\) as the population parameter for the probability distribution. Solving (29) for \(r_{\rm max}\) in terms of \(\langle\sigma_{b}^{2}\rangle\), we find
\[\begin{array}{c}r_{\rm max}=r_{\rm min}\left(\sqrt{-\frac{3}{4}+3\frac{ \sigma_{b,\rm max}^{2}}{\langle\sigma_{b}^{2}\rangle}}-\frac{1}{2}\right)\,, \\ \sigma_{b,\rm max}^{2}\equiv\sigma_{b}^{2}(r_{\rm min})=\sigma_{\rm ref}^{2} \frac{r_{\rm ref}^{2}}{r_{\rm min}^{2}}\,,\end{array} \tag{30}\]
leading to
\[p(\sigma_{b}^{2}(r)|\langle\sigma_{b}^{2}\rangle)=\frac{\langle\sigma_{b}^{2} \rangle(\sigma_{b,\rm max}^{2})^{1/2}}{\sqrt{-3+12\sigma_{b,\rm max}^{2}/ \langle\sigma_{b}^{2}\rangle}-3}\,(\sigma_{b}^{2}(r))^{-5/2}\,. \tag{31}\]
The above expression is somewhat messy, but it will be useful when we perform Bayesian inference on \(\langle\sigma_{b}^{2}\rangle\). Building on the above, we define the average segment SNR of the distribution in a similar manner as (29),
\[\langle\rho_{\rm seg}\rangle=\int_{r_{\rm min}}^{r_{\rm max}}{\rm d}r\,p(r|r_ {\rm max})\rho_{\rm seg}(r) \tag{32}\]
where \(\rho_{\rm seg}(r)\) for these signals is given by (8) with \(\sigma_{b}^{2}\) replaced by \(\sigma_{b}^{2}(r)\).
We generate multi-sample (\(N=2048\)) bursts of white stochastic GWs having duty cycle \(\xi=2.98\times 10^{-3}\), with signal samples drawn from a probability distribution that depends on the distance \(r\) to an individual source, as described above. With the chosen parameters (listed explicitly in Table 1) the population-averaged variance is \(\langle\sigma_{b}^{2}\rangle=0.0769\) and the noise variances are \(\sigma_{n_{1}}^{2}=\sigma_{n_{2}}^{2}=0.691\). An example of the simulated data is shown in Fig. 2, together with the distribution of the burst variances \(\sigma_{b}^{2}(r)\).
We analyze the data with SSC and SSI, using the full version of the likelihoods, i.e., inferring the noise parameters as well as the population parameters. We will not consider DSI for this particular data. The concrete expressions for the likelihoods can be found in Appendix A.1. In Fig. 3, we display the recovery of our SSI search, illustrating that the generalizations made in this section still allow for a successful recovery of the population and noise parameters.
We note that given the large number of samples per segment (\(N=2048\)) used for this analysis, one could have resorted to the reduced version of the likelihoods, where the estimates of the noise parameters are used (as provided in Appendix A.1.4). We refrain from entering into a detailed comparison between full and reduced implementations of the likelihoods, as this was the topic of work by Matas and Romano [23]. Throughout the remainder of the paper, we will work with a large number of samples per segment and will employ the reduced version of the likelihoods.
### Stochastic bursts
We extend the analysis described in the previous section to include frequency dependence. We analyze data defined by multi-sample (\(N\gg 1\)) bursts of stochastic GWs having duty cycle \(\xi\) and an \(f^{-7/3}\) power spectrum, for a uniform-in-volume distribution of source distances from \(r_{\rm min}\) to \(r_{\rm max}\), as in Section IV.1. The choice of spectral index \(-7/3\) is appropriate for compact binary inspiral. We first simulate data for a source at reference distance \(r_{\rm ref}\) so that it has the power spectrum2
Footnote 2: In practice, we first simulate the data in the frequency domain with an amplitude spectral density \(\sqrt{P_{\rm ref}(f)}\) and random phases, and then inverse-Fourier-transform the data back to the time domain.
\[P_{\rm ref}(f)=A_{\rm ref}\left(\frac{f}{f_{\rm ref}}\right)^{-7/3}\,, \tag{33}\]
where \(A_{\rm ref}\) is some fixed amplitude, and \(f_{\rm ref}\) is a reference frequency, usually taken to be 25 Hz in line with LVK searches.
Table 1: Parameters used for the different analyses in Sec. IV. Parameters listed in ‘Extension of previous work’ and ‘Stochastic bursts’ were used in the production of Fig. 3 and Fig. 4, respectively. The first 9 columns in ‘Deterministic chirps’ were used in the production of Fig. 6, while the last 5 columns specified the additional parameters used for Fig. 7.

Extension of previous work:

| \(N_{\rm seg}\) | \(N\) | \(\xi\) | \(r_{\rm min}\) | \(r_{\rm max}\) | \(r_{\rm ref}\) | \(\sigma_{\rm ref}^{2}\) | \(\langle\sigma_{b}^{2}\rangle\) | \(\sigma_{n}^{2}\) | \(\langle\rho_{\rm seg}\rangle\) | \(\rho_{\rm tot}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(4\times 10^{4}\) | 2048 | \(2.98\times 10^{-3}\) | 2 | 5 | 1 | 1 | 0.0769 | 0.691 | 5.04 | 3 |

Stochastic bursts:

| \(N_{\rm seg}\) | \(N\) | \(T\) | \(\xi\) | \(r_{\rm min}\) | \(r_{\rm max}\) | \(r_{\rm ref}\) | \(\Omega_{\rm ref}\) | \(\langle\Omega_{b}\rangle\) | \(f_{\rm low}\) | \(f_{\rm high}\) | \(\langle\rho_{\rm seg,stoch}\rangle\) | \(\rho_{\rm tot,stoch}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(4\times 10^{4}\) | 2048 | 4 s | \(2.98\times 10^{-3}\) | 2 Mpc | 5 Mpc | 2 Mpc | 2.61 | 0.803 | 20 Hz | 256 Hz | 5.04 | 3 |

Deterministic chirps:

| \(N_{\rm seg}\) | \(N\) | \(T\) | \(r_{\rm min}\) | \(r_{\rm max}\) | \(f_{\rm low}\) | \(f_{\rm high}\) | \(m\) | \(\langle\Omega_{b}\rangle\) | \(\xi\) | \(\langle\rho_{\rm seg,stoch}\rangle\) | \(\rho_{\rm tot,stoch}\) | \(\langle\rho_{\rm seg,det}\rangle\) | \(\rho_{\rm tot,det}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(4\times 10^{4}\) | 2048 | 4 s | 2 Mpc | 5 Mpc | 20 Hz | 256 Hz | \(30\,M_{\odot}\) | 0.803 | \(2.98\times 10^{-3}\) | 5.04 | 3 | 13.2 | 7.86 |
Figure 2: **Left:** Example of simulated data with amplitudes drawn from a uniform-in-volume distribution. The parameters used for this injection are given in the ‘Extension of previous work’ section of Table 1. **Right:** Distribution of the burst variances drawn from a uniform-in-volume distribution, with theoretical minimum and maximum burst variance evaluated at \(r_{\rm max}\) and \(r_{\rm min}\), respectively, and average burst variance \(\langle\sigma_{b}^{2}\rangle\) computed according to (29).
For a source at a general distance \(r\), we do the same as above and then rescale the amplitude of the simulated signal by a factor of \(r_{\rm ref}/r\), which is equivalent to having
\[A_{b}(r)\equiv A_{\rm ref}\,\frac{r_{\rm ref}^{2}}{r^{2}} \tag{34}\]
as the amplitude of the power spectral density for a GW burst at source distance \(r\). The power spectrum of a burst is therefore
\[P_{b}(r;f)=A_{\rm ref}\frac{r_{\rm ref}^{2}}{r^{2}}\left(\frac{f}{f_{\rm ref}} \right)^{-7/3}\,. \tag{35}\]
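A minimal sketch of the frequency-domain simulation described in footnote 2 is given below: complex Fourier amplitudes are drawn with one-sided PSD \(P_{b}(r;f)\) of (35) and random phases and then inverse-Fourier-transformed to the time domain. The one-sided-PSD and \(\tilde{d}=\Delta t\,\mathrm{FFT}(d)\) conventions are assumptions of this sketch.

```python
# A minimal sketch of simulating a single f^(-7/3) burst in the frequency
# domain, following the procedure of footnote 2; conventions are assumed.
import numpy as np

def simulate_burst(N, dt, A_ref, r, r_ref, f_ref=25.0, f_low=20.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    T = N * dt
    f = np.fft.rfftfreq(N, dt)

    # Eq. (35): P_b(r; f) = A_ref (r_ref / r)^2 (f / f_ref)^(-7/3), set to zero
    # below the low-frequency cutoff.
    psd = np.zeros_like(f)
    band = f >= f_low
    psd[band] = A_ref * (r_ref / r) ** 2 * (f[band] / f_ref) ** (-7.0 / 3.0)

    # Draw complex amplitudes with <|d_tilde|^2> = (T/2) P_b(f), random phases.
    sigma = np.sqrt(T * psd / 4.0)
    d_tilde = sigma * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))

    # Back to the time domain (undoing the d_tilde = dt * FFT convention).
    return np.fft.irfft(d_tilde, n=N) / dt
```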
Note that by using (2), we can also write the above expression in terms of the fractional energy density spectrum \(\Omega_{b}(r;f)\). Then by taking \(f=f_{\rm ref}\), we can define the amplitude of the energy density at reference frequency \(f_{\rm ref}\) of a burst at distance \(r\)
\[\Omega_{b}(r)\equiv\frac{10\pi^{2}}{3H_{0}^{2}}f_{\rm ref}^{3}P_{b}(r;f_{\rm ref })=\Omega_{\rm ref}\frac{r_{\rm ref}^{2}}{r^{2}},\qquad\Omega_{\rm ref}\equiv \frac{10\pi^{2}}{3H_{0}^{2}}f_{\rm ref}^{3}A_{\rm ref}. \tag{36}\]
Figure 3: Corner plot for the full version of the SSI analysis, combining the posteriors of 100 realizations of the data. The black lines show the injected values of the parameters used for the simulated data, and the three shaded regions for the 2-d joint posteriors correspond to \(1\sigma\), \(2\sigma\), and \(3\sigma\) uncertainty levels. All parameters are recovered within a \(1\sigma\) credible interval.
By following the same derivation given in (29), the population-averaged energy density amplitude for sources distributed uniformly-in-volume between \(r_{\rm min}\) and \(r_{\rm max}\) is
\[\langle\Omega_{b}\rangle=3\Omega_{\rm ref}\frac{r_{\rm ref}^{2}(r_{\rm max}-r_{ \rm min})}{r_{\rm max}^{3}-r_{\rm min}^{3}}\,. \tag{37}\]
The probability distribution of the amplitude of the energy density of the bursts \(\Omega_{b}(r)\) has the same form as (31)
\[p(\Omega_{b}(r)|\langle\Omega_{b}\rangle)=\frac{\langle\Omega_{b}\rangle \Omega_{b,{\rm max}}^{1/2}}{\sqrt{-3+12\Omega_{b,{\rm max}}/\langle\Omega_{b} \rangle}-3}\,\Omega_{b}^{-5/2}(r)\,,\qquad\Omega_{\rm b,max}\equiv\Omega_{b}( r_{\rm min})\,. \tag{38}\]
Thus, the signal segment likelihood used for SSI is given by (15) (full) and (20) (reduced) with prior given by (38) (i.e., \(\pi(\Omega_{b,I}|\langle\Omega_{b}\rangle)=p(\Omega_{b}(r_{I})|\langle\Omega_ {b}\rangle)\)). The integration bounds are then \(\Omega_{\rm b,min}(\langle\Omega_{b}\rangle)\) and \(\Omega_{\rm b,max}\) where \(\Omega_{\rm b,min}=\Omega_{b}(r_{\rm max})\) and \(r_{\rm max}\) is written in terms of the population parameter \(\langle\Omega_{b}\rangle\), in the same manner as (30).
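A schematic sketch of this reduced segment signal likelihood is shown below: the Gaussian of (20) is marginalized over \(\Omega_{b,I}\) by simple quadrature against the prior (38) between \(\Omega_{b,\rm min}(\langle\Omega_{b}\rangle)\) and \(\Omega_{b,\rm max}\). The callable `var_of_omega` is a placeholder for the variance of (21) evaluated with the filter (22); it may simply return a constant if the noise-only filter (25) is used.

```python
# A schematic sketch of the reduced segment signal likelihood of Eq. (20) with
# the uniform-in-volume prior of Eq. (38); `var_of_omega` is a placeholder.
import numpy as np

def reduced_signal_likelihood(omega_hat, mean_omega_b, omega_b_max,
                              var_of_omega, n_grid=512):
    # Support of the prior, via the analogue of Eq. (30): ratio = r_max / r_min.
    ratio = np.sqrt(-0.75 + 3.0 * omega_b_max / mean_omega_b) - 0.5
    omega_b_min = omega_b_max / ratio**2

    # Eq. (38): normalized prior proportional to Omega_b^(-5/2).
    norm = (mean_omega_b * np.sqrt(omega_b_max)
            / (np.sqrt(-3.0 + 12.0 * omega_b_max / mean_omega_b) - 3.0))
    omega_b = np.linspace(omega_b_min, omega_b_max, n_grid)
    prior = norm * omega_b ** (-2.5)

    # Gaussian likelihood of Eq. (20), marginalized by a simple Riemann sum.
    var = var_of_omega(omega_b)
    gauss = (np.exp(-0.5 * (omega_hat - omega_b) ** 2 / var)
             / np.sqrt(2.0 * np.pi * var))
    return np.sum(prior * gauss) * (omega_b[1] - omega_b[0])
```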
For reference, we note that the expected value of the stochastic (optimally-filtered) signal-to-noise ratio for a segment that contains a GWB burst is
\[\rho_{\rm seg,stoch}=\sqrt{2T}\left[\int_{f_{\rm low}}^{f_{\rm high}}{\rm d}f \ \frac{P_{b}^{2}(f)}{P_{n_{1}}P_{n_{2}}}\right]^{1/2}\,, \tag{39}\]
where \(P_{n_{1}}\) and \(P_{n_{2}}\) are the power spectra of the noise in each detector. Note that, if the two detectors were not co-located and co-aligned, we would need to include a factor of the overlap reduction function squared in the numerator of the integrand in (39). The above expression for \(\rho_{\rm seg,stoch}\) is a _power_ signal-to-noise ratio, defined as the expected value of the optimally-filtered cross-correlation statistic divided by its standard deviation; see, e.g., [24].
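For the white noise PSDs of (14), the expected per-segment power SNR (39) reduces to a one-dimensional integral that can be evaluated in a few lines, as in the sketch below; the default band and reference frequency follow Table 1 but are otherwise illustrative.

```python
# A minimal sketch evaluating the expected per-segment power SNR of Eq. (39)
# for an f^(-7/3) burst PSD and white noise PSDs as in Eq. (14).
import numpy as np

def rho_seg_stoch(A_b, T, var_n1, var_n2,
                  f_low=20.0, f_high=256.0, f_ref=25.0, n=4096):
    f = np.linspace(f_low, f_high, n)
    P_b = A_b * (f / f_ref) ** (-7.0 / 3.0)       # burst PSD, cf. Eq. (35)
    P_n1 = var_n1 / (f_high - f_low)              # white noise PSD, Eq. (14)
    P_n2 = var_n2 / (f_high - f_low)
    integral = np.sum(P_b**2 / (P_n1 * P_n2)) * (f[1] - f[0])
    return np.sqrt(2.0 * T) * np.sqrt(integral)
```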
As mentioned before, our stochastic-signal-based search looks for a GWB consistent with a power spectrum of spectral index \(-7/3\), as expected for BBH mergers. In contrast, the deterministic-signal-based search described in Sec. II.2 (which we call DSI) looks for deterministic BBH chirp waveforms, where the signal parameters of the individual chirps must be marginalized over. We inject intermittent, stochastic bursts with an \(f^{-7/3}\) power spectrum and duty cycle \(\xi=2.98\times 10^{-3}\). The parameters used for the injection are displayed in Table 1. We arbitrarily choose the reference distance \(r_{\rm ref}=r_{\rm min}\). The value of \(\Omega_{\rm ref}\) is chosen to be \(2.61\) (to be consistent with the parameters chosen in Sec. IV.3). With these parameters, the population-averaged energy density amplitude of the bursts is \(\langle\Omega_{b}\rangle=0.803\). The noise is then set such that the average SNR per segment, as computed with (39), is \(5.04\), to give a total SNR of \(3\), as obtained by using (10).
We analyze our data with the reduced forms (which use measured estimates of the noise parameters) of our stochastic-signal-based search (SSI) and of the deterministic-signal-based search (DSI). The exact forms of the likelihoods are given in Sec. III (with coarse-graining factor \(M=16\)) and in A.2.d, respectively. The population parameter recovered by SSI is \(\langle\Omega_{b}\rangle\), while the population parameter recovered by DSI is \(r_{\rm max}\); these are related by (37). In Fig. 4, we demonstrate that DSI cannot recover the signal in the data, since no chirp waveform exists. While this result is in a sense obvious, it highlights the challenges that a deterministic-signal-based search faces: incorrectly modeling the waveforms of the chirps could lead the search to overlook a signal which is present. Conversely, SSI recovers both stochastic bursts of GW power as well as deterministic waveforms, as we will see in the next section.

Figure 4: For intermittent, stochastic bursts with an \(f^{-7/3}\) power spectrum, we demonstrate recovery of our search (left) and compare it to that of a deterministic-signal-based search (right). Our search recovers the injected signal parameters within a \(1\sigma\) credible interval, while DSI recovers the uniform prior on \(r_{\mathrm{max}}\) and the lower boundary of the prior imposed on the duty cycle (\(\xi=10^{-4}\)). Thus, the DSI analysis finds no signal in the data.
### Deterministic chirps
Finally, we consider multi-sample bursts of GWs produced by deterministic BBH chirp signals, for a uniform-in-volume distribution of sources (27). The corresponding power spectrum will necessarily have an approximate \(f^{-7/3}\) frequency dependence. By using deterministic BBH chirp signals, this analysis is more in line with the assumptions made by the deterministic-signal-based search DSI.
We assume that all parameters defining the chirp waveforms except for the distances to the sources (e.g., the chirp mass \(\mathscr{M}_{c}\equiv(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}\), the inclination angle \(\iota\), the coalescence time \(t_{\rm col}\), and the phase of coalescence within a segment) have fixed values and are known a priori by the DSI search. For simplicity, we choose the two component masses to be equal (i.e., \(m_{1}=m_{2}\equiv m\)); the inclination angle \(\iota=\pi/2\) so that the source is linearly polarized (i.e., \(h(t)=h_{+}(t)\), \(h_{\times}(t)=0\)); the phase at coalescence \(\Phi_{0}\) to be zero; and the coalescence time \(t_{\rm col}\) to occur at the end of a segment, so \(t_{\rm col}=T\), the segment duration. For a source drawn from the population with distance \(r\), the explicit form for the simulated deterministic chirp signal is given in the time domain by [27]
\[h_{\rm chirp}(t;r)=\frac{1}{2r}\left(\frac{G\mathscr{M}_{c}}{c^{2}}\right)^{5/ 4}\left(\frac{5}{c\tau}\right)^{1/4}\cos\left[\Phi(\tau)\right]\,,\qquad\tau \equiv t_{\rm col}-t\,, \tag{40}\]
where
\[\Phi(\tau)\equiv-2\left(\frac{5G\mathscr{M}_{c}}{c^{3}}\right)^{-5/8}\tau^{5/8}+ \Phi_{0} \tag{41}\]
encodes the frequency evolution of the chirp,
\[f(t)\equiv-\frac{1}{2\pi}\frac{\mathrm{d}}{\mathrm{d}\tau}\Phi(\tau)=\frac{1}{ \pi}\left(\frac{G\mathscr{M}_{c}}{c^{3}}\right)^{-5/8}\left(\frac{5}{256}\frac {1}{\tau}\right)^{3/8}\,. \tag{42}\]
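A minimal sketch of the simulated chirp of (40)-(42) for the equal-mass, \(\iota=\pi/2\), \(\Phi_{0}=0\) configuration described above, with coalescence at the end of the segment, is given below; the SI constants and the Mpc conversion are inputs of this sketch rather than values quoted in the paper.

```python
# A minimal sketch of the time-domain chirp of Eqs. (40)-(42); constants in SI
# units, default parameters follow the 'Deterministic chirps' row of Table 1.
import numpy as np

G, c, M_sun, Mpc = 6.674e-11, 2.998e8, 1.989e30, 3.086e22

def chirp_waveform(T=4.0, dt=1.0 / 512, m=30.0, r_mpc=2.0, phi0=0.0):
    m1 = m2 = m * M_sun
    Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2       # chirp mass
    r = r_mpc * Mpc

    t = np.arange(0.0, T, dt)
    tau = T - t                                     # Eq. (40): tau = t_col - t, t_col = T
    tau = np.maximum(tau, dt)                       # avoid the coalescence singularity

    # Eq. (41): phase evolution of the chirp.
    phase = -2.0 * (5.0 * G * Mc / c**3) ** (-5.0 / 8.0) * tau ** (5.0 / 8.0) + phi0
    # Eq. (40): amplitude for a linearly polarized (iota = pi/2) source.
    amp = 0.5 / r * (G * Mc / c**2) ** 1.25 * (5.0 / (c * tau)) ** 0.25
    return t, amp * np.cos(phase)
```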
The corresponding BBH chirp power spectrum is
\[P_{\mathrm{chirp}}(r;f)=\frac{2}{T}\left|\tilde{h}_{\mathrm{chirp}}(r;f) \right|^{2}\equiv A_{\mathrm{chirp}}(r)\left(\frac{f}{f_{\mathrm{ref}}}\right)^ {-7/3}\,, \tag{43}\]
where \(\tilde{h}_{\mathrm{chirp}}\) is the Fourier transform of the chirp waveform and
\[A_{\mathrm{chirp}}(r)=A_{\mathrm{ref}}\frac{r_{\mathrm{ref}}^{2}}{r^{2}}\,, \qquad A_{\mathrm{ref}}\equiv\frac{2}{T}\frac{c^{2}}{4r_{\mathrm{ref}}^{2}} \left(\frac{5\pi}{24}\right)\left(\frac{G\mathscr{M}_{c}}{c^{3}}\right)^{5/3} (\pi f_{\mathrm{ref}})^{-7/3}\,. \tag{44}\]
Note one can express the chirp PSD, \(P_{\mathrm{chirp}}\), in terms of the fractional energy density of the chirps by using (2). For reference, we note that the expected value of the deterministic (matched-filter) signal-to-noise ratio for a segment which contains a BBH chirp signal is [24]
\[\rho_{\mathrm{seg,det}}=\left[4\sum_{\mu=1}^{2}\int_{f_{\mathrm{low}}}^{f_{ \mathrm{high}}}\mathrm{d}f\,\frac{|\tilde{h}_{\mathrm{chirp}}(f)|^{2}}{P_{n_{ \mu}}}\right]^{1/2}=\sqrt{2T}\left[\sum_{\mu=1}^{2}\int_{f_{\mathrm{low}}}^{f_ {\mathrm{high}}}\mathrm{d}f\,\frac{P_{\mathrm{chirp}}(f)}{P_{n_{\mu}}}\right]^ {1/2}\,, \tag{45}\]
where \(P_{n_{\mu}}\) is the noise power spectral density in detector \(\mu=1,2\) (see (14)). The above expression for \(\rho_{\rm seg,det}\) is an _amplitude_ signal-to-noise ratio, defined as the expected value of the matched-filter statistic divided by its standard deviation. The quadrature sum takes into account the contribution from using both detectors to do the analysis.
Figure 5 shows a plot of a representative BBH chirp signal in the time-domain (left panel) and an average over an ensemble of BBH chirp signals in the frequency domain (right panel).

Figure 5: **Left:** Example BBH chirp signal in the time-domain as given by (40). **Right:** Averaged power spectral density of an ensemble of BBH chirp signals as a function of frequency for the noise and signal separately, together with their theoretical predictions according to the injected values.
As mentioned in Section II.1, the detection statistic in our Bayesian framework is the Bayes factor where the models in (7) are the signal+noise model and the noise only model for a particular search. While SSC and SSI contain the same noise model, the noise model in DSI does not take the same form. Hence, the Bayes factors for the different searches are not computed with respect to the same noise model and one cannot compare these methods with one another in terms of the Bayes factor. Instead, we evaluate how the intermittent nature of the signal impacts each search method's effectiveness in recovering the signal by plotting the \(\ln\) Bayes factor as a function of the duty cycle. In other words, we wish to answer two questions: (i) How well does SSI do in recovering the signal at different duty cycles for a constant total stochastic signal-to-noise ratio? and (ii) How well does DSI do in recovering the signal at different duty cycles for a constant total deterministic signal-to-noise ratio? The answers to the questions are independent of one another and cannot be used as a way to assess if one search is "better" than the other. However, since SSC and SSI contain the same noise model, these searches can be compared to one another using the Bayes factor.
In order to assess the efficiency of the methods with respect to their respective noise-only models, we simulate 40,000 segments of data with each segment being 4 seconds long. We choose values of \(r_{\rm min}=2\) Mpc, \(r_{\rm max}=5\) Mpc and the black hole component masses to each be \(30M_{\odot}\). These parameters give a value of \(\langle\Omega_{b}\rangle=0.803\). The parameters used for this analysis are tabulated in the first 9 columns of the 'Deterministic chirps' section in Table 1. Thus, the signal has the same strength as in IV.2, but it is now composed of deterministic chirps. The same coarse-graining factor and low- and high-frequency cutoffs that were used in Section IV.2 are used for this case as well when analyzing the data.
Figure 6 shows the \(\ln\) Bayes factors for the stochastic-signal-based searches (left panel) and for the deterministic-signal-based search (right panel) as a function of the duty cycle. Analogously to what was done in Section II.1, the total SNR is kept constant by adjusting the noise levels. For the stochastic searches, we keep the total power SNR, computed using (39), constant, while for the deterministic search we keep the total amplitude SNR constant, obtained using (45). We see that both intermittent searches (SSI and DSI) perform well at low duty cycles, with values of the \(\ln\) Bayes factors reaching over 1000 for some of the smallest values of the duty cycle considered.
In order to directly compare SSI with DSI, we run both analyses on the same dataset. The data is generated such that the duty cycle is \(2.98\times 10^{-3}\), the signal is the same as described above and the noise variance is chosen such that the average stochastic SNR per segment, computed using (39), is equal to 5.04 and the total stochastic SNR is equal to 3.0. Note for these values, the average deterministic SNR per segment, computed using (45), is 13.20 with the total deterministic SNR being 7.86, which is considerably larger than the total stochastic SNR. Note these parameters are displayed in the remaining columns of the 'Deterministic chirps' section of Table 1. A comparison of the recovered corner plots is shown in Fig. 7 (left panel). We see that for this data, both searches recover the signal within a \(1\sigma\)
credible interval, with the error bars for DSI much smaller than SSI, due to the deterministic approach appropriately modeling the chirp waveform of the signal. We also show a comparison of 1D posterior plots of \(\Omega_{\rm gw}\) in Fig. 7 (right panel). Similarly to the corner plot, the posterior width is smaller for DSI than SSI, although SSI still performs better than SSC.
One notes a small bias in the recovery of \(\Omega_{\rm gw}\) for SSI in the right panel of Fig. 7. In Fig. 8 we show the relative difference of the injected value and recovered value of \(\Omega_{\rm gw}\) as a function of \(\xi\) for the three searches, together with the \(1\sigma\) uncertainty band, after combining the posterior over 100 realizations of data. We note that the biased recovery is not always towards higher values of \(\Omega_{\rm gw}\). We also note that the width of the uncertainty for the DSI analysis improves as \(\xi\) increases because the total deterministic SNR is not held constant and increases.
To conclude, we give an estimate of the improvement in time-to-detection of a GWB with our search. Note that this estimate is computed under the assumptions adopted in this paper and will therefore most likely differ for a realistic detector configuration with realistic detector noise. We also note that the strength of the signal may affect these values. Nevertheless, to obtain such an estimate, we simulate a GWB consisting of deterministic chirps with parameters \(\langle\rho_{\rm seg,stoch}\rangle=2\) (corresponding to \(\langle\rho_{\rm seg,det}\rangle=8.3\)) and \(\xi=2.98\times 10^{-3}\). We then vary the number of data segments and assess how many 4-second segments are needed to reach a threshold value of the \(\ln\) Bayes factor large enough to claim a detection. We define this threshold to be \(\ln\mathcal{B}=12.5\), corresponding to a detection with SNR equal to 5. This is shown in the right panel of Fig. 8 for SSI and SSC. Due to the large difference in deterministic and stochastic SNR, the \(\ln\) Bayes factor for DSI already reaches \(\sim 160\) at the first value of \(N_{\rm seg}\) considered; we therefore do not include DSI on this plot to avoid scaling issues. SSI crosses the detection threshold after \(\sim\)12,000 segments of data, while we estimate that SSC would cross it only after \(\sim\)650,000 segments. This corresponds to a factor of \(\sim 54\) improvement in time-to-detection for SSI relative to SSC for these parameters and assumptions.
## V Discussion
Developing data-analysis techniques to reduce the time-to-detection of an astrophysical GWB with the LVK detectors is one of the current challenges that the GW community faces. Searches that include the intermittency of the BBH background to improve detection statistics have been proposed in the past [19; 28; 29; 1]. In this work, we propose a new stochastic search for intermittent GWBs and compare its efficiency with other searches. Our stochastic-signal-based search looks for excess cross-correlated power in short stretches of data, ignoring the deterministic form of the GW signal waveforms and hence avoiding the need to marginalize over all the associated signal parameters, as is done in the deterministic-signal-based approach of Smith and Thrane [19]. Not only is it beneficial to develop multiple searches in order to cross-check a potential detection, but there is an added benefit to running a search which does not look for a specific waveform in the data. The stochastic signal model allows our search to be flexible with respect to the type of signal it can detect. By changing the spectral index \(\alpha\) in the search (or by allowing \(\alpha\) to be inferred as a population parameter), we could detect other intermittent signals which might exist in the data.
Figure 6: Plots of the \(\ln\) Bayes factor averaged over 100 data realizations for SSC and SSI (left) and DSI (right) for deterministic chirp signals occurring with various values of the duty cycle \(\xi\). Both intermittent searches are well-suited for detecting signals with a low duty cycle.
For a series of analyses on data of increasing complexity, we show that for data with low duty cycles our search performs better than the standard continuous cross-correlation search, which does not take the intermittent nature of the BBH background into account. Furthermore, we show that a stochastic search for intermittent GWBs is more flexible with respect to the source of the intermittent GWB than our implementation of the approach of Smith and Thrane [19] and should be more computationally efficient in detecting a signal. The detection of an intermittent background will also allow us to test existing theoretical models, as described in Smith and Thrane [19].
Figure 7: **Left:** Posterior corner plot combined over \(100\) data realizations analyzed with SSI Reduced (blue) and DSI Reduced (green). Both searches recover the injected signal parameters (\(\xi=2.98\times 10^{-3}\) and \(\langle\Omega_{b}\rangle=0.803\)) within a \(1\sigma\) credible interval. The recovered values and error bars are those recovered by the SSI Reduced search. **Right:** 1D posterior plot of \(\Omega_{\rm gw}\) samples from SSI Reduced (blue), SSC Reduced (orange) and DSI Reduced (green), constructed by combining posterior samples for \(\xi\) and \(\langle\Omega_{b}\rangle\) using (13). Note that the inference done with the DSI likelihood gives posterior samples for the parameters \(\xi\) and \(r_{\rm max}\); the values of \(r_{\rm max}\) are then converted to samples in \(\langle\Omega_{b}\rangle\) by (37), since the other variables in (37) are fixed and known.

Figure 8: **Left:** Comparison of recovered values to the injected value of \(\Omega_{\rm gw}\) for SSI Reduced (blue), SSC Reduced (orange) and DSI Reduced (green) for different values of the duty cycle. All injected parameters are equivalent to the parameters used in the left panel of Fig. 6, and the recovered values are those after combining \(100\) realizations of data. The shaded regions represent the \(1\sigma\) credible interval of the combined \(100\) realizations of data. **Right:** ln Bayes factor vs \(N_{\rm seg}\) for data with \(\langle\rho_{\rm seg,stoch}\rangle=2\) and \(\xi=2.98\times 10^{-3}\). We define a detection threshold of \(\ln\mathcal{B}=12.5\). SSI crosses this threshold after \(\sim 12,000\) segments of data, while SSC crosses this threshold after \(\sim 650,000\) segments of data, corresponding to an improvement in time-to-detection of roughly a factor of \(54\) for SSI relative to SSC.
Before being able to apply this search method on real GW data, further generalizations need to be made. We give several examples of such generalizations, which will be addressed in future work.
For all of our data in this paper, we only simulate signals which lie completely within the segment boundaries. A crucial next step is investigating how a signal which extends past a segment boundary will impact our results. Further, the most realistic data we consider consists of individual BBH chirps injected in white, Gaussian noise. However, various assumptions were made about the source distribution that generates these chirps. For example, the two component masses were chosen to be equal, and the resulting chirp mass chosen to be identical for all the chirps (with only the distance to the source varying from one data segment to another). In reality, the black hole masses will most likely follow a power-law + peak distribution as shown by the latest LVK results [6]. Generalizing our method to allow for such mass distributions, as well as the performance of our search in that case, is left for future work.
Several simplifications regarding the detectors were made as well. First, we worked under the assumption that the detectors are co-located and co-aligned. This needs to be generalized by taking into account the effect of the overlap reduction function. Second, it was assumed that the noise in the detector is white and Gaussian. However, realistic detector noise follows a colored, i.e. frequency-dependent, power spectral density. An additional complication related to noise estimation arises from the presence of a continuous GWB of BNS mergers. At any time, several BNS mergers are expected to be emitting GWs in the LVK frequency band. Not only does this violate the assumption that a segment contains either one signal or noise only, but it will also affect the noise PSD estimation. Challenges related to the correct noise estimation will be addressed in future work. Furthermore, the Gaussian noise assumption will likely be violated as well, due to the presence of noise transients, so-called glitches. During the third observing run of the LVK collaboration, these glitches were omnipresent in the data [21; 32]. Therefore, before analyzing real detector data, the sensitivity of our search to the presence of such glitches will have to be investigated. Analyzing real detector data will introduce many challenges, which we plan to address incrementally, considering more and more realistic detectors and signals.
## Acknowledgement
Joseph Romano and Jessica Lawrence are supported by National Science Foundation (NSF) Grant No. PHY-2207270. Joseph Romano was also supported by start-up funds provided by Texas Tech University. Kevin Turbang is supported by FWO-Vlaanderen through grant number 1179522N. Arianna Renzini is supported by the NSF award 1912594. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by NSF Grants PHY-0757058 and PHY-0823459. The Bayesian inference was performed using bilby [33] with the dynesty sampler [34].
## Appendix A Likelihoods
Throughout this work, various searches for GWBs are compared. In this appendix, we provide the likelihoods corresponding to those searches. We start by giving an overview of the likelihoods used in Section IV.1, i.e., applicable to white signals, and conclude with the likelihoods for colored signals used in Sections IV.2 and IV.3. We also remind the reader that all likelihoods considered in this work are for stationary, white-Gaussian noise (see (14)).
### Likelihoods for white signals
#### a.1.1 SSC-full
For white signals, we define the likelihood functions for a continuous stochastic search (SSC-full) as [23]:
\[\mathscr{L}(d|\sigma_{\text{gw}}^{2},\sigma_{n_{1}}^{2}, \sigma_{n_{2}}^{2})\\ =\prod_{I=1}^{N_{\text{seg}}}\frac{1}{(2\pi)^{N}\left(\sigma_{1 }^{2}\sigma_{2}^{2}-(\sigma_{\text{gw}}^{2})^{2}\right)^{N/2}}\exp\left\{- \frac{1}{2}\frac{N}{\left(\sigma_{1}^{2}\sigma_{2}^{2}-(\sigma_{\text{gw}}^{2} )^{2}\right)}\left[\hat{\sigma}_{1,I}^{2}\sigma_{2}^{2}+\hat{\sigma}_{2,I}^{2 }\sigma_{1}^{2}-2\hat{\sigma}_{\text{gw},I}^{2}\sigma_{\text{gw}}^{2}\right] \right\}\,, \tag{10}\]
where
\[\sigma_{1}^{2}\equiv\sigma_{n_{1}}^{2}+\sigma_{\text{gw}}^{2}\,,\qquad\sigma_ {2}^{2}\equiv\sigma_{n_{2}}^{2}+\sigma_{\text{gw}}^{2}\,, \tag{11}\]
are parameters describing the total auto-correlated power in detectors 1 and 2, and
\[\hat{\sigma}_{1,I}^{2}\equiv\frac{1}{N}\sum_{i}d_{1,Ii}^{2}\,,\qquad\hat{\sigma}_{ 2,I}^{2}\equiv\frac{1}{N}\sum_{i}d_{2,Ii}^{2}\,,\qquad\hat{\sigma}_{\text{gw},I }^{2}\equiv\frac{1}{N}\sum_{i}d_{1,Ii}d_{2,Ii}\,, \tag{10}\]
are the quadratic combinations of the data from segment \(I\) that enter the likelihood function. (Here, \(i\) labels the time sample in data segment \(I\).) The noise variances in each detector are \(\sigma_{n_{1}}^{2}\) and \(\sigma_{n_{2}}^{2}\). It turns out that \(\hat{\sigma}_{1,I}^{2}\), \(\hat{\sigma}_{2,I}^{2}\), \(\hat{\sigma}_{\text{gw},I}^{2}\) are the maximum-likelihood estimates of \(\sigma_{1}^{2}\), \(\sigma_{2}^{2}\), \(\sigma_{\text{gw}}^{2}\) for segment \(I\).
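As a concrete illustration, the SSC-full likelihood above can be evaluated directly from the time-domain segments. The following is a minimal NumPy sketch, not taken from any released analysis code; the function name, array layout, and variable names are our own choices.

```python
import numpy as np

def ssc_full_log_likelihood(d1, d2, sigma_gw2, sigma_n1_2, sigma_n2_2):
    """Log of the SSC-full likelihood for white signals.

    d1, d2 : arrays of shape (N_seg, N) holding the time samples of the
             two detectors, segment by segment.
    """
    _, N = d1.shape
    # Per-segment quadratic (maximum-likelihood) combinations of the data
    s1_hat = np.mean(d1**2, axis=1)      # \hat{sigma}_{1,I}^2
    s2_hat = np.mean(d2**2, axis=1)      # \hat{sigma}_{2,I}^2
    sgw_hat = np.mean(d1 * d2, axis=1)   # \hat{sigma}_{gw,I}^2

    # Total auto-correlated power in each detector
    s1 = sigma_n1_2 + sigma_gw2
    s2 = sigma_n2_2 + sigma_gw2
    det = s1 * s2 - sigma_gw2**2

    logL_I = (-N * np.log(2.0 * np.pi) - 0.5 * N * np.log(det)
              - 0.5 * N / det * (s1_hat * s2 + s2_hat * s1
                                 - 2.0 * sgw_hat * sigma_gw2))
    return np.sum(logL_I)
```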
#### a.1.2 SSC-reduced
For a large number of samples per segment (\(N\gg 1\)), one can define a reduced version of the likelihood function, which is given by [23]:
\[\mathscr{L}(d|\sigma_{\text{gw}}^{2},\bar{\sigma}_{n_{1}}^{2},\bar{\sigma}_{n _{2}}^{2})=\prod_{I=1}^{N_{\text{seg}}}\frac{1}{\sqrt{2\pi\,\text{var}(\bar{ \sigma}_{\text{gw}}^{2})}}\exp\left[-\frac{(\hat{\sigma}_{\text{gw},I}^{2}- \sigma_{\text{gw}}^{2})^{2}}{2\,\text{var}(\bar{\sigma}_{\text{gw}}^{2})} \right]\,, \tag{11}\]
where
\[\text{var}(\bar{\sigma}_{\text{gw}}^{2})\equiv\frac{1}{N}\bar{\sigma}_{1}^{2} \bar{\sigma}_{2}^{2}\,, \tag{12}\]
with
\[\bar{\sigma}_{1}^{2}\equiv\frac{1}{N_{\text{tot}}}\sum_{I,i}d_{1,Ii}^{2}\,, \qquad\bar{\sigma}_{2}^{2}\equiv\frac{1}{N_{\text{tot}}}\sum_{I,i}d_{2,Ii}^{2} \tag{13}\]
being estimates of the total auto-correlated power in the two detectors constructed from all the data. We expect SSC-reduced and SSC-full to perform equally well, assuming \(N\gg 1\), which is needed for the cross-correlation data to be approximately Gaussian.
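A corresponding sketch of the reduced likelihood, assuming the same `(N_seg, N)` array layout and with our own naming conventions, could look as follows.

```python
import numpy as np

def ssc_reduced_log_likelihood(d1, d2, sigma_gw2):
    """Log of the reduced SSC likelihood for white signals (valid for N >> 1)."""
    _, N = d1.shape
    sgw_hat_I = np.mean(d1 * d2, axis=1)   # per-segment cross-correlation estimates
    # Total auto-correlated power estimated from all the data
    s1_bar = np.mean(d1**2)
    s2_bar = np.mean(d2**2)
    var = s1_bar * s2_bar / N
    return np.sum(-0.5 * np.log(2.0 * np.pi * var)
                  - (sgw_hat_I - sigma_gw2)**2 / (2.0 * var))
```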
#### a.1.3 SSI-full
For our proposed stochastic search for intermittent GWBs, we build upon the framework of Drasco and Flanagan [1] and extend their proposed formalism to a larger number of samples per segment (\(N\gg 1\)) and allow for the amplitudes to be drawn from a uniform-in-volume distribution. The likelihood takes the same form as (3), where the segment-dependent signal and noise likelihoods are now respectively given by:
\[\mathscr{L}_{s}(d_{I}|\langle\sigma_{b}^{2}\rangle,\sigma_{n_{1}} ^{2},\sigma_{n_{2}}^{2})=\int_{\sigma_{b,\text{min}}^{2}(\langle\sigma_{b}^{2} \rangle)}^{\sigma_{b,\text{max}}^{2}}\text{d}\sigma_{b,I}^{2}\,\pi(\sigma_{b, I}^{2}|\langle\sigma_{b}^{2}\rangle)\frac{1}{(2\pi)^{N}\left(\sigma_{1,I}^{2} \sigma_{2,I}^{2}-(\sigma_{b,I}^{2})^{2}\right)^{N/2}}\] \[\times\exp\left\{-\frac{1}{2}\frac{N}{\left(\sigma_{1,I}^{2} \sigma_{2,I}^{2}-(\sigma_{b,I}^{2})^{2}\right)}\left[\hat{\sigma}_{1,I}^{2} \sigma_{2,I}^{2}+\hat{\sigma}_{2,I}^{2}\sigma_{1,I}^{2}-2\hat{\sigma}_{b,I}^{2 }\sigma_{b,I}^{2}\right]\right\}\,, \tag{14}\] \[\mathscr{L}_{n}(d_{I}|\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2})= \frac{1}{(2\pi)^{N}\left(\sigma_{n_{1}}^{2}\sigma_{n_{2}}^{2}\right)^{N/2}} \exp\left\{-\frac{N}{2}\left[\frac{\hat{\sigma}_{1,I}^{2}}{\sigma_{n_{1}}^{2}} +\frac{\hat{\sigma}_{2,I}^{2}}{\sigma_{n_{2}}^{2}}\right]\right\}\,, \tag{15}\]
where
\[\hat{\sigma}_{b,I}^{2}\equiv\frac{1}{N}\sum_{i}d_{1,Ii}d_{2,Ii}\,,\qquad\hat{ \sigma}_{1,I}^{2}\equiv\frac{1}{N}\sum_{i}d_{1,Ii}^{2}\,,\qquad\hat{\sigma}_{2,I}^{2}\equiv\frac{1}{N}\sum_{i}d_{2,Ii}^{2}\,. \tag{16}\]
In the above expression for the signal likelihood, we used
\[\sigma_{1,I}^{2}\equiv\sigma_{n_{1}}^{2}+\sigma_{b,I}^{2}\,,\qquad\sigma_{2,I} ^{2}\equiv\sigma_{n_{2}}^{2}+\sigma_{b,I}^{2}\,, \tag{17}\]
which are parameters describing the _segment-dependent_ total auto-correlated power, with the segment dependence coming from the burst variance \(\sigma^{2}_{b,I}\).
Note that the segment-dependent signal likelihood requires a marginalization over the segment-dependent burst variances \(\sigma^{2}_{b,I}\), which is taken into account by the appropriate prior distribution, as introduced in (31):
\[\pi(\sigma^{2}_{b,I}|\langle\sigma^{2}_{b}\rangle)=\frac{\langle\sigma^{2}_{b} \rangle(\sigma^{2}_{b,\max})^{1/2}}{\sqrt{-3+12\sigma^{2}_{b,\max}/\langle \sigma^{2}_{b}\rangle}-3}\,(\sigma^{2}_{b,I})^{-5/2}\,, \tag{111}\]
where
\[\sigma^{2}_{b,\min}(\langle\sigma^{2}_{b}\rangle)=\frac{2\sigma^{2}_{b,\max}} {6\sigma^{2}_{b,\max}/\langle\sigma^{2}_{b}\rangle-1-\sqrt{-3+12\sigma^{2}_{b,\max}/\langle\sigma^{2}_{b}\rangle}}\,,\qquad\sigma^{2}_{b,\max}=\sigma^{2}_ {\mathrm{ref}}\,\frac{r^{2}_{\mathrm{ref}}}{r^{2}_{\min}} \tag{112}\]
are the limits of integration, which depend on the fixed (known) parameter \(r_{\min}\) and the (unknown) population-averaged variance \(\langle\sigma^{2}_{b}\rangle\).
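The \((\sigma^{2}_{b,I})^{-5/2}\) shape of this prior follows from combining the uniform-in-volume distance distribution with the scaling \(\sigma^{2}_{b}\propto r^{-2}\). The short Monte Carlo check below illustrates this numerically; the parameter values are arbitrary and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only (not taken from the injections in this paper)
sigma_ref2, r_ref = 1.0, 1.0
r_min, r_max = 1.0, 10.0

# Uniform-in-volume distances: p(r) proportional to r^2 on [r_min, r_max]
u = rng.uniform(r_min**3, r_max**3, size=200_000)
r = u ** (1.0 / 3.0)

# Burst variance scales with the inverse square of the distance
sigma_b2 = sigma_ref2 * r_ref**2 / r**2

# Histogram in log-space: counts per log-bin should scale as (sigma_b^2)^{-3/2},
# i.e. the density itself scales as (sigma_b^2)^{-5/2}
hist, edges = np.histogram(np.log(sigma_b2), bins=50)
centers = np.exp(0.5 * (edges[1:] + edges[:-1]))
slope = np.polyfit(np.log(centers[hist > 0]), np.log(hist[hist > 0]), 1)[0]
print(f"fitted log-log slope = {slope:.2f} (expected -3/2 per log-bin)")
```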
#### a.1.4 SSI-reduced
Similarly to the case of SSC, one can define a reduced version of the SSI likelihood, provided the number of samples per segment \(N\) is large. The segment-dependent signal likelihood still requires a marginalization over the segment-dependent burst variances \(\sigma^{2}_{b,I}\):
\[\mathscr{L}_{s}(d_{I}|\langle\sigma^{2}_{b}\rangle,\bar{\sigma}^{2}_{n_{1}}, \bar{\sigma}^{2}_{n_{2}})=\int_{\sigma^{2}_{b,\min}(\langle\sigma^{2}_{b} \rangle)}^{\sigma^{2}_{b,\max}}\mathrm{d}\sigma^{2}_{b,I}\,\pi(\sigma^{2}_{b, I}|\langle\sigma^{2}_{b}\rangle)\frac{1}{\sqrt{2\pi\,\mathrm{var}(\bar{ \sigma}^{2}_{b,I})}}\exp\left[-\frac{(\hat{\sigma}^{2}_{b,I}-\sigma^{2}_{b,I}) ^{2}}{2\,\mathrm{var}(\bar{\sigma}^{2}_{b,I})}\right]\,, \tag{113}\]
where the prior and limits of integration are the same as those used for SSI-full. In addition,
\[\mathrm{var}(\bar{\sigma}^{2}_{b,I})\equiv\frac{1}{N}\bar{\sigma}^{2}_{1,I} \bar{\sigma}^{2}_{2,I} \tag{114}\]
with
\[\bar{\sigma}^{2}_{1,I}\equiv\bar{\sigma}^{2}_{n_{1}}+\sigma^{2}_{b,I}\,, \qquad\bar{\sigma}^{2}_{2,I}\equiv\bar{\sigma}^{2}_{n_{2}}+\sigma^{2}_{b,I}\,, \tag{115}\]
where we estimate the white noise variances from the auto-correlated and cross-correlated power in the two detector outputs using the full set of data:
\[\bar{\sigma}^{2}_{\mathrm{gw}}\equiv\hat{\sigma}^{2}_{\mathrm{gw}}\theta( \hat{\sigma}^{2}_{\mathrm{gw}})\,,\qquad\bar{\sigma}^{2}_{n_{1}}\equiv(\hat{ \sigma}^{2}_{1}-\bar{\sigma}^{2}_{\mathrm{gw}})\theta(\hat{\sigma}^{2}_{1}- \bar{\sigma}^{2}_{\mathrm{gw}})\,,\qquad\bar{\sigma}^{2}_{n_{2}}\equiv(\hat{ \sigma}^{2}_{2}-\bar{\sigma}^{2}_{\mathrm{gw}})\theta(\hat{\sigma}^{2}_{2}- \bar{\sigma}^{2}_{\mathrm{gw}})\,, \tag{116}\]
where
\[\hat{\sigma}^{2}_{\mathrm{gw}}\equiv\frac{1}{N_{\mathrm{tot}}}\sum_{I,i}d_{1, Ii}d_{2,Ii}\,,\qquad\hat{\sigma}^{2}_{1}\equiv\frac{1}{N_{\mathrm{tot}}}\sum_{I,i}d^{2 }_{1,Ii}\,,\qquad\hat{\sigma}^{2}_{2}\equiv\frac{1}{N_{\mathrm{tot}}}\sum_{I, i}d^{2}_{2,Ii}\,. \tag{117}\]
In the above expressions, \(\theta(x)\) is the usual Heaviside step function, which is defined as \(\theta(x)=0\) or \(1\) depending on whether \(x<0\) or \(x>0\), and the hatted quantities \(\hat{\sigma}^{2}_{\mathrm{gw}}\), \(\hat{\sigma}^{2}_{1}\), \(\hat{\sigma}^{2}_{2}\) are the quadratic combinations of the data in the two detectors. This simplification is possible since the simulated noise is stationary.
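A direct transcription of these estimators into NumPy might look as follows (same array conventions as in the earlier sketches; the function name is ours).

```python
import numpy as np

def estimate_noise_variances(d1, d2):
    """Estimate the white noise variances from auto- and cross-correlated power,
    using the full data set and the Heaviside clipping described above."""
    sgw_hat = np.mean(d1 * d2)   # cross-correlated power, all data
    s1_hat = np.mean(d1**2)      # auto-correlated power, detector 1
    s2_hat = np.mean(d2**2)      # auto-correlated power, detector 2

    theta = lambda x: float(x > 0.0)
    sgw_bar = sgw_hat * theta(sgw_hat)
    sn1_bar = (s1_hat - sgw_bar) * theta(s1_hat - sgw_bar)
    sn2_bar = (s2_hat - sgw_bar) * theta(s2_hat - sgw_bar)
    return sgw_bar, sn1_bar, sn2_bar
```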
The segment-dependent noise likelihood \(\mathscr{L}_{n}(d_{I}|\bar{\sigma}^{2}_{n_{1}},\bar{\sigma}^{2}_{n_{2}})\) is given as before by:
\[\mathscr{L}_{n}(d_{I}|\bar{\sigma}^{2}_{n_{1}},\bar{\sigma}^{2}_{n_{2}})= \sqrt{\frac{N}{2\pi\,\bar{\sigma}^{2}_{n_{1}}\bar{\sigma}^{2}_{n_{2}}}}\exp \left[-\frac{N}{2}\frac{(\hat{\sigma}^{2}_{b,I})^{2}}{\bar{\sigma}^{2}_{n_{1}}\bar{\sigma}^{2}_{n_{2}}}\right]\,. \tag{118}\]
### Likelihoods for colored signals
The segment-dependent signal and noise likelihoods for SSI are specified in Sec. III for both the full (infer noise parameters) and reduced (use estimated noise parameters) analyses. When analyzing stochastic bursts (Sec. IV.2) and deterministic chirps (Sec. IV.3), the segment prior and integration bounds are specified in (38) and the subsequent paragraph.
#### a.2.1 SSC-full
For the continuous search, SSC, the full likelihood is specified by
\[\mathscr{L}(d|\Omega_{\rm gw},\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2})= \prod_{I=1}^{N_{\rm seg}}\prod_{k}\frac{1}{(\pi T/2)^{2M}(P_{1}(f_{k})P_{2}(f_{k} )-P_{\rm gw}^{2}(f_{k}))^{M}}\] \[\qquad\times\exp\left\{-\frac{M}{(P_{1}(f_{k})P_{2}(f_{k})-P_{\rm gw }^{2}(f_{k}))}\left[\hat{P}_{1,Ik}P_{2}(f_{k})+\hat{P}_{2,Ik}P_{1}(f_{k})-2\hat {P}_{\rm gw,Ik}P_{\rm gw}(f_{k})\right]\right\}\,, \tag{119}\]
where
\[P_{1}(f)\equiv P_{n_{1}}(f)+P_{\rm gw}(f)\,,\qquad P_{2}(f)\equiv P_{n_{2}}(f) +P_{\rm gw}(f)\,, \tag{120}\]
with
\[P_{n_{1}}(f)\equiv\frac{\sigma_{n_{1}}^{2}}{(f_{\rm high}-f_{\rm low})}\,, \qquad P_{n_{2}}(f)\equiv\frac{\sigma_{n_{2}}^{2}}{(f_{\rm high}-f_{\rm low}) }\,,\qquad P_{\rm gw}(f)\equiv\Omega_{\rm gw}H(f)\,, \tag{121}\]
and \(H(f)\) is given by (17). Note that the population parameter for SSC is \(\Omega_{\rm gw}\), the time and population-averaged energy density amplitude. In addition, the data enter the signal evidence via the same quadratic combinations as for SSI-full (see (18)), but with the cross-correlation combination now defining \(\hat{P}_{\rm gw,Ik}\) as opposed to \(\hat{P}_{b,Ik}\).
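For illustration, the colored SSC-full log-likelihood can be assembled from coarse-grained power estimates as sketched below. This is our own minimal NumPy version: the spectral shape \(H(f)\) is passed in as a callable, and `M` and `T` are the coarse-graining factor and segment duration appearing in the likelihood above; all names are illustrative.

```python
import numpy as np

def colored_psd_model(f, omega_gw, sigma_n1_2, sigma_n2_2, f_low, f_high, H):
    """Frequency-domain PSD model entering the colored SSC-full likelihood."""
    P_n1 = sigma_n1_2 / (f_high - f_low) * np.ones_like(f)
    P_n2 = sigma_n2_2 / (f_high - f_low) * np.ones_like(f)
    P_gw = omega_gw * H(f)
    return P_n1 + P_gw, P_n2 + P_gw, P_gw

def ssc_full_colored_log_likelihood(P1_hat, P2_hat, Pgw_hat, f,
                                    omega_gw, sigma_n1_2, sigma_n2_2,
                                    f_low, f_high, H, M, T):
    """P*_hat: arrays of shape (N_seg, N_freq) with coarse-grained
    auto-/cross-power estimates for each segment and frequency bin."""
    P1, P2, Pgw = colored_psd_model(f, omega_gw, sigma_n1_2, sigma_n2_2,
                                    f_low, f_high, H)
    det = P1 * P2 - Pgw**2
    logL = (-2.0 * M * np.log(np.pi * T / 2.0) - M * np.log(det)
            - M / det * (P1_hat * P2 + P2_hat * P1 - 2.0 * Pgw_hat * Pgw))
    return np.sum(logL)   # sum over segments I and frequency bins k
```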
#### a.2.2 SSC-reduced
For SSC-reduced, we have [23]:
\[\mathscr{L}(d|\Omega_{\rm gw},\bar{\sigma}_{n_{1}}^{2},\bar{\sigma}_{n_{2}}^ {2})=\prod_{I=1}^{N_{\rm seg}}\frac{1}{\sqrt{2\pi\,{\rm var}(\bar{\Omega}_{\rm gw}) }}\exp\left[-\frac{(\hat{\Omega}_{\rm gw,I}-\Omega_{\rm gw})^{2}}{2\,{\rm var} (\bar{\Omega}_{\rm gw})}\right]\,, \tag{122}\]
where
\[\hat{\Omega}_{\rm gw,I}\equiv\frac{\sum_{k}Q(f_{k})\hat{P}_{\rm gw,Ik}}{\sum_ {k^{\prime}}Q(f_{k^{\prime}})H(f_{k^{\prime}})}\,,\qquad{\rm var}(\bar{\Omega} _{\rm gw})\equiv\left(2M\sum_{k}Q(f_{k})H(f_{k})\right)^{-1} \tag{123}\]
are the optimally-filtered cross-correlation estimators and corresponding variances, which are constructed from coarse-grained estimates of the cross-correlated power \(\hat{P}_{\rm gw,Ik}\), and the optimal filter function
\[Q(f)\equiv\frac{H(f)}{\bar{P}_{1}(f)\bar{P}_{2}(f)}\,. \tag{124}\]
In the above expression,
\[\bar{P}_{1}(f)\equiv\frac{\bar{\sigma}_{n_{1}}^{2}}{(f_{\rm high}-f_{\rm low} )}+\bar{\Omega}_{\rm gw}H(f)\,,\qquad\bar{P}_{2}(f)\equiv\frac{\bar{\sigma}_ {n_{2}}^{2}}{(f_{\rm high}-f_{\rm low})}+\bar{\Omega}_{\rm gw}H(f)\,, \tag{125}\]
where \(\bar{\sigma}_{n_{1}}^{2}\), \(\bar{\sigma}_{n_{2}}^{2}\) are measured estimates of the detector noise power as defined in (116), and \(\bar{\Omega}_{\rm gw}\) is related to \(\bar{\sigma}_{\rm gw}^{2}\) (also defined in (116)) via
\[\bar{\Omega}_{\rm gw}=\frac{4}{3}\frac{\bar{\sigma}_{\rm gw}^{2}}{f_{\rm ref} }\left(\frac{3H_{0}^{2}}{10\pi^{2}}\frac{1}{f_{\rm ref}^{3}}\right)^{-1}\left[ \left(\frac{f_{\rm ref}}{f_{\rm low}}\right)^{4/3}-\left(\frac{f_{\rm ref}}{f _{\rm high}}\right)^{4/3}\right]^{-1}\,. \tag{126}\]
This last equation follows from the general relation between variance and power spectrum,
\[\sigma_{\rm gw}^{2}\equiv\int_{f_{\rm low}}^{f_{\rm high}}{\rm d}f\,P_{\rm gw }(f)=\Omega_{\rm gw}\int_{f_{\rm low}}^{f_{\rm high}}{\rm d}f\,H(f)=\Omega_{\rm gw }\left(\frac{3H_{0}^{2}}{10\pi^{2}}\frac{1}{f_{\rm ref}^{3}}\right)\int_{f_{ \rm low}}^{f_{\rm high}}{\rm d}f\,\left(\frac{f}{f_{\rm ref}}\right)^{-7/3}\,. \tag{127}\]
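A compact sketch of the optimally-filtered estimator and its variance, assuming the coarse-grained cross-power estimates and the estimated PSDs are already available as arrays, is given below (function and variable names are our own).

```python
import numpy as np

def optimal_filter_estimates(Pgw_hat, H_f, P1_bar, P2_bar, M):
    """Optimally-filtered per-segment estimators of Omega_gw and their variance.

    Pgw_hat : (N_seg, N_freq) coarse-grained cross-power estimates
    H_f     : (N_freq,) spectral shape H(f_k)
    P1_bar, P2_bar : (N_freq,) estimated total PSDs of the two detectors
    """
    Q = H_f / (P1_bar * P2_bar)                       # optimal filter Q(f_k)
    norm = np.sum(Q * H_f)
    omega_hat_I = np.sum(Q * Pgw_hat, axis=1) / norm  # per-segment estimators
    var_omega = 1.0 / (2.0 * M * norm)                # common variance
    return omega_hat_I, var_omega
```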
#### a.2.3 DSI-full
We also analyze the colored data with DSI, our much simpler implementation of the deterministic-signal-based search.
Following [19] for two detectors, we define the DSI segment-dependent signal likelihood to be
\[\mathscr{L}_{s}(d_{I}|r_{\text{max}},\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2}) \propto\int_{r_{\text{min}}}^{r_{\text{max}}}\text{d}r_{I}\ \pi(r_{I}|r_{\text{max}})\exp\left\{-\frac{1}{2}(4\Delta f)\sum_{k}\sum_{\mu=1, 2}\frac{|(\tilde{d}_{\mu,Ik}-\tilde{h}_{\text{chirp}}(r_{I};f_{k}))|^{2}}{P_{n_{ \mu}}}\right\}, \tag{100}\]
where \(\tilde{d}_{\mu,Ik}\) and \(\tilde{h}_{\text{chirp}}(r_{I};f_{k})\) are the Fourier transform of the data and chirp waveform, respectively, with all of the other chirp parameters assumed to be known a priori. In the above signal evidence, we are marginalizing over the segment-dependent source distance \(r_{I}\), which is drawn from a uniform-in-volume distribution \(\pi(r_{I}|r_{\text{max}})\) as given by (27).
By taking \(\tilde{h}_{\text{chirp}}(r_{I};f_{k})=0\) (corresponding to no signal in the data) the corresponding segment-dependent noise likelihood is
\[\mathscr{L}_{n}(d_{I}|\sigma_{n_{1}}^{2},\sigma_{n_{2}}^{2}) \propto\exp\left\{-\frac{1}{2}(4\Delta f)\sum_{k}\sum_{\mu=1,2}\frac{|\tilde{d }_{\mu,Ik}|^{2}}{P_{n_{\mu}}}\right\}. \tag{101}\]
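Numerically, the distance marginalization can be carried out on a grid. The sketch below assumes a chirp template computed once at a reference distance \(r_{\text{ref}}\), with its amplitude rescaled as \(1/r\); the grid size, integration rule, and all names are our own illustrative choices.

```python
import numpy as np

def dsi_signal_log_likelihood(d_f, h_ref_f, P_n, delta_f,
                              r_min, r_max, r_ref, n_grid=200):
    """Marginalize the DSI signal likelihood over the source distance.

    d_f     : (2, N_freq) complex FFTs of the two detector data segments
    h_ref_f : (N_freq,) complex chirp template at reference distance r_ref
    P_n     : (2,) white noise PSDs of the two detectors
    """
    r = np.linspace(r_min, r_max, n_grid)
    prior = 3.0 * r**2 / (r_max**3 - r_min**3)        # uniform in volume

    log_integrand = np.empty(n_grid)
    for j, rj in enumerate(r):
        h_f = h_ref_f * (r_ref / rj)                  # amplitude scales as 1/r
        resid = np.abs(d_f - h_f[None, :])**2 / P_n[:, None]
        log_integrand[j] = -0.5 * 4.0 * delta_f * np.sum(resid)

    # log of the distance integral, evaluated stably with a simple Riemann sum
    m = np.max(log_integrand)
    integral = np.sum(prior * np.exp(log_integrand - m)) * (r[1] - r[0])
    return m + np.log(integral)
```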
#### a.2.4 DSI-reduced
For the reduced implementation, we substitute the noise parameters with the auto-correlated power estimates, which gives
\[\mathscr{L}_{s}(d_{I}|r_{\text{max}},\bar{\sigma}_{n_{1}}^{2}, \bar{\sigma}_{n_{2}}^{2})\propto\int_{r_{\text{min}}}^{r_{\text{max}}}\text{d }r_{I}\ \pi(r_{I}|r_{\text{max}})\exp\left\{-\frac{1}{2}(4\Delta f)\sum_{k}\sum_{\mu=1,2}\frac{|(\tilde{d}_{\mu,Ik}-\tilde{h}_{\text{chirp}}(r_{I};f_{k}))|^{2}}{ \tilde{P}_{n_{\mu}}}\right\} \tag{102}\]
and
\[\mathscr{L}_{n}(d_{I}|\bar{\sigma}_{n_{1}}^{2},\bar{\sigma}_{n_{2}}^{2}) \propto\exp\left\{-\frac{1}{2}(4\Delta f)\sum_{k}\sum_{\mu=1,2}\frac{|\tilde{ d}_{\mu,Ik}|^{2}}{\tilde{P}_{n_{\mu}}}\right\} \tag{103}\]
for the segment-dependent signal and noise likelihoods, respectively.
|
2309.03231 | Quantum-AI empowered Intelligent Surveillance: Advancing Public Safety
Through Innovative Contraband Detection | Surveillance systems have emerged as crucial elements in upholding peace and
security in the modern world. Their ubiquity aids in monitoring suspicious
activities effectively. However, in densely populated environments, continuous
active monitoring becomes impractical, necessitating the development of
intelligent surveillance systems. AI integration in the surveillance domain was
a big revolution, however, speed issues have prevented its widespread
implementation in the field. It has been observed that quantum artificial
intelligence has led to a great breakthrough. Quantum artificial
intelligence-based surveillance systems have shown to be more accurate as well
as capable of performing well in real-time scenarios, which had never been seen
before. In this research, a RetinaNet model is integrated with Quantum CNN and
termed as Quantum-RetinaNet. By harnessing the Quantum capabilities of QCNN,
Quantum-RetinaNet strikes a balance between accuracy and speed. This innovative
integration positions it as a game-changer, addressing the challenges of active
monitoring in densely populated scenarios. As demand for efficient surveillance
solutions continues to grow, Quantum-RetinaNet offers a compelling alternative
to existing CNN models, upholding accuracy standards without sacrificing
real-time performance. The unique attributes of Quantum-RetinaNet have
far-reaching implications for the future of intelligent surveillance. With its
enhanced processing speed, it is poised to revolutionize the field, catering to
the pressing need for rapid yet precise monitoring. As Quantum-RetinaNet
becomes the new standard, it ensures public safety and security while pushing
the boundaries of AI in surveillance. | Syed Atif Ali Shah, Nasir Algeelani, Najeeb Al-Sammarraie | 2023-09-05T04:26:26Z | http://arxiv.org/abs/2309.03231v1 | Quantum-AI empowered Intelligent Surveillance: Advancing Public Safety Through Innovative Contraband Detection
###### Abstract
Surveillance systems have emerged as crucial elements in upholding peace and security in the modern world. Their ubiquity aids in monitoring suspicious activities effectively. However, in densely populated environments, continuous active monitoring becomes impractical, necessitating the development of intelligent surveillance systems. AI integration in the surveillance domain was a big revolution; however, speed issues have prevented its widespread implementation in the field. It has been observed that quantum artificial intelligence has led to a great breakthrough. Quantum artificial intelligence-based surveillance systems have been shown to be more accurate as well as capable of performing well in real-time scenarios, which had never been seen before. In this research, a RetinaNet model is integrated with a Quantum CNN and termed Quantum-RetinaNet. By harnessing the quantum capabilities of the QCNN, Quantum-RetinaNet strikes a balance between accuracy and speed. This innovative integration positions it as a game-changer, addressing the challenges of active monitoring in densely populated scenarios. As demand for efficient surveillance solutions continues to grow, Quantum-RetinaNet offers a compelling alternative to existing CNN models, upholding accuracy standards without sacrificing real-time performance. The unique attributes of Quantum-RetinaNet have far-reaching implications for the future of intelligent surveillance. With its enhanced processing speed, it is poised to revolutionize the field, catering to the pressing need for rapid yet precise monitoring. As Quantum-RetinaNet becomes the new standard, it ensures public safety and security while pushing the boundaries of AI in surveillance.
**Keywords:** Quantum AI; Deep Learning; Quantum Deep Learning; CNN; QCNN; Intelligent Surveillance; Weapon detection.
## Introduction
Intelligent surveillance research has significant implications across various domains, including public safety, security, and law enforcement. It can be employed in public spaces, transportation hubs, schools, and other crowded areas to identify and prevent potential threats, mitigating risks associated with armed violence and terrorist activities. By integrating such systems into law enforcement operations, authorities can identify and apprehend individuals carrying illegal contraband or harmful objects, reducing the occurrence of violent crimes. Mass shootings can be prevented by detecting contraband in locations like schools and public events, while at borders such systems help prevent illegal contraband trafficking. Military settings can benefit from these systems by identifying and neutralizing enemy threats, providing added protection for troops. Airport and aviation security can be enhanced by detecting concealed contraband in carry-on luggage and other areas. Prisons and correctional facilities can also benefit from these technologies, enhancing overall security for staff and inmates. AI-based image identification, infrared imaging, millimeter-wave scanners, and improved sensor systems are examples of technological advances in contraband detection.
Quantum computing is a field of computer science and physics that exploits the principles of quantum mechanics to develop new types of computers. Traditional computers use bits that represent either a 0 or a 1 to process and store information, while quantum computers use quantum bits, or qubits, which can exist in multiple states simultaneously. This enables quantum computers to perform certain computations much faster than classical computers, especially for complex problems such as factorization and optimization. Built on ideas like quantum superposition, entanglement, and interference, quantum computing has the potential to transform several industries, including banking, drug research, encryption, and AI. However, problems remain to be solved, such as creating dependable quantum hardware and designing new algorithms that make use of quantum resources. Quantum deep learning is an emerging technology that applies quantum computing to deep learning methods in order to solve complex problems in fields including image recognition, natural language processing, and drug development. This entails creating quantum algorithms and quantum computing hardware capable of carrying out deep learning tasks.
For deep learning tasks, data are encoded into quantum states, which quantum circuits then process and transform by means of quantum gates. Despite these difficulties, there is significant interest in quantum deep learning, and organizations and academics around the world are investigating how it may lead to new developments in AI. In a hybrid CNN-QCNN, the QCNN is used for later layers, such as classification or detection, while a traditional CNN is used for earlier layers, like feature extraction and reduction. Such a model can be trained with conventional deep learning approaches like backpropagation, but quantum-inspired optimization algorithms like the Variational Quantum Eigensolver (VQE) or the Quantum Approximate Optimization Algorithm (QAOA) can also be used. Quantum deep learning is a promising area of future technology, since combining classical CNNs with quantum methods can improve performance on certain types of problems. For instance, the data may be preprocessed using a traditional CNN to extract relevant features, which can then be fed into the QCNN for additional processing and analysis.
In this research, ethical problems are also addressed, with attention to privacy concerns and potential biases in the deployment of intelligent surveillance technology. Real-time alerting and integration with surveillance systems provide a speedy reaction to possible threats. Overall, this research has far-reaching consequences for the safety and security of individuals and communities, but striking a balance between security measures and individual rights and privacy requires careful consideration of ethical and legal factors. Collaborations between researchers, law enforcement agencies, and policymakers are critical for the development and application of responsible intelligent surveillance technologies.
### Literature review
Countries with large stocks of contraband (such as guns) have very high crime rates. According to several sources, such illegal acts have caused a variety of consequences, including murder, theft, destruction of infrastructure, and losses of billions of dollars [1].
Traditional CCTV cameras are used to monitor specific areas, but this surveillance technique is largely manual [2]. This scenario has been transformed by deep learning, and researchers have developed many models for identifying contraband [3]. This article explores the various possibilities in detail, ranging from early contraband detection systems to the latest available designs [4]. This journey began with manual methods and ended with fully automated intelligent systems [5]. Early manual approaches incorporated weighted Gaussian mixtures [6], polarization signal-based methodologies, multiresolution mosaicing, three-dimensional (3D) computed tomography (CT), and the Haar cascade [7]. Machine learning approaches such as the Visual Background Extractor algorithm, 3-layer ANNs used in conjunction with active mmWave radar [8], X-ray-based methodologies [9], and ANN-based MPEG-7 classifiers are also of interest. With the introduction of SURF (Speeded-Up Robust Features), the Harris Interest Point Detector (HIPD) [10], and Fast Retina Keypoint (FREAK), the era of convolutional neural networks began [11]. Various models were subsequently introduced to improve efficiency, and many scientific publications employ deep convolutional networks and transfer learning [12]; for example, X-rays are used for classification and infrared imaging for concealed contraband [13]. Recently, dedicated CNN models have been introduced that are not only accurate but also significantly speed up the detection of contraband in streaming video. These models include R-CNN [14], Fast R-CNN [15], Faster R-CNN [16], Inception, YOLO, VGG-Net, ZF-Net, and YOLO-V3. In addition to speed and accuracy, another important factor is complexity [17], which determines whether a model can run smoothly on small devices such as smartphones and be used in IoT applications for such areas. RPN (Region Proposal Network) [18], GoogleNet, SqueezeNet, HyperNet, RetinaNet, LeNet, AlexNet, ZFNet, GoogleNet, VGGNet, ResNet, Startup Model, ResNeXt, SENet, MobileNet V1/V2, X, NASNet, PNASNet, ENASNet, EfficientPointNet, MobiiiiNet2, Inception ResNetV2, ResNet50 and other models have been developed. This study also explains the meaning of many terms used in the literature, e.g., "complex backbone" vs. "light backbone", "two-stage detector" vs. "one-stage detector", and the pros and cons of different models. We note that this review not only presents research models (and how they have changed over time) but also helps researchers establish a solid foundation from which to start their research [19]. This review discusses the evolution of intelligent surveillance models and their performance in detecting guns and pistols [20]. Traditional methods for firearm detection, such as X-ray technology [21] or millimetric wave imaging [22][23], are expensive and impractical due to their reaction to all metallic items, not just contraband [24]. However, deep learning has proven to be the most effective approach, with convolutional neural networks (CNNs) outperforming traditional methods [25]. Transfer learning, which re-utilizes information from one domain in another related domain, is becoming popular. Other popular techniques include the Scale-Invariant Feature Transform (SIFT), the Rotation-Invariant Feature Transform (RIFT) [26], and Fast Retina Keypoint (FREAK) [23]. Some initial work has been done to detect pistols in images, but it is not able to predict multiple pistols in one image [27].
The Bag of Words Surveillance System (BoWSS) algorithm [28] was used to detect guns in images, and Faster R-CNN deep learning was used to detect hand-held guns [29]. However, the model can only detect and locate pistols, and it often fails to detect other types of contraband, such as machine guns [30]. Fernandez et al. presented a new CNN model for detecting guns and knives from video
surveillance systems, comparing it to GoogleNet and SqueezeNet[31]. SqueezeNet had better performance in gun detection, while GoogleNet had better performance with knife detection [32]. A sequential deep neural network-based approach was developed to resolve three major problems with automated BCG detection and learning[33]. Real-time detection of pistols remains a challenge due to factors such as distance, visibility, type of pistol, scale, rotation, and shape[34]. Advancements in these domains are being made to improve accuracy and performance in contraband detection[35].
### Dataset
For this research, we have designed our own dataset to include the contraband most commonly used in third-world countries. Other datasets contain multiple types of arms, but they usually contain contraband seen in Hollywood movies and/or used in developed countries. When such a model is deployed in third-world countries, it shows lower accuracy. To make our work applicable worldwide, we developed a dataset that contains arms used in all parts of the world. This research is mainly concerned with the types of arms used in street crimes, such as short-range rifles, shotguns, pistols, and knives, and the dataset is therefore named the Street Crimes Arms Dataset (SCAD). Each category contains almost 5000 images, so the dataset contains approximately 20000 images in total. For effective training of the models, the dataset has gone through different alterations, including augmentation and normalization.
### Methodology
This research combines the power of quantum computing with conventional deep learning (convolutional neural networks). Both technologies have their own strengths and weaknesses. Quantum computing is famous for its tremendous speed but, due to the limited availability of hardware, is rarely found in practical implementations. On the other hand, deep learning applications are widely used but lack the required speed in many real-time scenarios. Combining both technologies opens a new dimension and suppresses their respective weaknesses. Here, a QCNN is combined with RetinaNet: the feature extraction part of RetinaNet is replaced by a QCNN, thus achieving the speed of quantum computing and the accuracy of RetinaNet.
**RetinaNet**
RetinaNet is a state-of-the-art object detection algorithm that was introduced by researchers at Facebook AI Research in 2017. It is designed to solve the problem of detecting objects in an image, where the location and type of objects can vary widely. The key innovation of RetinaNet is the use of a novel loss function called Focal Loss, which addresses the issue of class imbalance in object detection. In object detection, there are typically many more negative examples (background) than positive examples (objects of interest). RetinaNet utilizes Focal Loss to reduce the bias toward negative examples and to focus on hard examples in the positive class. This helps the model to identify rare and important positive cases. To recognize objects of varying sizes and resolutions, RetinaNet's feature pyramid network (FPN) design mixes low-resolution and high-resolution characteristics. The model has achieved state-of-the-art performance on benchmarks like COCO and PASCAL VOC, making it a popular choice for object detection tasks in industrial and commercial applications.
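Focal loss itself is a one-line modification of the cross-entropy. The following is a minimal NumPy sketch for binary targets; \(\alpha=0.25\) and \(\gamma=2\) are the commonly used defaults from the original RetinaNet paper, not values tuned in this work.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p : predicted probabilities for the positive class
    y : binary ground-truth labels (0 = background, 1 = object)
    """
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y)
    p_t = np.where(y == 1, p, 1.0 - p)           # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return np.mean(-alpha_t * (1.0 - p_t)**gamma * np.log(p_t))
```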
The core functionality of RetinaNet is divided into two main parts. The first, called the feature extractor, deals with the extraction of features. The second, called the task-specific networks, is responsible for classification and bounding-box regression. The feature extractor uses convolution and pooling layers to extract features, which is a time-consuming process. In Quantum-RetinaNet, this time is reduced by using a Quantum Convolutional Neural Network.
Fig. 1 describes the basic working of Q-RetinaNet. The Q-RetinaNet model converts an image into a vector of corresponding bits for quantum computing. Qubits are then prepared, and feature extraction is performed using the QCNN. These features are forwarded to the task-specific networks, which draw bounding boxes and detect the desired objects.
Feature Extraction using Qonvolutional Neural Network
A Quantum Convolutional Neural Network, or Qonvolutional Neural Network, uses quantum circuits to conduct convolutional operations on input data. Using methods like amplitude encoding or qubit encoding, the procedure entails converting the incoming data into a quantum state. Convolutional filters are then applied to identify certain characteristics, such as edges or textures. After that, the data is pooled using a pooling layer, which lowers the dimensionality while maintaining crucial properties, as shown in Fig. 2.
Figure 1: The basic architecture of the RetinaNet, first part deals with the extraction of features and second one is task-specific networks.
Figure 2: QCNN uses quantum circuits to conduct convolutional operations on input data, using amplitude encoding or qubit encoding to convert incoming data into a quantum state
For classification tasks, the output is processed utilizing additional layers of quantum circuits, employing methods such as variational quantum classifiers or quantum SVMs. To improve network performance, the parameters of the quantum circuits are improved using strategies like quantum gradient descent or variational approaches.
**Bits to Qubits**

The data contained in a bit is encoded into a single qubit's quantum state by transforming it from a classical bit to a quantum bit. A qubit is a quantum bit that may exist in several states at once, as opposed to a classical bit, which can only exist in one state at a time. To encode a classical bit into a qubit, initialize the qubit in the \(|0\rangle\) state, which corresponds to a classical bit's 0 state. If the classical bit is 1, apply the X gate to flip the qubit from its \(|0\rangle\) state to its \(|1\rangle\) state. The qubit's final state then corresponds to the encoded form of the classical bit. Depending on the exact application, there are several additional techniques for converting classical bits into qubits.
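As a minimal illustration of this encoding, the sketch below prepares the single-qubit state vector for a classical bit using plain NumPy; the helper name is hypothetical.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])          # |0> state vector
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # Pauli-X (NOT) gate

def encode_bit(b):
    """Encode a classical bit into a single-qubit state vector."""
    return X @ ket0 if b else ket0

print(encode_bit(0))   # [1. 0.]  -> |0>
print(encode_bit(1))   # [0. 1.]  -> |1>
```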
**Quantum Convolutions**

Quantum convolutions are implemented using quantum circuits, gates, and quantum algorithms. Gates, such as the Hadamard-Walsh transform gate for discrete cosine transform (DCT) and discrete wavelet transform (DWT) operations, apply the convolutional filter directly to a quantum state, while circuits employ quantum Fourier transforms (QFTs) to change a quantum state. Quantum algorithms are designed to work on quantum data and carry out various tasks, including quantum convolution, such as the quantum singular value transformation (QSVD) or the quantum Fourier transform (QFT).
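As an illustrative sketch (not the QFT- or DCT-based filters described above), the following NumPy example simulates a quanvolution-style filter on a two-pixel patch: each pixel is angle-encoded into an RY rotation, the two qubits are entangled with a CNOT, and the Pauli-Z expectation values serve as the output features. All function names and parameter choices are our own.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def quanv_filter(pixel_pair):
    """Map two normalized pixel values (in [0, 1]) to two output features.

    Angle-encode each pixel into an RY rotation on |0>, entangle the two
    qubits with a CNOT, and read out <Z> on each qubit.
    """
    ket0 = np.array([1.0, 0.0])
    q1 = ry(np.pi * pixel_pair[0]) @ ket0
    q2 = ry(np.pi * pixel_pair[1]) @ ket0
    state = CNOT @ np.kron(q1, q2)                # two-qubit state vector (real)
    z1 = state @ np.kron(Z, I2) @ state           # <Z> on qubit 1
    z2 = state @ np.kron(I2, Z) @ state           # <Z> on qubit 2
    return np.array([z1, z2])

print(quanv_filter([0.0, 1.0]))   # pixels (0, 1) -> features [ 1. -1.]
```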
**Quantum Activation Functions**

Quantum activation functions are quantum analogs of classical activation functions in neural networks. They are mathematical functions that transform the output of a quantum circuit into a new quantum state, introducing nonlinearity into the output of a neural network. There are several types of quantum activation functions, including Quantum ReLU (Q-ReLU), Quantum Sigmoid (Q-Sigmoid), and Quantum Softmax (Q-Softmax). Quantum activation functions can be used in quantum machine learning algorithms to improve their performance on specific types of problems.
**Quantum Pooling**

To decrease the spatial dimensions of a quantum feature map while keeping the most important data, quantum pooling is employed in quantum machine learning. Since quantum states are inherently uncertain, it can be challenging to pinpoint the most important details. Numerous quantum pooling strategies, such as quantum maximum pooling, quantum mean pooling and quantum median pooling, have been suggested to overcome this issue. Quantum amplitude pooling, for example, is implemented using quantum gates and circuits. The input quantum state is transformed into the frequency domain using a quantum Fourier transform, and the amplitude of each frequency component is squared using a quantum circuit containing Hadamard gates and Controlled-NOT (CNOT) gates. The squared amplitudes are then used to construct a new quantum state with fewer qubits using a measurement operation.
### Qubit to bit
Quantum computing involves measuring the state of a qubit to convert it to a classical bit, with the outcome probability determined by the quantum state of the qubit. Quantum convolution is a quantum operation used in quantum neural networks, specifically in Qonvolutional Neural Networks (QCNNs). In a Qonvolutional layer, input data is encoded into a quantum state, usually using qubits, and quantum gates are applied to the state to perform the convolution operation. The output is then decoded from the resulting quantum state.
### Task Specific Networks
For each of the X anchors and Y object classes, the classification subnet predicts the likelihood of an object's presence at each spatial position. The subnet is a small fully convolutional network attached to all pyramid levels, sharing features across them. Its design involves a multi-channel input feature map, four convolutional layers with ReLU activations, and a final convolutional layer with sigmoid activations to output binary predictions per spatial location. The object classification subnet and the box regression subnet share a common structure, but the box regression subnet uses a class-agnostic bounding-box regressor with fewer parameters. This strategy is equally successful, and both subnets employ distinct parameters for regressing the offset from anchor boxes to neighboring ground-truth objects.
The detailed process architecture of Q-RetinaNet is shown in Fig. 3. An image is fed into the model and converted to a vector of corresponding bits. For quantum computing, these bits are transformed into qubits. Once the qubits are prepared, feature extraction is performed using the QCNN, which works like a CNN but operates on qubits. After the features are extracted, the next step is to forward these feature maps to the task-specific networks. Due to compatibility requirements, the qubits first need to be transformed back into conventional bits. Finally, these bits are passed to the task-specific networks, which are responsible for drawing bounding boxes and detecting the desired objects in the image.
## Analysis and Results
Figure 3: Process flow diagram of RetinaNet, an image is converted into a vector of corresponding Qubits for quantum computing, feature extraction is performed using QCNN. And Task Specific Networks, draws bounding boxes and detects desired objects.
Various comparative analysis approaches are utilized to compare the performance of the various models employed in the research.
**Accuracy**
Fig. 4 illustrates the accuracy of LeNet, AlexNet, VGG, RetinaNet, and Quantum-RetinaNet. Although Q-RetinaNet contains only a few quantum layers, and is thus a partial implementation of the QCNN compared to a full CNN, it still shows sufficient accuracy, and the results are quite impressive.
**Confusion Matrix**
In machine learning and deep learning, a classification model's performance is assessed using a confusion matrix. It is used to assess the classification model's accuracy by contrasting the predicted class labels with the actual class labels. The confusion matrix reports the counts of true positive (TP), false positive (FP), false negative (FN), and true negative (TN) predictions. The instances in each row of the matrix correspond to a predicted class, whereas the instances in each column correspond to an actual class, as shown in Fig. 5.
Figure 4: The comparison of accuracy of LeNet, AlexNet, VGG, RetinaNet, and Quantum-RetinaNet
Several performance indicators are calculated using the confusion matrix, including accuracy, precision, recall, and F1-score. It offers a thorough analysis of the categorization model and aids in determining the model's advantages and disadvantages.
**F1-Score**
Deep learning often uses the F1-score to track performance, particularly for binary classification. It is the harmonic mean of precision and recall, where precision is the ratio of true positives to the sum of true positives and false positives (FP), and recall is the ratio of true positives to the sum of true positives and false negatives (FN). The F1-score is computed as:
F1-score = 2 * (precision * recall) / (precision + recall)
The F1-score is a crucial indicator that balances a classification model's precision and recall. It ranges from 0 to 1, with 0 indicating no predictive capacity and 1 indicating perfect precision and recall. It can be computed independently for each class and averaged to provide an overall performance indicator; see Fig. 6 for detailed information.
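A small helper computing these quantities from confusion-matrix counts might look as follows (illustrative only).

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Example: 90 true positives, 10 false positives, 30 false negatives
print(precision_recall_f1(90, 10, 30))   # (0.9, 0.75, 0.818...)
```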
**ROC**
A ROC curve in deep learning measures the accuracy of binary classifier models by plotting the true positive rate (TPR) against the false positive rate (FPR) at different threshold settings, as shown in Fig. 7. It helps determine the optimal threshold setting and compares the performance of different classifier models. The area under the ROC curve (AUC) is a commonly used metric, with a score of 0.5 indicating random guessing and a score of 1 indicating perfect classification.
Figure 6: The comparison of accuracy of LeNet, AlexNet, VGG, RetinaNet, and Quantum-RetinaNet.
## Conclusion
Hybrid neural networks (QCNN-RetinaNet) for object detection, integrating conventional and quantum computing methods, have attracted a lot of interest. RetinaNet is utilized for object detection, while the QCNN layers are in charge of feature extraction and reduction. These layers can be implemented with quantum gates and circuits, potentially resulting in exponential speedup for some applications. Traditional deep learning methods like backpropagation and the focal loss function may be used to train RetinaNet, which was created to overcome the class imbalance in object recognition. When solving certain object detection problems, the hybrid QCNN-RetinaNet performs better than either a QCNN or RetinaNet by itself. However, constructing a hybrid QCNN-RetinaNet requires expertise in deep learning and neural networks as well as in quantum computing, while combining the benefits of both classical and quantum computing methods.
## Ethics
Figure 7: The comparative analysis of ROC of LeNet, AlexNet, VGG, RetinaNet, and Quantum-RetinaNet.

The dataset generated in this research was designed to cover the types of arms most commonly seen in the streets, including pistols, shotguns, short-range rifles, knives, Kalashnikovs, etc. The dataset focuses on arms that are commonly found in third-world countries, as the project targets surveillance in developing countries, which makes the dataset particularly relevant there. Our investigation included contacting law enforcement, police, private security agencies, websites, the Internet, and the crime branch of a major news channel, among other security departments.
## Conflict of Interest
The authors declare no competing interests.
## Author contribution statement
Syed Atif Ali Shah (Researcher): Conceptualization idea, methodology, investigation, experiment implementation, data collection, writing. 80%
Dr. Nasir Ageelani (Co-Supervisor): supervision, review. 15%
Dr. Najeeb Al-Sammurrai (Supervisor): supervision and validation. 5%
## Data availability statement
The data supporting the results of this study will be made available by the corresponding author, **Atif**, upon reasonable request.
|
2310.16039 | Modeling of Fluctuations in Dynamical Optoelectronic Device Simulations
within a Maxwell-Density Matrix Langevin Approach | We present a full-wave Maxwell-density matrix simulation tool including
c-number stochastic noise terms for the modeling of the spatiotemporal dynamics
in active photonic devices, such as quantum cascade lasers (QCLs) and quantum
dot (QD) structures. The coherent light-matter interaction in such devices
plays an important role in the generation of frequency combs and other
nonlinear and nonclassical optical phenomena. Since the emergence of nonlinear
and nonclassical features is directly linked to the noise properties, detailed
simulations of the noise characteristics are required for the development of
low-noise quantum optoelectronic sources. Our semiclassical simulation
framework is based on the Lindblad equation for the electron dynamics, coupled
with Maxwell's equations for the optical propagation in the laser waveguide.
Fluctuations arising from interactions of the optical field and quantum system
with their reservoirs are treated within the quantum Langevin theory. Here, the
fluctuations are included by adding stochastic c-number terms to the
Maxwell-density matrix equations. The implementation in the mbsolve dynamic
simulation framework is publicly available. | Johannes Popp, Johannes Stowasser, Michael A. Schreiber, Lukas Seitner, Felix Hitzelhammer, Michael Haider, Gabriela Slavcheva, Christian Jirauschek | 2023-10-24T17:54:04Z | http://arxiv.org/abs/2310.16039v2 | # Modeling of Fluctuations in Dynamical Optoelectronic Device Simulations within a Maxwell-Density Matrix Langevin Approach
###### Abstract
We present a full-wave Maxwell-density matrix simulation tool including c-number stochastic noise terms for the modeling of the spatiotemporal dynamics in active photonic devices, such as quantum cascade lasers (QCLs) and quantum dot (QD) structures. The coherent light-matter interaction in such devices plays an important role in the generation of frequency combs and other nonlinear and nonclassical optical phenomena. Since the emergence of nonlinear and nonclassical features is directly linked to the noise properties, detailed simulations of the noise characteristics are required for the development of low-noise quantum optoelectronic sources. Our semiclassical simulation framework is based on the Lindblad equation for the electron dynamics, coupled with Maxwell's equations for the optical propagation in the laser waveguide. Fluctuations arising from interactions of the optical field and quantum system with their reservoirs are treated within the quantum Langevin theory. Here, the fluctuations are included by adding stochastic c-number terms to the Maxwell-density matrix equations. The implementation in the mbsolve dynamic simulation framework is publicly available.
## I Introduction
An optical frequency comb (OFC) describes the coherent radiation with a broadband spectrum consisting of discrete, equidistantly spaced optical lines featuring a stable phase relation with low phase noise and low mode partition noise. [1; 2; 3] Typically, such combs are used for measurements of optical frequencies in metrology and sensing, having revolutionized these fields by providing unprecedented accuracy and enabling numerous innovative applications. [4; 5] Promising semiconductor lasers (SCLs) for integrated optical frequency comb technologies in the mid-infrared and terahertz (THz) regime are quantum dot (QD), [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] quantum dash (QDash), [17; 18; 19; 20; 21; 22] quantum cascade lasers (QCLs) [23; 24; 25; 26; 27; 28] and interband cascade lasers (ICLs). [29; 30; 31; 32] The active gain medium of the aforementioned low-dimensional SCLs provides a large third-order nonlinearity, which gives rise to a broadband four-wave mixing (FWM) process resulting in mode proliferation. [2; 5] A complex interplay of parametric gain, FWM nonlinearity, chromatic dispersion and spatial hole burning is essential for frequency comb formation. [33; 34; 35; 36; 8; 37; 8; 38] For a better understanding and for improving the laser performance, noise and linewidth characteristics of such devices have been extensively studied, both theoretically and experimentally. [35; 38; 39; 40; 41; 42; 43; 44; 45] Stable and robust OFC operation is assured by a narrow beatnote, which is a measure for the amount of amplitude and phase-noise of the comb lines. Noise accompanying carrier transport and spontaneous emission noise can therefore have a significant impact on the OFC formation and the performance of SCLs.
Significant research efforts are devoted to the generation and deployment of non-classical features in optical and electronic systems. [46; 47; 48; 49; 50; 51] Recently, intensity correlations in QCL harmonic frequency combs (HFCs) were experimentally investigated to develop a new generation of semiconductor devices for generating light with non-classical properties. [52] Endowing commercial devices with outstanding quantum features would pave the way to practical and high-performance applications in the field of quantum networks, [53; 54] including quantum computation, [55; 56] quantum communication, [57] quantum metrology, [58; 59; 60] and quantum simulation. [61; 62] Notably, photonic systems are quite attractive for the investigation and employment of non-classical features, such as the generation of so-called quantum combs, [63; 64; 65] corresponding to non-classical states of light with multimode squeezed and/or entangled output. As the emergence of non-linear and non-classical features in SCLs is directly linked to the noise properties, [52; 66] the development of low-noise SCLs sources based on detailed simulations is an important prerequisite.
For the modeling of the optical dynamics in nano-optoelectronic devices, the Maxwell-Bloch equations are widely used since they form a relatively compact and numerically efficient model, and thus allow for spatiotemporal simulations of the laser dynamics over many optical roundtrips. [67; 68; 69] Here, the Bloch equations are used for simulating the evolution of the quantum system and its
coherent light-matter interaction with the optical field in the active medium. Additionally, the optical field propagation is treated classically within Maxwell's equations, where the coupling with the quantum system arises from the macroscopic polarization term.[69] The density matrix formalism can be extended and adapted by adding further quantized states in addition to the laser levels, and tunneling between states. Typically, the Maxwell-density matrix equations are treated in the so-called rotating-wave approximation where a coarser spatiotemporal grid can be used and thus the numerical load is greatly reduced. This approximation is however only valid for relatively narrowband (and not too strong) optical fields.[67; 69; 70] On the other hand, for frequency comb sources a broadband spectrum with a high number of modes is desired, and the QCL even offers the potential for generating spectra extending over a full octave and beyond.[71] In several studies, dynamical simulations of different nonlinear optical phenomena in nanostructured devices, e.g. QCLs[72; 73; 74; 75; 76; 77; 78; 79; 80] and quantum dots,[81; 82; 83; 84; 35; 81; 82; 84] were conducted. Motivated by these developments, over the recent years our group has developed the open-source codebase mbsolve for the numerically efficient simulation of full-wave Maxwell-density matrix-type equations, i.e., without invoking the rotating-wave approximation.[85; 86]
As pointed out above, this work aims to implement noise sources into mbsolve, enabling a more realistic simulation of OFC operation in low-dimensional SCLs, and especially of the noise characteristics. In a semiclassical framework, noise can generally be included by adding stochastic terms.[87; 88] The stochastic terms are typically numerically implemented by using a pseudorandom number generator producing uncorrelated, Gaussian-distributed random numbers for every gridpoint.[89; 90] The resulting Maxwell-Bloch equations are then commonly solved numerically with the finite-difference time-domain approach.[89; 90] The magnitude of the stochastic terms can be systematically derived from the quantum Langevin equations,[91; 92; 93; 94] which are then represented by equivalent stochastic c-number equations,[95; 96; 97; 98; 99; 95; 96; 89; 97] i.e., evolution equations for operator expectation values with additional stochastic terms. The c-number Langevin equations have also been used to calculate the intrinsic linewidth and estimate the intensity noise in SCLs.[40; 41; 66; 75; 98; 99; 100; 101] Spontaneous emission obviously plays an important role in optoelectronic devices. While the resulting recombination can simply be included by nonlinear rate terms for the carrier occupations,[82; 102] the noise contribution is not included in the Maxwell-Bloch model due to its semiclassical nature. This effect can however be considered in terms of a Gaussian white noise source in the optical propagation equation.[103; 104; 89] In a different model, dipole fluctuations are also included by adding Langevin noise terms not only to the propagation equation but also to the off-diagonal density matrix elements.[102; 105; 106; 107; 108; 109] By virtue of the fluctuation-dissipation theorem, a decay of populations, coherences, or the optical field is generally accompanied by fluctuations, and a Maxwell-Bloch equation model which includes such decay-induced fluctuations has been presented.[90; 110; 111] Furthermore, an extension of the stochastic c-number approach to incorporate nonclassical effects has been discussed.[97; 112] We extend this approach by including incoherent tunneling. From this, we can derive the semiclassical noise terms for our Maxwell-density matrix Langevin approach, which we then incorporate in our open-source tool mbsolve.
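As a schematic illustration of how such stochastic terms are discretized (not an excerpt from the mbsolve implementation), consider a single damped variable obeying the fluctuation-dissipation theorem: in an Euler-Maruyama update, a delta-correlated Langevin force with diffusion coefficient \(D\) is realized by drawing, at every time step, a Gaussian random number with variance \(2D/\Delta t\). All parameter values below are arbitrary and in dimensionless units.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy damped variable obeying dx/dt = -gamma*x + F(t),
# with <F(t)F(t')> = 2*D*delta(t - t'); stationary variance = D/gamma
gamma, D = 1.0, 0.25          # dimensionless decay rate and diffusion coefficient
dt, n_steps = 1.0e-3, 2_000_000

x, acc = 0.0, 0.0
for _ in range(n_steps):
    F = rng.normal(0.0, np.sqrt(2.0 * D / dt))   # discretized delta-correlated force
    x += (-gamma * x + F) * dt                   # Euler-Maruyama step
    acc += x * x

print(f"simulated variance {acc / n_steps:.3f}  (expected {D / gamma})")
```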
In detail, the paper is organized as follows: In Section II, we calculate the stochastic noise terms of the quantum Langevin equations and derive the generalized description of the Maxwell-density matrix Langevin equations for a quantum-optoelectronic structure such as a QCL. Our model is illustrated schematically in Fig. 1. Here, the structure is described by the density matrix \(\hat{\rho}\) and the optical field represented by the electric and magnetic field vectors \(\mathbf{E}\,,\mathbf{H}\), which are coupled to each other by the interaction Hamiltonian \(\hat{H}_{\rm I}\). For the calculation of drift and diffusion operators in the quantum Langevin theory, we take into account the influence of various reservoirs in our structure. Regarding the quantum system, the reservoir interactions with the semiconductor host, which for example includes phonons associated with (longitudinal- and transverse-optical and -acoustic) thermal lattice vibrations, lattice imperfections in the form of impurities (such as dopants), interface roughness or atomic disorder in alloys, as well as vacuum fluctuations arising from spontaneous emission are considered. For the optical field, the interaction with noise arising from thermal radiation (blackbody) entering the active waveguide from the cavity walls can be taken into account by external noise sources.[113] In this paper, we rather focus on the fluctuations arising from the quantum system and dedicate the investigation of thermal noise influences, which can for example play a role in THz QCL active waveguides, to future works. In Section III, we give an overview of the simulation tool and the implementation of the noise terms and validate
Figure 1: Schematic illustration of the coupling of a SCL quantum system and field system and the interaction with their associated reservoirs.
the model by presenting the simulation results for a superfluorescence setup.[90; 114] Furthermore, we present the simulation results for a THz QCL harmonic frequency comb and discuss the effects of noise contributions on the comb characteristics. The paper is concluded in Section IV.
## II Theoretical model
In the following, we focus on the inclusion of noise, arising for example from spontaneous emission and fluctuations associated with the electron transport. First, we introduce the quantum Langevin equations using a simple three-level resonant tunneling QCL system. Here, the reservoir variables are eliminated and are replaced by drift and fluctuation terms within the Heisenberg equation of motion. The quantum Langevin equations can be transformed into associated c-number Langevin equations, which are then used to derive the stochastic noise terms incorporated into the full-wave Maxwell-density matrix equation system.
### The quantum Langevin equations
The quantum Langevin equations are introduced by using a simple three-level resonant tunneling QCL system as depicted in Fig. 2.[7; 115] The QCL exploits optical transitions between quantized states in the conduction band of a quantum well heterostructure, where the properties can be controlled by quantum design rather than being determined by the bulk material. This not only applies to the gain and lasing wavelength, but also to the nonlinear optical properties such as FWM. Besides confinement provided by the quantum wells, another important quantum effect is tunneling through the separating barriers, which significantly influences carrier transport, in addition to the incoherent scattering-induced transitions due to phonons, crystal imperfections and electron-electron interactions.[116; 117] Regarding non-stationary QCL operation as is the case for OFC emission, coherent light-matter interaction as a further quantum effect plays a significant role in the dynamic behavior, e.g., leading to Rabi flopping,[118] i.e. oscillations of the electron population between the upper and lower laser levels driven by the resonant optical field. Dephasing due to incoherent scattering has to be taken into account for a realistic description, as it greatly affects tunneling and coherent light-matter interaction.
For the structure shown in Fig. 2, the lasing transition occurs between the upper laser level \(|3\rangle\) and the lower laser level \(|2\rangle\). Depopulation takes place via level \(|1\rangle\) and electrons are injected from the depopulation level \(|1^{\prime}\rangle\) of the adjacent period via resonant tunneling. The resonant tunneling across thick injection barriers in THz QCLs is treated within the tight-binding model.[115; 116; 119; 120; 117; 118; 119; 121; 122] Here, the tunneling between a doublet of states at the thick injection barrier is described by the coupling strength \(\Omega_{ij}=-\hbar^{-1}\langle i|\hat{V}_{\text{ext}}-\hat{V}_{\text{tb}}|j\rangle\) with the extended conduction band potential \(\hat{V}_{\text{ext}}\) and the tight-binding potential \(\hat{V}_{\text{tb}}\). The coupling strengths \(\Omega_{ij}\) between the states \(|3\rangle\), \(|2\rangle\), \(|1\rangle\) within the active period are zero as these states are chosen to be the eigenstates of the tight-binding potential \(\hat{V}_{\text{tb}}\).
In general, the QCL laser system is then described by the reduced system Hamiltonian[123; 124; 99]
\[\begin{split}\hat{H}_{\text{s}}&=\hat{H}_{\text{F} }+\hat{H}_{0}+\Delta\hat{V}_{\text{tb}}+\hat{H}_{\text{I}}\\ &=\hbar\omega_{0}\hat{a}^{\dagger}\hat{a}+\sum_{i}\epsilon_{i}|i \rangle\langle i|-\hbar\Omega_{1^{\prime}3}(|1^{\prime}\rangle\langle 3|+|3 \rangle\langle 1^{\prime}|)\\ &\quad+\hbar g(|3\rangle\langle 2|+|2\rangle\langle 3|)\big{(} \hat{a}+\hat{a}^{\dagger}\big{)}\,,\end{split} \tag{1}\]
where \(\hat{H}_{\text{F}}\) is the Hamiltonian of the optical field, \(\hat{H}_{0}\) is the Hamiltonian of the quantum system with \(\Delta\hat{V}_{\text{tb}}\) describing the coupling of electron states in two adjacent periods within the tight-binding model, and \(\hat{H}_{\text{I}}\) constitutes the interaction energy between quantum system and optical field. Here, \(\hbar\) is the reduced Planck constant, \(\omega_{0}\) the single mode lasing angular frequency, \(\hat{a}^{\dagger}(\hat{a})\) denotes the creation (annihilation) operator of the radiation field, \(\epsilon_{i}\) is the energy of level \(|i\rangle\) and \(\hbar\Omega_{1^{\prime}3}\) the anticrossing energy gap between levels \(|1^{\prime}\rangle\) and \(|3\rangle\). The dipole coupling constant \(g\) can be written in terms of the dipole matrix element, \(\mu_{z,23}=q\langle 2|\hat{z}|3\rangle\), as[123; 12]
\[g=-\sqrt{\frac{\omega_{0}}{2\hbar\epsilon_{r}\epsilon_{0}V_{\text{p}}}}\mu_{z,23}\,, \tag{2}\]
where \(\epsilon_{r}\) is the relative permittivity, \(\epsilon_{0}\) is the vacuum permittivity and \(V_{\text{p}}\) is the volume of each quantum system consisting of an active QCL period.
Figure 2: Schematic conduction band profile and probability densities of a two well THz QCL structure, where the upper laser level 3 is populated via resonant tunneling from injector level \(1^{\prime}\). Depopulation occurs through LO-phonon scattering from the lower laser level 2 to the depopulation level 1.
The Heisenberg-Langevin equation of motion for an operator \(\hat{A}_{\mu}(t)\) reads as [91, 93, 124, 125]
\[\partial_{t}\hat{A}_{\mu}(t) = -\mathrm{i}\hbar^{-1}[\hat{A}_{\mu}(t),\hat{H}_{\mathrm{s}}(t)]+ \hat{D}_{\mu}(t)+\hat{F}_{\mu}(t) \tag{3}\] \[= \hat{M}_{\mu}(t)+\hat{F}_{\mu}(t)\,.\]
Here, the drift operator \(\hat{D}_{\mu}(t)\) and fluctuation operator \(\hat{F}_{\mu}(t)\) account for the influence of the reservoirs on the system. \([\cdot,\cdot]\) denotes the commutator \([\hat{X},\hat{Y}]=\hat{X}\hat{Y}-\hat{Y}\hat{X}\). For the drift operator \(\hat{D}_{\mu}\) we can under the Markovian approximation write [124, 125]
\[\begin{split}\hat{D}_{\mu}&=-\sum_{i,j}\delta(\omega_{i},-\omega_{j})\Big{\{}[\hat{A}_{\mu},\hat{Q}_{i}]\hat{Q}_{j}w_{ij}^{+}\\ &\quad-\hat{Q}_{j}[\hat{A}_{\mu},\hat{Q}_{i}]w_{ji}^{-}\Big{\}}\,,\end{split} \tag{4}\]
where \(w^{\pm}\) are the reservoir spectral densities and \(\hat{Q}_{i}\) is a function of system operators. For a detailed description and derivation of this theory together with the calculation examples for specific operators \(\hat{A}_{\mu}\), we refer to Refs. [124, 125].
The reservoir average of the fluctuation operator vanishes, \(\langle\hat{F}_{\mu}^{\dagger}\rangle_{\mathrm{R}}=\langle\hat{F}_{\mu} \rangle_{\mathrm{R}}=0\). The diffusion coefficient for a Markovian system is defined as
\[2\langle\hat{D}_{\mu\nu}(t)\rangle_{\mathrm{R}}\delta(t-t^{\prime})=\langle \hat{F}_{\mu}(t)\hat{F}_{\nu}(t^{\prime})\rangle_{\mathrm{R}}\,, \tag{5}\]
and can be calculated by applying the fluctuation-dissipation theorem. Here, the \(\delta\)-function indicates the very short memory period of the reservoirs. The _generalized Einstein relation_ for the calculation of the diffusion coefficient is given by [125, 126, 91]
\[2\langle\hat{D}_{\mu\nu}(t)\rangle_{\mathrm{R}} = \partial_{t}\langle\hat{A}_{\mu}(t)\hat{A}_{\nu}(t)\rangle_{ \mathrm{R}}-\langle\hat{M}_{\mu}(t)\hat{A}_{\nu}(t)\rangle_{\mathrm{R}} \tag{6}\] \[-\langle\hat{A}_{\mu}(t)\hat{M}_{\nu}(t)\rangle_{\mathrm{R}}\,.\]
From Eq. (3) together with Eqs. (1) and (4), the quantum Langevin equations for the three-level QCL quantum system can be derived. To this end, we introduce the electron population operators \(\hat{\sigma}_{ii}=|i\rangle\langle i|\) and the coherence operators \(\hat{\sigma}_{ij}=|i\rangle\langle j|\). The term \(\hat{\sigma}_{32}\hat{a}^{\dagger}\) describes the creation of a photon accompanied by an electron transition from the lower to the higher lying energy level, and \(\hat{\sigma}_{23}\hat{a}\) the annihilation of a photon accompanied by an electron transition from the higher to the lower lying energy level. By dropping these energy non-conserving terms, the interaction Hamiltonian \(\hat{H}_{\mathrm{I}}\) in the common rotating wave approximation is obtained.[91, 123] The corresponding equations of motion are given by
\[\partial_{t}\hat{a}(t) = -\mathrm{i}\omega_{0}\hat{a}(t)-\frac{\kappa}{2}\hat{a}(t)-g\hat {\sigma}_{23}+\hat{F}_{a}(t)\,, \tag{7a}\] \[\partial_{t}\hat{\sigma}_{23}(t) = -\frac{\mathrm{i}}{\hbar}\Delta_{32}\hat{\sigma}_{23}(t)-\gamma_{ 23}\hat{\sigma}_{23}(t)+\mathrm{i}\Omega_{1^{\prime}3}\hat{\sigma}_{21^{ \prime}}(t)\] (7b) \[+\mathrm{i}g(\hat{\sigma}_{33}(t)-\hat{\sigma}_{22}(t))\hat{a}(t )+\hat{F}_{23}(t)\,,\] \[\partial_{t}\hat{\sigma}_{31^{\prime}}(t) = -\frac{\mathrm{i}}{\hbar}\Delta_{1^{\prime}3}\hat{\sigma}_{31^{ \prime}}(t)-\gamma_{1^{\prime}3}\hat{\sigma}_{31^{\prime}}(t)+\mathrm{i} \Omega_{1^{\prime}3}(\hat{\sigma}_{33}(t)\] (7c) \[-\hat{\sigma}_{1^{\prime}1^{\prime}}(t))+\mathrm{i}g\hat{\sigma}_ {21^{\prime}}(t)\hat{a}^{\dagger}(t)+\hat{F}_{31^{\prime}}(t)\,,\] \[\partial_{t}\hat{\sigma}_{21^{\prime}}(t) = -\frac{\mathrm{i}}{\hbar}\Delta_{1^{\prime}2}\hat{\sigma}_{21^{ \prime}}(t)-\gamma_{1^{\prime}2}\hat{\sigma}_{21^{\prime}}(t)+\mathrm{i} \Omega_{1^{\prime}3}\hat{\sigma}_{23}(t)\] (7d) \[+\mathrm{i}g\hat{\sigma}_{31^{\prime}}(t)\hat{a}(t)+\hat{F}_{21^ {\prime}}(t)\,,\] \[\partial_{t}\hat{\sigma}_{33}(t) = -\frac{1}{\tau_{3}}\hat{\sigma}_{33}(t)+r_{32}\hat{\sigma}_{22}(t )+r_{31^{\prime}}\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\] (7e) \[+\mathrm{i}g\Big{(}\hat{a}^{\dagger}(t)\hat{\sigma}_{23}(t)-\hat{ a}(t)\hat{\sigma}_{23}^{\dagger}(t)\Big{)}\] \[-\mathrm{i}\Omega_{1^{\prime}3}\Big{(}\hat{\sigma}_{31^{\prime}} ^{\dagger}(t)-\hat{\sigma}_{31^{\prime}}(t)\Big{)}+\hat{F}_{33}(t)\,,\] \[\partial_{t}\hat{\sigma}_{22}(t) = r_{23}\hat{\sigma}_{33}(t)-\frac{1}{\tau_{2}}\hat{\sigma}_{22}(t )+r_{21^{\prime}}\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\] (7f) \[+\mathrm{i}g\Big{(}\hat{a}(t)\hat{\sigma}_{23}^{\dagger}(t)-\hat{ a}^{\dagger}(t)\hat{\sigma}_{23}(t)\Big{)}+\hat{F}_{22}(t)\,,\] \[\partial_{t}\hat{\sigma}_{1^{\prime}1^{\prime}}(t) = r_{1^{\prime}3}\hat{\sigma}_{33}(t)+r_{1^{\prime}2}\hat{\sigma}_{ 22}(t)-\frac{1}{\tau_{1^{\prime}}}\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\] (7g) \[-\mathrm{i}\Omega_{1^{\prime}3}\Big{(}\hat{\sigma}_{31^{\prime}} (t)-\hat{\sigma}_{31^{\prime}}^{\dagger}(t)\Big{)}+\hat{F}_{1^{\prime}1^{ \prime}}(t)\,,\]
where \(\kappa\) is the cavity decay rate, \(\Delta_{ij}\) denotes the energy separation between levels \(|i\rangle\) and \(|j\rangle\), \(\tau_{i}^{-1}=\sum_{j\neq i}r_{ji}\) is the inverse population lifetime, \(r_{ij}\), \(i\neq j\), represents the scattering rate from level \(j\) to \(i\), and
\[\gamma_{ij}=\frac{1}{2}\bigg{(}\frac{1}{\tau_{i}}+\frac{1}{\tau_{j}}\bigg{)}+ \gamma_{ij,p} \tag{8}\]
is the dephasing rate. Here, \(\gamma_{ij,\mathrm{p}}\) is the pure dephasing rate, which for QCLs mainly consists of elastic scattering contributions due to impurities and interface roughness.[17] The equivalent equations for \(\hat{a}^{\dagger}(t)\), \(\hat{\sigma}_{32}(t)\), \(\hat{\sigma}_{1^{\prime}3}(t)\) and \(\hat{\sigma}_{1^{\prime}2}(t)\) are given by the Hermitian conjugates of Eqs. (7)(a)-(d).
Using Eq. (6), we can calculate the second-order correlation function for the polarization operator as
\[\begin{split}\langle\hat{F}_{23}^{\dagger}(t)\hat{F}_{23}(t^{\prime})\rangle_{\mathrm{R}}&=2\langle\hat{D}_{3223}(t)\rangle_{\mathrm{R}}\delta(t-t^{\prime})\\ &=\Big{(}\partial_{t}\langle\hat{\sigma}_{23}^{\dagger}(t)\hat{\sigma}_{23}(t)\rangle_{\mathrm{R}}-\langle\hat{M}_{23}^{\dagger}(t)\hat{\sigma}_{23}(t)\rangle_{\mathrm{R}}-\langle\hat{\sigma}_{23}^{\dagger}(t)\hat{M}_{23}(t)\rangle_{\mathrm{R}}\Big{)}\delta(t-t^{\prime})\\ &=\bigg{[}\bigg{(}2\gamma_{23}-\frac{1}{\tau_{3}}\bigg{)}\langle\hat{\sigma}_{33}(t)\rangle_{\mathrm{R}}+r_{32}\langle\hat{\sigma}_{22}(t)\rangle_{\mathrm{R}}+r_{31^{\prime}}\langle\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\rangle_{\mathrm{R}}\bigg{]}\delta(t-t^{\prime})\,,\end{split} \tag{9}\]
\[\langle\hat{F}_{23}(t)\hat{F}_{23}^{\dagger}(t^{\prime})\rangle_{\rm R} =\left[r_{23}\langle\hat{\sigma}_{33}(t)\rangle_{\rm R}+\left(2\gamma _{23}-\frac{1}{\tau_{2}}\right)\right.\] \[\quad\times\langle\hat{\sigma}_{22}(t)\rangle_{\rm R}+r_{21^{ \prime}}\langle\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\rangle_{\rm R}\bigg{]}\] \[\delta(t-t^{\prime})\,, \tag{10c}\] \[\langle\hat{F}_{31^{\prime}}^{\dagger}(t)\hat{F}_{31^{\prime}}(t^ {\prime})\rangle_{\rm R} =\left[r_{1^{\prime}3}\langle\hat{\sigma}_{33}(t)\rangle_{\rm R}+ r_{1^{\prime}2}\langle\hat{\sigma}_{22}(t)\rangle_{\rm R}\right.\] \[\quad\left.+\left(2\gamma_{1^{\prime}3}-\frac{1}{\tau_{1^{\prime }}}\right)\langle\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\rangle_{\rm R}\right]\] \[\quad\times\delta(t-t^{\prime})\,,\] (10d) \[\langle\hat{F}_{31^{\prime}}(t)\hat{F}_{31^{\prime}}^{\dagger}(t^ {\prime})\rangle_{\rm R} =\left[\left(2\gamma_{1^{\prime}3}-\frac{1}{\tau_{3}}\right) \langle\hat{\sigma}_{33}(t)\rangle_{\rm R}\right.\] \[\quad\left.+r_{32}\langle\hat{\sigma}_{22}(t)\rangle_{\rm R}+r_{3 1^{\prime}}\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\right]\] \[\quad\times\delta(t-t^{\prime})\,,\] (10e) \[\langle\hat{F}_{21^{\prime}}^{\dagger}(t)\hat{F}_{21^{\prime}}(t^ {\prime})\rangle_{\rm R} =\left[r_{1^{\prime}3}\langle\hat{\sigma}_{33}(t)\rangle_{\rm R}+ r_{1^{\prime}2}\langle\hat{\sigma}_{22}(t)\rangle_{\rm R}\right.\] \[\quad\left.+\left(2\gamma_{1^{\prime}2}-\frac{1}{\tau_{1^{\prime }}}\right)\langle\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\rangle_{\rm R}\right]\] \[\quad\times\delta(t-t^{\prime})\,,\] (10f) \[\langle\hat{F}_{21^{\prime}}(t)\hat{F}_{21^{\prime}}^{\dagger}(t^ {\prime})\rangle_{\rm R} =\left[r_{23}\langle\hat{\sigma}_{33}(t)\rangle_{\rm R}+\left(2 \gamma_{1^{\prime}2}-\frac{1}{\tau_{2}}\right)\right.\] \[\quad\left.\times\langle\hat{\sigma}_{22}(t)\rangle_{\rm R}+r_{21 ^{\prime}}\langle\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\rangle_{\rm R}\right]\] \[\quad\delta(t-t^{\prime})\,,\] (10g) \[\langle\hat{F}_{33}(t)\hat{F}_{33}(t^{\prime})\rangle_{\rm R} =\left[\frac{1}{\tau_{3}}\langle\hat{\sigma}_{33}(t)\rangle_{\rm R }+r_{32}\langle\hat{\sigma}_{22}(t)\rangle_{\rm R}\right.\] \[\quad\left.+r_{31^{\prime}}\langle\hat{\sigma}_{1^{\prime}1^{ \prime}}(t)\rangle_{\rm R}\right]\!\delta(t-t^{\prime})\,,\] (10h) \[\langle\hat{F}_{22}(t)\hat{F}_{22}(t^{\prime})\rangle_{\rm R} =\left[r_{23}\langle\hat{\sigma}_{33}(t)\rangle_{\rm R}+\frac{1}{ \tau_{2}}\langle\hat{\sigma}_{22}(t)\rangle_{\rm R}\right.\] \[\quad\left.+r_{21^{\prime}}\langle\hat{\sigma}_{1^{\prime}1^{ \prime}}(t)\rangle_{\rm R}\right]\!\delta(t-t^{\prime})\,,\] (10i) \[\langle\hat{F}_{1^{\prime}1^{\prime}}(t)\hat{F}_{1^{\prime}1^{ \prime}}(t^{\prime})\rangle_{\rm R} =\left[r_{1^{\prime}3}\langle\hat{\sigma}_{33}(t)\rangle_{\rm R }+r_{1^{\prime}2}\langle\hat{\sigma}_{22}(t)\rangle_{\rm R}\right.\] \[\quad\left.+\frac{1}{\tau_{1^{\prime}}}\langle\hat{\sigma}_{1^{ \prime}1^{\prime}}(t)\rangle_{\rm R}\right]\!\delta(t-t^{\prime})\,. \tag{10j}\]
Here, \(n_{\rm th}(\omega_{0})=\left[\exp\!\left(\frac{\hbar\omega_{0}}{k_{\rm B}T} \right)-1\right]^{-1}\) is the number of thermal photons in the lasing mode at temperature \(T\), where \(k_{\rm B}\) denotes the Boltzmann constant.
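For orientation, the thermal occupation can be evaluated directly. The short Python snippet below does this for an assumed operating point of 3.5 THz emission at a lattice temperature of 80 K, corresponding to the THz QCL example discussed in Sec. III; it is only meant to illustrate the order of magnitude.

```python
import numpy as np

# Physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def n_th(f0_hz, temperature_k):
    """Bose-Einstein occupation of the lasing mode at frequency f0 and temperature T."""
    omega0 = 2.0 * np.pi * f0_hz
    return 1.0 / np.expm1(hbar * omega0 / (k_B * temperature_k))

# Assumed operating point: 3.5 THz emission at 80 K (values used in Sec. III)
print(f"n_th = {n_th(3.5e12, 80.0):.3f}")  # approximately 0.14 thermal photons
```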
### The c-number Langevin equations
In order to derive the stochastic noise terms for the semiclassical Maxwell-density matrix equations, the operator Langevin equations have to be converted into the associated c-number Langevin equations.
The quantum Langevin equation for the operator \(\hat{A}_{\mu}(t)\) in chosen order is given by
\[\partial_{t}\hat{A}_{\mu}(t)=-{\rm i}\hbar^{-1}[\hat{A}_{\mu}(t),\hat{H}_{\rm s }(t)]^{\rm c}+\hat{D}_{\mu}^{\rm c}(t)+\hat{F}_{\mu}^{\rm c}(t)\,, \tag{11}\]
where we make use of the commutation relation \(\hat{A}_{\mu}^{\dagger}\hat{A}_{\nu}=\hat{A}_{\nu}\hat{A}_{\mu}^{\dagger}-[ \hat{A}_{\nu},\hat{A}_{\mu}^{\dagger}]\) to bring the equation into the chosen order. We use the superscript \(c\) to highlight that we have put the operators in chosen order. To explain this formulation in more detail, we use the fluctuation operator \(\hat{F}_{\mu}(t)\) as an example, but the following description holds for the other operators in the same way. For a chosen order \(\hat{A}_{1},\dots,\hat{A}_{\mu}\), we can write
\[\hat{F}_{\mu}=\hat{F}_{\mu}^{\rm c}(\hat{A}_{1},\dots,\hat{A}_{\mu})\,, \tag{12}\]
where the fluctuation operator \(\hat{F}_{\mu}^{\rm c}\) in the chosen order is, of course, equal to the fluctuation operator \(\hat{F}_{\mu}\) in the original order. The associated c-number fluctuation term \(F_{\mu}^{\rm c}(A_{1},\dots,A_{\mu})\) is obtained by using the c-numbers \(A_{\nu}\). By defining a linear chosen-ordering operator \(\hat{C}\), we can further write[124; 127]
\[\hat{F}_{\mu}^{\rm c}(\hat{A}_{1},\dots,\hat{A}_{\mu})=\hat{C}(F_{\mu}^{\rm c}( A_{1},\dots,A_{\mu}))\,, \tag{13}\]
where the operator \(\hat{C}\) has the function of replacing each \(A_{\nu}\) by the corresponding operator \(\hat{A}_{\nu}\) and bringing all terms into chosen order.
If we now convert the quantum Langevin equation into the equivalent c-number Langevin equation, we may write
\[\partial_{t}A_{\mu}(t) =L_{\mu}(t)+D_{\mu}(t)+F_{\mu}^{\rm c}(t) \tag{14}\] \[=M_{\mu}(t)+F_{\mu}^{\rm c}(t)\,,\]
with \(L_{\mu}(t)\) being the coherent term corresponding to the commutation of \(\hat{A}_{\mu}(t)\) with the system Hamiltonian \(\hat{H}_{\rm s}\) and \(D_{\mu}(t)\) the drift term. Furthermore, by the use of Eq. (14) we obtain the c-number equation
\[\partial_{t}(A_{\mu}(t)A_{\nu}(t)) =A_{\mu}(t)\partial_{t}A_{\nu}(t)+A_{\nu}(t)\partial_{t}A_{\mu}(t) \tag{15}\] \[=A_{\mu}(t)M_{\nu}(t)+A_{\nu}(t)M_{\mu}(t)\] \[\quad+A_{\mu}(t)F_{\nu}^{\rm c}(t)+A_{\nu}(t)F_{\mu}^{\rm c}(t)\,.\]
In analogy to the reservoir average in the operator case, we may write the c-number equation
\[\partial_{t}\langle A_{\mu}(t)A_{\nu}(t)\rangle_{\rm R} =\langle A_{\mu}(t)M_{\nu}(t)\rangle_{\rm R}+\langle A_{\nu}(t)M_{ \mu}(t)\rangle_{\rm R} \tag{16}\] \[\quad+2\langle D_{\mu\nu}(t)\rangle_{\rm R}\,,\]
where we can make use of the following relation under the Markovian approximation [124; 12]
\[2\langle D_{\mu\nu}(t)\rangle_{\rm R} =\langle A_{\mu}(t)F_{\nu}^{\rm c}(t)+A_{\nu}(t)F_{\mu}^{\rm c}(t) \rangle_{\rm R}\,. \tag{17}\]
The diffusion coefficients in the c-number Langevin equations may differ from the ones in the quantum Langevin
equations, as the c-numbers commute, whereas the operators do not. By requiring the equivalence of Eq. (16) and Eq. (6) in both c-number and quantum Langevin theory, it can be shown that in general
\[2\langle\hat{D}_{\mu\nu}(t)\rangle_{\text{R}}\neq 2\langle\hat{C}(D_{\mu\nu}(t)) \rangle_{\text{R}}\,. \tag{18}\]
By taking our chosen ordered operator representation of the system operators \(\hat{a}^{\dagger}\), \(\hat{\sigma}_{23}^{\dagger}\), \(\hat{\sigma}_{31^{\prime}}^{\dagger}\), \(\hat{\sigma}_{21^{\prime}}^{\dagger}\), \(\hat{\sigma}_{33}\), \(\hat{\sigma}_{22}\), \(\hat{\sigma}_{1^{\prime}1^{\prime}}\), \(\hat{\sigma}_{21^{\prime}}\), \(\hat{\sigma}_{31^{\prime}}\), \(\hat{\sigma}_{23}\), \(\hat{a}\), we obtain the corresponding c-numbers \(a^{*}\), \(\sigma_{23}^{*}\), \(\sigma_{31^{\prime}}^{*}\), \(\sigma_{21^{\prime}}^{*}\), \(\sigma_{33}\), \(\sigma_{22}\), \(\sigma_{1^{\prime}1^{\prime}}\), \(\sigma_{21^{\prime}}\), \(\sigma_{31^{\prime}}\), \(\sigma_{23}\), \(a\). We then derive the c-number second-order moments and obtain, for the populations, terms that differ from the operator case:
\[\begin{split}\langle F_{33}(t)F_{33}(t^{\prime})\rangle_{\text{R}}&=\bigg{[}\frac{1}{\tau_{3}}\langle\sigma_{33}(t)\rangle_{\text{R}}+r_{32}\langle\sigma_{22}(t)\rangle_{\text{R}}+r_{31^{\prime}}\langle\sigma_{1^{\prime}1^{\prime}}(t)\rangle_{\text{R}}+\text{i}g(\langle a^{*}(t)\sigma_{23}(t)\rangle_{\text{R}}-\langle a(t)\sigma_{23}^{*}(t)\rangle_{\text{R}})\\ &\quad+\text{i}\Omega_{1^{\prime}3}(\langle\sigma_{31^{\prime}}^{*}(t)\rangle_{\text{R}}-\langle\sigma_{31^{\prime}}(t)\rangle_{\text{R}})\bigg{]}\delta(t-t^{\prime})\,,\end{split} \tag{19a}\] \[\langle F_{22}(t)F_{22}(t^{\prime})\rangle_{\text{R}}=\bigg{[}r_{23}\langle\sigma_{33}(t)\rangle_{\text{R}}+\frac{1}{\tau_{2}}\langle\sigma_{22}(t)\rangle_{\text{R}}+r_{21^{\prime}}\langle\sigma_{1^{\prime}1^{\prime}}(t)\rangle_{\text{R}}+\text{i}g(\langle a^{*}(t)\sigma_{23}(t)\rangle_{\text{R}}-\langle a(t)\sigma_{23}^{*}(t)\rangle_{\text{R}})\bigg{]}\delta(t-t^{\prime})\,, \tag{19b}\] \[\langle F_{1^{\prime}1^{\prime}}(t)F_{1^{\prime}1^{\prime}}(t^{\prime})\rangle_{\text{R}}=\bigg{[}r_{1^{\prime}3}\langle\sigma_{33}(t)\rangle_{\text{R}}+r_{1^{\prime}2}\langle\sigma_{22}(t)\rangle_{\text{R}}+\frac{1}{\tau_{1^{\prime}}}\langle\sigma_{1^{\prime}1^{\prime}}(t)\rangle_{\text{R}}+\text{i}\Omega_{1^{\prime}3}(\langle\sigma_{31^{\prime}}^{*}(t)\rangle_{\text{R}}-\langle\sigma_{31^{\prime}}(t)\rangle_{\text{R}})\bigg{]}\delta(t-t^{\prime})\,. \tag{19c}\]
As an example, we provide a detailed derivation of the diffusion term in Eq. (19)(a) in Appendix A. Additionally, we obtain diffusion coefficients that are absent in the quantum Langevin theory, e.g.
\[\langle F_{23}(t)F_{23}(t^{\prime})\rangle_{\text{R}} =2\text{i}g\langle a(t)\sigma_{23}(t)\rangle_{\text{R}}\delta(t-t ^{\prime})\,, \tag{20a}\] \[\langle F_{31^{\prime}}(t)F_{31^{\prime}}(t^{\prime})\rangle_{\text{R }} =-2\text{i}\Omega_{1^{\prime}3}\langle\sigma_{31^{\prime}}(t)\rangle_{ \text{R}}\delta(t-t^{\prime})\,. \tag{20b}\]
The complete diffusion matrix \(\mathbf{D}(\mathbf{A},t)\) including all relevant cross-correlation terms of the three-level QCL system with the c-number vector \(\mathbf{A}=(\begin{smallmatrix}a^{*}&\sigma_{23}^{*}&\sigma_{31^{\prime}}^{*}&\sigma_{21^{\prime}}^{*}&\sigma_{33}&\sigma_{22}&\sigma_{1^{\prime}1^{\prime}}&\sigma_{21^{\prime}}&\sigma_{31^{\prime}}&\sigma_{23}&a\end{smallmatrix})^{\text{T}}\) is illustrated in Appendix B.
In the literature,[128, 129] it has been shown that a set of Ito stochastic differential equations (SDEs) can be derived for the given c-number vector, and that these can serve as an efficient basis for numerical simulations. The Ito SDEs equivalent to the Langevin theory are given by
\[\partial_{t}\mathbf{A}(t)=\mathbf{M}(t)+\mathbf{F}(t)=\mathbf{M}(t)+\mathbf{B}(\mathbf{A},t)\cdot\mathbf{ \xi}(t)\,, \tag{21}\]
where \(\mathbf{\xi}(t)\) is a vector of real, independent Gaussian random numbers. Here, a positive semi-definite and symmetric diffusion matrix \(\mathbf{D}(\mathbf{A},t)\) is required, which can then be factorized into the form[130, 128, 97]
\[\mathbf{D}(\mathbf{A},t)=\mathbf{B}(\mathbf{A},t)\mathbf{B}^{\text{T}}(\mathbf{A},t)\,, \tag{22}\]
where the derived noise matrix \(\mathbf{B}(\mathbf{A},t)\) is not necessarily symmetric.
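To make Eqs. (21) and (22) concrete, the following minimal Python sketch factorizes a diffusion matrix and draws a single noise increment. The matrix D_example is a generic positive semi-definite placeholder and not the QCL diffusion matrix of Appendix B; the function names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def noise_matrix(D):
    """Factorize a positive semi-definite diffusion matrix as D = B B^T (Eq. (22)).

    A Cholesky factorization is used when D is positive definite; otherwise an
    eigendecomposition-based square root handles the semi-definite case.
    """
    try:
        return np.linalg.cholesky(D)
    except np.linalg.LinAlgError:
        w, V = np.linalg.eigh(D)
        w = np.clip(w, 0.0, None)            # remove small negative round-off
        return V @ np.diag(np.sqrt(w))

def noise_increment(D, dt):
    """One stochastic increment B(A,t) * xi * sqrt(dt) for the Ito SDE of Eq. (21)."""
    B = noise_matrix(D)
    xi = rng.standard_normal(B.shape[1])     # real, independent Gaussian numbers
    return B @ xi * np.sqrt(dt)

# Placeholder 3x3 diffusion matrix (symmetric, positive semi-definite)
D_example = np.array([[2.0, 0.5, 0.0],
                      [0.5, 1.0, 0.2],
                      [0.0, 0.2, 0.8]])
dF = noise_increment(D_example, dt=1e-15)
print(dF)
```

In the actual QCL system the entries of \(\mathbf{D}(\mathbf{A},t)\) depend on the instantaneous density matrix, which is why the factorization is carried out symbolically in advance, as discussed below.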
To calculate the full noise matrix \(\mathbf{B}(\mathbf{A},t)\) for the three-level QCL system, we can divide the diffusion matrix \(\mathbf{D}(\mathbf{A},t)\) into four different submatrices, where a correlation between the corresponding terms is identified. The given subvector \(\mathbf{A}_{\nu}\) as well as the submatrices \(\mathbf{B}_{\nu}(\mathbf{A}_{\nu},t)\) and \(\mathbf{D}_{\nu}(\mathbf{A}_{\nu},t)\) are illustrated in Table 1. Here, we include correlations between three states by taking into account a tunneling transition followed by an optical transition. This leads to a substantial extension of the initially derived quantum theory of propagation of nonclassical radiation in a two-level system[97] and is of essential importance for the description of quantum fluctuations in THz QCL systems, where electron transport across thick barriers is mediated by tunneling between closely aligned energy levels. A detailed symbolic derivation of the noise submatrices \(\mathbf{B}_{\nu}(\mathbf{A}_{\nu},t)\) and the resulting noise matrix \(\mathbf{B}(\mathbf{A},t)\) for the three-level QCL system can be found in the GitHub project mbsolve.[131]
By calculating the operator expectation value in the Schrödinger picture, we can demonstrate that the c-numbers representing the quantum system can be replaced by the density matrix elements \(\rho_{23}\), \(\rho_{31^{\prime}}\), \(\rho_{21^{\prime}}\), \(\rho_{33}\), \(\rho_{22}\), \(\rho_{1^{\prime}1^{\prime}}\), \(\rho_{1^{\prime}2}\), \(\rho_{1^{\prime}3}\), \(\rho_{32}\). The expectation value can be written as
\[\begin{split}\langle\hat{\sigma}_{ij}\rangle&=\text{Tr} \{|i\rangle\langle j|\hat{\rho}(t)\}\\ &=\text{Tr}\Bigg{\{}|i\rangle\langle j|\sum_{j^{\prime},i^{\prime}} \rho_{j^{\prime}i^{\prime}}(t)|j^{\prime}\rangle\langle i^{\prime}|\Bigg{\}}= \rho_{ji}(t)\,.\end{split} \tag{23}\]
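For completeness, the index convention of Eq. (23) can be verified numerically in a few lines; the snippet below uses a randomly generated \(3\times 3\) density matrix and serves only as a consistency check.

```python
import numpy as np

# Numerical check of Eq. (23): Tr{|i><j| rho} = rho_ji for a random density matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = M @ M.conj().T
rho /= np.trace(rho)                     # Hermitian, positive semi-definite, unit trace

i, j = 0, 2
sigma_ij = np.zeros((3, 3), dtype=complex)
sigma_ij[i, j] = 1.0                     # matrix representation of |i><j|
print(np.isclose(np.trace(sigma_ij @ rho), rho[j, i]))   # True
```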
Furthermore, we can write the interaction Hamiltonian \(\hat{H}_{\text{I}}\) of the quantum system and the optical field as
\[\hat{H}_{\text{I}}=-\hat{\mathbf{\mu}}_{z}\hat{\mathbf{E}}_{z}=-\mathbf{\mu}_{z,23}\hat{\mathbf{E} }_{z}(\hat{\sigma}_{32}+\hat{\sigma}_{23})\,, \tag{24}\]
where the electrical field operator \(\hat{\mathbf{E}}_{z}\) is defined as
\[\hat{\mathbf{E}}_{z}=\sqrt{\frac{\omega_{0}}{2\hbar\epsilon_{r}\epsilon_{0}V_{\text{P}}}} \big{(}\hat{a}^{\dagger}+\hat{a}\big{)}\mathbf{e}_{z}\,. \tag{25}\]
Here, \(\mathbf{e}_{z}\) denotes the unit vector in \(z\)-direction. For devices in which the intraband transitions between quantized states occur within the conduction band, e.g. the QCL quantum well heterostructure, only the dipole matrix element \(\mathbf{\mu}_{z}\) for the polarization in growth direction \(z\) is nonzero and relevant.
### Generalized Maxwell-density matrix Langevin equations in 1D
In the following, we derive the generalized Maxwell-density matrix Langevin equations with additional microscopic fluctuation terms and characterize the influence of spontaneous emission noise on the optical field evolution. For the description of the coherent carrier dynamics and the incoherent relaxation processes, as well as the interaction with the classical optical field, the generalized full-wave Maxwell-density matrix equations constitute a compact semiclassical model. By combining this model with the Langevin approach, the microscopic noise characteristics can be fully taken into account. Here, the carrier dynamics in an SCL system are described in the density matrix formulation using the Lindblad equation
\[\partial_{t}\hat{\rho}=-\mathrm{i}\hbar^{-1}[\hat{H}_{0}+\Delta\hat{V}_{\mathrm{ tb}}+\hat{H}_{\mathrm{I}},\hat{\rho}]+\mathcal{D}(\hat{\rho})+\mathcal{F}(\hat{ \rho})\,, \tag{26}\]
which is coupled to the one-dimensional Maxwell's equations
\[\partial_{t}E_{z}=\varepsilon^{-1}(-\sigma E_{z}-\partial_{t}P_{z,\mathrm{ class}}-\Gamma\partial_{t}P_{z,\mathrm{qm}}+\partial_{x}H_{y})\,, \tag{27a}\] \[\partial_{t}H_{y}=\mu^{-1}\partial_{x}E_{z}\,, \tag{27b}\]
where \(\mathcal{D}(\hat{\rho})\) is the dissipation superoperator, \(\mathcal{F}(\hat{\rho})\) is an additional Langevin fluctuation superoperator and the other operators have their usual meanings. The permittivity is given by the product \(\varepsilon=\varepsilon_{0}\varepsilon_{\mathrm{r}}\), \(\sigma\) is the material conductivity, \(\mu\) is the permeability, and the confinement factor \(\Gamma\in[0,1]\) gives the spatial overlap of the transverse optical field mode with the quantum system. As we focus in this work on optoelectronic devices with an invariant transverse field distribution, the reduction to a one-dimensional model for the optical propagation in the waveguide is justified.[69] The Lindblad equation is the general form of a time-local and Markovian linear master equation for a quantum system interacting with an environment; it guarantees a completely positive and trace-preserving evolution of the density matrix. The conventional Bloch equations, corresponding to a two-level system, describe the interaction of the laser levels with the optical field \(E_{z}\) and constitute a special case of the Lindblad equation given in Eq. (26). The interaction with the environment is here modeled by scattering and dephasing rates, \(r_{ij}\) and \(\gamma_{ij}\). Further levels can be considered in Eq. (26), and additional effects such as tunneling are
\begin{table}
\begin{tabular}{c c c} subvector \(\mathbf{A}_{\nu}\) & submatrix \(\mathbf{B}_{\nu}(\mathbf{A}_{\nu},t)\) & submatrix \(\mathbf{D}_{\nu}(\mathbf{A}_{\nu},t)\) \\ \hline \end{tabular}
\end{table}
Table 1: Division of the diffusion matrix into submatrices \(\mathbf{D}_{\nu}(\mathbf{A}_{\nu},t)\) with the corresponding subvectors \(\mathbf{A}_{\nu}\) and the derived noise submatrices \(\mathbf{B}_{\nu}(\mathbf{A}_{\nu},t)\).
included in the Hamiltonian. Moreover, quantum fluctuations are considered in the model given by Eq. (26) by adding a suitable Langevin fluctuation superoperator \(\mathcal{F}\). The Maxwell's equations capture the optical propagation through the waveguide resonator, where the coupling with the quantum system is described by the macroscopic polarization \(P_{z,\mathrm{qm}}\) arising from the contributions of the dipole matrix elements. The expectation value of the dipole moment operator \(\hat{\mu}_{z}\) is calculated by averaging over a large ensemble of quantum systems within an adequate volume \(V_{p}\) around the position \(z\), and we can write for the macroscopic polarization
\[\begin{split} P_{z,\mathrm{qm}}&=n_{\mathrm{3D}} \operatorname{Tr}\{\hat{\mu}_{z}\hat{\rho}\}=n_{\mathrm{3D}}(\mu_{z,23}\rho_{ 32}+\mu_{z,32}\rho_{23})\\ &=n_{\mathrm{3D}}\mu_{z,23}(\rho_{32}+\rho_{23})\,,\end{split} \tag{28}\]
where \(n_{\mathrm{3D}}\) is the carrier number density. The two classical contributions, \(P_{z,\mathrm{class}}=\epsilon_{0}\chi E_{z}\) and \(\sigma E_{z}\), account for the polarization caused by bulk and waveguide dispersion as well as the material losses.[86]
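To make the coupling structure of Eqs. (27) and (28) explicit, the following dimensionless Python sketch implements a bare-bones 1D leapfrog (Yee-type) update in which the quantum polarization enters as a source term. It omits losses, dispersion and the confinement factor, uses a placeholder function for the density-matrix update, and is not the mbsolve implementation.

```python
import numpy as np

# Dimensionless 1D FDTD sketch of Eqs. (27): E_z and H_y live on a staggered (Yee) grid.
nx, nt = 200, 500
dx, dt = 1.0, 0.5                 # with eps = mu = 1 this respects the Courant limit
eps, mu = 1.0, 1.0

x = np.arange(nx)
E = np.exp(-0.5 * ((x - 50) / 5.0) ** 2)   # seed pulse in E_z
H = np.zeros(nx - 1)                        # H_y on half-integer grid points
P_old = np.zeros(nx)                        # P_z,qm at the previous time step
P = np.zeros(nx)                            # P_z,qm at the current time step

def update_polarization(P_prev, t):
    """Stub for P_z,qm = n_3D * mu_z,23 * (rho_32 + rho_23) from the density-matrix update."""
    return np.zeros_like(P_prev)            # no quantum source in this structural sketch

for n in range(nt):
    # H update, Eq. (27b): mu * dH_y/dt = dE_z/dx
    H += dt / (mu * dx) * np.diff(E)
    # polarization supplied by the quantum system, Eq. (28)
    P_new = update_polarization(P, n * dt)
    dP_dt = (P_new - P_old) / (2.0 * dt)     # centered difference for dP_z,qm/dt
    P_old, P = P, P_new
    # E update, Eq. (27a) without losses and dispersion: eps * dE_z/dt = -dP_z,qm/dt + dH_y/dx
    E[1:-1] += dt / eps * (-dP_dt[1:-1] + np.diff(H) / dx)
```

In the full model, the stub would be replaced by the update of the density matrix elements according to Eqs. (29), including the fluctuation terms.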
Finally, the update equations of the density matrix elements for the QCL laser system depicted in Fig. 2 can be written as
\[\partial_{t}\rho_{32}(t)= -\frac{\mathrm{i}}{\hbar}\Delta_{32}\rho_{32}(t)-\gamma_{23}\rho _{32}(t)+\mathrm{i}\Omega_{1^{\prime}3}\rho_{1^{\prime}2}(t)\] \[+\frac{\mathrm{i}}{\hbar}\mu_{z,23}E_{z}(\rho_{33}(t)-\rho_{22}(t ))+F_{23}(t)\,, \tag{29a}\] \[\partial_{t}\rho_{1^{\prime}3}(t)= -\frac{\mathrm{i}}{\hbar}\Delta_{1^{\prime}3}\rho_{1^{\prime}3}(t )-\gamma_{1^{\prime}3}\rho_{1^{\prime}3}(t)+\mathrm{i}\Omega_{1^{\prime}3}( \rho_{33}(t)\] \[-\rho_{1^{\prime}1^{\prime}}(t))+\frac{\mathrm{i}}{\hbar}\mu_{z,23 }E_{z}\rho_{1^{\prime}2}(t)+F_{31^{\prime}}(t)\,,\] (29b) \[\partial_{t}\rho_{1^{\prime}2}(t)= -\frac{\mathrm{i}}{\hbar}\Delta_{1^{\prime}2}\rho_{1^{\prime}2}(t )-\gamma_{1^{\prime}2}\rho_{1^{\prime}2}(t)+\mathrm{i}\Omega_{1^{\prime}3}\rho _{32}(t)\] \[+\frac{\mathrm{i}}{\hbar}\mu_{z,23}E_{z}\rho_{1^{\prime}3}(t)+F_{ 21^{\prime}}(t)\,,\] (29c) \[\partial_{t}\rho_{33}(t)= -\frac{1}{\tau_{3}}\rho_{33}(t)+r_{32}\rho_{22}(t)+r_{31^{\prime} }\rho_{1^{\prime}1^{\prime}}(t)\] \[-2\hbar^{-1}\mu_{z,23}E_{z}\mathfrak{Im}\{\rho_{32}(t)\}\] \[-2\Omega_{1^{\prime}3}\mathfrak{Im}\{\rho_{1^{\prime}3}(t)\}+F_{ 33}(t)\,,\] (29d) \[\partial_{t}\rho_{22}(t)= r_{23}\rho_{33}(t)-\frac{1}{\tau_{2}}\rho_{22}(t)+r_{21^{ \prime}}\rho_{1^{\prime}1^{\prime}}(t)\] \[+2\hbar^{-1}\mu_{z,23}E_{z}\mathfrak{Im}\{\rho_{32}(t)\}+F_{22}(t)\,,\] (29e) \[\partial_{t}\rho_{1^{\prime}1^{\prime}}(t)= r_{1^{\prime}3}\rho_{33}(t)+r_{1^{\prime}2}\rho_{22}(t)-\frac{1}{ \tau_{1^{\prime}}}\rho_{1^{\prime}1^{\prime}}(t)\] \[+2\Omega_{1^{\prime}3}\mathfrak{Im}\{\rho_{1^{\prime}3}(t)\}+F_{ 1^{\prime}1^{\prime}}(t)\,. \tag{29f}\]
Via the macroscopic polarization \(P_{z,\mathrm{qm}}\), the quantum fluctuations added to the coherence term of Eq. (29a) influence the evolution of the classical optical field. The quantum-mechanical fluctuation terms are based on the symbolic evaluations derived in Sec. II.2 and can be found in Appendix C. For the reduction to a two-level system, we obtain noise terms similar to those derived by Drummond and Raymer.[97] However, unlike Drummond and Raymer, we can ensure the preservation of the physical properties of the density matrix, i.e., positive definiteness and unit trace. This is accomplished by a suitable choice of the submatrices \(\mathbf{B}_{\nu}(\mathbf{A}_{\nu},t)\) depicted in Table 1.
## III Simulation
As mentioned above, the Maxwell-density matrix equations are commonly treated in the so-called rotating-wave/slowly varying amplitude approximation to reduce the numerical load, which is only valid for relatively narrowband (and not too strong) optical fields.[67; 69; 70] However, as discussed in the introduction, in the framework of this paper a broadband frequency comb with many modes is highly beneficial, and QCLs even offer the potential for generating spectra extending over a full octave and beyond.[71] None of the available open-source platforms are suitable for our purposes, mostly because they employ the rotating-wave approximation. Thus, we have developed our own open-source project mbsolve, allowing for numerically extensive simulations of multilevel systems based on the full-wave Maxwell-density matrix equations.[85; 131] The development of the code base follows several principles. The generalized Lindblad equation (Eq. (26)) is used instead of the usual, quite restrictive two-level Bloch equation model. We have developed numerical methods that preserve physical properties, such as the complete positivity and trace preservation of the density matrix.[132; 69] This is especially important in the context of long-term simulations, as required for frequency comb modeling. Furthermore, a computational speedup is obtained by using parallelization techniques,[133] which is also especially important for long-term simulations of rather complex quantum systems, e.g. intracavity THz frequency comb difference frequency generation by mid-IR QCLs.[134] Our scientific software package mbsolve is developed following sustainable software engineering strategies and includes all common and essential best software engineering practices.[135; 133] It is based on C++ for performance reasons and features an easy-to-use Python interface that facilitates the setup of the simulation and of the active quantum system of the low-dimensional optoelectronic structures. The central part of the software is the mbsolve-lib base library, providing a framework for defining a simulation setup and the infrastructure to add solver and writer components. Importantly, mbsolve supports different numerical methods for solving the Lindblad equation,[132] as well as different parallelization techniques, e.g. OpenMP for shared memory systems. For a detailed package description, the reader is referred to Ref. [85].
As pointed out above, we extended our code base by considering vacuum fluctuations due to spontaneous emission and fluctuations associated with the electronic transport.[89; 90; 110] The solver class **solver_cpu_fdtd** has the method run, which executes the simulation loop by updating the magnetic field \(H_{y}\), electric field \(E_{z}\), density matrix \(\rho\) and polarization \(P_{z,\text{qm}}\) for all spatial and temporal grid points in the Yee grid. This update procedure is explained in detail in Ref. [85]. We add a new density matrix algorithm class, which is based on the Strang operator splitting method,[132, 136] to account for fluctuations accompanying the electronic transport and vacuum fluctuations. The density algorithm class **algo_lindblad_reg_cayley_qnoise** contains the method **propagate_fluctuation**, which calculates the fluctuations for an update step by adding the product of the fluctuation superoperator and the time interval \((\Delta t\cdot\mathcal{F}(\rho))\) to the updated density matrix \(\rho^{n+1/2}\). Here, we investigate active SCL gain media in lasing operation above threshold. In the simulations, we have to find a balance between numerical efficiency and modeling accuracy. Most of the fluctuation terms given in Sec. II arise from the operator ordering when reducing the operator equations to c-number Langevin equations. As has been shown in the literature,[108, 90] these terms are negligible in the lasing regime above threshold with strong optical fields in the laser cavity. The fluctuation terms for an \(N\)-level system featuring diagonal elements \(F_{ii}\) and off-diagonal elements \(F_{ij}=F_{ji}^{\dagger}\) can thus be significantly reduced for the numerical treatment and are described by
\[F_{ii}^{j}=-F_{jj}^{i}=\xi_{1,ij}\sqrt{\frac{r_{ji}\rho_{ii}+r_{ij}\rho_{jj}}{N_{\text{cell}}}}\,, \tag{30a}\] \[F_{ii}=\sum_{j\neq i}F_{ii}^{j}\,, \tag{30b}\] \[F_{ij}=(\xi_{2,ij}+\text{i}\xi_{3,ij})\sqrt{\frac{-\tau_{j}^{-1}\rho_{jj}+\sum_{n\neq j}r_{jn}\rho_{nn}+2\gamma_{ij}\rho_{jj}}{2N_{\text{cell}}}}\,,\quad\text{for }i>j\,, \tag{30c}\]
where \(N_{\text{cell}}\) is the number of carriers in one grid cell. The \(\xi_{1,ij}\), \(\xi_{2,ij}\) and \(\xi_{3,ij}\) are real Gaussian random numbers and fulfill the correlation function
\[\langle\xi_{k,ij}(t)\,\xi_{l,mn}(t^{\prime})\rangle=\delta_{kl}\delta_{im}\delta_{jn}\delta(t-t^{\prime})\,. \tag{31}\]
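As an illustration of how Eqs. (30) and (31) translate into one update step, the following Python sketch assembles the reduced fluctuation terms for a generic \(N\)-level system. The function signature and array conventions are chosen for readability only and do not correspond to the mbsolve classes mentioned above.

```python
import numpy as np

rng = np.random.default_rng()

def fluctuation_terms(r, gamma, rho_diag, n_cell, dt):
    """Reduced noise terms of Eqs. (30) for one grid cell and one step of length dt.

    r[i, j]     : scattering rate r_ij from level j to level i (numpy array, r[i, i] = 0)
    gamma[i, j] : dephasing rate gamma_ij
    rho_diag[i] : population rho_ii
    Delta-correlated noise (Eq. (31)) is represented in discrete time as xi_n / sqrt(dt).
    """
    n = len(rho_diag)
    inv_tau = r.sum(axis=0)                       # 1/tau_j = sum_i r_ij
    F_diag = np.zeros(n)
    F_off = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(i):                        # pairs with i > j
            # pairwise population noise, Eq. (30a), anti-correlated between levels i and j
            f = rng.standard_normal() * np.sqrt(
                (r[j, i] * rho_diag[i] + r[i, j] * rho_diag[j]) / n_cell)
            F_diag[i] += f
            F_diag[j] -= f
            # coherence noise, Eq. (30c); the clip guards against negative round-off
            var = (-inv_tau[j] * rho_diag[j]
                   + sum(r[j, m] * rho_diag[m] for m in range(n) if m != j)
                   + 2.0 * gamma[i, j] * rho_diag[j]) / (2.0 * n_cell)
            xi2, xi3 = rng.standard_normal(2)
            F_off[i, j] = (xi2 + 1j * xi3) * np.sqrt(max(var, 0.0))
            F_off[j, i] = np.conj(F_off[i, j])    # F_ij = F_ji^dagger
    return F_diag / np.sqrt(dt), F_off / np.sqrt(dt)
```

The returned terms would then enter the density-matrix update as \(\Delta t\cdot\mathcal{F}(\rho)\), in analogy to the **propagate_fluctuation** step described above.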
For future applications in which a more detailed treatment of the fluctuations would be beneficial, it might be necessary to extend our numerical model by the additional noise terms derived in the previous section. Furthermore, we have implemented a class **ic_density_random_2lvl**, which represents random initial conditions for the common Maxwell-Bloch two-level system. As the dipole moment operators \(\sigma_{12},\,\sigma_{21}\) and the atomic operators \(\sigma_{11},\,\sigma_{22}\) do not commute, we have to take into account a non-vanishing initial stochastic value for the polarization term, following the uncertainty principle.[137, 90] The tipping angle \(\theta\) is obtained by drawing a random number from a Gaussian distribution with a standard deviation \(\sigma=2N_{\text{cell}}^{-1/2}\), and the angle \(\phi\) in the xy-plane is obtained by drawing a random number from a uniform distribution.[110]
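A possible realization of such random initial conditions, assuming an initially inverted two-level ensemble and the standard mapping of the tipping angle onto the Bloch vector, could look as follows; the variable names are illustrative and not those of the implemented class.

```python
import numpy as np

rng = np.random.default_rng()

def random_initial_bloch(n_cell):
    """Random initial Bloch vector for an initially inverted two-level ensemble.

    The small tipping angle theta is Gaussian with standard deviation
    2/sqrt(N_cell); the azimuthal angle phi is uniform in [0, 2*pi).
    """
    theta = rng.normal(loc=0.0, scale=2.0 / np.sqrt(n_cell))
    phi = rng.uniform(0.0, 2.0 * np.pi)
    # Bloch components: rho_1, rho_2 (polarization quadratures), rho_3 (inversion)
    rho1 = np.sin(theta) * np.cos(phi)
    rho2 = np.sin(theta) * np.sin(phi)
    rho3 = np.cos(theta)              # close to full inversion for small theta
    return rho1, rho2, rho3

print(random_initial_bloch(n_cell=1.0e6))
```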
### Superfluorescence and amplified spontaneous emission
The system was tested with a superfluorescence (SF) setup in a two-level configuration.[114, 90] This setup describes the spontaneous build-up of a macroscopic coherent dipole moment in an initially inverted system, resulting in a collective emission of a superfluorescent pulse. This behavior can be reproduced numerically within our mbsolve framework by simulating an ensemble of excited ions and using a dephasing time \(T_{2}=100\,\)ps. The simulated SF pulse is illustrated in Fig. 3(a), and compares well with previous numerical and experimental findings.[114, 138, 90] By increasing the collisional dephasing rate within the system, the SF pulse is significantly disturbed and gets broadened until the spontaneous build-up of the coherent dipole moment is prevented. For a dephasing time \(T_{2}=14.3\,\)ps below the critical point, the SF pulse is replaced by amplified spontaneous emission (ASE). The increased noise amplitude accompanying the smaller dephasing time is crucial for the modeling of ASE,
Figure 3: Simulation results for a superfluorescence test setup[90] in an initially inverted two-level system using the Maxwell-density matrix Langevin equations. (a) Cooperative emission characteristic of superfluorescence for the dephasing time \(T_{2}=100\,\)ps. (b) Amplified spontaneous emission pulse for the dephasing time \(T_{2}=14.3\,\)ps.
which cannot be reproduced otherwise. The ASE simulation results are presented in Fig. 3(b).
Furthermore, the degree of decoherence is studied using the quantity \(\rho_{3}/\rho_{\mathrm{B}}\), where \(\rho_{\mathrm{B}}=\sqrt{\rho_{1}^{2}+\rho_{2}^{2}+\rho_{3}^{2}}\) is the length of the Bloch vector. When the dephasing time \(T_{2}\) is long (Fig. 3(a)), the population inversion \(\rho_{3}\) is quickly depleted through the spontaneous build-up of the macroscopic dipole moment and the SF emission; this depletion clearly outpaces the decay of \(\rho_{1}\) and \(\rho_{2}\) and results in a rapid drop of \(\rho_{3}/\rho_{\mathrm{B}}\). In the second case (Fig. 3(b)), we have used a smaller dephasing time \(T_{2}\), which prevents the build-up of the macroscopic dipole moment and limits the radiative decay. In this decohered state, \(\rho_{3}/\rho_{\mathrm{B}}\) decays very slowly and stays close to one.
### Noise characteristics in THz QCL harmonic frequency comb emission
In connection with the experimental investigation of intensity correlations in QCLs,[52] we aim to characterize the noise properties of a self-starting THz QCL HFC setup. Here, the THz QCL active region is based on a homogeneous four-quantum well design with a diagonal transition.[139] The charge carrier transport in the active gain medium at a bias of 50 mV/period is analyzed using our in-house Monaco framework, consisting of a Schrödinger-Poisson solver and a density matrix-ensemble Monte Carlo modeling tool.[140, 141, 117, 116, 117] For an appropriate description of the physical properties, we consider five wavefunctions in the active quantum well heterostructure. Furthermore, one incoherent tunneling transition from the injector state into the upper laser level and one optical transition are specified for the quantum mechanical description of the QCL system in the dynamical simulation. The Python script **forrer_2021_50mVperperiod.py** with the simulation setup to start the mbsolve simulation can be found in the GitHub repository.[131]
In the following, we present simulation results for a 4 mm long double-metal THz QCL with a free spectral range (FSR) of 9.94 GHz. The intensity spectrum of the THz HFC at 3.5 THz with a mode spacing of 5 FSR is illustrated in Fig. 4(a). The THz QCL emits a broadband HFC with a mode spacing of 49.7 GHz, i.e., five times the cavity repetition rate. In Fig. 4(b), the temporal evolution of the intensity at the facet and the calculated instantaneous frequency are depicted. We can identify a regular field pattern, which shows a periodic repetition with five times the RT. Here, only the three strongest modes are involved in the temporal evolution of the instantaneous frequency, as their intensities are of similar magnitude and contribute most of the overall comb emission power. To specify the degree of coherence of the obtained HFC and for comparison with the experimental findings, we investigate the RF spectrum using an observation time window of 2 \(\mu\)s. The obtained simulation results are shown in Fig. 5(a), and the clear appearance of the harmonic beatnote proves the purity of the harmonic state. The linewidth is substantially below the numerical frequency resolution of 500 kHz, which is confirmed by the zoom on the extremely narrow harmonic beatnote in the inset of Fig. 5(a). In addition, we can identify sub-beatnotes, which arise due to the beating of the center mode with the sub-comb lines. These sub-comb lines are generated by FWM processes, where the strong harmonic sidemodes act as pump modes and generate weak sidebands with a frequency spacing of 1 FSR from the corresponding pump modes. As can be seen in Fig. 4(a), the intensities of the sub-modes are at least \(\sim 5\) orders of magnitude smaller than those of the pump modes.
To further analyze the noise characteristics of the THz QCL HFC setup, we calculate the relative intensity noise (RIN) for the total output power and for the power of the five harmonic comb lines contributing mostly to the HFC
Figure 4: Maxwell-density matrix Langevin simulation results of HFC emission with a mode spacing of 5 FSR in a 4 mm long THz QCL device with a metal-metal waveguide at 80 K and for \(V=50\) mV/period. (a) Intensity spectra of the optical radiation at the facet. (b) Simulated instantaneous intensity at the facet and calculated instantaneous frequency from the Hilbert transform of the simulated electric field over a single roundtrip time (RT).
emission. Here, the RIN spectrum can be calculated by
\[\text{RIN}_{i}(f)=\lim_{T\to\infty}\frac{1}{T}\frac{\left|\int_{0}^{T}\left[P_{i}(t)-\langle P_{i}(t)\rangle\right]\mathrm{e}^{-\mathrm{i}2\pi ft}\,\mathrm{d}t\right|^{2}}{\langle P_{i}(t)\rangle^{2}}\,, \tag{32}\]
where \(T\) denotes the simulation time, and \(P_{i}\) is either the power of a specific mode \(i\) or the total power \(P_{\text{all}}\). By numerically filtering the electric field at the facet \(E_{\text{facet}}(t)\) using a filter with a 3 dB bandwidth of 20 GHz, we can extract the temporal electric field components \(E_{i}(t)\) of the individual modes. The RIN results are depicted in Fig. 5(b) for the total power \(P_{\text{all}}\) and the power of the five central harmonic modes \(P_{i}\) with indices \(i=1\dots 5\). The total power RIN is around \(-180\) dBc/Hz, whereas for the three central harmonic modes, which have similar power, a RIN around \(-155\) dBc/Hz is calculated. For the remaining two weaker modes 1 and 5, a higher RIN is obtained. This is in very good agreement with the experimental findings of a three-mode mid-IR HFC QCL setup.[52] For increasing power, the RIN of the sidemodes decreases to that of the central mode, while sidebands closer to threshold exhibit a noisier behavior. Furthermore, we identify an overlapping RIN for sidemodes featuring a comparable power level, which indicates a comparable noise level. A similar result could be retrieved from the mid-IR HFC RIN measurements.[52]
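A direct numerical evaluation of Eq. (32) from a sampled power trace can be sketched as follows. The snippet only fixes the conventions (finite observation time, one-sided spectrum) and is an illustration; the post-processing of the actual simulation data may differ in windowing and averaging, and the synthetic trace at the end is purely for demonstration.

```python
import numpy as np

def rin_spectrum(power, dt):
    """Relative intensity noise of a sampled power trace P_i(t) with time step dt.

    Finite-T version of Eq. (32): RIN(f) = |FFT{P - <P>} * dt|^2 / (T * <P>^2),
    returned in dB/Hz together with the corresponding frequency axis.
    """
    p = np.asarray(power, dtype=float)
    T = len(p) * dt
    p_mean = p.mean()
    spectrum = np.fft.rfft(p - p_mean) * dt          # approximates the Fourier integral
    rin = np.abs(spectrum) ** 2 / (T * p_mean ** 2)
    freq = np.fft.rfftfreq(len(p), d=dt)
    return freq, 10.0 * np.log10(rin + 1e-300)       # dB/Hz; small offset avoids log(0)

# Illustration with a synthetic trace: constant power plus weak white noise
dt = 1e-12                                           # 1 ps sampling
p = 1.0 + 1e-4 * np.random.default_rng(0).standard_normal(200_000)
f, rin_db = rin_spectrum(p, dt)
```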
## IV Conclusion
In this paper, we have theoretically derived a c-number Langevin approach for a three-level quantum system from the non-classical operator description within the Heisenberg-Langevin equations. Our approach is an extension of the well-known two-level quantum theory by Drummond and Raymer,[97] where we additionally take into account incoherent tunneling injection into the upper laser level. Within the generalized Maxwell-density matrix Langevin equations we can ensure the preservation of the physical properties of the density matrix, i.e., positive definiteness and unit trace. Furthermore, by including the derived noise terms into our open-source simulation tool mbsolve, we can model the fluctuations accompanying electronic transport and spontaneous emission in the dynamical simulations of light-matter interaction in multilevel quantum optoelectronic systems such as QCLs and QD lasers. The simulation approach is tested using a superfluorescence setup, where we prove the validity of our implementation by obtaining an excellent match with previous experimental and theoretical results.[110, 114, 138, 90] Additionally, we have characterized the noise properties of a coherent THz QCL HFC setup and obtained a good match with experimental findings.[52, 139] Our modeling approach based on the generalized Maxwell-density matrix Langevin equations shows great potential for the theoretical investigation of intermodal intensity correlations in photonic devices and the development of low-noise integrated light emitters also with regard to the generation of non-classical light.
###### Acknowledgements.
The authors acknowledge financial support by the European Union's QuantERA II [G.A. n. 101017733] - QATACOMB Project "Quantum correlations in terahertz QCL combs" (Funding organization: DFG - Germany [Project n. 491801597]), by the European Union's Research and Innovation Programmes Horizon 2020 and Horizon Europe with the Qombs Project [G.A. n. 820419] "Quantum simulation and entanglement engineering in quantum cascade laser frequency combs", by Deutsche Forschungsgemeinschaft (DFG) under the DFG DACH project [Project No. 471090402], by the FWF Project I 5682-N "Cavity-assisted non-classical light generation"
Figure 5: (a) Simulated RF spectrum of the THz QCL HFC setup with a clear beatnote signal at 49.7 GHz. Inset, zoom on the harmonic beatnote, indicating a narrow linewidth below the numerical frequency resolution (500 kHz). (b) Calculated RIN spectra associated with the total power \(P_{\text{all}}\) (blue) and the modal power \(P_{i}\) of each of the five harmonic modes contributing mostly to the HFC emission (The colors of the individual RIN spectra correspond to those of the individual comb lines in Fig. 4(a)).
and by the ESA (Discovery EISI) Project 4000142337: "Simulation toolbox for unconditionally secure on-chip satellite quantum communication networks operating in the telecom wavelength range".
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Appendix A Calculation of the diffusion coefficient within the quantum and the c-number Langevin treatment
By taking into account the orthogonality of the levels \(\langle i|j\rangle=\delta_{ij}\), we obtain
\[\hat{\sigma}_{23}^{\dagger}\hat{\sigma}_{23}=(|3\rangle\langle 2|2\rangle \langle 3|)=(|3\rangle\langle 3|)=\hat{\sigma}_{33}\,. \tag{10}\]
The diffusion coefficient in Eq. (9) is calculated by using the _generalized Einstein relation_ from Eq. (6). The detailed calculation is given by
\[\begin{split} 2\langle\hat{D}_{3223}(t)\rangle_{\rm R}& =\partial_{t}\langle\hat{\sigma}_{33}(t)\rangle_{\rm R}-\langle \hat{M}_{23}^{\dagger}(t)\hat{\sigma}_{23}(t)\rangle_{\rm R}-\langle\hat{ \sigma}_{23}^{\dagger}(t)\hat{M}_{23}(t)\rangle_{\rm R}\\ &=\bigg{\langle}-\frac{1}{\tau_{3}}\hat{\sigma}_{33}(t)+r_{32} \hat{\sigma}_{22}(t)+r_{31^{\prime}}\hat{\sigma}_{1^{\prime}1^{\prime}}(t)+{ \rm i}g\Big{(}\hat{a}^{\dagger}(t)\hat{\sigma}_{23}(t)-\hat{a}(t)\hat{\sigma} _{23}^{\dagger}(t)\Big{)}-{\rm i}\Omega_{1^{\prime}3}\Big{(}\hat{\sigma}_{3 1^{\prime}}^{\dagger}(t)-\hat{\sigma}_{31^{\prime}}(t)\Big{)}\bigg{\rangle}_{ \rm R}\\ &\quad-\bigg{\langle}\bigg{(}\frac{{\rm i}}{\hbar}\Delta_{32} \hat{\sigma}_{23}^{\dagger}(t)-\gamma_{23}\hat{\sigma}_{23}^{\dagger}(t)-{ \rm i}g(\hat{\sigma}_{33}(t)-\hat{\sigma}_{22}(t))\hat{a}^{\dagger}(t)-{\rm i }\Omega_{1^{\prime}3}\hat{\sigma}_{1^{\prime}2}(t)\bigg{)}\hat{\sigma}_{23}(t )\bigg{\rangle}_{\rm R}\\ &\quad-\bigg{\langle}\hat{\sigma}_{23}^{\dagger}(t)\bigg{(}- \frac{{\rm i}}{\hbar}\Delta_{32}\hat{\sigma}_{23}(t)-\gamma_{23}\hat{\sigma}_ {23}(t)+{\rm i}g(\hat{\sigma}_{33}(t)-\hat{\sigma}_{22}(t))\hat{a}(t)+{\rm i }\Omega_{1^{\prime}3}\hat{\sigma}_{21^{\prime}}(t)\bigg{)}\bigg{\rangle}_{ \rm R}\\ &=\bigg{(}2\gamma_{23}-\frac{1}{\tau_{3}}\bigg{)}\langle\hat{ \sigma}_{33}(t)\rangle_{\rm R}+r_{32}\langle\hat{\sigma}_{22}(t)\rangle_{\rm R }+r_{31^{\prime}}\langle\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\rangle_{\rm R} \,.\end{split} \tag{11}\]
In addition, we prove the difference in diffusion coefficients, which arises through the transition from operator to c-number Langevin equations. Therefore, we calculate the diffusion coefficient \(D_{3333}(t)\) based on the theoretical description presented in Sec. II.2. By the use of Eqs. (6) and (7)(e), we obtain
\[\begin{split}\partial_{t}\langle\hat{\sigma}_{33}(t)\hat{\sigma} _{33}(t)\rangle_{\rm R}=&-\frac{2}{\tau_{3}}\langle\hat{\sigma} _{33}(t)\hat{\sigma}_{33}(t)\rangle_{\rm R}+r_{32}\Big{(}\langle\hat{\sigma}_{ 22}(t)\hat{\sigma}_{33}(t)\rangle_{\rm R}+\langle\hat{\sigma}_{33}(t)\hat{ \sigma}_{22}(t)\rangle_{\rm R}\Big{)}+r_{31^{\prime}}\Big{(}\langle\hat{ \sigma}_{1^{\prime}1^{\prime}}(t)\hat{\sigma}_{33}(t)\rangle_{\rm R}\\ &+\langle\hat{\sigma}_{33}(t)\hat{\sigma}_{1^{\prime}1^{\prime}}(t )\rangle_{\rm R})+{\rm i}g\Big{(}\langle\hat{a}^{\dagger}(t)\hat{\sigma}_{23}(t )\hat{\sigma}_{33}(t)\rangle_{\rm R}-\langle\hat{a}(t)\hat{\sigma}_{23}^{ \dagger}(t)\hat{\sigma}_{33}(t)\rangle_{\rm R}+\langle\hat{a}^{\dagger}(t) \hat{\sigma}_{33}(t)\hat{\sigma}_{23}(t)\rangle_{\rm R}\\ &-\langle\hat{\underline{a}}(t)\hat{\sigma}_{33}(t)\hat{\sigma}_{ 23}^{\dagger}(t)\rangle_{\rm R}\Big{)}-{\rm i}\Omega_{1^{\prime}3}\Big{(} \langle\hat{\sigma}_{31^{\prime}}^{\dagger}(t)\hat{\sigma}_{33}(t)\rangle_{ \rm R}-\langle\hat{\sigma}_{31^{\prime}}(t)\hat{\sigma}_{33}(t)\rangle_{\rm R }+\langle\hat{\sigma}_{33}(t)\hat{\sigma}_{31^{\prime}}^{\dagger}(t)\rangle_{ \rm R}\\ &-\langle\hat{\sigma}_{33}(t)\hat{\sigma}_{31^{\prime}}(t)\rangle_{ \rm R}\Big{)}+2\langle\hat{D}_{3333}(t)\rangle_{\rm R}\,.\end{split} \tag{12}\]
Here, the terms, which are underlined in red, are not in the chosen order defined in Sec. II.2. The commutation relations are used to bring these terms into chosen order, and by exploiting the level orthogonality similar to Eq. (10) we derive
\[[\hat{\sigma}_{22}(t),\hat{\sigma}_{33}(t)]=\hat{\sigma}_{22}(t)\hat{\sigma}_{33}(t)-\hat{\sigma}_{33}(t)\hat{\sigma}_{22}(t)=0\,, \tag{13a}\] \[[\hat{\sigma}_{23}(t),\hat{\sigma}_{33}(t)]=\hat{\sigma}_{23}(t)\,, \tag{13b}\] \[[\hat{\sigma}_{33}(t),\hat{\sigma}_{23}^{\dagger}(t)]=\hat{\sigma}_{23}^{\dagger}(t)\,, \tag{13c}\] \[[\hat{\sigma}_{31^{\prime}}(t),\hat{\sigma}_{33}(t)]=-\hat{\sigma}_{31^{\prime}}(t)\,, \tag{13d}\] \[[\hat{\sigma}_{33}(t),\hat{\sigma}_{31^{\prime}}^{\dagger}(t)]=-\hat{\sigma}_{31^{\prime}}^{\dagger}(t)\,. \tag{13e}\]
With this, we can restructure Eq. (12) as follows:
\[\begin{split}\partial_{t}\langle\hat{\sigma}_{33}(t)\hat{\sigma}_{33}(t)\rangle_{\mathrm{R}}=&-\frac{2}{\tau_{3}}\langle\hat{\sigma}_{33}(t)\hat{\sigma}_{33}(t)\rangle_{\mathrm{R}}+2r_{32}\langle\hat{\sigma}_{33}(t)\hat{\sigma}_{22}(t)\rangle_{\mathrm{R}}+2r_{31^{\prime}}\langle\hat{\sigma}_{33}(t)\hat{\sigma}_{1^{\prime}1^{\prime}}(t)\rangle_{\mathrm{R}}\\ &+2\mathrm{i}g\Big{(}\langle\hat{a}^{\dagger}(t)\hat{\sigma}_{33}(t)\hat{\sigma}_{23}(t)\rangle_{\mathrm{R}}-\langle\hat{a}(t)\hat{\sigma}_{23}^{\dagger}(t)\hat{\sigma}_{33}(t)\rangle_{\mathrm{R}}\Big{)}-2\mathrm{i}\Omega_{1^{\prime}3}\Big{(}\langle\hat{\sigma}_{31^{\prime}}^{\dagger}(t)\hat{\sigma}_{33}(t)\rangle_{\mathrm{R}}-\langle\hat{\sigma}_{33}(t)\hat{\sigma}_{31^{\prime}}(t)\rangle_{\mathrm{R}}\Big{)}\\ &+2\langle\hat{D}_{3333}(t)\rangle_{\mathrm{R}}+\mathrm{i}g\Big{(}\langle\hat{a}^{\dagger}(t)\hat{\sigma}_{23}(t)\rangle_{\mathrm{R}}-\langle\hat{a}(t)\hat{\sigma}_{23}^{\dagger}(t)\rangle_{\mathrm{R}}\Big{)}+\mathrm{i}\Omega_{1^{\prime}3}\Big{(}\langle\hat{\sigma}_{31^{\prime}}^{\dagger}(t)\rangle_{\mathrm{R}}-\langle\hat{\sigma}_{31^{\prime}}(t)\rangle_{\mathrm{R}}\Big{)}\,.\end{split} \tag{100}\]
Here, the additional terms resulting from the operator ordering are underlined in green. With the use of Eqs. (14) and (15) we can derive the corresponding c-number equation
\[\begin{split}\partial_{t}\langle\sigma_{33}(t)\sigma_{33}(t)\rangle_{\mathrm{R}}=&-\frac{2}{\tau_{3}}\langle\sigma_{33}(t)\sigma_{33}(t)\rangle_{\mathrm{R}}+2r_{32}\langle\sigma_{33}(t)\sigma_{22}(t)\rangle_{\mathrm{R}}+2r_{31^{\prime}}\langle\sigma_{33}(t)\sigma_{1^{\prime}1^{\prime}}(t)\rangle_{\mathrm{R}}+2\mathrm{i}g(\langle a^{*}(t)\sigma_{33}(t)\sigma_{23}(t)\rangle_{\mathrm{R}}\\ &-\langle a(t)\sigma_{23}^{*}(t)\sigma_{33}(t)\rangle_{\mathrm{R}})-2\mathrm{i}\Omega_{1^{\prime}3}(\langle\sigma_{31^{\prime}}^{*}(t)\sigma_{33}(t)\rangle_{\mathrm{R}}-\langle\sigma_{33}(t)\sigma_{31^{\prime}}(t)\rangle_{\mathrm{R}})+2\langle D_{3333}(t)\rangle_{\mathrm{R}}\,.\end{split} \tag{101}\]
Since the left-hand sides of Eqs. (100) and (101) are identical, requiring the equivalence of their right-hand sides yields the diffusion coefficient in Eq. (19)(a).
## Appendix B Complete diffusion matrix for a three-level THz QCL system
The complete diffusion matrix for a three-level THz QCL system with incoherent tunneling injection is derived within the framework of the c-number Langevin theory and is given by
\[A=\begin{bmatrix}0&\sqrt{n_{\mathrm{th}}\kappa}&0&0&0\\ \sqrt{n_{\mathrm{th}}\kappa}&0&0&0&0\\ 0&0&-2\mathrm{i}ga^{*}\sigma_{23}^{*}&\mathrm{i}ga^{*}\sigma_{31^{\prime}}^{* }&-\mathrm{i}ga^{*}\sigma_{21^{\prime}}^{*}\\ 0&0&\mathrm{i}ga^{*}\sigma_{31^{\prime}}^{*}&2\mathrm{i}\Omega_{1^{\prime}3} \sigma_{31^{\prime}}^{*}&0\\ 0&0&-\mathrm{i}ga^{*}\sigma_{21^{\prime}}^{*}&0&0\\ 0&0&-r_{32}\sigma_{23}^{*}&-\mathrm{i}ga\sigma_{21^{\prime}}^{*}+(r_{23}+r_{ 1^{\prime}3})\sigma_{31^{\prime}}^{*}&-r_{32}\sigma_{21^{\prime}}^{*}\\ 0&0&(r_{32}+r_{1^{\prime}2})\sigma_{23}^{*}&\mathrm{i}ga\sigma_{21^{\prime}}^ {*}-r_{23}\sigma_{31^{\prime}}^{*}&(r_{32}+r_{1^{\prime}2})\sigma_{21^{\prime} }^{*}\\ 0&0&-r_{1^{\prime}2}\sigma_{23}^{*}&-r_{1^{\prime}3}\sigma_{31^{\prime}}&-r_{ 1^{\prime}2}\sigma_{21^{\prime}}^{*}\\ 0&0&(\gamma_{23}+\gamma_{1^{\prime}2}-\gamma_{1^{\prime}3})\sigma_{31^{ \prime}}&0&(\gamma_{2^{\prime}1^{\prime}2-r_{1^{\prime}3})\sigma_{1^{\prime} 1^{\prime}}}\\ 0&0&0&(2\gamma_{1^{\prime}3}-r_{2^{\prime}1^{\prime}}-r_{31^{\prime}3^{ \prime}})\sigma_{1^{\prime}1^{\prime}}&+r_{1^{\prime}2}\sigma_{22}+r_{1^{ \prime}3}\sigma_{33}^{*}&0\\ 0&0&0&(2\gamma_{23}-r_{32}-r_{1^{\prime}3})\sigma_{33}&0\\ 0&0&(2\gamma_{23}-r_{32}-r_{1^{\prime}3})\sigma_{33}&0&(\gamma_{1^{\prime}2}+ \gamma_{23}-\gamma_{1^{\prime}3})\sigma_{31^{\prime}}^{*}\\ \end{bmatrix}\]
Appendix C Quantum mechanical fluctuation terms within the generalized Maxwell-density matrix Langevin equations
The quantum-mechanical fluctuation terms for the three-level QCL quantum system are derived within the framework of the Langevin theory. In this paper, we have calculated the full diffusion matrix resulting from the c-number Langevin equations. Exploiting the positive semi-definiteness of the diffusion matrix, one can show that there exists a set of Ito stochastic differential equations equivalent to the Langevin equations. We can factorize the diffusion matrix to obtain a noise matrix that can be directly integrated into the Maxwell-density matrix approach for numerical modeling of fluctuations in dynamical optoelectronic devices. With a suitable choice of the noise matrix, one can guarantee a completely positive trace-preserving update map for long-term simulations. For the three-level QCL system, the fluctuation terms will fully account for the influence of the reservoirs and the properties of the nonlinear coupling between QCL system and optical field, including the incoherent tunneling transition, and can be represented as follows:
\[F_{23}(t) =\xi_{11}(t)\sqrt{r_{32}}+\xi_{14}(t)\sqrt{r_{1^{\prime}2}}-\xi_ {24}(t)\sqrt{2\mathrm{i}\mu_{z,23}E_{z}(t)\rho_{32}(t)}-\xi_{31}^{*}(t)\frac{ \mathrm{i}\mu_{z,23}E_{z}(t)}{2}+\xi_{32}^{*}(t)\frac{\mathrm{i}\mu_{z,23}E_{z} (t)}{2}\] \[\quad+\xi_{33}^{*}(t)\sqrt{\frac{\gamma_{1^{\prime}2}-\gamma_{1^ {\prime}3}+\gamma_{23}}{2}}+\xi_{41}^{*}(t)\bigg{(}\frac{\gamma_{1^{\prime}3}- \gamma_{1^{\prime}2}-\gamma_{23}}{2}+\frac{r_{32}\rho_{22}(t)}{2}+\frac{2 \gamma_{23}-r_{1^{\prime}3}-r_{23}}{2}\rho_{33}(t)-\frac{\mu_{z,23}^{2}E_{z}(t )^{2}}{2}\]
\[F_{31^{\prime}}(t) =\xi_{13}(t)\sqrt{r_{1^{\prime}2}}+\xi_{16}(t)\sqrt{r_{32}}+\xi_{32}(t )\rho_{1^{\prime}2}(t)+\xi_{33}^{*}(t)\sqrt{\frac{\gamma_{1^{\prime}2}-\gamma_{1 ^{\prime}3}+\gamma_{23}}{2}}\rho_{1^{\prime}3}(t)+\xi_{43}^{*}(t)\bigg{(}\frac{2 \gamma_{1^{\prime}3}-\gamma_{1^{\prime}2}-\gamma_{23}}{2}\rho_{1^{\prime}1^{ \prime}}(t)\] \[\quad+\frac{r_{1^{\prime}2}}{2}\rho_{22}(t)-|\rho_{21^{\prime}}(t )|^{2}+\frac{r_{1^{\prime}3}}{2}\rho_{33}(t)-r_{1^{\prime}2}-\gamma_{32}}{2} \rho_{31^{\prime}}(t)|^{2}\bigg{)}^{1/2}\,,\] (43c) \[F_{33}(t) =-\xi_{11}^{*}(t)\frac{\sqrt{r_{32}}\rho_{32}(t)}{2}-\xi_{11}(t) \frac{\sqrt{r_{32}}\rho_{23}(t)}{2}+\xi_{12}^{*}(t)\frac{\sqrt{r_{1^{\prime}3 }}\rho_{1^{\prime}3}(t)}{2}+\xi_{11}(t)\frac{\sqrt{r_{1^{\prime}3}}\rho_{21^{ \prime}}(t)}{2}+\xi_{15a}^{*}(t)\frac{\sqrt{r_{23}}\rho_{1^{\prime}3}(t)}{2}\] \[\quad+\xi_{15a}(t)\frac{\sqrt{r_{23}}\rho_{31^{\prime}}(t)}{2}- \xi_{15b}^{*}(t)\frac{\rho_{1^{\prime}2}(t)}{2}-\xi_{15b}(t)\frac{\rho_{21^{ \prime}}(t)}{2}-\xi_{16}^{*}(t)\frac{\sqrt{r_{32}}\rho_{1^{\prime}2}(t)}{2}- \xi_{16}(t)\frac{\sqrt{r_{32}}\rho_{21^{\prime}}(t)}{2}\] \[\quad-\rho_{23}(t))\bigg{)}^{1/2}+\xi_{22}(t)\bigg{(}\mathrm{i} \Omega_{31^{\prime}}\big{(}-\rho_{1^{\prime}3}(t)+\rho_{31^{\prime}}(t)\big{)} +r_{31^{\prime}}\rho_{1^{\prime}1^{\prime}}(t)+r_{1^{\prime}3}\big{(}\rho_{33 }(t)-|\rho_{1^{\prime}3}(t)|^{2}\big{)}\bigg{)}^{1/2},\] (43d) \[F_{22}(t) =\xi_{11}^{*}(t)\frac{\sqrt{r_{32}}\rho_{32}(t)}{2}+\xi_{11}(t) \frac{\sqrt{r_{32}}\rho_{23}(t)}{2}+\xi_{13}^{*}(t)\frac{\sqrt{r_{1^{\prime}2 }}\rho_{1^{\prime}2}(t)}{2}+\xi_{13}(t)\frac{\sqrt{r_{1^{\prime}2}}\rho_{21^ {\prime}}(t)}{2}+\xi_{14}^{*}(t)\frac{\sqrt{r_{1^{\prime}2}}\rho_{32}(t)}{2}\] \[\quad+\xi_{14}(t)\frac{\sqrt{r_{1^{\prime}2}}\rho_{23}(t)}{2}- \xi_{15a}^{*}(t)\frac{\sqrt{r_{23}}\rho_{1^{\prime}3}(t)}{2}-\xi_{15a}(t) \frac{\sqrt{r_{23}}\rho_{31^{\prime}}(t)}{2}+\xi_{15b}^{*}(t)\frac{\rho_{21^ {\prime}}(t)}{2}\] \[\quad+\xi_{16}^{*}(t)\frac{\sqrt{r_{32}}\rho_{1^{\prime}2}(t)}{2}+ \xi_{16}(t)\frac{\sqrt{r_{32}}\rho_{21^{\prime}}(t)}{2}-\xi_{21}(t)\bigg{(}r_{ 32}\rho_{22}(t)+r_{23}\rho_{33}(t)+\mathrm{i}\mu_{z,23}E_{z}(t)\big{(}\rho_{32} (t)-\rho_{23}(t)\big{)}\] \[\quad-(r_{32}+1)|\rho_{1^{\prime}2}(t)|^{2}-r_{32}|\rho_{32}(t)|^{ 2}-r_{23}|\rho_{1^{\prime}3}(t)|^{2}\bigg{)}^{1/2}+\xi_{23}(t)\bigg{(}r_{21^{ \prime}}\rho_{1^{\prime}1^{\prime}}(t)-r_{1^{\prime}2}\big{(}|\rho_{1^{\prime} 2}(t)|^{2}-\rho_{22}(t)\] \[\quad+|\rho_{23}(t)|^{2}\big{)}\bigg{)}^{1/2}\,,\] (43e) \[F_{1^{\prime}1^{\prime}}(t) =-\xi_{12}^{*}(t)\frac{\sqrt{r_{1^{\prime}3}}\rho_{1^{\prime}3}(t)} {2}-\xi_{12}(t)\frac{\sqrt{r_{1^{\prime}3}}\rho_{31^{\prime}}(t)}{2}-\xi_{13}^ {*}(t)\frac{\sqrt{r_{1^{\prime}2}}\rho_{1^{\prime}2}(t)}{2}-\xi_{13}(t)\frac{ \sqrt{r_{1^{\prime}2}}\rho_{21^{\prime}}(t)}{2}-\xi_{14}^{*}(t)\frac{\sqrt{r_ {1^{\prime}2}}\rho_{23}(t)}{2}\] \[\quad-\xi_{14}(t)\frac{\sqrt{r_{1^{\prime}2}}\rho_{23}(t)}{2}- \xi_{22}(t)\bigg{(}\mathrm{i}\Omega_{31^{\prime}}\big{(}-\rho_{1^{\prime}3}(t)+ \rho_{31^{\prime}}(t)\big{)}+r_{31^{\prime}}\rho_{1^{\prime}1^{\prime}}(t)+r_{ 1^{\prime}3}\big{(}\rho_{33}(t)-|\rho_{1^{\prime}3}(t)|^{2}\big{)}\bigg{)}^{1/2}\] \[\quad-\xi_{23}(t)\bigg{(}r_{21^{\prime}1^{\prime}}(t)-r_{1^{ \prime}2}\big{(}|\rho_{1^{\prime}2}(t)|^{2}-\rho_{22}(t)+|\rho_{23}(t)|^{2} \big{)}\bigg{)}^{1/2}\,,\] (43f) \[F_{1^{\prime}2}(t) =\xi_{13}^{*}(t)\sqrt{r_{1^{\prime}2}}+\xi_{16}^{*}(t)\sqrt{r_{32} }+\xi_{32}^{*}(t)\rho_{21^{\prime}}(t)+\xi_{33}(t)\sqrt{\frac{\gamma_{1^{\prime 
}2}-\gamma_{1^{\prime}3}+\gamma_{23}}{2}}\rho_{1^{\prime}3}(t)+\xi_{43}(t) \bigg{(}\frac{r_{1^{\prime}2}}{2}\rho_{22}(t)\] \[\quad+\frac{2\gamma_{1^{\prime}3}-\gamma_{1^{\prime}2}-\gamma_{23}}{2 }\rho_{1^{\prime}1^{\prime}}(t)-|\rho_{21^{\prime}}(t)|^{2}+\frac{r_{1^{\prime}3}}{2 }\rho_{33}(t)-r_{1^{\prime}2}-r_{32}-\frac{\gamma_{1^{\prime}3}-\gamma_{1^{ \prime}2}-\gamma_{23}}{2}|\rho_{31^{\prime}}(t)|^{2}\bigg{)}^{1/2}\,,\] (43g) \[F_{1^{\prime}3}(t) =\xi_{12}^{*}(t)\sqrt{r_{1^{\prime}3}}+\xi_{15a}^{*}(t)\sqrt{r_{2 3}}+\xi_{15b}^{*}(t)\mathrm{i}\mu_{z,23}E_{z}(t)+\xi_{25}(t)\sqrt{2\mathrm{i} \Omega_{31^{\prime}}\rho_{31^{\prime}}(t)}+\xi_{31}^{*}(t)\rho_{31^{\prime}}(t)+ \xi_{42}(t)\bigg{(}\frac{r_{1^{\prime}2}}{2}\rho_{22}(t)\] \[\quad+\frac{2\gamma_{1^{\prime}3}-\gamma_{1^{\prime}2}-\gamma_{23}}{2 }\rho_{1^{\prime}1^{\prime}}(t)-|\rho_{31^{\prime}}(t)|^{2}+\frac{r_{1^{\prime}3}}{2 }\rho_{33}(t)-\mu_{z,23}^{2}E_{z}(t)^{2}-r_{1^{\prime}3}-r_{23}+\Omega_{31^{ \prime}1}|\rho_{31^{\prime}}(t)|\bigg{)}^{1/2}\,,\] (43h) \[F_{32}(t) =\xi_{11}^{*}(t)\sqrt{r_{32}}+\xi_{14}^{*}(t)\sqrt{r_{1^{\prime}2 }}+\xi_{24}(t)\sqrt{-2\
\[-r_{1^{\prime}2}-r_{32}+\mu_{z,23}E_{z}(t)|\rho_{32}(t)|\right)^{1/2}.\]
Here, the terms \(\xi_{11},\xi_{12},\xi_{13},\xi_{14},\xi_{15a},\xi_{15b},\xi_{16},\xi_{31},\xi_{32}\), \(\xi_{33},\xi_{41},\xi_{42},\xi_{43}\) are complex, while \(\xi_{21},\xi_{22},\xi_{23},\xi_{24},\xi_{25}\) are real.
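As a complement to the factorization step described above, the following minimal sketch (in Python with NumPy; not taken from any reference implementation) illustrates one way to obtain a noise matrix \(B\) with \(BB^{\top}=D\) from a symmetric positive semi-definite diffusion matrix \(D\) and to use it in an Euler-Maruyama update of the c-number variables. The callables `drift` and `diffusion` are placeholders for the deterministic part and the state-dependent diffusion matrix of the c-number Langevin equations and are assumptions of this sketch.

```python
import numpy as np

def noise_matrix(D):
    """Factorize a symmetric positive semi-definite diffusion matrix D into a
    noise matrix B with B @ B.T = D.  An eigendecomposition is used instead of a
    Cholesky factorization so that exactly vanishing eigenvalues are handled."""
    D_sym = 0.5 * (D + D.T)                    # enforce symmetry numerically
    eigval, eigvec = np.linalg.eigh(D_sym)     # real eigenvalues, orthonormal eigenvectors
    eigval = np.clip(eigval, 0.0, None)        # remove tiny negative round-off values
    return eigvec @ np.diag(np.sqrt(eigval))

def em_step(state, t, dt, drift, diffusion, rng):
    """One Euler-Maruyama step for the equivalent Ito stochastic differential
    equations: state update by drift*dt + B dW, with B from the diffusion matrix."""
    B = noise_matrix(diffusion(state, t))
    dW = rng.standard_normal(B.shape[1]) * np.sqrt(dt)
    return state + drift(state, t) * dt + B @ dW
```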
|
2302.01754 | Robust Funnel Model Predictive Control for output tracking with
prescribed performance | We propose a novel robust Model Predictive Control (MPC) scheme for nonlinear
multi-input multi-output systems of relative degree one with stable internal
dynamics. The proposed algorithm is a combination of funnel MPC, i.e., MPC with
a particular stage cost, and the model-free adaptive funnel controller. The new
robust funnel MPC scheme guarantees output tracking of reference signals within
prescribed performance bounds -- even in the presence of unknown disturbances
and a structural model-plant mismatch. We show initial and recursive
feasibility of the proposed control scheme without imposing terminal conditions
or any requirements on the prediction horizon. Moreover, we allow for model
updates at runtime. To this end, we propose a proper initialization strategy,
which ensures that recursive feasibility is preserved. Finally, we validate the
performance of the proposed robust MPC scheme by simulations. | Thomas Berger, Dario Dennstädt, Lukas Lanza, Karl Worthmann | 2023-02-03T14:15:01Z | http://arxiv.org/abs/2302.01754v2 | # Robust Funnel MPC for nonlinear systems with relative degree one1
###### Abstract
We propose a novel Model Predictive Control scheme for robust output reference tracking with prescribed performance for nonlinear multi-input multi-output systems of relative degree one with stable internal dynamics. Combining funnel MPC with the model-free adaptive funnel controller, the new _robust funnel MPC_ algorithm guarantees output tracking of reference signals within predefined boundaries even in the presence of unknown disturbances and a structural model-plant mismatch. Initial and recursive feasibility of the control scheme is shown without imposing terminal conditions or requirements on the length of the prediction horizon. To this end, we propose a proper initialization strategy; and to ensure that the feedback controller does not contribute unnecessarily much, we introduce an activation function for the latter.
**Key words:** model predictive control, funnel control, nonlinear systems, reference tracking, robustness, model-plant mismatch
**AMS subject classifications:** 93B45, 93C40, 93B51, 93B52
## 1 Introduction
Model Predictive Control (MPC) is a well-established control technique for linear and nonlinear systems due to its ability to handle multi-input multi-output systems under control and state constraints, see the textbooks [17, 29] and the references therein. Given a model of the system, the idea is to predict the future system behavior on a finite-time horizon and, based on the predictions, solve a respective Optimal Control Problem (OCP) to apply the first portion of the computed optimal control (function) before repeating this process ad infinitum. Although MPC is nowadays widely used and has seen various applications, see e.g. [27], there are two main obstacles: On the one hand a sufficiently accurate model is required to predict the system behavior and compute the optimal control. On the other hand, initial and recursive feasibility have to be ensured. The latter means that one has to guarantee the solvability of the OCP at every time instant of the MPC algorithm, provided that initial feasibility, i.e., solvability at the initial time, holds. This is a non-trivial task and usually requires either some controllability properties like, e.g., cost controllability [14], in combination with a sufficiently long prediction horizon, see e.g. [11] and [15] for discrete and continuous-time systems, or the construction of suitable terminal conditions, see e.g. [29] and the references therein. Especially in the presence of time-varying state or output constraints, this task becomes even more challenging as, e.g., the extension [1] to time-varying reference signals has shown.
_Funnel MPC_ (FMPC), a novel approach to guarantee initial and recursive feasibility for a large class of systems, was proposed in [6]. Using a stage cost design inspired by _funnel control_, which is a model-free adaptive feedback controller first proposed in [18], this novel MPC scheme allows output tracking of a given reference signal within predefined error boundaries. Incorporating output constraints in the OCP, initial and recursive feasibility were proven if the system satisfies the output constraints at the initial time only, i.e., without imposing additional terminal conditions and independent of the length of the prediction horizon. Moreover, it was then shown in [3] that, for systems with relative degree one and, in a certain sense, input-to-state stable internal dynamics, the funnel-inspired stage costs automatically ensure (initial and recursive) feasibility - even
without imposing the output constraints. Utilizing so called _feasibility constraints_, applicability of funnel MPC to systems with arbitrary relative degree was shown in [2].
In the context of a simulation study, learning of unknown system parameters in order to apply funnel MPC was discussed in [6]. Beyond this, research into FMPC, so far, assumes the system to be precisely known and does not account for a structural model-plant mismatch or disturbances. However, every model, no matter how good, deviates from the actual system and disturbances are omnipresent. Furthermore, utilizing a highly detailed model is oftentimes not even desired. Instead, one wants to use a simplified and lower dimensional model or approximation of the system (e.g. a discretized model for a system of partial differential equations) in order to reduce complexity and computational effort, see e.g. [30]. To account for external disturbances and model-plant mismatches we propose _robust funnel MPC_, a combination of funnel MPC and the model-free adaptive high-gain funnel control technique. The funnel MPC control signal, computed with the aid of a (potentially inaccurate) model, is applied to the real system. Then the funnel controller, computing its signal using the instantaneous measurement data, rejects disturbances and compensates model-plant mismatches.
The funnel controller is inherently robust and allows for reference tracking with prescribed performance of the tracking error for a fairly large class of systems, see also [5] for a comprehensive literature overview. In contrast to MPC, funnel control does not use a model of the system and the control input is solely determined by the instantaneous values of the system output, the reference signal, and the funnel function. Therefore, the controller cannot "plan ahead" and this often results in high control values and a rapidly changing control signal with peaks. Furthermore, the controller requires a high sampling rate to stay feasible, which results in quite demanding hardware requirements. Numerical simulations in [6] and [3] show that FMPC exhibits a considerably better controller performance than funnel control.
While funnel MPC guarantees that the system output evolves within predefined boundaries, previous works on reference tracking with MPC mostly focus on ensuring asymptotic stability of the tracking error, see e.g. [1, 23]. To this end, terminal sets around the reference signal and corresponding costs are introduced, resulting in so-called terminal conditions. In [22] tracking is achieved while avoiding such terminal constraints by assuming a sufficiently long prediction horizon. Tube-based MPC schemes construct tubes around the reference signal, which always contain the actual system output in order to ensure reference tracking in the presence of disturbances or uncertainties, see e.g. [26] for linear systems and [28, 16, 24] for nonlinear systems. These tubes, however, usually cannot be arbitrarily chosen since they have to encompass the uncertainties of the system. To guarantee that the system output evolves within these tubes, terminal conditions are added to the optimization problem. The tracking of a reference signal within constant bounds is studied in [13] for linear systems. Hereby, so-called robust control invariant sets are calculated in order to achieve that performance, input, and state constraints are met. The calculation of these robust control invariant sets, however, requires a lot of computational effort and, more important, in general it cannot be ensured that the algorithm proposed in [13] terminates in finite time. In [35] the aforementioned approach was extended to systems with external disturbances. A close relative of funnel MPC is barrier-function-based MPC, cf. [36, 34, 37]. Here, the cost function involves a term (the barrier function) which diverges, if the tracking error approaches the boundary of a given set. However, this approach relies on imposing terminal constraints as well as terminal costs to ensure recursive feasibility. Contrary to the aforementioned approaches, funnel MPC uses a different cost function and thereby circumvents the mentioned drawbacks.
**Nomenclature**: In the following let \(\mathds{N}\) denote the natural numbers, \(\mathds{N}_{0}=\mathds{N}\cup\{0\}\), and \(\mathds{R}_{\geq 0}=[0,\infty)\). By \(\|x\|=\sqrt{\langle x,x\rangle}\) we denote the Euclidean norm of \(x\in\mathds{R}^{n}\). \(\mathrm{GL}_{n}(\mathds{R})\) is the group of invertible \(\mathds{R}^{n\times n}\) matrices. For some interval \(I\subseteq\mathds{R}\) and \(k\in\mathds{N}\), \(L^{\infty}(I,\mathds{R}^{n})\)\(\left(L^{\infty}_{\mathrm{loc}}(I,\mathds{R}^{n})\right)\) is the Lebesgue space of measurable, (locally) essentially bounded functions \(f\colon I\to\mathds{R}^{n}\) with norm \(\|f\|_{\infty}=\mathrm{ess}\sup_{t\in I}\|f(t)\|\). \(W^{k,\infty}(I,\mathds{R}^{n})\) is the Sobolev space of all functions \(f:I\to\mathds{R}^{n}\) with \(k\)-th order weak derivative \(f^{(k)}\) and \(f,f^{(1)},\dots,f^{(k)}\in L^{\infty}(I,\mathds{R}^{n})\). For some \(V\subseteq\mathds{R}^{m}\) we denote by \(\mathcal{C}^{k}(V,\mathds{R}^{n})\) the set of \(k\)-times continuously differentiable functions \(f:V\to\mathds{R}^{n}\), and for brevity \(\mathcal{C}(V,\mathds{R}^{n}):=\mathcal{C}^{0}(V,\mathds{R}^{n})\). Furthermore, \(\mathcal{R}(I,\mathds{R}^{n})\) is the space of all regulated functions \(f:I\to\mathds{R}^{n}\), i.e., the left and right limits \(f(t-)\) and \(f(t+)\) exist for all interior points \(t\in I\) and \(f(a-)\) and \(f(b+)\) exist whenever \(a=\inf I\in I\) or \(b=\sup I\in I\).
## 2 Problem formulation
Before we establish the problem formulation and the control objective, we emphasize the following terminology used throughout the article. The term _system_ refers to the actual plant to be controlled. By _model_ we mean
a model given by the designer, which will be used to compute predictions. We consider nonlinear multi-input multi-output control systems of the form
\[\dot{y}(t)=F(d(t),\mathbf{T}(y)(t),u(t)),\quad y|_{[-\sigma,0]}=y^{0}\in\mathcal{C}([-\sigma,0],\mathds{R}^{m}), \tag{1}\]
with input \(u\in L^{\infty}_{\mathrm{loc}}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) and output \(y(t)\in\mathds{R}^{m}\) at time \(t\geq 0\). Note that \(u\) and \(y\) have the same dimension \(m\in\mathds{N}\). The system consists of the _unknown_ nonlinear function \(F\in\mathcal{C}(\mathds{R}^{p}\times\mathds{R}^{q}\times\mathds{R}^{m}, \mathds{R}^{m})\), _unknown_ nonlinear operator \(\mathbf{T}:\mathcal{C}([-\sigma,\infty),\mathds{R}^{m})\to L^{\infty}_{\mathrm{ loc}}(\mathds{R}_{\geq 0},\mathds{R}^{q})\), and may incorporate bounded disturbances \(d\in L^{\infty}(\mathds{R}_{\geq 0},\mathds{R}^{p})\). The operator \(\mathbf{T}\) is assumed to be causal, locally Lipschitz, and to fulfill a bounded-input bounded-output property. Such systems encompass, among other things, nonlinear control affine systems with strict relative degree one and stable internal dynamics. Moreover, physical effects such as _backlash_, _relay hysteresis_, and _nonlinear time delays_ can be represented by the operator \(\mathbf{T}\). Then, the constant \(\sigma\geq 0\) quantifies the "memory" of the system. Later, in Definition 3.11 we precisely specify the properties of the operator \(\mathbf{T}\) in (1) and properly introduce the system class under consideration.
For a control input \(u\in L^{\infty}_{\mathrm{loc}}(\mathds{R}_{\geq 0},\mathds{R}^{m})\), the system (1) has a solution in the sense of _Caratheodory_, meaning a function \(y:[-\sigma,\omega)\to\mathds{R}^{m}\), \(\omega>0\), with \(y|_{[-\sigma,0]}=y^{0}\) such that \(y|_{[0,\omega)}\) is absolutely continuous and satisfies the ODE (1) for almost all \(t\in[0,\omega)\). A solution \(y\) is said to be _maximal_, if it has no right extension that is also a solution.
As a surrogate for the unknown system (1), we consider a control-affine model of the form
\[\begin{split}\dot{x}(t)&=f(x(t))+g(x(t))u(t),\quad x (0)=x^{0},\\ y_{\mathrm{M}}(t)&=h(x(t)),\end{split} \tag{2}\]
with \(x^{0}\in\mathds{R}^{n}\) and _known_ functions \(f\in\mathcal{C}^{1}(\mathds{R}^{n},\mathds{R}^{n})\), \(g\in\mathcal{C}^{1}(\mathds{R}^{n},\mathds{R}^{n\times m})\) and \(h\in\mathcal{C}^{2}(\mathds{R}^{n},\mathds{R}^{m})\). Note that, in many situations, systems of the form (2) can be written in the form (1). Since the right-hand side of (2) is locally Lipschitz in \(x\), there exists a unique maximal solution of (2) for any \(u\in L^{\infty}_{\mathrm{loc}}(\mathds{R}_{\geq 0},\mathds{R}^{m})\), cf. [33, § 10, Thm. XX]. This maximal solution is denoted by \(x(\cdot;0,x^{0},u)\). Contrary to the actual system (1), the model (2) lays out its states \(x\) in an explicit way. The model is used to make predictions about the future system output and, based on them, to compute optimal control signals. The discrepancy between the model prediction \(y_{\mathrm{M}}(t)\) and the actual system output \(y(t)\) is described by the model-plant mismatch
\[e_{\mathrm{S}}(t):=y(t)-y_{\mathrm{M}}(t).\]
### Control objective
The objective is to design a combination of a model predictive control scheme with a feedback controller (based on the measurement \(y(t)\)) which, if applied to system (1), allows for reference tracking of a given trajectory \(y_{\mathrm{ref}}\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) within predefined boundaries. To be precise, the tracking error \(t\mapsto e(t):=y(t)-y_{\mathrm{ref}}(t)\) shall evolve within the prescribed performance funnel
\[\mathcal{F}_{\psi}:=\left\{\ (t,e)\in\mathds{R}_{\geq 0}\times\mathds{R}^{m}\ \mid\ \|e\|<\psi(t)\ \right\},\]
see Figure 1.
The funnel \(\mathcal{F}_{\psi}\) is determined by the choice of \(\psi\) belonging to the following set of bounded functions with bounded weak derivative
\[\mathcal{G}=\left\{\ \psi\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R})\ \left|\ \inf_{t\geq 0}\psi(t)>0\ \right.\right\}.\]
Figure 1: Error evolution in a funnel \(\mathcal{F}_{\psi}\) with boundary \(\psi\).
The specific application usually dictates the constraints on the tracking error and thus indicates suitable choices for \(\psi\). Note that signals evolving in \(\mathcal{F}_{\psi}\) are not forced to asymptotically converge to \(0\). To achieve that the tracking error \(e\) remains within \(\mathcal{F}_{\psi}\), it is necessary that the output \(y(t)\) of the system (1) at time \(t\geq 0\) is an element of the set
\[\mathcal{D}_{t}:=\left\{\ y\in\mathds{R}^{m}\ \left|\ \left\|y-y_{\mathrm{ref}}(t) \right\|<\psi(t)\ \right.\right\}.\]
### Funnel MPC and funnel control
To solve the problem of tracking a reference signal \(y_{\mathrm{ref}}\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) within pre-defined funnel boundaries \(\psi\in\mathcal{G}\) for the model (2) with MPC, _funnel MPC_ was proposed in [3]. Assuming system and model to be identical, perfectly known, and of form (2), the _stage cost_\(\ell:\mathds{R}_{\geq 0}\times\mathds{R}^{n}\times\mathds{R}^{m}\to\mathds{R} \cup\left\{\infty\right\}\) defined by
\[\ell(t,x,u)=\begin{cases}\frac{\left\|h(x)-y_{\mathrm{ref}}(t) \right\|^{2}}{\psi(t)^{2}-\left\|h(x)-y_{\mathrm{ref}}(t)\right\|^{2}}+\lambda _{u}\left\|u\right\|^{2},&\left\|h(x)-y_{\mathrm{ref}}(t)\right\|\neq\psi(t) \\ \infty,&\mathrm{else},\end{cases} \tag{3}\]
with design parameter \(\lambda_{u}\in\mathds{R}_{\geq 0}\) was proposed. To further ensure a bounded control signal with a pre-defined maximal control value \(M>0\), the constraint \(\left\|u\right\|_{\infty}\leq M\) is added to the OCP. Using this stage cost and given a sufficiently large \(M>0\), it was shown that the following funnel MPC Algorithm 2.1 is initially and recursively feasible and that applying this control scheme to a model of the form (2) guarantees \(\left\|y_{\mathrm{M}}(t)-y_{\mathrm{ref}}(t)\right\|<\psi(t)\) for all \(t\in[0,\infty)\), provided that \(\left\|y_{\mathrm{M}}(0)-y_{\mathrm{ref}}(0)\right\|<\psi(0)\) holds, see [3, Thm. 2.10].
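For illustration only, the stage cost (3) can be evaluated as in the following minimal Python sketch; the callables `h`, `y_ref`, `psi` and the value of `lambda_u` are placeholders and not part of the original formulation. Since feasible trajectories remain strictly inside the funnel, the sketch returns \(\infty\) for all outputs on or outside the funnel boundary, which is a harmless strengthening of the case distinction in (3).

```python
import numpy as np

def stage_cost(t, x, u, h, y_ref, psi, lambda_u=0.1):
    """Funnel MPC stage cost (3): the squared output error is weighted by the
    reciprocal distance to the funnel boundary, plus an input penalty lambda_u*||u||^2.
    h, y_ref and psi are callables for the output map, reference and funnel function."""
    err = float(np.linalg.norm(h(x) - y_ref(t)))
    if err >= psi(t):                 # on or outside the funnel boundary
        return np.inf
    return err**2 / (psi(t)**2 - err**2) + lambda_u * float(np.linalg.norm(u))**2
```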
**Algorithm 2.1** (Funnel MPC).:
**Given:** Model (2), reference signal \(y_{\mathrm{ref}}\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\), funnel function \(\psi\in\mathcal{G}\), control bound \(M>0\), initial state \(x^{0}\) such that \(h(x^{0})\in\mathcal{D}_{0}\), and stage cost function \(\ell\) as in (3).
**Set** time shift \(\delta>0\), prediction horizon \(T\geq\delta\), define the time sequence \((t_{k})_{k\in\mathds{N}_{0}}\) by \(t_{k}:=k\delta\) and set the current index \(k=0\).
**Steps:**
* (a) Obtain a measurement of the state \(x\) of (2) at time \(t_{k}\) and set \(\hat{x}:=x(t_{k})\).
* (b) Compute a solution \(u^{\star}\in L^{\infty}([t_{k},t_{k}+T],\mathds{R}^{m})\) of \[\underset{u\in L^{\infty}([t_{k},t_{k}+T],\mathds{R}^{m}),\ \left\|u\right\|_{\infty}\leq M}{\text{minimize}}\quad\int_{t_{k}}^{t_{k}+T}\ell(t,x(t;t_{k},\hat{x},u),u(t))\ \mathrm{d}t.\]
* (c) Apply the time-varying feedback law \[\mu:[t_{k},t_{k+1})\times\mathds{R}^{n}\to\mathds{R}^{m},\quad\mu(t,\hat{x})=u^{\star}(t)\] to the model (2). Increment \(k\) by \(1\) and go to Step (a).
**Remark 2.2**.: The cost function (3) used in the funnel MPC Algorithm 2.1 is inspired by the _funnel controller_. For systems (1) and given reference trajectory \(y_{\mathrm{ref}}\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) it was shown in [5, Thm. 1.9] that the tracking error \(e(t):=y(t)-y_{\mathrm{ref}}(t)\) always evolves within the performance funnel \(\mathcal{F}_{\psi}\) by applying the control signal
\[u(t)=(N\circ\alpha)(\|e(t)/\psi(t)\|^{2})e(t)/\psi(t), \tag{4}\]
where \(N\in\mathcal{C}(\mathds{R}_{\geq 0},\mathds{R})\) is a surjection and \(\alpha\in\mathcal{C}([0,1),[1,\infty))\) is a bijection. A simple (and often used) feasible choice is \(\alpha(s)=1/(1-s)\) and \(N(s)=s\sin(s)\).
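As a complement to Remark 2.2, the feedback law (4) with the choices \(\alpha(s)=1/(1-s)\) and \(N(s)=s\sin(s)\) can be implemented in a few lines; the following Python sketch (NumPy assumed) is purely illustrative and presumes that the current error lies strictly inside the funnel.

```python
import numpy as np

def funnel_feedback(e, psi_t,
                    N=lambda s: s * np.sin(s),          # surjection N
                    alpha=lambda s: 1.0 / (1.0 - s)):   # bijection alpha: [0,1) -> [1,inf)
    """Funnel control law (4): u = (N o alpha)(||e/psi||^2) * e/psi, where
    e = y(t) - y_ref(t) is the tracking error and psi_t = psi(t) > ||e||."""
    w = np.asarray(e, dtype=float) / psi_t   # normalized error e/psi
    s = float(w @ w)                         # ||e/psi||^2, strictly less than 1
    return N(alpha(s)) * w
```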
## 3 Robust funnel MPC
We present in detail the idea of how to combine the funnel MPC Algorithm 2.1, see also [2, 3, 6], with results on the _model-free_ funnel controller to achieve the control objective in the presence of a mismatch between the
system (1) and the model (2). The idea is depicted in Figure 2. The left block with red background contains the model (2), the FMPC Algorithm 2.1 and a given reference trajectory \(y_{\text{ref}}\). We emphasize that the model is given by the designer and hence it is known. FMPC achieves, for given \(\psi\in\mathcal{G}\), that the model's output \(y_{\text{M}}\) tracks the reference with predefined accuracy, i.e., \(\|e_{\text{M}}(t)\|=\|y_{\text{M}}(t)-y_{\text{ref}}(t)\|<\psi(t)\) for all \(t\geq 0\), while the control input \(u_{\text{FMPC}}\) minimizes the stage cost (3), cf. [3].
The right block contains the system to be controlled, and a funnel control feedback loop. Given a reference signal \(\rho\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) and a funnel function \(\varphi\in\mathcal{G}\), funnel control (4) achieves that the system's output \(y\) tracks the reference with predefined accuracy, i.e., \(\|y(t)-\rho(t)\|<\varphi(t)\) for all \(t\geq 0\), cf. [5, 7, 18]. Note that this control scheme is model free, i.e., besides some structural assumptions, no knowledge of the system's parameters is assumed; given any system of the form (1) satisfying some structural properties and an initial error inside the funnel, the control (4) ensures feasibility.
The advantage of FMPC is that the control input \(u_{\text{FMPC}}\) is optimal in the sense of minimizing a given cost function. The advantage of funnel control is that it does not require the knowledge of a model and is hence inherently robust. We admit that by combining both control strategies we loose both: the combination is neither model-free, nor is the control signal optimal. The main idea is to _robustify_ the FMPC scheme w.r.t. uncertainties and disturbances, which is indeed guaranteed by the funnel controller component. However, the funnel controller should remain inactive as long as the prediction \(y_{\text{M}}\) by the model of the system output \(y\) is sufficiently accurate. It should only be active when the system is, according to the model, in a critical state, meaning that the predicted error \(e_{\text{M}}\) is close to the funnel boundary \(\psi\). In this case the funnel controller achieves that system and model behave similarly. Therefore, \(\varphi=\psi-\|e_{\text{M}}\|\) is chosen as funnel function for the funnel controller. This approach ensures that the funnel controller only compensates disturbances when necessary and hence the combined control signal \(u=u_{\text{FMPC}}+u_{\text{FC}}\) deviates from the optimal control \(u_{\text{FMPC}}\) as slightly as possible.
Let us give a more precise description of the controller structure depicted in Figure 2: On the left hand side (red box), FMPC computes the control signal \(u_{\text{FMPC}}(t)\), \(t\in[t_{k},t_{k}+\delta)\), and the corresponding output is \(y_{\text{M}}(t)\), \(t\in[t_{k},t_{k}+\delta)\), which is handed over to the funnel controller on the right side of Figure 2 (blue box) and serves as a reference signal for system (1). Via the application of the funnel control \(u_{\text{FC}}\) the system's output \(y\) follows the model's output \(y_{\text{M}}\) with pre-defined accuracy, i.e., \(\|e_{\text{S}}(t)\|=\|y(t)-y_{\text{M}}(t)\|<\varphi(t)\), where \(\varphi=\psi-\|e_{\text{M}}\|\) as mentioned above. The control signal \(u=u_{\text{FMPC}}+u_{\text{FC}}\) is applied to the system, which has the following consequence: If the model and the system coincide and are initialized equally, then the application of the control \(u_{\text{FMPC}}\) has the same effect on both dynamics, and so the system's output \(y\) equals the model's output \(y_{\text{M}}\), i.e., \(\|y(t)-y_{\text{ref}}(t)\|=\|y_{\text{M}}(t)-y_{\text{ref}}(t)\|<\psi(t)\). Invoking (4), this in particular means \(u_{\text{FC}}=0\). If, however, the model does not match the system, then \(e_{S}(t)\neq 0\) and \(u_{\text{FC}}(t)\neq 0\). Roughly speaking, the more model and system differ, the more the funnel controller has to compensate; and the better the model matches the system, the more the control \(u_{\text{FMPC}}\) can contribute to the tracking task.
Since \(u_{\text{FMPC}}\) is a (piecewise) optimal control calculated using the model (2), the aim is to keep \(u_{\text{FC}}\) as small as possible while achieving the tracking objective. When the OCP is solved in FMPC at time instance \(t_{k}\)
Figure 2: Structure of the robust FMPC scheme
it is necessary to update the initial value \(x(t_{k})\) of the model (2), if there is a mismatch between \(y(t_{k})\) and \(y_{\mathrm{M}}(t_{k})\). One has to find a _proper initialization_ for the model, meaning, based on the information of \(y(t_{k})\), it is necessary to find a starting configuration of the model such that its output \(y_{\mathrm{M}}(t_{k})\) is close to the system's output \(y(t_{k})\) in order to calculate a control signal \(u_{\mathrm{FMPC}}\) for the next time interval \([t_{k},t_{k+1}]\), which contributes to the tracking task. Otherwise, due to deviation between system and model, it might be possible, that the control signal \(u_{\mathrm{FMPC}}\) is unsuitable and needs to be compensated by \(u_{\mathrm{FC}}\).
The remainder of this section is organized as follows. In Section 3.1 we introduce the class of models to be used in the robust funnel MPC Algorithm 3.7. Then in Section 3.2 we discuss in detail the controller's structure. To prove recursive feasibility of the proposed MPC algorithm, we introduce a _proper initialization strategy_ in Definition 3.5. In order to avoid that the funnel feedback controller unnecessarily compensates small model-plant mismatches, we introduce an _activation function_ in Definition 3.6. In Section 3.3 we introduce the class of (real) systems to be controlled; in particular, the operator \(\mathbf{T}\) is presented in detail. With the introductory work at hand, we finally establish the main result in Section 3.4.
### Model class
We stress that the model (2) itself is, in its essence, a controller design parameter - the better the model, the better the controller performance. But we will be able to show that even with a very poor model the robust FMPC Algorithm 3.7 achieves the control objective. Since in the later analysis we utilize the so-called _Byrnes-Isidori form_, we make the following assumptions about the model (2) throughout this work.
**Assumption 3.1**.: The model (2) has global relative degree \(r=1\), i.e., the _high-gain matrix_\(\Gamma(x):=(h^{\prime}g)(x)\) is invertible for all \(x\in\mathds{R}^{n}\), where \(h^{\prime}\) denotes the Jacobian of \(h\). Additionally, \(h^{-1}(0)\) is diffeomorphic to \(\mathds{R}^{n-m}\) and the mapping \(x\mapsto G(x):=\operatorname{im}g(x)\) is involutive, i.e., for all smooth vector fields \(V_{i}:\mathds{R}^{n}\to\mathds{R}^{n}\) with \(V_{i}(x)\in G(x)\), \(i\in\{1,2\}\), we have that the Lie bracket \([V_{1},V_{2}](x)=V_{1}^{\prime}(x)V_{2}(x)-V_{2}^{\prime}(x)V_{1}(x)\) satisfies \([V_{1},V_{2}](x)\in G(x)\) for all \(x\in\mathds{R}^{n}\). Then, by [12, Cor. 5.7] there exists a diffeomorphism \(\Phi:\mathds{R}^{n}\to\mathds{R}^{n}\) such that the coordinate transformation \((y_{\mathrm{M}}(t),\eta(t))=\Phi(x(t))\) puts the model (2) into Byrnes-Isidori form
\[\dot{y}_{\mathrm{M}}(t) =p\left(y_{\mathrm{M}}(t),\eta(t)\right)+\Gamma\left(\Phi^{-1} \left(y_{\mathrm{M}}(t),\eta(t)\right)\right)\,u(t),\hskip 14.226378pt(y_{ \mathrm{M}}(0),\eta(0))=(y_{\mathrm{M}}^{0},\eta^{0})=\Phi(x^{0}), \tag{5a}\] \[\dot{\eta}(t) =q\left(y_{\mathrm{M}}(t),\eta(t)\right), \tag{5b}\]
where \(p\in\mathcal{C}^{1}(\mathds{R}^{m}\times\mathds{R}^{n-m},\mathds{R}^{m})\) and \(q\in\mathcal{C}^{1}(\mathds{R}^{m}\times\mathds{R}^{n-m},\mathds{R}^{n-m})\). Here, equation (5b) describes the so-called _internal dynamics_.
**Remark 3.2**.: Finding the diffeomorphism \(\Phi:\mathds{R}^{n}\to\mathds{R}^{n}\) is in general a hard task, cf. [21]. In [25] an approach is presented to compute \(\Phi\) algorithmically. In the simple but relevant case of linear output \(h(x)=Hx\), \(H\in\mathds{R}^{m\times n}\), and constant input distribution \(g(x)=G\in\mathds{R}^{n\times m}\) such that \(HG\) is invertible, the assumptions of [12, Cor. 5.7] are satisfied and following the derivations in [25], the transformation can be written as
\[\begin{pmatrix}y\\ \eta\end{pmatrix}=\Phi(x)=\begin{bmatrix}H\\ V^{\dagger}(I_{n}-G(HG)^{-1}H)\end{bmatrix}x,\quad V\in\mathds{R}^{n\times(n-m )}\text{ with }\operatorname{im}V=\ker H, \tag{6}\]
where \(V^{\dagger}\in\mathds{R}^{(n-m)\times n}\) denotes the pseudoinverse of \(V\). In this particular case the inverse transformation is given by \(x=\Phi^{-1}(y,\eta)=G(HG)^{-1}y+V\eta\). Then equations (5) read
\[\dot{y}_{\mathrm{M}}(t) =f(G(HG)^{-1}y_{\mathrm{M}}(t)+V\eta(t))+HGu(t),\] \[\dot{\eta}(t) =V^{\dagger}(I_{n}-G(HG)^{-1}H)f(G(HG)^{-1}y_{\mathrm{M}}(t)+V \eta(t)).\]
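For this linear-output case, the transformation (6) and its inverse can be assembled numerically as in the following sketch (Python with NumPy/SciPy, purely illustrative); `null_space` provides an orthonormal basis \(V\) of \(\ker H\), which is one admissible choice of \(V\) in (6).

```python
import numpy as np
from scipy.linalg import null_space

def byrnes_isidori_linear(H, G):
    """Coordinate change (6) for a model with linear output h(x) = Hx and constant
    input distribution g(x) = G, where HG is invertible.  Returns matrices Phi and
    Phi_inv with (y, eta) = Phi @ x and x = Phi_inv @ (y, eta)."""
    n = H.shape[1]
    V = null_space(H)                                  # im V = ker H,  V in R^{n x (n-m)}
    V_pinv = np.linalg.pinv(V)                         # pseudoinverse V^dagger
    HG_inv = np.linalg.inv(H @ G)
    Phi = np.vstack([H, V_pinv @ (np.eye(n) - G @ HG_inv @ H)])
    Phi_inv = np.hstack([G @ HG_inv, V])               # x = G (HG)^{-1} y + V eta
    return Phi, Phi_inv
```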
Since we want to ensure that the internal dynamics do not destabilize the system during control, we make the following assumption. Note that this assumption ensures that the maximal solution \(\eta(\cdot;0,\eta^{0},\zeta)\) can be extended to a global solution.
**Assumption 3.3**.: The internal dynamics (5b) satisfy the following _bounded-input, bounded-state_ (BIBS) condition:
\[\forall\,c_{0}>0\ \exists\,c_{1}>0\ \forall\,\eta^{0}\in\mathds{R}^{n-m}\ \forall\,\zeta\in L^{\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m}):\ \big{\|}\eta^{0}\big{\|}+\left\|\zeta\right\|_{\infty}\leq c_{0}\, \Longrightarrow\,\big{\|}\eta(\cdot;0,\eta^{0},\zeta)\big{\|}_{\infty}\leq c_{1}, \tag{7}\]
where \(\eta(\cdot;0,\eta^{0},\zeta):\mathds{R}_{\geq 0}\to\mathds{R}^{n-m}\) denotes the unique global solution of (5b) when \(y_{\mathrm{M}}\) is substituted by \(\zeta\).
With the definitions made above, we may now introduce the class of models to be considered.
**Definition 3.4**.: _We say that a model (2) belongs to the model class \(\mathcal{M}^{m}\), written \((f,g,h)\in\mathcal{M}^{m}\), if it satisfies Assumptions 3.1 and 3.3._
Since in many applications the usage of a linear model is reasonable, we emphasize that linear systems of the form
\[\dot{x}(t) =Ax(t)+Bu(t),\] \[y_{\mathrm{M}}(t) =Cx(t),\]
with \(A\in\mathds{R}^{n\times n}\) and \(C,B^{\top}\in\mathds{R}^{m\times n}\) belong to \(\mathcal{M}^{m}\), provided that \(CB\) is invertible and
\[\forall\,\lambda\in\mathds{C}\text{ with }\mathrm{Re}\,\lambda\geq 0:\ \det \begin{bmatrix}\lambda I-A&B\\ C&0\end{bmatrix}\neq 0. \tag{8}\]
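Whether a given linear model meets these two requirements can be checked numerically: invertibility of \(CB\), and condition (8), which for the square systems considered here amounts to all invariant zeros of \((A,B,C)\) having negative real part. The following sketch (Python with NumPy/SciPy) computes the invariant zeros as the finite generalized eigenvalues of the Rosenbrock pencil; this is one standard numerical route and is meant as an illustration only.

```python
import numpy as np
from scipy.linalg import eig

def is_admissible_linear_model(A, B, C, tol=1e-9):
    """Check (i) invertibility of CB and (ii) condition (8), i.e. that the
    Rosenbrock matrix [lambda*I - A, B; C, 0] is nonsingular for Re(lambda) >= 0,
    equivalently that all invariant zeros lie in the open left half-plane."""
    n, m = A.shape[0], B.shape[1]
    if np.linalg.matrix_rank(C @ B) < m:        # (i) CB must be invertible
        return False
    M = np.block([[A, B], [C, np.zeros((m, m))]])
    E = np.block([[np.eye(n), np.zeros((n, m))],
                  [np.zeros((m, n)), np.zeros((m, m))]])
    zeros = eig(M, E, right=False)              # generalized eigenvalues of the pencil
    finite = zeros[np.isfinite(zeros)]          # discard eigenvalues at infinity
    return bool(np.all(finite.real < -tol))     # (ii) open left half-plane
```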
### Controller structure
Before we establish the main result, we informally introduce some aspects of the robust funnel MPC algorithm.
**Model Predictive Control.** While the model (2) lays out its system states in an explicit way, the internal states of the system (1) are unknown. Moreover, only the measurement of the system output \(y\) is available. However, the measurement of the current state of the model in Step (a) of Algorithm 2.1 (FMPC) is essential for its functioning. When applying the control resulting from MPC to system (1), it is therefore necessary to initialize the current model state \(\hat{x}\) based on the measured system output \(\hat{y}\) and the previous prediction of the model state \(x^{\mathrm{pre}}=x(t_{k+1};t_{k},\hat{x}_{k},u_{\mathrm{FMPC}})\) in a more sophisticated way. There are two reasonable possibilities to initialize \(\hat{x}\). One option is to choose the model output such that it coincides with the system output, i.e.,
\[h(\hat{x})=\hat{y}.\]
However, since we cannot measure the internal state of the system, there are two ways to treat the internal dynamics (5b) of the model. Either, we set \(\hat{x}\) such that we do not manipulate the internal dynamics, i.e.,
\[[0,I_{n-m}]\,\Phi(\hat{x})=[0,I_{n-m}]\,\Phi(x^{\mathrm{pre}}),\]
or we "reset" the internal dynamics of the model, i.e., for a fixed a priori defined bound \(\xi\in\mathds{R}_{\geq 0}\) for the internal dynamics, we initialize \(\hat{x}\) such that
\[\|[0,I_{n-m}]\Phi(\hat{x})\|\leq\xi.\]
The other option to initialize the model is to allow for a (temporary) open-loop operation of Algorithm 2.1, meaning that we allow initializing the current model state \(\hat{x}\) with the previous prediction \(x^{\mathrm{pre}}\). The following definition formalizes these possibilities.
**Definition 3.5**.: _Let a model \((f,g,h)\in\mathcal{M}^{m}\) be given and let \(\Phi\) be the diffeomorphism from Assumption 3.1. For \(\xi\in\mathds{R}_{\geq 0}\), \(x^{\mathrm{pre}}\in\mathds{R}^{n}\) and \(\hat{y}\in\mathds{R}^{m}\) define the set_
\[\Omega_{\xi}(x^{\mathrm{pre}},\hat{y}):=\left\{\begin{array}{l}x\in\mathds{ R}^{n}\\ \end{array}\right|\begin{array}{l}h(x)=\hat{y},\\ \left[0,I_{n-m}\right]\Phi(x)=\left[0,I_{n-m}\right]\Phi(x^{\mathrm{pre}}) \text{ or }\|[0,I_{n-m}]\Phi(x)\|\leq\xi\end{array}\right\}\cup\left\{x^{ \mathrm{pre}}\right\}.\]
_We call \(\hat{x}\in\Omega_{\xi}(x^{\mathrm{pre}},\hat{y})\) a proper initialization and a function \(\kappa_{\xi}:\mathds{R}^{n}\times\mathds{R}^{m}\to\mathds{R}^{n}\) with \(\kappa_{\xi}(x,y)\in\Omega_{\xi}(x,y)\) for all \((x,y)\in\mathds{R}^{n}\times\mathds{R}^{m}\) a proper initialization strategy._
We emphasize that, for \(x^{\mathrm{pre}}\in\mathds{R}^{n}\) and \(\hat{y}\in\mathds{R}^{m}\), there always exists a proper initialization since \(x^{\mathrm{pre}}\in\Omega_{\xi}(x^{\mathrm{pre}},\hat{y})\).
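For the linear-output setting of Remark 3.2, one concrete proper initialization strategy is to keep the internal coordinates of the prediction and to replace the output coordinates by the measurement. The following sketch (Python with NumPy, purely illustrative) uses the explicit inverse transformation \(x=\Phi^{-1}(y,\eta)=G(HG)^{-1}y+V\eta\); the matrices \(V\) and \(V^{\dagger}\) are as in Remark 3.2.

```python
import numpy as np

def proper_init_linear(x_pre, y_hat, H, G, V, V_pinv):
    """One proper initialization for the linear-output case: keep the internal
    coordinates eta_pre = [0, I] Phi(x_pre) of the prediction and set the output
    coordinates to the measured output y_hat, i.e. x_hat = G (HG)^{-1} y_hat + V eta_pre.
    By construction h(x_hat) = y_hat and [0, I] Phi(x_hat) = [0, I] Phi(x_pre), so
    x_hat is an element of Omega_xi(x_pre, y_hat) for every xi >= 0."""
    n = H.shape[1]
    HG_inv = np.linalg.inv(H @ G)
    eta_pre = V_pinv @ (np.eye(n) - G @ HG_inv @ H) @ x_pre   # internal part of x_pre
    return G @ HG_inv @ y_hat + V @ eta_pre
```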
**Funnel control.** In Algorithm 3.7 (robust FMPC) we choose the funnel function for the funnel controller very specifically. Namely, we use \(\varphi=\psi-\|e_{\mathrm{M}}\|\), where \(e_{\mathrm{M}}=y_{\mathrm{M}}-y_{\mathrm{ref}}\). This choice reflects the following idea: If the error \(e_{\mathrm{M}}\) is small, then the funnel boundary \(\varphi\) is approximately given by the MPC funnel boundary \(\psi\). If, however, the error \(e_{\mathrm{M}}\) is close to \(\psi\), then \(\varphi\) becomes tight, such that the system's output is forced to be very close to the model's output. This means, whenever the tracking is critical, the system is forced to behave very
similar to the model such that even in critical situations it is reasonable to use _model_ predictive control. The choice of \(\varphi\) ensures that the tracking error evolves within the funnel \(\psi\), i.e., we have
\[\forall\,t\geq 0\;:\;\|y(t)-y_{\mathrm{ref}}(t)\|<\psi(t).\]
Besides the particular choice of \(\varphi\) we will utilize an _activation function_\(\beta:[0,1]\to[0,\beta^{+}]\), \(\beta^{+}>0\), in order to determine when the funnel control signal \(u_{\mathrm{FC}}\) will be nonzero. Since very small deviations between \(y(t)\) and \(y_{\mathrm{M}}(t)\) can be neglected, we use the term \(\beta(\|\mathrm{e}_{\mathrm{S}}(t)/\varphi(t)\|)\), where \(\mathrm{e}_{\mathrm{S}}=y-y_{\mathrm{M}}\), which can be set to zero when \(\mathrm{e}_{\mathrm{S}}\) is small. A reasonable and simple choice would be
\[\beta(s)=\begin{cases}0,&s\leq S_{\mathrm{crit}},\\ s-S_{\mathrm{crit}},&s\geq S_{\mathrm{crit}},\end{cases}\]
for \(S_{\mathrm{crit}}\in(0,1)\). In this particular case we may set \(\beta^{+}=1-S_{\mathrm{crit}}\). In the context of machine learning, in particular, artificial neural networks, this type of functions is known as _rectified linear unit_ (ReLU). Note that \(\beta\) defined above satisfies \(\beta(S_{\mathrm{crit}})=0\), whereby it is a continuous function and thus the funnel controller contributes continuously to the overall control signal.
**Definition 3.6**.: _For \(\beta^{+}>0\), we call a function \(\beta\in\mathcal{C}([0,1],[0,\beta^{+}])\) an activation function, if \(\beta(1)=\beta^{+}\)._
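A minimal sketch of such a ReLU-type activation function in Python (the threshold \(S_{\mathrm{crit}}=0.8\) is an arbitrary illustrative value):

```python
def relu_activation(s, s_crit=0.8):
    """ReLU-type activation function beta: [0, 1] -> [0, 1 - s_crit].  It vanishes
    for s <= s_crit, so the funnel feedback stays switched off as long as the
    normalized mismatch ||e_S/varphi|| is below s_crit, and it satisfies
    beta(1) = 1 - s_crit = beta^+ as required in Definition 3.6."""
    return max(0.0, s - s_crit)
```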
With the definitions and concepts introduced so far at hand, we are in the position to establish the _robust funnel MPC algorithm_.
**Algorithm 3.7** (Robust funnel MPC).:
**Given:**
* instantaneous measurements of the output \(y(t)\) of system (1), reference signal \(y_{\mathrm{ref}}\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\), funnel function \(\psi\in\mathcal{G}\),
* model (2) with \((f,g,h)\in\mathcal{M}^{m}\) and diffeomorphism \(\Phi\) as in Assumption 3.1, stage cost function \(\ell\) as in (3), \(\xi\in\mathds{R}_{\geq 0}\), a proper initialization strategy \(\kappa_{\xi}:\mathds{R}^{n}\times\mathds{R}^{m}\to\mathds{R}^{n}\) as in Definition 3.5, bound \(M>0\) for the MPC control signal, and the initial value \(x^{0}\) of the model's state such that \[x^{0}\in X^{0}:=\left\{\begin{array}{l}x\in\mathds{R}^{n}\;\left|\begin{array} []{l}\|h(x)-y_{\mathrm{ref}}(0)\|<\psi(0),\\ \|y(0)-h(x)\|<\psi(0)-\|h(x)-y_{\mathrm{ref}}(0)\|\,,\\ \|[0,I_{n-m}]\Phi(x)\|\leq\xi\end{array}\right.\end{array}\right\},\] (9)
* a surjection \(N\in\mathcal{C}(\mathds{R}_{\geq 0},\mathds{R})\), a bijection \(\alpha\in\mathcal{C}([0,1),[1,\infty))\), and an activation function \(\beta\in\mathcal{C}([0,1],[0,\beta^{+}])\) with \(\beta^{+}>0\).
**Set** time shift \(\delta>0\), prediction horizon \(T\geq\delta\), and index \(k:=0\).
**Define** the time sequence \((t_{k})_{k\in\mathds{N}_{0}}\) by \(t_{k}:=k\delta\) and the first element of the sequence of predicted states \((x_{k}^{\mathrm{pre}})_{k\in\mathds{N}_{0}}\) by \(x_{0}^{\mathrm{pre}}:=x^{0}\).
**Steps:**
* (a) Obtain a measurement \(y(t_{k})=:\hat{y}_{k}\) of the system output \(y\) at time \(t_{k}\), and choose a _proper initialization_ \(\hat{x}_{k}=\kappa_{\xi}(x_{k}^{\mathrm{pre}},\hat{y}_{k})\in\Omega_{\xi}(x_{k}^{\mathrm{pre}},\hat{y}_{k})\) for the model.
* (b) Compute a solution \(u_{\mathrm{FMPC}}\in L^{\infty}([t_{k},t_{k}+T],\mathds{R}^{m})\) of the optimization problem \[\underset{u\in L^{\infty}([t_{k},t_{k}+T],\mathds{R}^{m}),\ \|u\|_{\infty}\leq M}{\text{minimize}}\quad\int_{t_{k}}^{t_{k}+T}\ell(t,x(t;t_{k},\hat{x}_{k},u),u(t))\ \mathrm{d}t.\] (10)
* (c) Predict the output \(y_{\mathrm{M}}\) of the model (2) on the interval \([t_{k},t_{k+1}]\) \[y_{\mathrm{M}}(t)=h(x(t;t_{k},\hat{x}_{k},u_{\mathrm{FMPC}})),\] and define the adaptive funnel \(\varphi:[t_{k},t_{k+1}]\to\mathds{R}_{>0}\) by \[\varphi(t):=\psi(t)-\|e_{\mathrm{M}}(t)\|\quad\text{ with }\quad e_{\mathrm{M}}(t)=y_{\mathrm{M}}(t)-y_{\mathrm{ref}}(t).\] (11)
* (d) Define the funnel control law, as in (4) but with reference \(y_{\rm M}\), funnel function \(\varphi\) as in (11), and activation factor \(\beta\), by \[u_{\rm FC}(t):=\beta(\|e_{\rm S}(t)/\varphi(t)\|)(N\circ\alpha)(\|e_{\rm S}(t)/\varphi(t)\|^{2})e_{\rm S}(t)/\varphi(t)\quad\text{ with }\quad e_{\rm S}(t)=y(t)-y_{\rm M}(t).\] (12)
* (e) Apply the feedback law \[\mu:[t_{k},t_{k+1})\to{\mathds{R}}^{m},\quad\mu(t)=u_{\rm FMPC}(t)+u_{\rm FC}(t)\] (13) to system (1). Set the predicted state \(x_{k+1}^{\rm pre}=x(t_{k+1};t_{k},\hat{x}_{k},u_{\rm FMPC})\), then increment \(k\) by \(1\) and go to Step (a).
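The following Python sketch summarizes one cycle, steps (a)-(e), of Algorithm 3.7. It is an illustration of the data flow only: `solve_fmpc_ocp` and `simulate_model_output` are hypothetical placeholders for an OCP solver for (10) and for an integrator of the model (2) returning the predicted state and output as callables, while `measure_output`, `kappa_xi`, `psi`, `y_ref`, `beta`, `N` and `alpha` correspond to the quantities of the same name in the algorithm.

```python
import numpy as np

def robust_fmpc_cycle(t_k, delta, x_pre, measure_output, kappa_xi, solve_fmpc_ocp,
                      simulate_model_output, psi, y_ref, beta, N, alpha):
    """One cycle of robust funnel MPC: returns the feedback mu(t, y(t)) to be applied
    on [t_k, t_k + delta) and the predicted model state for the next cycle."""
    # (a) measure the system output and properly initialize the model state
    y_hat = measure_output(t_k)
    x_hat = kappa_xi(x_pre, y_hat)
    # (b) solve the OCP (10) on [t_k, t_k + T]; u_fmpc is a callable t -> u_FMPC(t)
    u_fmpc = solve_fmpc_ocp(t_k, x_hat)
    # (c) predict the model output y_M and define the adaptive funnel (11)
    x_model, y_model = simulate_model_output(t_k, x_hat, u_fmpc)
    def varphi(t):
        return psi(t) - np.linalg.norm(y_model(t) - y_ref(t))
    # (d) funnel feedback (12) acting on the mismatch e_S = y - y_M
    def u_fc(t, y_t):
        w = (y_t - y_model(t)) / varphi(t)
        s = float(np.linalg.norm(w))
        return beta(s) * N(alpha(s**2)) * w
    # (e) overall feedback (13) and predicted state for the next cycle
    def mu(t, y_t):
        return u_fmpc(t) + u_fc(t, y_t)
    x_pre_next = x_model(t_k + delta)
    return mu, x_pre_next
```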
**Remark 3.8**.: Since \(\hat{x}_{k}\) is chosen via the proper initialization strategy \(\kappa_{\xi}(x_{k}^{\rm pre},\hat{y}_{k})\) at every time instant \(t_{k}\) for \(k\in{\mathds{N}}_{0}\), in general \(\hat{x}_{k}\neq x_{k}^{\rm pre}\). In particular, it is possible that \(\hat{x}_{0}\neq x^{0}\).
Before we show how the robust funnel MPC Algorithm 3.7 achieves the control objective in the presence of a model-plant mismatch and disturbances, we introduce the class of systems, for which our approach guarantees output tracking with predefined behavior of the error while continuously balancing tracking performance and control effort by adhering to funnel MPC whenever possible.
### System class
We introduce the class of systems to which the _real_ system to be controlled is assumed to belong. We consider systems given by
\[\dot{y}(t)=F(d(t),\textbf{T}(y)(t),u(t)),\quad y|_{[-\sigma,0]}=y^{0}, \tag{14}\]
where for \(\sigma\geq 0\) an initial history function \(y^{0}\in\mathcal{C}([-\sigma,0],{\mathds{R}}^{m})\) is given. System (14) consists of the unknown nonlinear function \(F\in\mathcal{C}({\mathds{R}}^{p}\times{\mathds{R}}^{q}\times{\mathds{R}}^{m},{ \mathds{R}}^{m})\), the bounded disturbance \(d\in L^{\infty}({\mathds{R}}_{\geq 0},{\mathds{R}}^{p})\) and unknown nonlinear operator \(\textbf{T}:\mathcal{C}([-\sigma,\infty),{\mathds{R}}^{m})\to L^{\infty}_{\rm loc }({\mathds{R}}_{\geq 0},{\mathds{R}}^{q})\), where the structural properties of \(F\) and **T** are specified in the subsequent paragraph. First, we introduce the operator class \(\mathcal{T}\) to which the operator **T** in (14) belongs.
**Definition 3.9**.: _For \(m,q\in{\mathds{N}}\) and \(\sigma\geq 0\), the set \(\mathcal{T}\) denotes the class of operators \(\textbf{T}:\mathcal{C}([-\sigma,\infty),{\mathds{R}}^{m})\to L^{\infty}_{\rm loc }({\mathds{R}}_{\geq 0},{\mathds{R}}^{q})\) for which the following properties hold:_
* _Causality:_ \(\forall\,y_{1},y_{2}\in\mathcal{C}([-\sigma,\infty),{\mathds{R}}^{m})\)__\(\forall\,t\geq 0\)_:_ \[y_{1}|_{[-\sigma,t]}=y_{2}|_{[-\sigma,t]}\quad\Longrightarrow\quad\textbf{T}(y_{ 1})|_{[0,t]}=\textbf{T}(y_{2})|_{[0,t]}.\]
* _Local Lipschitz:_ \(\forall\,t\geq 0\)__\(\forall\,y\in\mathcal{C}([-\sigma,t];{\mathds{R}}^{m})\)__\(\exists\,\Delta,\delta,c>0\)__\(\forall\,y_{1},y_{2}\in\mathcal{C}([-\sigma,\infty);{\mathds{R}}^{m})\) _with_ \(y_{1}|_{[-\sigma,t]}=y\)_,_ \(y_{2}|_{[-\sigma,t]}=y\) _and_ \(\|y_{1}(s)-y(t)\|<\delta\)_,_ \(\|y_{2}(s)-y(t)\|<\delta\) _for all_ \(s\in[t,t+\Delta]\)_:_ \[\operatorname*{ess\,sup}_{s\in[t,t+\Delta]}\|\textbf{T}(y_{1})(s)-\textbf{T}(y _{2})(s)\|\leq c\sup_{s\in[t,t+\Delta]}\|y_{1}(s)-y_{2}(s)\|\,.\]
* _Bounded-input bounded-output (BIBO):_ \(\forall\,c_{0}>0\)__\(\exists\,c_{1}>0\)__\(\forall\,y\in\mathcal{C}([-\sigma,\infty),{\mathds{R}}^{m})\)_:_ \[\sup_{t\in[-\sigma,\infty)}\|y(t)\|\leq c_{0}\ \Longrightarrow\ \sup_{t\in[0,\infty)}\|\textbf{T}(y)(t)\|\leq c_{1}.\]
In order to have a familiar picture in mind, we briefly discuss a special instance of the operator **T**. Consider a linear system
\[\dot{x}(t) =Ax(t)+Bu(t),\quad x(0)=x^{0}\] \[y(t) =Cx(t),\]
with \(A\in{\mathds{R}}^{n\times n}\) and \(C,B^{\top}\in{\mathds{R}}^{m\times n}\) satisfying (8) and such that \(CB\) is positive definite. Then, invoking the findings in [20], there exists an invertible \(U\in{\mathds{R}}^{n\times n}\) such that with \((y^{\top},\eta^{\top})^{\top}=Ux\) the above system can be transformed into
\[\dot{y}(t) =Ry(t)+S\eta(t)+\Gamma u(t),\] \[\dot{\eta}(t) =Q\eta(t)+Py(t),\]
where \(R\in\mathds{R}^{m\times m}\), \(S,P^{\top}\in\mathds{R}^{m\times(n-m)}\), \(Q\in\mathds{R}^{(n-m)\times(n-m)}\), \(\Gamma=CB\), and \(\operatorname{Re}\lambda<0\) for all eigenvalues \(\lambda\in\mathds{C}\) of \(Q\). Defining the linear integral operator
\[L:y(\cdot)\mapsto\left(t\mapsto\int_{0}^{t}e^{Q(t-s)}Py(s)\mathrm{d}s\right),\]
and setting \(d(t):=Se^{Qt}\left[0,I_{n-m}\right]Ux^{0}\) for \(t\geq 0\), the above system can be written as
\[\dot{y}(t)=d(t)+\mathbf{T}(y)(t)+\Gamma u(t),\]
where \(\mathbf{T}:y(\cdot)\mapsto(t\mapsto Ry(t)+SL(y)(t))\).
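For simulation purposes, this particular operator can be evaluated on a time grid by integrating the memory state \(\zeta(t)=L(y)(t)\), which satisfies \(\dot{\zeta}=Q\zeta+Py\), \(\zeta(0)=0\). The following sketch (Python with NumPy) uses a simple forward Euler step for \(\zeta\) and is meant as an illustration only.

```python
import numpy as np

def T_linear_example(y_grid, t_grid, R, S, Q, P):
    """Evaluate T(y)(t) = R y(t) + S L(y)(t) on the grid t_grid, where
    L(y)(t) = int_0^t exp(Q(t-s)) P y(s) ds is computed by forward Euler
    integration of zeta' = Q zeta + P y with zeta(0) = 0.
    y_grid contains one value y(t_k) per row, matching t_grid."""
    zeta = np.zeros(Q.shape[0])
    out = np.empty((len(t_grid), R.shape[0]))
    for k, t in enumerate(t_grid):
        out[k] = R @ y_grid[k] + S @ zeta
        if k + 1 < len(t_grid):
            dt = t_grid[k + 1] - t
            zeta = zeta + dt * (Q @ zeta + P @ y_grid[k])   # Euler step for the memory state
    return out
```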
We emphasize that the operator class \(\mathcal{T}\) encompasses a large number of operators appearing in modelling processes, such as nonlinear delay operators, backlash and relay hysteresis operators, as well as solution operators for infinite dimensional dynamical systems, cf. [5, 9, 18]. While the first property (causality) introduced in Definition 3.9 is quite intuitive, the second (locally Lipschitz) is of a more technical nature, required to guarantee existence and uniqueness of solutions, and the third property (BIBO) can be motivated from a practical point of view. Namely, the latter is an infinite-dimensional extension of the BIBS property (7) of the internal dynamics.
Next, we introduce a version of the so-called _high-gain_ property, which is essential in high-gain adaptive control. Roughly speaking this property guarantees that, if a large enough input is applied, the system reacts sufficiently fast. The definition is in virtue of [5, Def. 1.2].
**Definition 3.10**.: _For \(p,q,m\in\mathds{N}\) a function \(F\in\mathcal{C}(\mathds{R}^{p}\times\mathds{R}^{q}\times\mathds{R}^{m},\mathds{R}^{m})\) is said to have the perturbation high-gain property, if for every compact set \(K_{m}\subset\mathds{R}^{m}\) there exists \(\nu\in(0,1)\) such that for all compact sets \(K_{p}\subset\mathds{R}^{p}\), \(K_{q}\subset\mathds{R}^{q}\) the function_
\[\chi\colon\mathds{R}\to\mathds{R},\ s\mapsto\min\left\{\ \langle v,F(\delta,z, \Delta-sv)\rangle\ \left|\ \delta\in K_{p},\Delta\in K_{m},z\in K_{q},v\in\mathds{R}^{m},\ \nu\leq\|v\|\leq 1\ \right.\right\}\]
_satisfies \(\sup_{s\in\mathds{R}}\chi(s)=\infty\)._
Note that the perturbation high-gain property introduced in Definition 3.10 is, at first glance, stronger than the high-gain property defined in [5, Def. 1.2]. In order to account for possible bounded perturbations of the input, we require this modified property. Although it is clear that control affine systems satisfy both properties, it is an open problem whether the perturbation high-gain property and the high-gain property are equivalent.
With Definitions 3.9 and 3.10 at hand, we may formally introduce the class of systems under consideration.
**Definition 3.11**.: _We say that the system (14) belongs to the system class \(\mathcal{N}^{m}\), written \((d,F,\textbf{T})\in\mathcal{N}^{m}\), if, for some \(p,q\in\mathds{N}\) and \(\sigma\geq 0\), the following holds: \(d\in L^{\infty}(\mathds{R}_{\geq 0},\mathds{R}^{p})\), \(F\in\mathcal{C}(\mathds{R}^{p}\times\mathds{R}^{q}\times\mathds{R}^{m}, \mathds{R}^{m})\) has the perturbation high-gain property from Definition 3.10 and \(\textbf{T}\in\mathcal{T}\)._
**Remark 3.12**.: We comment on the system class and on the model class.
1. The system class \(\mathcal{N}^{m}\) encompasses control affine systems with relative degree one, (perturbation) high-gain property and stable internal dynamics. In particular, linear minimum-phase systems are contained in the system class.
2. Although there are many systems belonging to both the model class \(\mathcal{M}^{m}\) and the system class \(\mathcal{N}^{m}\), neither the set of admissible models \(\mathcal{M}^{m}\) is a subset of all considered systems \(\mathcal{N}^{m}\) nor the opposite is true. As an example for a model belonging to \(\mathcal{M}^{m}\) but not to \(\mathcal{N}^{m}\) consider \((f,g,h)\in\mathcal{M}^{m}\) with high-gain matrix \(\Gamma=h^{\prime}g=\left[\begin{smallmatrix}-1&0\\ 0&1\end{smallmatrix}\right]\). On the other hand, every system \((d,F,\mathbf{T})\in\mathcal{N}^{m}\) which involves a time delay, e.g. \(\mathbf{T}(y)(t)=y(t-\sigma)\) with \(\sigma>0\), cannot belong to \(\mathcal{M}^{m}\).
3. Note that the operator \(\mathbf{T}\) in (14) can be the solution operator of an infinite dimensional dynamical system, e.g. a partial differential equation. This situation was studied in [10], where a moving water tank was subject to funnel control, and the water in the tank was modelled by the linearized Saint-Venant equations.
### Main result
In the following main result we show that the robust funnel MPC Algorithm 3.7 is initially and recursively feasible and achieves tracking of a given reference signal with prescribed behavior.
**Theorem 3.13**.: _Consider a system (14) with \((d,F,\mathbf{T})\in\mathcal{N}^{m}\) and choose a model (2) with \((f,g,h)\in\mathcal{M}^{m}\). Let \(\psi\in\mathcal{G}\) and \(y_{\mathrm{ref}}\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) be given. Let \(y^{0}\in\mathcal{C}([-\sigma,0],\mathds{R}^{m})\) with \(\sigma\geq 0\) be an initial history function for system (14) with \(y^{0}(0)\in\mathcal{D}_{0}\). Then, for any \(\xi\geq 0\), the set \(X^{0}\) in (9) is non-empty and there exists \(M>0\) such that the robust funnel MPC Algorithm 3.7 with \(\delta>0\) and \(T\geq\delta\) is initially and recursively feasible for every \(x^{0}\in X^{0}\), i.e.,_
* _at every time instance_ \(t_{k}:=k\delta\) _for_ \(k\in\mathds{N}_{0}\) _the OCP (_10_) has a solution_ \(u_{k}^{*}\in L^{\infty}([t_{k},t_{k}+T],\mathds{R}^{m})\)_, and_
* _the closed-loop system consisting of the system (_14_) and the feedback law (_13_) has a global solution_ \(y:[-\sigma,\infty)\to\mathds{R}^{m}\)_._
_Each global solution \(y\) satisfies that_
* _all signals are bounded, in particular,_ \(u\in L^{\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) _and_ \(y\in L^{\infty}([-\sigma,\infty),\mathds{R}^{m})\)_,_
* _the tracking error between the system's output and the reference evolves within prescribed boundaries, i.e.,_ \[\forall\,t\geq 0\,:\,\,\|y(t)-y_{\mathrm{ref}}(t)\|<\psi(t).\]
The proof is relegated to Section 5.
**Remark 3.14**.: With Theorem 3.13 at hand we comment on the difference between the proposed control scheme and a straightforward combination of a MPC scheme with a feedback control law.
* The combination of feedforward control with feedback control, i.e., the two degree of freedom controller design [31], is a popular approach. The specific combination of funnel control with feedforward control methods was investigated in [4, 8]. In a similar fashion it is possible to combine a MPC scheme, in particular, a FMPC scheme, with an additional feedback controller. This possibility (i.e., no feedback between FMPC and the actual system) is realized in the robust FMPC Algorithm 3.7 by allowing that the initialization at the beginning of a MPC cycle consists only of the previous prediction \(x_{k}^{\mathrm{pre}}\) of the current _model_ state such that \(\hat{x}_{k}:=x_{k}^{\mathrm{pre}}\), which is a special instance of _proper initialization_. In this case, the FMPC control signal \(u_{\mathrm{FMPC}}\) can be computed offline using the given model. Then it is applied to the system as an open-loop control and the additional feedback control compensates errors, which occur due to deviations between the model and the system. This situation is illustrated as the second scenario in the simulation in Section 4, cf. Figure 4.
* The alternative to the open-loop operation of Algorithm 3.7 is a feedback induced by the utilization of measurements of the _system_ output \(\hat{y}_{k}:=y(t_{k})\). Then, initializing the model properly with \(\hat{x}_{k}\in\Omega_{\xi}(x_{k}^{\mathrm{pre}},\hat{y}_{k})\) ensures recursive feasibility of the MPC scheme on the one hand, and on the other hand, the state \(\hat{x}_{k}\) is chosen such that the model's output \(h(\hat{x}_{k})\) equals the system's output \(\hat{y}_{k}\), i.e., \(h(\hat{x}_{k})=\hat{y}_{k}\). With this re-initialization at the beginning of the MPC cycle, the influence of the control signal \(u_{\mathrm{FMPC}}\) to the system is taken into account, and moreover, since the error between the model's and the system's output is zero, the optimal control signal may have a better effect on the system's tracking behavior. This situation is illustrated in the third scenario in the simulation in Section 4, cf. Figure 5.
**Remark 3.15**.: If the model (5) and the system (14) coincide up to an additive bounded disturbance, then we may derive an explicit bound for the overall control input \(u=u_{\mathrm{FMPC}}+u_{\mathrm{FC}}\) a priori. Consider a model (5) and let the system be given by
\[\dot{y}(t) =p(y(t),\eta_{\mathrm{S}}(t))+\Gamma(\Phi^{-1}(y(t),\eta_{ \mathrm{S}}(t)))\,u(t)+d(t), y(0) =y^{0},\] \[\dot{\eta}_{\mathrm{S}}(t) =q(y(t),\eta_{\mathrm{S}}(t)), \eta_{\mathrm{S}}(0) =\eta_{\mathrm{S}}^{0},\]
where \(d\in L^{\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) and the high-gain matrix satisfies \(\Gamma(x)+\Gamma(x)^{\top}>0\) for all \(x\in\mathds{R}^{n}\). In this case the surjection in the funnel controller can be replaced with the function \(N(s)=-s\), whereby the funnel control law simplifies to \(u_{\mathrm{FC}}(\cdot)=-\beta(\|w(\cdot)\|)\alpha(\|w(\cdot)\|^{2})w(\cdot)\), \(w:=(y-y_{\mathrm{M}})/\varphi\), cf. [5, Rem. 1.8]. Let \(\varepsilon\in(0,1)\) be the smallest number such that
\[\beta(\varepsilon)\alpha(\varepsilon^{2})\varepsilon^{2}=\frac{\|\dot{\psi}\|_ {\infty}+3\max_{(y,\eta)\in K}\|p(y,\eta)\|+3\max_{(y,\eta)\in K}\|\Gamma( \Phi^{-1}(y,\eta))\|M+\|d\|_{\infty}}{\lambda_{\Gamma}},\]
where \(\lambda_{\Gamma}>0\) is the smallest eigenvalue of \(\Gamma(\cdot)+\Gamma(\cdot)^{\top}\) on \(\Phi^{-1}(K)\), and the compact set \(K\) is given in the proof of Proposition 5.1. Then, invoking the same arguments as in _Step three_ in the proof of Theorem 3.13, the overall control satisfies
\[\|u\|_{\infty}\leq M+\beta(\varepsilon)\alpha(\varepsilon^{2}).\]
Note that although the bound on the control \(u\) is explicitly given, this bound involves some non-trivial computations such as deriving the compact set \(K\) in Proposition 5.1 explicitly and computing the maximal values of the system parameters on this compact set. Moreover, the bound is conservative in the sense that in applications the maximal input will typically be much smaller.
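For illustration, the bound of Remark 3.15 can be evaluated numerically by solving the scalar equation for \(\varepsilon\) with a root-finding routine. The following sketch is not part of the original work; the design functions \(\alpha\), \(\beta\) and all numerical constants are placeholder assumptions.

```python
# Illustrative sketch of the a-priori bound in Remark 3.15 (all values are placeholders).
from scipy.optimize import brentq

def alpha(s):              # bijection [0,1) -> [1,infty), a common funnel-control choice
    return 1.0 / (1.0 - s)

def beta(s, s_crit=0.4):   # ReLU-like activation function (cf. Section 4)
    return max(0.0, s - s_crit)

# placeholder constants: ||psi_dot||, max|p|, max||Gamma||, ||d||, lambda_Gamma, M
psi_dot_max, p_max, gamma_max, d_max, lam_gamma, M = 1.0, 5.0, 0.3, 0.5, 0.8, 10.0
rhs = (psi_dot_max + 3 * p_max + 3 * gamma_max * M + d_max) / lam_gamma

g = lambda e: beta(e) * alpha(e ** 2) * e ** 2 - rhs   # the sought epsilon is a root of g

# locate the first sign change on a grid, then refine with a bracketing solver
grid = [i / 10_000 for i in range(1, 10_000)]
hi = next(e for e in grid if g(e) > 0)                 # first grid point past the smallest root
eps = brentq(g, hi - 1e-4, hi)

print("epsilon =", eps, "  a-priori bound on ||u||_inf:", M + beta(eps) * alpha(eps ** 2))
```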
## 4 Simulation
We illustrate the application of the proposed control strategy in Algorithm 3.7 by a numerical simulation. To this end, we consider a continuous chemical reactor and concentrate on the control goal to steer the reactor's temperature to a certain given value \(\bar{y}=y_{\rm ref}(t)\). In the reactor the first order and exothermic reaction Substance-\(1\)\(\rightarrow\) Substance-\(2\) takes place. Such a reactor can be modelled by the following system of equations, cf. [32]
\[\dot{y}(t) =bp(x_{1}(t),x_{2}(t),y(t))-qy(t)+u(t),\] \[\dot{x}_{1}(t) =c_{1}p(x_{1}(t),x_{2}(t),y(t))+d(x_{1}^{\rm in}-x_{1}(t)), \tag{15}\] \[\dot{x}_{2}(t) =c_{2}p(x_{1}(t),x_{2}(t),y(t))+d(x_{2}^{\rm in}-x_{2}(t)),\]
where \(x_{1}\) is the concentration of the reactant Substance-\(1\), \(x_{2}\) the concentration of the product Substance-\(2\) and \(y\) describes the reactor temperature; \(u\) is the feed temperature/coolant control input. The value \(x_{i}^{\rm in}\) is the (positive) concentration of Substance-\(i\) (\(i=1,2\)) in the feed flow. Further, the constant \(b>0\) describes the exothermicity of the reaction, \(d>0\) is associated with the dilution rate and \(q>0\) is a constant consisting of the combination of the dilution rate and the heat transfer rate. Further, \(c_{1},c_{2}\in\mathds{R}\) are the stoichiometric coefficients and \(p:\mathds{R}_{\geq 0}\times\mathds{R}_{\geq 0}\times\mathds{R}_{\geq 0}\rightarrow \mathds{R}_{\geq 0}\) is the reaction heat; here the latter involves the Arrhenius function and is assumed to be given as
\[p(x_{1},x_{2},y)=k_{0}e^{-\frac{k_{1}}{y}}x_{1},\]
where \(k_{0},k_{1}\) are positive parameters. As a model for this nonlinear reaction process we consider a linearization of system (15), obtained by linearizing the Arrhenius function around the desired final temperature \(\bar{y}=337.1K\) and \(x_{1}=\frac{1}{2}x_{1}^{\rm in}\). This results in
\[p_{\rm lin}(x_{1},x_{2},y)=k_{0}e^{-\frac{k_{1}}{\bar{y}}}x_{1}+\frac{k_{0}k_{1}e^{-\frac{k_{1}}{\bar{y}}}}{\bar{y}^{2}}\frac{x_{1}^{\rm in}}{2}(y-\bar{y}).\]
We set \(a_{1}:=\frac{k_{0}k_{1}e^{-\frac{k_{1}}{\bar{y}}}}{\bar{y}^{2}}\frac{x_{1}^{\rm in}}{2}\), \(a_{2}:=k_{0}e^{-\frac{k_{1}}{\bar{y}}}\) and define the expressions
\[A=\begin{bmatrix}ba_{1}-q&ba_{2}&0\\ c_{1}a_{1}&c_{1}a_{2}-d&0\\ c_{2}a_{1}&c_{2}a_{2}&-d\end{bmatrix}\in\mathds{R}^{3\times 3},\quad D= \begin{bmatrix}-ba_{1}\bar{y}\\ c_{1}a_{2}\bar{y}+dx_{1,\rm M}^{\rm in}\\ dx_{2,\rm M}^{\rm in}\end{bmatrix}\in\mathds{R}^{3}.\]
Then, with \(x:=(y_{\rm M},x_{1,\rm M},x_{2,\rm M})^{\top}\in\mathds{R}^{3}\) the model is given by
\[\dot{x}(t) =Ax(t)+Bu_{\rm FMPC}(t)+D,\] \[y_{\rm M}(t) =Cx(t),\]
where \(C=B^{\top}=[1,0,0]\in\mathds{R}^{1\times 3}\). We run the simulation on an interval of \([0,4]\) minutes, and choose according to [19, 32] the following values for the parameters: \(c_{1}=-1=-c_{2}\), \(k_{0}=e^{25}\), \(k_{1}=8700\), \(d=1.1\), \(q=1.25\), \(x_{1}^{\rm in}=1\), \(x_{2}^{\rm in}=0\) and \(b=209.2\). We simulate the following scenarios:
* _Case 1:_ FMPC without robustification, i.e., \(u_{\rm FMPC}\) is computed via Algorithm 2.1 and applied to the system without an additional funnel control loop; this is shown in Figure 3.
* _Case 2:_ RFMPC with trivial proper re-initialization, i.e., \(\hat{x}_{k}=x_{k}^{\rm pre}\) in Step (a) of Algorithm 3.7; this is depicted in Figure 4.
* _Case 3:_ RFMPC with proper initialization according to the system's output, i.e., \(\hat{x}_{k}\in\Omega_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k})\) such that \(h(\hat{x}_{k})=\hat{y}_{k}\) in Step (a) of Algorithm 3.7; this is shown in Figure 5.
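Before turning to the controller settings, the following sketch (illustrative only, not the code used for the figures) shows how the linearization constants \(a_{1},a_{2}\) and the matrices \(A\), \(B\), \(D\) defined above can be assembled from the listed parameter values; the initial state and step size are placeholder assumptions.

```python
import numpy as np

# parameter values as listed above
c1, c2 = -1.0, 1.0
k0, k1 = np.exp(25), 8700.0
d, q, b = 1.1, 1.25, 209.2
x1_in, x2_in = 1.0, 0.0
y_bar = 337.1                      # linearization point (desired final temperature)

# linearization constants a1, a2 as defined above
a2 = k0 * np.exp(-k1 / y_bar)
a1 = k0 * k1 * np.exp(-k1 / y_bar) / y_bar**2 * x1_in / 2

# model matrices A, B, D with C = B^T = [1, 0, 0]
A = np.array([[b * a1 - q, b * a2,      0.0],
              [c1 * a1,    c1 * a2 - d, 0.0],
              [c2 * a1,    c2 * a2,     -d ]])
B = np.array([1.0, 0.0, 0.0])
D = np.array([-b * a1 * y_bar, c1 * a2 * y_bar + d * x1_in, d * x2_in])

def model_step(x, u, dt=1e-3):
    """One forward-Euler step of the linear model x_dot = A x + B u_FMPC + D."""
    return x + dt * (A @ x + B * u + D)

x = np.array([300.0, 1.0, 0.0])    # placeholder initial state (y_M, x1_M, x2_M)
x = model_step(x, u=0.0)
```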
As activation function we take the ReLU-like map
\[\beta(s)=\begin{cases}0,&s\leq S_{\text{crit}},\\ s-S_{\text{crit}},&s\geq S_{\text{crit}},\end{cases}\]
where we choose \(S_{\text{crit}}=0.4\), i.e., the funnel controller becomes active if the error \(y-y_{\text{M}}\) exceeds \(40\%\) of the maximal distance to its funnel boundary. In this example, we restrict the MPC control signal to \(\|u_{\text{FMPC}}\|_{\infty}\leq 600\). Further, we choose the design parameters \(\lambda_{u}=10^{-4}\), prediction horizon \(T=0.5\), and time shift \(\delta=0.05\). In the following figures, the control signal generated via funnel MPC is labeled with the subscript FMPC (\(u_{\text{FMPC}}\)); the signal generated by the additional funnel controller is labeled with the subscript FC (\(u_{\text{FC}}\)).
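For concreteness, the activation function and the resulting funnel control component can be sketched as follows (illustrative only); the choices \(N(s)=-s\) and \(\alpha(s)=1/(1-s)\) are example design functions for a scalar output and are not prescribed by the simulation section.

```python
S_CRIT = 0.4

def beta(s):
    """ReLU-like activation: zero until the normalized error exceeds S_crit."""
    return max(0.0, s - S_CRIT)

# example design functions (placeholders; any admissible choice works)
N = lambda s: -s                    # surjection, cf. Remark 3.15 for Gamma + Gamma^T > 0
alpha = lambda s: 1.0 / (1.0 - s)   # bijection [0,1) -> [1, infty)

def u_fc(y, y_model, phi):
    """Funnel control component u_FC for a scalar output (m = 1)."""
    w = (y - y_model) / phi                       # normalized model-plant error
    return beta(abs(w)) * N(alpha(w ** 2)) * w    # inactive while |w| <= S_crit

u_total = 150.0 + u_fc(y=340.0, y_model=338.0, phi=5.0)   # u_FMPC + u_FC, illustrative numbers
```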
Figure 3 shows the application of the control signal computed with FMPC (Algorithm 2.1) in Case 1 to the system without an additional funnel control feedback loop. The error \(e_{\text{M}}(t)=y_{\text{M}}(t)-\bar{y}\) between the model's output \(y_{\text{M}}(t)\) and the reference \(\bar{y}\) evolves within the funnel boundaries \(\psi(t)\). However, the control signal computed with FMPC using the linear model is not sufficient to ensure that the tracking error \(e(t)=y(t)-\bar{y}\) evolves within the funnel boundaries \(\psi(t)\). The deviation is induced during the initial phase. After about one minute, the system is in a state close to the operating point of the linearized model, and hence the latter is a good approximation of the system. In this region, the control \(u_{\text{FMPC}}\) has a similar effect on both dynamics; however, the error \(y(t)-\bar{y}\) already evolves outside the funnel boundaries \(\psi(t)\).
Figure 3: Application of the control computed by FMPC without re-initialization and without additional funnel control feedback loop.
Figure 4: Application of the control computed by robust FMPC with additional funnel control feedback loop, without re-initialization.
Figure 4 shows the application of the control signal computed with robust funnel MPC (Algorithm 3.7) in Case 2, i.e., besides the FMPC control signal the additional funnel controller is applied in order to guarantee that the error \(y(t)-\bar{y}\) evolves within the boundaries \(\psi(t)\). Since the model and the system do not coincide, the system evolves differently and hence the funnel controller has to compensate for the model-plant mismatch.
Figure 5 shows the application of Algorithm 3.7 in Case 3. Besides the additional application of the funnel controller, the model's state is updated with \(h(\hat{x}_{k})=\hat{y}_{k}\) at the beginning of every MPC cycle. Note that in Figure 5 the funnel controller is inactive most of the time, i.e., the applied control signal can be viewed to be close to _optimal_ with respect to the cost function (3) (i.e., optimal in the sense of the OCP (10)).
## 5 Auxiliary results and proofs
Before we present the main proof, we establish some auxiliary results. In Section 5.1 we show the existence of a solution of the OCP (10). In Section 5.2 we state some results concerning the application and combination of the funnel controller with the MPC scheme. Finally, in Section 5.3 we provide a proof of the main result, Theorem 3.13.
### Existence of an optimal control
The first proposition concerns the existence of a solution of the OCP (10).
**Proposition 5.1**.: _Consider the model (2) with \((f,g,h)\in\mathcal{M}^{m}\). Let \(\delta>0\), \(\xi\geq 0\), \(\psi\in\mathcal{G}\), \(y_{\mathrm{ref}}\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) and a proper initialization strategy \(\kappa_{\xi}:\mathds{R}^{n}\times\mathds{R}^{m}\to\mathds{R}^{n}\) be given. Let the sequence \((t_{k})_{k\in\mathds{N}_{0}}\) be defined by \(t_{k}=k\delta\) and \((\hat{y}_{k})_{k\in\mathds{N}_{0}}\) be an arbitrary sequence with \(\hat{y}_{k}\in\mathcal{D}_{t_{k}}\) for all \(k\in\mathds{N}_{0}\). Then there exists \(M>0\), independent of \(\delta\), such that for all \(T\geq\delta\) and all \(x_{0}^{\mathrm{pre}}\in\{\ x\in\mathds{R}^{n}\ \mid h(x)\in\mathcal{D}_{t_{0}},\ \|[0,I_{n-m}]\Phi(x)\|\leq\xi\ \}\) the OCP_
\[\underset{\begin{subarray}{c}u\in L^{\infty}([t_{k},t_{k}+T],\mathds{R}^{m}),\\ \|u\|_{\infty}\leq M\end{subarray}}{\mathrm{minimize}}\quad\int_{t_{k}}^{t_{k}+T }\ell(t,x(t;t_{k},\kappa_{\xi}(x_{k}^{\mathrm{pre}},\hat{y}_{k}),u),u(t))\ dt \tag{16}\]
_has a solution \(u_{k}^{\star}\in L^{\infty}([t_{k},t_{k}+T],\mathds{R}^{m})\) for all \(k\in\mathds{N}_{0}\), where \((x_{k}^{\mathrm{pre}})_{k\in\mathds{N}_{0}}\) is defined by_
\[x_{k+1}^{\mathrm{pre}}:=x(t_{k+1};t_{k},\kappa_{\xi}(x_{k}^{\mathrm{pre}},\hat {y}_{k}),u_{k}^{\star}).\]
_Moreover, the piecewise continuous function_
\[y_{\mathrm{M}}:\mathds{R}_{\geq 0}\to\mathds{R}^{m},\quad t\mapsto\sum_{k\in \mathds{N}_{0}}h(x(t;t_{k},\kappa_{\xi}(x_{k}^{\mathrm{pre}},\hat{y}_{k}),u_{ k}^{\star}))|_{[t_{k},t_{k+1})}\]
_satisfies \(y_{\mathrm{M}}(t)\in\mathcal{D}_{t}\) for all \(t\geq 0\) and there exists \(\bar{\lambda}>0\) such that \(\mathrm{ess}\sup_{t\geq 0}\|\dot{y}_{\mathrm{M}}(t)\|\leq\bar{\lambda}\). The bound \(\bar{\lambda}\) is independent of \((\hat{y}_{k})_{k\in\mathds{N}_{0}}\), \(\delta\), \(x_{0}^{\mathrm{pre}}\) and \(\kappa_{\xi}\)._
Figure 5: Application of the control computed by robust FMPC with additional funnel control feedback loop and with re-initialization.
Proof.: _Step one._ We introduce some notation. We denote by \(\mathcal{Y}_{\hat{\zeta}}(I)\) the set of all functions \(\zeta\in\mathcal{R}(I,\mathds{R}^{m})\) which start at \(\hat{\zeta}\in\mathds{R}^{m}\) and \(\zeta-y_{\text{ref}}\) evolves within the funnel given by \(\psi\) on an interval \(I\subseteq\mathds{R}_{\geq 0}\) of the form \(I=[a,b]\) with \(b\in(a,\infty)\) or \(I=[a,b)\) with \(b=\infty\):
\[\mathcal{Y}_{\hat{\zeta}}(I):=\left\{\ \zeta\in\mathcal{R}(I,\mathds{R}^{m})\ \Big{|}\ \zeta(\inf I)=\hat{\zeta},\ \ \forall\,t\in I:\ \zeta(t)\in\mathcal{D}_{t}\ \right\}.\]
Recall that for \(k\in\mathds{N}_{0}\), \(\hat{\eta}\in\mathds{R}^{n-m}\) and \(\zeta\in\mathcal{R}([t_{k},\infty),\mathds{R}^{m})\), \(\eta(\cdot;t_{k},\hat{\eta},\zeta):[t_{k},\infty)\to\mathds{R}^{n-m}\) denotes the global solution of the initial value problem (5b), \(\eta(t_{k})=\hat{\eta}\), where \(y_{\text{M}}\) is substituted by \(\zeta\). Define for \(k\in\mathds{N}_{0}\) and \(t\geq t_{k}\) the set
\[N_{t_{k}}^{t}:=\left\{\ \eta(t;t_{k},\hat{\eta},\zeta)\ \Big{|}\ (\hat{\zeta},\hat{ \eta})\in\mathcal{D}_{t_{k}}\times\mathds{B}_{\xi},\ \zeta\in\mathcal{Y}_{\hat{\zeta}}([t_{k},\infty))\ \right\},\]
where \(\mathds{B}_{\xi}:=\left\{\ z\in\mathds{R}^{n-m}\ |\ \|z\|\leq\xi\ \right\}\). Finally, define the set
\[\mathcal{U}(t_{k},\hat{x}):=\left\{\ u\in L^{\infty}([t_{k},t_{k}+T],\mathds{R }^{m})\ \Big{|}\ \begin{array}{l}\|u\|_{\infty}\leq M,\\ \forall\,t\in[t_{k},t_{k}+T]:\ h(x(t;t_{k},\hat{x},u))\in\mathcal{D}_{t}\end{array}\right\}\]
of all \(L^{\infty}\)-controls \(u\) bounded by \(M>0\) which, if applied to the model (2), guarantee that the error \(e_{\text{M}}=y_{\text{M}}-y_{\text{ref}}\) evolves within the funnel \(\mathcal{F}_{\psi}\) on the interval \([t_{k},t_{k+1}]\).
_Step two._ For arbitrary \(k\in\mathds{N}_{0}\), we make three observations:
1. Since \(\kappa_{\xi}\) is a proper initialization strategy and \(\hat{y}_{k}\in\mathcal{D}_{t_{k}}\) for all \(k\in\mathds{N}_{0}\), the following holds: \[x_{k}^{\text{pre}}\in\Phi^{-1}\left(\mathcal{D}_{t_{k}}\times\bigcup_{i=0}^{k }N_{t_{i}}^{t_{k}}\right)\quad\Longrightarrow\quad\kappa_{\xi}(x_{k}^{\text{ pre}},\hat{y}_{k})\in\Phi^{-1}\left(\mathcal{D}_{t_{k}}\times\bigcup_{i=0}^{k}N_{t_{i}}^ {t_{k}}\right).\]
2. If, for \(\hat{x}\in\Phi^{-1}\left(\mathcal{D}_{t_{k}}\times\bigcup_{i=0}^{k}N_{t_{i}} ^{t_{k}}\right)\), the set \(\mathcal{U}(t_{k},\hat{x})\) is non-empty and an element \(u\in\mathcal{U}(t_{k},\hat{x})\) is applied to the model (2), then \[x(t_{k+1};t_{k},\hat{x},u)\in\Phi^{-1}\left(\mathcal{D}_{t_{k+1}}\times\bigcup _{i=0}^{k+1}N_{t_{i}}^{t_{k+1}}\right).\]
3. If, for \(\hat{x}\in\mathds{R}^{n}\), the set \(\mathcal{U}(t_{k},\hat{x})\) is non-empty, then the OCP \[\operatorname*{minimize}_{u\in L^{\infty}\begin{subarray}{c}([t_{k},t_{k}+T], \mathds{R}^{m}),\\ \|u\|_{\infty}\leq M\end{subarray}}\quad\int_{t_{k}}^{t_{k}+T}\ell(t,x(t;t_{k},\hat{x}),u(t))\ \text{d}t\] has a solution \(u_{k}^{\star}\in\mathcal{U}(t_{k},\hat{x})\) according to [3, Thm. 4.6].
For (i) observe that \([I_{m},0]\Phi(x)=h(x)\) for all \(x\in\mathds{R}^{n}\). To see (ii), let \(\hat{x}\in\Phi^{-1}\left(\mathcal{D}_{t_{k}}\times\bigcup_{i=0}^{k}N_{t_{i}}^{t_{k}}\right)\) be such that \(\mathcal{U}(t_{k},\hat{x})\) is non-empty. If \(u\in\mathcal{U}(t_{k},\hat{x})\) is applied to the model (2), then \(h(x(t;t_{k},\hat{x},u))\in\mathcal{D}_{t}\) for all \(t\in[t_{k},t_{k}+T]\), in particular \(h(x(t_{k+1};t_{k},\hat{x},u))\in\mathcal{D}_{t_{k+1}}\). Furthermore, there exists \(i\leq k\) such that \([0,I_{n-m}]\Phi(\hat{x})\in N_{t_{i}}^{t_{k}}\) and hence there exist \((\hat{\zeta},\hat{\eta})\in\mathcal{D}_{t_{i}}\times\mathds{B}_{\xi}\) and \(\zeta\in\mathcal{Y}_{\hat{\zeta}}([t_{i},\infty))\) with \([0,I_{n-m}]\Phi(\hat{x})=\eta(t_{k};t_{i},\hat{\eta},\zeta)\). Define \(\tilde{\zeta}:[t_{i},\infty)\to\mathds{R}^{m}\) by
\[\tilde{\zeta}(t):=\begin{cases}h(x(t;t_{k},\hat{x},u)),&t\in[t_{k},t_{k+1}]\\ \zeta(t),&t\in[t_{i},t_{k})\cup(t_{k+1},\infty).\end{cases}\]
Then, \(\tilde{\zeta}\in\mathcal{Y}_{\hat{\zeta}}([t_{i},\infty))\) and \(\eta(t_{k+1};t_{i},\hat{\eta},\tilde{\zeta})\in N_{t_{i}}^{t_{k+1}}\). Thus,
\[\Phi(x(t_{k+1};t_{k},\hat{x},u))=\begin{pmatrix}h(x(t_{k+1};t_{k},\hat{x},u))\\ [0,I_{n-m}]\Phi(x(t_{k+1};t_{k},\hat{x},u))\end{pmatrix}=\begin{pmatrix}h(x(t_{k +1};t_{k},\hat{x},u))\\ \eta(t_{k+1};t_{i},\hat{\eta},\tilde{\zeta})\end{pmatrix}\in\mathcal{D}_{t_{k+1 }}\times N_{t_{i}}^{t_{k+1}}.\]
_Step three._ Since \(\psi\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R})\) and \(y_{\text{ref}}\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) are bounded, the set \(O:=\bigcup_{t\geq 0}\mathcal{D}_{t}\) is bounded. Thus, for all \(k\in\mathds{N}_{0}\) and all \(\hat{\zeta}\in\mathcal{D}_{t_{k}}\) every function \(\zeta\in\mathcal{Y}_{\hat{\zeta}}([t_{k},\infty))\) is bounded. Since \(O\times\mathds{B}_{\xi}\) is bounded,
it follows from the BIBS condition (7) that the set \(N:=\bigcup_{k\in\mathds{N}_{0}}\bigcup_{t\geq t_{k}}N_{t_{k}}^{t}\) is also bounded. Then the set \(K:=\overline{O\times N}\) is compact and
\[\forall\,T>0\ \forall\,k\in\mathds{N}_{0}\ \forall\,(\hat{\zeta},\hat{\eta})\! \in\!\mathcal{D}_{t_{k}}\times\bigcup_{i=0}^{k}N_{t_{i}}^{t_{k}}\ \forall\,\zeta\in\mathcal{Y}_{\hat{\zeta}}([t_{k},t_{k}+T])\ \forall\,t\in\![t_{k},t_{k}+T]\!:(\zeta(t),\eta(t;t_{k},\hat{\eta},\zeta))\! \in\!K.\]
To see this, let \(T>0\), \(k\in\mathds{N}_{0}\), and \((\hat{\zeta},\hat{\eta})\in\mathcal{D}_{t_{k}}\times\bigcup_{i=0}^{k}N_{t_{i}} ^{t_{k}}\) be arbitrarily given. Then, there exists \(i\leq k\) with \(\hat{\eta}\in N_{t_{i}}^{t_{k}}\). By definition of \(N_{t_{i}}^{t_{k}}\), there exist \((\hat{\zeta}_{0},\hat{\eta}_{0})\in\mathcal{D}_{t_{k}}\times\mathds{B}_{\xi}\) and \(\zeta_{0}\in\mathcal{Y}_{\hat{\zeta}_{0}}([t_{i},\infty))\) such that \(\hat{\eta}=\eta(t_{k};t_{i},\hat{\eta}_{0},\zeta_{0})\). Let \(\zeta\in\mathcal{Y}_{\hat{\zeta}}([t_{k},t_{k}+T])\), then \(\zeta(t)\in\mathcal{D}_{t}\subseteq O\) for all \(t\in[t_{k},t_{k}+T]\). Define \(\tilde{\zeta}:[t_{i},\infty)\to\mathds{R}^{m}\) by
\[\tilde{\zeta}(t):=\begin{cases}\zeta(t),&t\in[t_{k},t_{k}+T],\\ \zeta_{0}(t),&t\in[t_{i},t_{k})\cup(t_{k}+T,\infty).\end{cases}\]
Then, \(\tilde{\zeta}\in\mathcal{Y}_{\hat{\zeta}_{0}}([t_{i},\infty))\) and for \(t\in[t_{k},t_{k}+T]\) we have
\[\eta(t;t_{k},\hat{\eta},\zeta)=\eta(t;t_{k},\eta(t_{k};t_{i},\hat{\eta}_{0},\zeta_{0}),\zeta)=\eta(t;t_{i},\hat{\eta}_{0},\tilde{\zeta})\in N_{t_{i}}^{t}\subseteq N.\]
_Step four._ A straightforward adaption of [3, Prop. 4.9], using the constructed compact set \(K\), yields the existence of \(M>0\) such that for all \(k\in\mathds{N}_{0}\) the set \(\mathcal{U}(t_{k},\hat{x})\) is non-empty if \(\hat{x}\in\Phi^{-1}\left(\mathcal{D}_{t_{k}}\times\bigcup_{i=0}^{k}N_{t_{i}} ^{t_{k}}\right)\).
_Step five._ We show by induction that
\[\forall\,k\in\mathds{N}_{0}:\ x_{k}^{\text{pre}}\in\Phi^{-1}\left(\mathcal{D} _{t_{k}}\times\bigcup_{i=0}^{k}N_{t_{i}}^{t_{k}}\right)\]
and
\[\forall\,k\in\mathds{N}_{0}\ \forall\,t\in[t_{k},t_{k+1}]:\quad h(x(t;t_{k}, \kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k}),u_{k}^{\star}))\in\mathcal{D}_{t}.\]
Since \(x_{0}^{\text{pre}}\in\{\ x\in\mathds{R}^{n}\ |\ h(x)\in\mathcal{D}_{t_{0}},\ \|[0,I_{n-m}]\Phi(x)\|\leq\xi\ \}\) by assumption, \(x_{0}^{\text{pre}}\in\Phi^{-1}\left(\mathcal{D}_{t_{0}}\times N_{t_{0}}^{t_{0}}\right)\). Due to observation (i) of Step two, \(\kappa_{\xi}(x_{0}^{\text{pre}},\hat{y}_{0})\in\Phi^{-1}\left(\mathcal{D}_{t_{0}}\times N_{t_{0}}^{t_{0}}\right)\). Thus, \(\mathcal{U}(t_{0},\kappa_{\xi}(x_{0}^{\text{pre}},\hat{y}_{0}))\neq\emptyset\) according to Step four. The optimization problem (16) has a solution \(u_{0}^{\star}\in\mathcal{U}(t_{0},\kappa_{\xi}(x_{0}^{\text{pre}},\hat{y}_{0}))\) because of observation (iii) in Step two. Due to the definition of \(\mathcal{U}(t_{0},\kappa_{\xi}(x_{0}^{\text{pre}},\hat{y}_{0}))\), this implies in particular \(h(x(t;t_{0},\kappa_{\xi}(x_{0}^{\text{pre}},\hat{y}_{0}),u_{0}^{\star}))\in\mathcal{D}_{t}\) for all \(t\in[t_{0},t_{1}]\) and
\[x_{1}^{\text{pre}}=x(t_{1};t_{0},\kappa_{\xi}(x_{0}^{\text{pre}},\hat{y}_{0}),u _{0}^{\star})\in\Phi^{-1}\left(\mathcal{D}_{t_{1}}\times\bigcup_{i=0}^{1}N_{t_{ i}}^{t_{1}}\right)\]
according to observation (ii) in Step two.
If \(x_{k}^{\text{pre}}\in\Phi^{-1}\left(\mathcal{D}_{t_{k}}\times\bigcup_{i=0}^{k}N _{t_{i}}^{t_{k}}\right)\) for \(k\in\mathds{N}\), then \(\kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k})\in\Phi^{-1}\left(\mathcal{D}_{t_{ k}}\times\bigcup_{i=0}^{k}N_{t_{i}}^{t_{k}}\right)\) due to observation (i) in Step two. Thus, \(\mathcal{U}(t_{k},\kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k}))\neq\emptyset\) according to Step four. Because of observation (iii) in Step two, the OCP (16) has a solution \(u_{k}^{\star}\in\mathcal{U}(t_{k},\kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k}))\). By definition of \(\mathcal{U}(t_{k},\kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k}))\), this results in \(h(x(t;t_{k},\kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k}),u_{k}^{\star}))\in \mathcal{D}_{t}\) for all \(t\in[t_{k},t_{k+1}]\) and
\[x_{k+1}^{\text{pre}}=x(t_{k+1};t_{k},\kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k} ),u_{k}^{\star})\in\Phi^{-1}\left(\mathcal{D}_{t_{k+1}}\times\bigcup_{i=0}^{k+1 }N_{t_{i}}^{t_{k+1}}\right)\]
according to observation (ii) in Step two.
_Step six._ It follows from Step five that \(y_{\text{M}}(t)\in\mathcal{D}_{t}\) for all \(t\in\mathds{R}_{\geq 0}\). Define \(y_{\text{M},k}:=y_{\text{M}}|_{[t_{k},t_{k+1}]}\) and \(\hat{\eta}_{k}:=[0,I_{n-m}]\Phi(\kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k}))\) for all \(k\in\mathds{N}_{0}\). Due to the definition of the compact set \(K\) and since \(y_{\text{M},k}\in\mathcal{Y}_{y_{\text{M},k}(t_{k})}([t_{k},t_{k+1}])\), we have \((y_{\text{M},k}(t),\eta(t;t_{k},\eta_{k},y_{\text{M},k}))\in K\) for all \(t\in[t_{k},t_{k+1}]\) and all \(k\in\mathds{N}_{0}\). The functions \(y_{\text{M},k}\) and \(\eta(\cdot;t_{k},\eta_{k},y_{\text{M},k})\) satisfy the differential equation (5) on the interval \([t_{k},t_{k+1}]\) for all \(k\in\mathds{N}\), thus
\[\operatorname*{ess\,sup}_{t\geq 0}\|\dot{y}_{\text{M}}(t)\| \leq\sup_{k\in\mathds{N}_{0}}\operatorname*{ess\,sup}_{t\in[t_{k},t_{k+1}]}\|\dot{y}_{\text{M},k}(t)\|\] \[=\sup_{k\in\mathds{N}_{0}}\operatorname*{ess\,sup}_{t\in[t_{k},t_{k+1}]}\left\|p(y_{\text{M},k}(t),\eta(t;t_{k},\hat{\eta}_{k},y_{\text{M},k}))+\Gamma(\Phi^{-1}(y_{\text{M},k}(t),\eta(t;t_{k},\hat{\eta}_{k},y_{\text{M},k})))\,u_{k}^{\star}(t)\right\|\] \[\leq\max_{(y,\eta)\in K}\|p(y,\eta)\|+\max_{(y,\eta)\in K}\|\Gamma(\Phi^{-1}(y,\eta))\|\,M=:\bar{\lambda}.\]
The last inequality holds for all choices of \((\hat{y}_{k})_{k\in\mathds{N}_{0}}\), \(\delta\), \(x_{0}^{\text{pre}}\), and \(\kappa_{\xi}\), which completes the proof.
### Auxiliary funnel control results
Concerning the application of funnel control, we state the following results.
**Lemma 5.2**.: _Let \(N\in\mathcal{C}(\mathds{R}_{\geq 0},\mathds{R})\) be a surjection, \(\alpha\in\mathcal{C}([0,1),[1,\infty))\) be a bijection, and \(\beta\in\mathcal{C}([0,1],[0,\beta^{+}])\) be an activation function with \(\beta^{+}>0\). Then \(\tilde{N}:=(\beta\circ\sqrt{\alpha^{-1}})\cdot N\in\mathcal{C}(\mathds{R}_{ \geq 0},\mathds{R})\) is surjective._
Proof.: \(N\in\mathcal{C}(\mathds{R}_{\geq 0},\mathds{R})\) being a surjection is equivalent to \(\limsup_{s\to\infty}N(s)=\infty\) and \(\liminf_{s\to\infty}N(s)=-\infty\). Since \(\lim_{s\to\infty}(\beta\circ\sqrt{\alpha^{-1}})(s)=\beta^{+}>0\), we have
\[\limsup_{s\to\infty}\tilde{N}(s)=\infty\quad\text{ and }\quad\liminf_{s\to\infty} \tilde{N}(s)=-\infty.\]
This implies that \(\tilde{N}=(\beta\circ\sqrt{\alpha^{-1}})\cdot N\in\mathcal{C}(\mathds{R}_{ \geq 0},\mathds{R})\) is surjective as well.
**Proposition 5.3**.: _Consider a system (14) with \((d,F,\textbf{T})\in\mathcal{N}^{m}\). Let \(y^{0}\in\mathcal{C}([-\sigma,0],\mathds{R}^{m})\), \(\sigma\geq 0\), \(D\in L^{\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\), \(\beta\in\mathcal{C}([0,1],[0,\beta^{+}])\) be an activation function with \(\beta^{+}>0\), \(\rho\in W^{1,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) and \(\varphi\in\mathcal{G}\) be given such that \(\|y^{0}(0)-\rho(0)\|<\varphi(0)\). Then the application of_
\[u(t)=\beta(\|e(t)/\varphi(t)\|)(N\circ\alpha)(\|e(t)/\varphi(t)\|^{2})e(t)/ \varphi(t),\quad e(t):=y(t)-\rho(t),\]
_to the system_
\[\dot{y}(t)=F\big{(}d(t),\textbf{T}(y)(t),D(t)+u(t)\big{)},\quad y|_{[-\sigma, 0]}=y^{0},\]
_yields a closed-loop initial value problem, which has a solution, every solution can be maximally extended, and every maximal solution \(y:[0,\omega)\to\mathds{R}^{m}\) has the following properties_
1. _the solution is global, i.e.,_ \(\omega=\infty\)_,_
2. _all signals are bounded, in particular,_ \(u,\dot{y}\in L^{\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) _and_ \(y\in L^{\infty}([-\sigma,\infty),\mathds{R}^{m})\)_,_
3. _the tracking error evolves within prescribed error bounds, i.e.,_ \[\forall\,t\geq 0\,:\,\,\|y(t)-\rho(t)\|<\varphi(t).\]
Proof.: The proof is a straightforward modification of the proof of [5, Thm. 1.9], using Lemma 5.2 and the perturbation high-gain property from Definition 3.10 instead of the high-gain property, similar to Step four of the proof of Theorem 3.13.
### Proof of the main result
Now we are in the position to present the proof of the main result Theorem 3.13.
Proof of Theorem 3.13.: Let \(\Phi\) be a diffeomorphism associated with the model \((f,g,h)\) according to Assumption 3.1.
_Step one._ We show that the set \(X^{0}\) of initial values for the model is non-empty. To this end, let \(z:=\Phi^{-1}(y^{0}(0),0_{n-m})\). Then, recalling \([I_{m},0]\Phi(\cdot)=h(\cdot)\), we have \(h(z)=y^{0}(0)\). Therefore, \(\|h(z)-y_{\text{ref}}(0)\|=\left\|y^{0}(0)-y_{\text{ref}}(0)\right\|<\psi(0)\) because \(y^{0}(0)\in\mathcal{D}_{0}\). Further, \(\|y(0)-h(z)\|=0\) and \(\|[0,I_{n-m}]\Phi(z)\|=\left\|[0,I_{n-m}]\Phi(\Phi^{-1}(y^{0}(0),0_{n-m})) \right\|=0\leq\xi.\) Thus, \(z\in X^{0}\).
_Step two._ According to Proposition 5.1 there exists \(M>0\) such that, for every \(\hat{x}_{0}\in X^{0}\) and every possible sequence of measurements \((\hat{y}_{k})_{k\in\mathds{N}_{0}}\) with \(\hat{y}_{k}\in\mathcal{D}_{t_{k}}\) for all \(k\in\mathds{N}_{0}\), the OCP (10) has a solution \(u_{k,\text{FMPC}}\) for every \(k\in\mathds{N}_{0}\) and initialization \(\hat{x}_{k}=\kappa_{\xi}(x_{k}^{\text{pre}},\hat{y}_{k})\) of the model (2), where \(x_{k+1}^{\text{pre}}=x(t_{k+1};t_{k},\hat{x}_{k},u_{k,\text{FMPC}})\).
_Step three._ Now we turn towards the part where the funnel controller (12) is involved. On each interval \([t_{k},t_{k+1}]\) the system's dynamics are given by
\[\dot{y}_{k}(t)=F(d(t),\textbf{T}(y_{k})(t),u_{k}(t)),\quad y_{k}|_{[-\sigma,t_{ k}]}=y_{k-1}|_{[-\sigma,t_{k}]}, \tag{17}\]
where \(y_{-1}|_{[-\sigma,0]}:=y^{0}\), and in particular \(y_{k}(t_{k})=y_{k-1}(t_{k})\), i.e., although the model's state is updated at \(t=t_{k}\), the system is not re-initialized at the time instances \(t_{k}\). The funnel control signal is, for \(k\in\mathds{N}_{0}\) and \(t\in[t_{k},t_{k+1}]\), given by
\[u_{k,\mathrm{FC}}(t)=\beta(\|y_{k}(t)-y_{k,\mathrm{M}}(t)\|/\varphi_{k}(t))(N \circ\alpha)(\|(y_{k}(t)-y_{k,\mathrm{M}}(t))/\varphi_{k}(t)\|^{2})(y_{k}(t)-y _{k,\mathrm{M}}(t))/\varphi_{k}(t),\]
where \(y_{k,\mathrm{M}}(t)=h(x(t;t_{k},\kappa_{\xi}(x_{k}^{\mathrm{pre}},y_{k}(t_{k} )),u_{k,\mathrm{FMPC}}))\) for \(t\in[t_{k},t_{k+1}]\) and the funnel function for the funnel control law is piecewise defined by
\[\varphi_{k}:[t_{k},t_{k+1}]\to\mathds{R},\quad t\mapsto\psi(t)-\|y_{k, \mathrm{M}}(t)-y_{\mathrm{ref}}(t)\|,\quad k\in\mathds{N}_{0}.\]
Invoking Proposition 5.1, we have \(y_{k,\mathrm{M}}(t)\in\mathcal{D}_{t}\) for all \(t\in[t_{k},t_{k}+T]\), by which each \(\varphi_{k}\) satisfies \(0<\varphi_{k}(t)\leq\psi(t)\) for all \(t\in[t_{k},t_{k+1}]\) and all \(k\in\mathds{N}_{0}\). Every \(\varphi_{k}\) can be smoothly extended to the left and right such that the extension \(\bar{\varphi}_{k}\) satisfies \(\bar{\varphi}_{k}\in\mathcal{G}\) for all \(k\in\mathds{N}_{0}\). We show that the control law
\[u_{k}(t)=u_{k,\mathrm{FMPC}}(t)+u_{k,\mathrm{FC}}(t) \tag{18}\]
applied to the system (17) for \(k\in\mathds{N}_{0}\), leads to a closed-loop system which has a global solution with the properties as in Proposition 5.3. Special attention is required since \(y_{k,\mathrm{M}}(t_{k})\neq y_{k-1,\mathrm{M}}(t_{k})\) and hence also \(\varphi_{k}(t_{k})\neq\varphi_{k-1}(t_{k})\) is possible. We observe that for \(x^{0}\in X^{0}\) we have either
\[\|y_{0,\mathrm{M}}(0)-y_{0}(0)\|=\|h(x^{0})-y_{0}(0)\|<\psi(0)-\|h (x^{0})-y_{\mathrm{ref}}(0)\|=\varphi_{0}(0)\] \[\mathrm{or}\quad\|y_{0,\mathrm{M}}(0)-y_{0}(0)\|=0<\psi(0)-\|h( x^{0})-y_{\mathrm{ref}}(0)\|=\varphi_{0}(0),\]
and so \(y_{0}(0)\in\mathcal{D}_{0}\). Then Proposition 5.1 yields \(\|u_{0,\mathrm{FMPC}}\|_{\infty}\leq M\). Thus, in both cases the feasibility result Proposition 5.3 for the funnel controller is applicable and yields the existence of a solution \(y_{0}:[0,t_{1}]\to\mathds{R}^{m}\) of the closed-loop problem (17), (18) for \(k=0\), with \(\|y_{0,\mathrm{M}}(t)-y_{0}(t)\|<\varphi_{0}(t)\) for all \(t\in[t_{0},t_{1}]\). Then, choosing \(\hat{x}_{1}=\kappa_{\xi}(x_{1}^{\mathrm{pre}},y_{0}(t_{1}))\in\Omega_{\xi}(x_{ 1}^{\mathrm{pre}},y_{0}(t_{1}))\), at \(t=t_{1}\) we have either \(\hat{x}_{1}=x_{1}^{\mathrm{pre}}\), which gives \(y_{1,\mathrm{M}}(t_{1})=y_{0,\mathrm{M}}(t_{1})\) and thus
\[\|y_{1,\mathrm{M}}(t_{1})-y_{1}(t_{1})\|=\|y_{0,\mathrm{M}}(t_{1})-y_{0}(t_{1 })\|<\varphi_{0}(t_{1})=\psi(t_{1})-\|y_{0,\mathrm{M}}(t_{1})-y_{\mathrm{ref}} (t_{1})\|=\varphi_{1}(t_{1}),\]
or \(y_{1,\mathrm{M}}(t_{1})=h(\hat{x}_{1})=y_{0}(t_{1})=y_{1}(t_{1})\), in which case the estimate above is valid as well; thus \(y_{1}(t_{1})\in\mathcal{D}_{t_{1}}\). Proposition 5.1 yields \(\|y_{1,\mathrm{M}}(t)-y_{\mathrm{ref}}(t)\|<\psi(t)\) for \(t\in[t_{1},t_{2}]\) with \(\|u_{1,\mathrm{FMPC}}\|_{\infty}\leq M\). Hence the conditions to reapply Proposition 5.3 are satisfied at \(t=t_{1}\), so that a solution \(y_{1}:[t_{1},t_{2}]\to\mathds{R}^{m}\) of the closed-loop problem (17), (18) exists for \(k=1\), with \(\|y_{1,\mathrm{M}}(t)-y_{1}(t)\|<\varphi_{1}(t)\) for all \(t\in[t_{1},t_{2}]\). Repeating this line of argument we successively obtain, for each \(k\in\mathds{N}_{0}\), a solution \(y_{k}:[t_{k},t_{k+1}]\to\mathds{R}^{m}\) of the closed-loop problem (17), (18) with \(\|y_{k,\mathrm{M}}(t)-y_{k}(t)\|<\varphi_{k}(t)\) for all \(t\in[t_{k},t_{k+1}]\).
_Step four._ By defining \(y:[-\sigma,\infty)\to\mathds{R}^{m}\) via \(y|_{[-\sigma,0]}=y^{0}\), \(y|_{[t_{k},t_{k+1}]}=y_{k}\) for \(k\in\mathds{N}_{0}\) we obtain a global solution of (14), (13) which satisfies \(\|y_{\mathrm{M}}(t)-y(t)\|<\psi(t)-\|y_{\mathrm{M}}(t)-y_{\mathrm{ref}}(t)\|=\varphi(t)\) for all \(t\geq 0\), where \(y_{\mathrm{M}}(t):=y_{k,\mathrm{M}}(t)\) and \(\varphi(t):=\varphi_{k}(t)\) for \(t\in[t_{k},t_{k+1})\), \(k\in\mathds{N}_{0}\). It remains to show that the overall control
\[u(t):=u_{k}(t),\qquad t\in[t_{k},t_{k+1}),\quad k\in\mathds{N}_{0},\]
is bounded, which we prove by showing that there exists \(\varepsilon\in(0,1)\) such that \(\|y(t)-y_{\mathrm{M}}(t)\|\leq\varepsilon\varphi(t)\) for all \(t\geq 0\). For the sake of better legibility, we introduce the variable \(w(t):=(y(t)-y_{\mathrm{M}}(t))/\varphi(t)\). Choose compact sets \(K_{p}\subset\mathds{R}^{p}\) and \(K_{q}\subset\mathds{R}^{q}\) such that \(d(t)\in K_{p}\) and \(\mathbf{T}(y)(t)\in K_{q}\) for \(t\geq 0\). Further, for \(K_{m}:=\{\ D\in\mathds{R}^{m}\ |\ \|D\|\leq M\ \}\), \(\nu\in(0,1)\) and \(V:=\{\ v\in\mathds{R}^{m}\ |\ \nu\leq\|v\|\leq 1\ \}\) we recall the continuous function from Definition 3.10
\[\chi(s)=\min\left\{\ \langle v,F(\delta,\zeta,\Delta-sv)\rangle\ |\ \delta\in K_{p},\zeta\in K_{q},\Delta\in K_{m},v\in V\ \ \right\}.\]
\(F\) has the perturbation high-gain property and hence the function \(\chi\) is unbounded from above for a suitable \(\nu\in(0,1)\). We note that \(\|w(0)\|<1\) as shown in Step three, and with \(\lambda:=\|\dot{\psi}\|_{\infty}+\|\dot{y}_{\mathrm{ref}}\|_{\infty}\) and \(\bar{\lambda}\geq\|\dot{y}_{\mathrm{M}}\|_{\infty}\) from Proposition 5.1, we choose \(\varepsilon\in(0,1)\) large enough such that \(\varepsilon>\max\{\nu,\|w(0)\|\}\) and
\[\chi(\beta(\varepsilon)(N\circ\alpha)(\varepsilon^{2}))\geq 4\bar{\lambda}+2\lambda,\]
which is possible because of the properties of \(\beta\), \(N\), \(\alpha\) and \(\chi\). We show that
\[\forall\,t\geq 0:\ \|w(t)\|\leq\varepsilon. \tag{19}\]
Unlike the standard funnel control framework, the funnel function \(\varphi\) may have discontinuities at the time instances \(t_{k}\) when the model is re-initialized with \(\hat{x}_{k}\in\Omega_{\xi}(x_{k}^{\text{pre}},y(t_{k}))\) such that \(h(\hat{x}_{k})=y(t_{k})\). This fact requires particular attention when proving (19). We observe that \(\varphi\) is continuous on \([t_{k},t_{k+1}]\) for all \(k\in\mathds{N}_{0}\) and satisfies, by Proposition 5.1,
\[|\dot{\varphi}(t)|\leq|\dot{\psi}(t)|+\|\dot{y}_{\text{M}}\|+\|\dot{y}_{\text{ ref}}(t)\|\leq\lambda+\bar{\lambda}\]
for almost all \(t\geq 0\), independent of \(k\). Now fix an arbitrary \(k\in\mathds{N}_{0}\) and consider two cases.
_Case 1_ : If \(\hat{x}_{k}\in\Omega_{\xi}(x_{k}^{\text{pre}},y_{k-1}(t_{k}))\) is such that \(h(\hat{x}_{k})=y_{k-1}(t_{k})\), then \(y_{\text{M}}(t_{k})=y(t_{k})\) and hence \(\|w(t_{k})\|=0<\varepsilon\). Seeking a contradiction, we suppose that there exists \(t^{*}\in(t_{k},t_{k+1}]\) such that \(\|w(t^{*})\|>\varepsilon\), and invoking continuity of \(w\) on \([t_{k},t_{k+1}]\) we set
\[t_{*}:=\sup\left\{\ t\in[t_{k},t^{*})\ |\ \|w(t)\|=\varepsilon\ \right\}<t^{*}.\]
Then we have \(\|w(t)\|\geq\varepsilon\geq\nu\) (and hence \(w(t)\in V\)) for all \(t\in[t_{*},t^{*}]\) and, since \(\|w(t_{*})\|=\varepsilon\),
\[\chi(\beta(\|w(t_{*})\|)(N\circ\alpha)(\|w(t_{*})\|^{2}))\geq 4\bar{ \lambda}+2\lambda.\]
Therefore, there exists \(t^{**}\in(t_{*},t^{*}]\) such that
\[\forall\,t\in[t_{*},t^{**}]:\ \chi(\beta(\|w(t)\|)(N\circ\alpha)(\|w(t)\|^{2})) \geq 2\bar{\lambda}+\lambda.\]
Then we calculate that, for almost all \(t\in[t_{*},t^{**}]\),
\[\tfrac{\mathrm{d}}{\mathrm{d}t}\tfrac{1}{2}\|w(t)\|^{2} =\langle w(t),\dot{w}(t)\rangle=\left\langle w(t),\frac{-\dot{\varphi}(t)(y(t)-y_{\text{M}}(t))+\varphi(t)(\dot{y}(t)-\dot{y}_{\text{M}}(t))}{\varphi(t)^{2}}\right\rangle\] \[=-\frac{\dot{\varphi}(t)}{\varphi(t)}\langle w(t),w(t)\rangle-\frac{1}{\varphi(t)}\langle w(t),\dot{y}_{\text{M}}(t)\rangle+\frac{1}{\varphi(t)}\langle w(t),F(d(t),\mathbf{T}(y)(t),u(t))\rangle\] \[<\frac{1}{\varphi(t)}\Big{(}|\dot{\varphi}(t)|+\|\dot{y}_{\text{M}}(t)\|+\langle w(t),F(d(t),\mathbf{T}(y)(t),u(t))\rangle\Big{)}\] \[\leq\frac{1}{\varphi(t)}(\lambda+2\bar{\lambda})+\frac{1}{\varphi(t)}\langle w(t),F\big{(}d(t),\mathbf{T}(y)(t),u_{k,\text{FMPC}}(t)+u_{k,\text{FC}}(t)\big{)}\rangle\] \[=\frac{1}{\varphi(t)}(\lambda+2\bar{\lambda})+\frac{1}{\varphi(t)}\langle w(t),F\big{(}d(t),\mathbf{T}(y)(t),u_{k,\text{FMPC}}(t)+\beta(\|w(t)\|)(N\circ\alpha)(\|w(t)\|^{2})w(t)\big{)}\rangle\] \[\leq\frac{1}{\varphi(t)}(\lambda+2\bar{\lambda})-\frac{1}{\varphi(t)}\min\left\{\langle v,F(\delta,\zeta,\Delta-\beta(\|w(t)\|)(N\circ\alpha)(\|w(t)\|^{2})v)\rangle\,\big{|}\,\delta\in K_{p},\zeta\in K_{q},\Delta\in K_{m},v\in V\right\}\] \[\leq\frac{1}{\varphi(t)}\Big{(}\lambda+2\bar{\lambda}-\chi(\beta(\|w(t)\|)(N\circ\alpha)(\|w(t)\|^{2}))\Big{)}\leq 0,\]
which upon integration, and invoking the definition of \(t_{*}<t^{**}\), gives \(\varepsilon<\|w(t^{**})\|\leq\|w(t_{*})\|=\varepsilon\), a contradiction. Therefore, \(\|w(t)\|\leq\varepsilon\) for all \(t\in[t_{k},t_{k+1}]\).
_Case 2_: If \(\hat{x}_{k}=x_{k}^{\text{pre}}\), then \(\varphi_{k-1}(t_{k})=\varphi_{k}(t_{k})\) and thus the funnel function \(\varphi\) is continuous and weakly differentiable on the interval \([t_{k-1},t_{k+1}]\). In this case, it follows that \(\|w(t)\|\leq\varepsilon\) for all \(t\in[t_{k-1},t_{k+1}]\) with the same arguments as in Case 1.
Overall, we have shown that \(\|w(t)\|\leq\varepsilon\) for all \(t\in[t_{k},t_{k+1}]\) and all \(k\in\mathds{N}_{0}\), independent of the initialization strategy. Therefore, \(\|u\|_{\infty}\leq M+\beta^{+}|(N\circ\alpha)(\varepsilon^{2})|\) and this proves assertion (i).
_Step five._ Finally, a simple calculation yields that for \(t\geq 0\) we have
\[\|y(t)-y_{\text{ref}}(t)\| =\|y(t)-y_{\text{M}}(t)+y_{\text{M}}(t)-y_{\text{ref}}(t)\|\leq \|y(t)-y_{\text{M}}(t)\|+\|y_{\text{M}}(t)-y_{\text{ref}}(t)\|\] \[<\varphi(t)+\|y_{\text{M}}(t)-y_{\text{ref}}(t)\|=\psi(t)-\|y_{ \text{M}}(t)-y_{\text{ref}}(t)\|+\|y_{\text{M}}(t)-y_{\text{ref}}(t)\|=\psi(t),\]
which is assertion (ii). This completes the proof.
|
2306.02972 | Simultaneous or Sequential Training? How Speech Representations
Cooperate in a Multi-Task Self-Supervised Learning System | Speech representation learning with self-supervised algorithms has resulted
in notable performance boosts in many downstream tasks. Recent work combined
self-supervised learning (SSL) and visually grounded speech (VGS) processing
mechanisms for representation learning. The joint training with SSL and VGS
mechanisms provides the opportunity to utilize both unlabeled speech and
speech-related visual information based on data availability. This has shown to
enhance the quality of learned representations, especially at encoding
semantic- and lexical-level knowledge. In this work, we further study the joint
optimization of wav2vec 2.0-based SSL and transformer-based VGS as a multi-task
learning system. We explore a set of training scenarios to understand how
speech representations are shared or transferred between the two tasks, and
what is the optimal training strategy for cross-modal semantic retrieval and
phoneme discrimination performance. As a result, we find that sequential
training with wav2vec 2.0 first and VGS next provides higher performance on
audio-visual retrieval compared to simultaneous optimization of both learning
mechanisms. However, the parallel SSL-VGS training reduces the effects of
catastrophic forgetting when switching between optimization criteria. Moreover,
the results suggest that phonemic representations learned through the VGS
mechanism may generalize better across datasets compared to those learned with
SSL. | Khazar Khorrami, María Andrea Cruz Blandón, Tuomas Virtanen, Okko Räsänen | 2023-06-05T15:35:19Z | http://arxiv.org/abs/2306.02972v1 | Simultaneous or Sequential Training? How Speech Representations Cooperate in a Multi-Task Self-Supervised Learning System +
###### Abstract
Speech representation learning with self-supervised algorithms has resulted in notable performance boosts in many downstream tasks. Recent work combined self-supervised learning (SSL) and visually grounded speech (VGS) processing mechanisms for representation learning. The joint training with SSL and VGS mechanisms provides the opportunity to utilize both unlabeled speech and speech-related visual information based on data availability. This has shown to enhance the quality of learned representations, especially at encoding semantic- and lexical-level knowledge. In this work, we further study the joint optimization of wav2vec 2.0-based SSL and transformer-based VGS as a multi-task learning system. We explore a set of training scenarios to understand how speech representations are shared or transferred between the two tasks, and what is the optimal training strategy for cross-modal semantic retrieval and phoneme discrimination performance. As a result, we find that sequential training with wav2vec 2.0 first and VGS next provides higher performance on audio-visual retrieval compared to simultaneous optimization of both learning mechanisms. However, the parallel SSL-VGS training reduces the effects of catastrophic forgetting when switching between optimization criteria. Moreover, the results suggest that phonemic representations learned through the VGS mechanism may generalize better across datasets compared to those learned with SSL.
speech representation learning, visually grounded speech, multi-task learning, multi-modal neural networks
## I Introduction and related works
Visually grounded speech (VGS) processing refers to algorithms that learn correspondences between image and speech data in an unsupervised manner (see [1] for a review). VGS models are central to the study of autonomous AI systems that could ground their world knowledge in multimodal associations. In addition, they are commonly used to model human infant language learning [2]. The data for training a VGS model comes in the form of images paired with spoken descriptions of the images. In a typical VGS system, speech and image data are processed in parallel neural modules and then mapped together into a shared embedding space, where a similarity-score-based contrastive optimization is used for network training (see, e.g., [3]). The system is usually evaluated for its performance on speech-to-image and image-to-speech retrieval tasks (see [1]).
Previous research has shown that hidden layers of trained VGS models reflect linguistic information at, e.g., phonemic and lexical levels, showing that the models can be used for (multimodal) speech representation learning [2, 4, 5, 6]. This is similar to unimodal algorithms for self-supervised learning (SSL), such as wav2vec 2.0 [7], HuBERT [8], and CPC [9], that learn useful speech representations using acoustic speech input as the only data. Similar to large-scale language models (e.g., BERT [10]), the speech representations learned through self-supervised models have shown notable performance boosts in many supervised downstream tasks, such as phoneme or emotion recognition [7, 11, 12], thereby having potential especially in low-resource speech tasks.
Recently, Peng and Harwath [13, 14] introduced a system that jointly learns speech representations using acoustic-level self-supervised learning and semantic-level visual grounding. The use of two learning mechanisms provides a potential advantage over the individual mechanisms: the system can process a combination of speech-only (unlabeled) data through an SSL block and (weakly labeled) speech-image pairs through a VGS block according to data availability in the two cases. This enables potentially synergetic and flexible learning of speech representations from the two previously established representation learning mechanisms, as several layers of the speech encoder module are shared between the SSL and VGS networks. Using this model, Peng and Harwath [13] showed that the joint model performs competitively on the phonemic task of the ZeroSpeech 2021 challenge [15] and on the SUPERB benchmark [12] while also outperforming many models at semantic and lexical tasks.
The joint VGS and SSL training, as in [13] and [14], can be seen as a multi-task multi-domain system with the capacity for both incremental and simultaneous learning. While catastrophic forgetting is a main challenge in domain-incremental learning, the major problem with the task-incremental learning is to obtain the knowledge that can be transferred across tasks
[16]. However, the potential synergies of a shared encoder for SSL and VGS tasks were not comprehensively studied in the earlier works ([13, 14]). In addition, most of the experiments by Peng and Harwath used an additional corpus (LibriSpeech [17]) for SSL learning or pre-trained weight initialization, making it difficult to disentangle benefits of mechanism synergies per se from potential benefits of simply having more (and more varied) training data for the SSL. Thereby, it remains unclear in what conditions joint SSL and VGS training facilitates the learning process (e.g., in terms of final representation quality, learning rate, or cross-corpus generalization) compared to what is obtained by the individual mechanisms, and whether benefits of joint training also occur in a case where only the same audio data is available to both learning mechanisms. Also, if the joint training provides learning improvements, what would be the best way to schedule learning with the SSL and VGS mechanisms (e.g., parallel or sequential optimization)?
In this work we try to answer the above questions by investigating a set of combined SSL and VGS training scenarios using the system from [13] for speech-to-image semantic mapping and self-supervised acoustic modeling. We study how temporal sequencing of the learning mechanisms affects phonemic and audiovisual semantic learning when both mechanisms have access to the same audio data.
## II Model description
We adopted the FaST-VGS+ model from [13] with a simplification of using only the "coarse" audio-visual loss of the model for computational feasibility (i.e., the "Fast" transformer version; see [18] for more details). The model (Fig. 1) consists of two main mechanisms for speech-image training (a transformer-based VGS model [18]) and speech SSL training (masking-based speech acoustic modeling with wav2vec 2.0; [7]). Most of the speech encoder is shared between the VGS and SSL mechanisms and optimized for the both tasks (Fig. 1, green block).
In the VGS pipeline, the image and speech inputs are processed in parallel branches where the classification (CLS) tokens of the last transformer layers are used as "semantic" speech and image embeddings. These embeddings are compared using a cosine similarity score and optimized for similarity (dissimilarity) in the case of matching (mismatching) speech-image pairs. The speech-based SSL uses the wav2vec 2.0 (from now on: W2V2) network, which randomly masks segments of the input speech and learns speech representations by predicting the masked sections from other parts of the same utterance.
The audio waveform encoder shared by SSL and VGS is a 6-layer convolutional neural network (CNN) that maps the input acoustic waveform (at 16 kHz) to a 512-d embedding (calculated every 10 ms). It is followed by an 8-layer transformer block ("speech encoder" in Fig. 1) shared between the VGS and W2V2 networks, and 4 additional transformer layers dedicated to W2V2 only ("speech decoder"). ResDAVEnet [19] is a stack of convolutional and pooling layers that applies down-sampling in time, and the image encoder is a 6-layer transformer block. The dimension of all transformer layers is 768. The VGS network is trained through a masked and marginalized "InfoNCE" loss [20] (here denoted as \(loss_{AV}\)), a contrastive learning method that tries to minimize the distance between ground-truth speech-image pairs compared to a set of random distractor pairs taken from the same training mini-batch. In W2V2, a contrastive masking loss tries to minimize the distance between the masked speech representations and their ground-truth quantized versions compared to a set of distractors coming from the same utterance. Moreover, a diversity loss is used to encourage the equal use of codebook entries in the quantization block.
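As a schematic illustration of the cross-modal objective (not the actual FaST-VGS+ implementation), the CLS embeddings can be compared with cosine similarity and trained with an InfoNCE-style contrastive loss over the mini-batch; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def audio_visual_loss(speech_cls, image_cls, temperature=0.07):
    """InfoNCE-style contrastive loss over a mini-batch of matching
    speech/image CLS embeddings (schematic only)."""
    s = F.normalize(speech_cls, dim=-1)            # (B, 768) speech CLS tokens
    v = F.normalize(image_cls, dim=-1)             # (B, 768) image CLS tokens
    sim = s @ v.t() / temperature                  # (B, B) cosine similarity scores
    targets = torch.arange(s.size(0), device=s.device)
    # ground-truth pairs lie on the diagonal; the other mini-batch entries act as distractors
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))

loss_av = audio_visual_loss(torch.randn(8, 768), torch.randn(8, 768))
```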
Following the original work [7], we combined the masking (\(loss_{AUD,R}\)) and diversity (\(loss_{AUD,D}\)) losses in a 1:0.1 proportion, denoting their sum as \(loss_{AUD}\). The VGS+ model is trained by combining \(loss_{AV}\) and \(loss_{AUD}\) with a coefficient \(\alpha\) that controls the emphasis on the two training mechanisms as
\[loss=\alpha loss_{AV}+(1-\alpha)loss_{AUD}. \tag{1}\]
By varying \(\alpha\) at training time, we could manipulate the contribution (and timing) of the auditory and audiovisual learning mechanisms in the overall system training.
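The combination of Eq. (1) with the 1:0.1 weighting of the masking and diversity terms can be written compactly as follows (variable names are ours):

```python
def total_loss(loss_av, loss_masking, loss_diversity, alpha):
    """Eq. (1): alpha = 1 recovers VGS-only training, alpha = 0 recovers W2V2-only
    training, and alpha = 0.5 corresponds to the jointly trained VGS+ variant."""
    loss_aud = loss_masking + 0.1 * loss_diversity   # 1:0.1 combination of the W2V2 terms
    return alpha * loss_av + (1.0 - alpha) * loss_aud
```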
## III Experiments
The aim of the experiments was to understand how the SSL and VGS mechanisms interact in different training scenarios with varying emphasis on auditory and audiovisual losses, and what training strategy results in the best representation quality for phonemic discrimination and audiovisual retrieval tasks.
### _Datasets_
We utilized the SpokenCOCO (SC) dataset [21] as the training data. It comes with 123k images and 5 spoken English captions per image, resulting in a total of 742 h of speech. We used 118k images for model training and 5k images for testing on semantic retrieval tasks. Phoneme discrimination of the representations was measured on the LibriSpeech (LS) dev-clean subset (denoted by C) [17].
Fig. 1: VGS+ as a joint model of VGS and SSL training. The green block is optimized for the both tasks.
### _Model variations_
We considered three base model variants: 1) VGS only (\(\alpha=1\)), 2) W2V2 only (\(\alpha=0\)), and 3) VGS+ with an equal emphasis on both losses (\(\alpha=0.5\)). We also defined six training variations of these by relative scheduling of the basic optimization approaches. In each variation, one base model was used as pretraining for follow-up training with another variant. We denote the scheduled system as (A, B), where A is the base model used in the pretraining phase and B is the main training phase. As the first scenario, we pretrained with VGS or W2V2 and continued with VGS+ (i.e., (W2V2, VGS+) and (VGS, VGS+)). As the second scenario, we pretrained with one of the individual models, VGS or W2V2, and then continued with the other model ((W2V2, VGS) and (VGS, W2V2)). Finally, as the third scenario, we pretrained with VGS+ and continued with one of the individual models, VGS or W2V2 ((VGS+, W2V2) and (VGS+, VGS)).
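These scenarios can be encoded as a simple mapping from training phase to the loss weight \(\alpha\) of Eq. (1), as in the following sketch (our notation); the epoch counts correspond to the implementation details given below.

```python
ALPHA = {"VGS": 1.0, "W2V2": 0.0, "VGS+": 0.5}

# base models: a single 70-epoch phase; scheduled variants (A, B): 20 + 50 epochs
SCENARIOS = {
    "VGS":          [("VGS", 70)],
    "W2V2":         [("W2V2", 70)],
    "VGS+":         [("VGS+", 70)],
    "(W2V2, VGS+)": [("W2V2", 20), ("VGS+", 50)],
    "(VGS, VGS+)":  [("VGS", 20),  ("VGS+", 50)],
    "(W2V2, VGS)":  [("W2V2", 20), ("VGS", 50)],
    "(VGS, W2V2)":  [("VGS", 20),  ("W2V2", 50)],
    "(VGS+, W2V2)": [("VGS+", 20), ("W2V2", 50)],
    "(VGS+, VGS)":  [("VGS+", 20), ("VGS", 50)],
}

def alpha_schedule(scenario):
    """Yield the per-epoch loss weight alpha of Eq. (1) for a given training scenario."""
    for phase, n_epochs in SCENARIOS[scenario]:
        for _ in range(n_epochs):
            yield ALPHA[phase]
```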
### _Evaluation_
Representation learning was evaluated in terms of semantic and phonemic performance scores, tested separately for each training variation to investigate which scenario is optimal with respect to the measured performance metrics. We also qualitatively analyzed the loss curves to understand how the optimization strategy affects the dynamics of the training process.
As the metric for phonemic representations, we used the ABX phonemic discrimination score by [22], also used as one of the primary metrics in the ZeroSpeech challenges [15]. ABX is measured in both within- (W) and across-speaker (A) conditions, where the latter reflects the cross-speaker generality of the learned phonemic distinctions (see [22]).
For evaluating the semantic knowledge of the model, we used the recall@k metric [23] to measure speech-to-image and image-to-speech retrieval performance (see [1] for more details on the evaluation of semantic retrieval tasks in VGS models).
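For reference, recall@k can be computed directly from a cross-modal similarity matrix as in the following sketch (ours, not the evaluation script used here), assuming a one-to-one pairing of queries and targets; the actual SpokenCOCO evaluation pairs each image with five captions.

```python
import torch

def recall_at_k(similarity, k=10):
    """Fraction of queries (rows) whose ground-truth match, assumed to be the
    diagonal entry, appears among the k most similar columns."""
    topk = similarity.topk(k, dim=1).indices                   # (N, k)
    targets = torch.arange(similarity.size(0)).unsqueeze(1)    # (N, 1)
    return (topk == targets).any(dim=1).float().mean().item()

sim = torch.randn(100, 100)   # e.g. speech-to-image cosine similarity scores
print(recall_at_k(sim, k=10), recall_at_k(sim.t(), k=10))
```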
### _Implementation Details_
The base models were trained for 70 epochs, based on the saturation of semantic retrieval performance in pilot experiments. For the scheduling scenarios, the base model trained for 20 epochs was used as the initialization for another 50 epochs with the other loss weighting configuration. We used the Adam optimizer with an initial learning rate of \(10^{-4}\), a warm-up fraction of 0.1, and then a linear decay towards the end of the training. The optimizer state was reset after the pretraining process. For the semantic retrieval score, we used the classification tokens of the speech and the image embedding layers. We report recall@10 on 25k test pairs (5k images, each paired with 5 spoken captions) at the final epoch of the training. For measuring the ABX phoneme discrimination error, we saved the model every 5 epochs and measured the ABX error for the representations of all 12 intermediate layers of the speech encoder and decoder blocks (cf. Fig. 1).
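The learning-rate schedule described above can be sketched as follows (illustrative only); the interpretation of the warm-up fraction as a fraction of the total number of update steps, and the total step count itself, are assumptions.

```python
def lr_at_step(step, total_steps, base_lr=1e-4, warmup_fraction=0.1):
    """Linear warm-up over the first fraction of updates, then linear decay to zero."""
    warmup_steps = int(warmup_fraction * total_steps)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

lrs = [lr_at_step(s, total_steps=10_000) for s in range(10_000)]   # total_steps is a placeholder
```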
## IV Results and Discussion
### _Semantic retrieval_
For semantic retrieval performance, we measured and compared recall@1 and recall@10 scores for both speech-to-image and image-to-speech retrieval tasks. Table I shows the results on test data obtained at the end of the training process.
The base models, VGS and VGS+, have similar performance. This result accords with the previous report [13] and indicates that the semantic retrieval performance does not benefit from simultaneous optimization of speech encoder representations for the auditory SSL (W2V2) task. However, when the W2V2 training precedes VGS training (the (W2V2, VGS+) and (W2V2, VGS) variants), there is a substantial improvement in recall scores. Notably, we observe this improvement when pretraining _on the same data_ as the main training, whereas previous improvements have been reported with pretraining on large-amounts of additional speech data not used for the VGS task (LibriSpeech in [13, 18]). We also tested and observed that using both LibriSpeech and SpokenCOCO at the pretraining step does not improve the retrieval performance of the (W2V2, VGS) and (W2V2, VGS+) variants above what is gained by using either of the datasets. Thus, the improvement is mainly the result of self-supervised initialization, not from using more speech data (see also the results from different data settings in [13]). We also tested if the semantic scores can be further improved by incorporating more epochs to the W2V2 pretraining phase. However, this did not improve the recall scores from those obtained by the initial setting of 20 epochs.
The reason why the same retrieval performance gain is not achieved by simultaneous training from scratch (i.e., VGS+) could be that the VGS loss is easier to optimize than the W2V2 loss. This may cause VGS to dominate the training process, resulting in a less optimal overall solution for the W2V2 (see Fig. 3 and the discussion comparing the two loss curves).
Finally, catastrophic forgetting quickly results in chance-level recall score when switching from VGS pretraining to W2V2 training. In contrast, the performance remains fairly
stable with (VGS+, W2V2). This suggests that synergistic representations between SSL and VGS are possible for audio-visual learning, but require the presence of both mechanisms from the start of the training in order to be robust against later alternation between the two tasks.
### _Phonemic discrimination_
Table II shows the ABX results obtained for the different training variants. In general, VGS outperforms W2V2 training in all tested variants, with the best ABX score obtained by (VGS+, VGS), followed by VGS. Bearing in mind that all models are trained on SC, which is from a different domain than the LS data used in the ABX test, this result suggests that audiovisual training provides phonemic representations that generalize better across data domains. In contrast, pretraining with W2V2 prior to VGS or VGS+ training does not result in equally good cross-dataset generalization in ABX. This is a highly relevant finding that requires further research, considering that SSL models tend to suffer from domain-mismatch problems [24]. In addition, previous work has not reported any performance benefits for ABX from the use of visual data, but, except for [13], all these studies have used LS data for both SSL training and ABX testing (see a comparison in [13]).
Fig. 2 (left) shows the (W-C) ABX error of the different hidden layers of the transformer-based speech encoder and decoder. The first layers of the encoder perform comparably in all tested variants. In the best-performing variant, (VGS+, VGS), the score improves slightly for the deeper encoder layers, whereas in VGS all hidden layers perform similarly well in the task. Overall, for the variants having VGS+ in one of the training phases, the error decreases with increasing layer depth in the encoder and increases again for the deeper layers of the speech decoder. A similar performance pattern across layers is observed when W2V2 is trained on LS data [13].
Fig. 2 (right) illustrates the lowest (W-C) ABX error (among layers) across the training epochs. For all base models, the ABX error improves monotonically during training. For the scheduled versions, the ABX behavior at the transition between the losses is always smooth when the initial training phase is the acoustic-level W2V2 (blue lines). In contrast, in (VGS, W2V2), and to a lesser extent in (VGS+, W2V2), the error increases substantially when shifting from VGS/VGS+ to W2V2 training. The behavior of the ABX score at the transition points suggests that the phonemic representations learned from acoustics can be transferred further to semantic tasks, but the representations learned during semantic optimization cannot be directly adapted to acoustic learning. However, keeping in mind that all our variants are trained on SC data, i.e., a different domain from the LS test data, further investigation is required to distinguish the effect of the domain change from the role of the learning task itself.
### _Training loss analysis_
Fig. 3 shows the loss curves of \(loss_{AV}\) (left) and \(loss_{AUD}\) (right) for the base and the scheduled models. Although the general ranges of the two \(loss_{AV}\) and \(loss_{AUD}\) curves are very close, overall \(loss_{AV}\) decays with a steeper slope, especially in the later epochs. Comparing the faster decay rate of \(loss_{AV}\) in VGS to the decay of \(loss_{AUD}\) in W2V2 suggests that audio-visual semantic mapping is an easier task than W2V2 acoustic modeling, and the pattern stays the same when the two losses are optimized simultaneously (i.e., in VGS+).
Furthermore, similar to the pattern observed for the semantic retrieval scores, catastrophic forgetting also drives \(loss_{AV}\) rapidly back to chance level when switching from
Fig. 3: Training loss curves for all tested model variants (log-scale): Left: \(loss_{AV}\). Right: \(loss_{AUD}\). The graphs show full training (70 epochs) for the base models (solid lines) and the combinations of 20 epochs of pretraining and 50 epochs of main training for the training schedule variants.
Fig. 2: ABX error (W-C) for the tested training variants. Left: as a function of speech encoder (no. 1–8) and decoder (9–12) layer after the full 70 training epochs. Right: best layer score across the training epochs. Note the logarithmic scale of the y-axes.
VGS to W2V2. An analogous pattern is observed in the behavior of the \(loss_{AUD}\) when switching from W2V2 to VGS. In contrast, the forgetting effect is much milder in the cases where the pretraining phase includes optimization of both losses (i.e., VGS+ pretraining). For example, \(loss_{AV}\) in (VGS+, W2V2) tolerates the absence of audio-visual updates for a few epochs after which it starts to gradually increase.
## V Conclusions
This study set out to investigate the coordination between SSL and VGS mechanisms. We tested a number of training scenarios involving the wav2vec 2.0-based SSL and transformer-based VGS models, and studied the performance of the resulting speech representations in semantic cross-modal retrieval and phoneme discrimination tasks. The results show that simultaneous learning with SSL and VGS mechanisms does not provide performance gains for phonemic or semantic learning compared to the individual mechanisms. However, joint training ensures synergetic representations that are robust against catastrophic forgetting in the individual tasks in follow-up training with just one mechanism. In contrast, acoustic pretraining prior to audiovisual semantic training boosts the performance on the semantic task, even when the SSL-based pretraining takes place on the same dataset.
Notably, our results show that the best phonemic representations, when evaluated in cross-domain conditions, were obtained by visually-grounded learning, and the representations can be further improved if the visual learning is preceded by simultaneous visual and acoustic learning. This is in contrast to previous findings [13, 15]. However, to our knowledge, this is the first study to compare SSL and VGS when neither of the mechanisms has access to the corpus used in the ABX test for phonemic discrimination. In the future, we plan to further investigate whether speech representation learning with the help of visual semantics helps to improve generalization across datasets compared to purely SSL-based speech representations.
## Acknowledgment
The authors would like to thank CSC for computational resources and Puyuan Peng for his valuable help with the FaST-VGS+ model.
|
2304.14969 | Exact and approximate simulation of large quantum circuits on a single
GPU | We benchmark the performances of Qrack, an open-source software library for
the high-performance classical simulation of (gate-model) quantum computers.
Qrack simulates, in the Schr\"odinger picture, the exact quantum state of $n$
qubits evolving under the application of a circuit composed of elementary
quantum gates. Moreover, Qrack can also run approximate simulations in which a
tunable reduction of the quantum state fidelity is traded for a significant
reduction of the execution time and memory footprint. In this work, we give an
overview of both simulation methods (exact and approximate), highlighting the
main physics-based and software-based techniques. Moreover, we run
computationally heavy benchmarks on a single GPU, executing large quantum
Fourier transform circuits and large random circuits. Compared with other
classical simulators, we report competitive execution times for the exact
simulation of Fourier transform circuits with up to 27 qubits. We also
demonstrate the approximate simulation of all amplitudes of random circuits
acting on 54 qubits with 7 layers at average fidelity higher than $4\%$, a task
commonly considered hard without super-computing resources. | Daniel Strano, Benn Bollay, Aryan Blaauw, Nathan Shammah, William J. Zeng, Andrea Mari | 2023-04-28T16:45:28Z | http://arxiv.org/abs/2304.14969v2 | # Exact and approximate simulation of large quantum circuits on a single GPU
###### Abstract
We benchmark the performances of Qrack, an open-source software library for the high-performance classical simulation of (gate-model) quantum computers. Qrack simulates, in the Schrodinger picture, the exact quantum state of \(n\) qubits evolving under the application of a circuit composed of elementary quantum gates. Moreover, Qrack can also run approximate simulations in which a tunable reduction of the quantum state fidelity is traded for a significant reduction of the execution time and memory footprint. In this work, we give an overview of both simulation methods (exact and approximate), highlighting the main physics-based and software-based techniques. Moreover, we run computationally heavy benchmarks on a single GPU, executing large quantum Fourier transform circuits and large random circuits. Compared with other classical simulators, we report competitive execution times for the exact simulation of Fourier transform circuits with up to 27 qubits. We also demonstrate the approximate simulation of all amplitudes of random circuits acting on 54 qubits with 7 layers at average fidelity higher than \(\approx 4\%\), a task commonly considered hard without super-computing resources.
## I Introduction
The last decade was characterized by significant technological progress in quantum computing. Several prototypes of quantum computers are currently available and are routinely used for research purposes and for proof-of-concept applications [1]. It is not surprising that, at the same time, there has been a parallel progress in the classical simulation of quantum computers [2, 3, 4, 5, 6, 7, 8, 9, 10, 11].
Developing powerful and efficient classical simulators of quantum computers is important for several reasons. A first reason is to numerically test quantum algorithms applied to a limited number of qubits, without the need of using expensive quantum hardware. A second reason is to calibrate and validate noisy quantum computers, since this typically requires a comparison between the noisy results of the real device against the ideal results of an exact classical simulation. A third reason is to run algorithms where quantum processors and simulated processors cooperate in a hybrid computation. A further motivation for classically simulating quantum computers is to provide an empirical baseline of performances (e.g. [12, 13, 14]) which, at least in principle, should be overcome by real devices in order to demonstrate any claim of _quantum advantage_[15, 16, 17, 18, 19, 20] or _quantum supremacy_[21, 22, 23]. Beyond the mentioned applications, one should not underestimate the importance of developing new quantum-inspired computational paradigms from a fully classical computer science perspective. In fact, as we also show in this work, quantum-inspired algorithms might match or exceed the performances of standard classical algorithms, especially if the transparent parallel nature of quantum dynamics is exploited for GPU acceleration or for HPC execution.
Here we focus on a specific library for the simulation of gate-model quantum computers: _Qrack_[4]. Qrack is an open-source framework founded in 2017, undergoing continuous development up to the present days. It is designed to serve, with high performances, all scales of simulation: from single consumer CPUs to an arbitrarily high number of clustered GPUs.
The main contribution of this work is to present the optimization and approximation techniques underlying the Qrack software library and to validate its simulation performance against computationally intensive benchmark problems.
Specifically, we run exact simulations of quantum Fourier transform (QFT) circuits up to \(27\) qubits and we run approximate simulations of random circuits at 54 qubits, up to 10 layers, with average fidelity estimated at \(\approx 4\%\) at 7 circuit layers, on a single A100 GPU in very short execution time. Differently from previous works in the literature, we do not use advanced super-computers or expensive multi-GPU cloud services for running our simulations. All QFT results presented in this work are obtained with a single laptop, and all approximate results are run on a single GPU (NVIDIA A100). These can thus be reproduced with relatively little cost by the scientific community. High performance simulation using limited computational power with Qrack is realized through several ingredients that will be explained in the main sections of this work: circuit simplification techniques, a Schmidt decomposition optimization, a tunable Schmidt-decomposition-rounding-parameter (SDRP) approximation, and other synergistic optimizations that include "hybrid" stabilizer/ket simulation and well-rounded use of proven and novel HPC software engineering techniques with adherence to disciplinary best-practices (see Table 1).
This work is organized in two main sections. In Sec. II we restrict the analysis to the exact simulation of quantum circuits and we benchmark the execution time of quantum Fourier transform circuits. In Sec. III, we instead focus on the approximate simulation of quantum circuits, we describe the Schmidt decomposition rounding parameter (SDRP) technique and we approximately simulate large-scale random circuits.
## II Exact simulation
The first task that we consider is the exact simulation of a quantum circuit in the Schrodinger picture. Let \(|\psi_{C}\rangle=C|\psi_{0}\rangle\) be the _ket_ quantum state obtained by the application of a quantum circuit \(C\) to some given initial state \(|\psi_{0}\rangle\) of \(n\) qubits. Our goal is to compute \(|\psi_{C}\rangle\) given \(|\psi_{0}\rangle\) and a description of \(C\) in terms of a sequence of local gates acting on \(k\) qubits (with \(k\) typically equal to 1, 2 or 3).
The Qrack simulation approach is based on the following four general principles.
### _Keeping ket states as factorized as possible_
During the classical simulation steps that are necessary to compute the state evolution, Qrack keeps the state representation _as factorized as possible_ to increase the simulation efficiency [24].
A generic ket state \(|\psi\rangle\) of \(n\) qubits is characterized by \(\mathcal{O}(2^{n})\) complex amplitudes. However, if the state \(|\psi\rangle\) can be factorized as the tensor product of \(m\) local states
\[|\psi\rangle=|\psi_{S_{1}}\rangle\otimes|\psi_{S_{2}}\rangle\ldots|\psi_{S_{m} }\rangle, \tag{1}\]
where \(S_{1},\ldots S_{m}\) represent disjoint subsets of the \(n\) qubits, the number of complex amplitudes that are necessary to represent \(|\psi\rangle\) can be significantly reduced. For example, in the extreme limit \(m=n\), i.e. when the qubits are not entangled, \(\mathcal{O}(n)\) complex amplitudes are sufficient. A more realistic example, which is quite relevant for Qrack, is the case where all qubits are highly entangled with the exception of a single qubit \(q\) which can be fully factorized, i.e. \(|\psi\rangle=|\psi_{q}\rangle\otimes|\psi^{\prime}\rangle\). In this case, which Qrack is able to detect during the simulation process as shown in Sec. III-A, the representation cost is given by \(\mathcal{O}(2^{n-1})\) complex amplitudes, corresponding to halving the cost associated to a fully entangled state.
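As a toy illustration of this bookkeeping (an assumed sketch in Python/NumPy, not Qrack's C++ implementation), a product state can be stored as a list of single-qubit factors, so that local gates never touch the full \(2^{n}\)-amplitude vector:

```python
import numpy as np

class FactorizedKet:
    """Toy product-state container: one 2-amplitude factor per qubit."""
    def __init__(self, n):
        # every qubit starts in |0>, stored as its own small factor
        self.factors = [np.array([1.0, 0.0], dtype=complex) for _ in range(n)]

    def apply_1q(self, gate, q):
        # a local gate only touches the 2 amplitudes of factor q
        self.factors[q] = gate @ self.factors[q]

    def amplitudes(self):
        # only for inspection: this is where the 2^n cost would appear
        psi = self.factors[0]
        for f in self.factors[1:]:
            psi = np.kron(psi, f)
        return psi

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket = FactorizedKet(20)          # 20 qubits -> only 20 * 2 amplitudes stored
ket.apply_1q(H, 0)               # cost O(1), not O(2^20)
print(len(ket.factors) * 2, "stored amplitudes instead of", 2 ** 20)
```

Of course, once entangling gates act across factors, the affected subsystems must be merged; the gain comes from delaying such merges (and re-factorizing) as long as possible.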
### _Any "unobservable" circuit optimization is allowed_
Qrack modifies, simplifies and sometimes completely removes some gates of the simulated circuit whenever this has _unobservable_ consequences.
More precisely, we say that a circuit transformation \(C\to C^{\prime}\) is unobservable with respect to the specific input state \(|\psi_{0}\rangle\), if
\[\left|\langle\psi_{0}|C^{\prime\dagger}C|\psi_{0}\rangle\right|^{2}=1. \tag{2}\]
In other words, replacing the original circuit \(C\) with a simplified circuit \(C^{\prime}\) is always allowed as long as the results of the computation are unaffected.
Note that \(C\) and \(C^{\prime}\) may correspond to different unitary operations but, as long as they are equivalent when applied to the specific input state \(|\psi_{0}\rangle\), we can safely change \(C\) into \(C^{\prime}\). For example, if the control qubit of a CNOT gate is in the state \(|0\rangle\), the CNOT gate can be removed. Similarly, if the control qubit is in the state \(|1\rangle\) a CNOT gate can be replaced by a local bit-flip of the target qubit.
From this simple example it is clear that, in order to determine if a gate optimization is unobservable, it is crucial to have direct access to the ket state at each step of the simulation (Schrodinger picture). This is a specific design feature which is typically not present in standard tensor-networks simulators in which the ket state is not explicitly accessible, but must be computed with a non-negligible cost.
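The CNOT example can be sketched as a small rule acting on a separable control factor; the function below is an assumed illustration of such an unobservable simplification, not Qrack's internal logic:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def simplify_cnot(control_factor, target_factor):
    """If the control qubit is separable and in |0> or |1>, the CNOT can be
    removed or replaced by a local X on the target; otherwise it is kept."""
    if abs(control_factor[1]) < 1e-12:          # control is |0>: CNOT acts as identity
        return "removed", target_factor
    if abs(control_factor[0]) < 1e-12:          # control is |1>: CNOT acts as X on target
        return "replaced by local X", X @ target_factor
    return "kept", target_factor                # genuinely entangling: no simplification

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(simplify_cnot(np.array([0, 1], dtype=complex), plus)[0])   # replaced by local X
```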
### _Hybridizing ket and stabilizer representations_
It is well known that Clifford circuits can be classically simulated with a polynomial cost [25]. This fact is exploited by Qrack, which applies a hybridized stabilizer/ket simulation approach. In practice, one can detect which parts of the computation are more convenient to simulate via stabilizer tableaus and which parts are more convenient to simulate via ket states. Beyond the obvious application in simulating fully Clifford circuits, this hybrid approach is particularly convenient also in circuits with deep Clifford (or near-Clifford) preambles.
Qrack chooses "transparently" (i.e. by default) between stabilizer and ket simulation. The library introspects whether calculations are being carried out as stabilizer, and it enacts Clifford entangling gates with priority if they would be carried out in stabilizer formalism, otherwise attempting to buffer them to the benefit of state factorization. Ultimately, the stabilizer-hybridization layer of Qrack relies on fallback from stabilizer to ket representation in all cases which cannot be reduced to Clifford (as through unobservable circuit optimizations). The stabilizer hybridization layer also assigns a general single-qubit unitary gate buffer respectively to each qubit, which enables further optimization by simple commutation, to maintain underlying Clifford representation. One key factor contributing to the efficiency of Qrack's stabilizer-hybridization layer is aggressive state factorization as a higher and external priority, including for stabilizer subsystems.
### _Optimizing computational resources_
In addition to the previous physics-based optimization techniques, Qrack makes use of software-based techniques to achieve high performances with minimal computational resources. Qrack is written in "pure language" C++11 standard with only optional OpenCL (or CUDA) and Boost library dependencies and licensed header reuse. Because of its minimal number of dependencies, a "full-feature" Linux build of the library might require about 16 MB of disk footprint, or about 4 MB when compressed, and this can be further reduced for custom builds. The comparatively extreme compactness of the compiled library likely also benefits use and management of CPU cache. Additionally, CPU/GPU hybridization is supported through the QHybrid class (see Table 1) allowing for an optimal distribution of computational resources, such that CPU is used for small circuits and GPU is preferred for large circuits. Finally, Qrack also supports single instruction multiple data (SIMD) intrinsics (see Table 1) for obtaining the advantages of data vectorization at the CPU level.
### _Simulation benchmark: quantum Fourier transform_
To test the efficiency of Qrack with respect to the task of exactly simulating quantum circuits, we consider numerical experiments based on the quantum Fourier transform (QFT).
QFT circuits are often used for validating real or simulated quantum computers. Here we use them to benchmark Qrack against similar GPU-based classical simulators [26, 27, 28, 29] and against a fully classical algorithm [30] for computing discrete Fourier transforms.
Let \(|\psi_{0}\rangle\) be an arbitrary initial state of \(n\) qubits. We can represent it in the computational basis as \(|\psi_{0}\rangle=\sum_{j}x_{j}|b_{j}\rangle\) where \(\{|b_{j}\rangle\}\) are the basis elements labelled with bitstrings \(b_{j}\) corresponding to the binary representation of the integers \(j=0,1,2,\ldots N\), with \(N=2^{n}\). The quantum Fourier transform is the unitary operation \(C_{\mathrm{QFT}}\) which acts as follows:
\[|\psi_{0}\rangle=\sum_{j}x_{j}|b_{j}\rangle\to C_{\mathrm{QFT}}|\psi_{0} \rangle=\sum_{j}y_{j}|b_{j}\rangle, \tag{3}\]
where the complex amplitudes \(\{y_{j}\}\) are (up to a different sign convention) the classical discrete Fourier transform (DFT) [31] of the input amplitudes \(\{x_{j}\}\):
\[y_{j}=\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}x_{k}\,\omega^{jk},\quad\omega=e^{\frac{i2\pi}{N}}. \tag{4}\]
Importantly, it is known how to decompose the unitary \(C_{\mathrm{QFT}}\) as a circuit of two elementary gates: the Hadamard gate H and the controlled version of the single qubit phase gate \(R_{Z}(\theta)\). So, given its decomposition as a quantum circuit, we can measure the wall-clock time for simulating \(C_{\mathrm{QFT}}\) with different simulators obtaining simple and reproducible benchmarks.1
Footnote 1: Our choices of convention for the final state of the (inverse) QFT run by all simulators match the DFT up to normalization.
Moreover, since the QFT circuit \(C_{\mathrm{QFT}}\) acts on state amplitudes according to the classical DFT defined in Eq. (4), we also add a fully classical DFT library (pyFFTW [30]) within the set of the benchmarked simulators. We stress that, technically,
pyFFTW is not an actual simulator since it is not able to simulate quantum circuits. On the other hand, pyFFTW is one of the fastest classical algorithms for computing Eq. (4) and therefore it provides a useful benchmark for the performances of quantum simulators acting on QFT circuits.
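The correspondence between Eq. (4) and the amplitudes produced by the QFT can be checked directly with a dense-matrix sketch (an illustrative NumPy snippet using the sign and normalization convention of Eq. (4); none of the benchmarked simulators is involved):

```python
import numpy as np

def qft_matrix(n):
    # dense unitary of the n-qubit QFT, matching Eq. (4)
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

n = 6
x = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
x /= np.linalg.norm(x)                        # a normalized input ket |psi_0>

y_qft = qft_matrix(n) @ x                     # amplitudes after the QFT
y_fft = np.fft.ifft(x, norm="ortho")          # classical DFT with matching convention
print(np.allclose(y_qft, y_fft))              # True
```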
The results are reported in Figures 1a and 1b.
In Figure 1a, the initial state is \(|0\ldots 0\rangle\), i.e., all qubits are in the configuration \(|0\rangle\), the standard initial state in most quantum computing algorithms. In this case, Qrack is able to exploit the state factorization of the initial state to execute the QFT circuits several orders of magnitude faster than the other simulators and, within the limits of our numerical analysis, with a better asymptotic scaling. Remarkably, in this case Qrack clearly outperforms even the classical DFT algorithm pyFFTW for \(n>16\). This is an interesting consequence of the fact that Qrack is highly optimized to simulate factorized states.
In Figure 1b we repeat the same numerical experiment with an initial state which is more difficult to handle by circuit simulators because of its large amount of entanglement. Specifically we use the GHZ entangled state:
\[|\psi_{\mathrm{GHZ}}\rangle=\frac{1}{\sqrt{2}}\left(|0\ldots 0\rangle+|1\ldots 1 \rangle\right). \tag{5}\]
Also in this case, Qrack outperforms nearly all of the considered classical simulators of quantum circuits, the only exception being a small advantage (less than a factor of 2) for Qiskit cusvaer (from the cuQuantum appliance for systems with NVIDIA GPUs) at high widths (\(n\geq 23\)). When compared to the classical DFT library pyFFTW, Qrack is slower for circuits with a small number of qubits (\(n\leq 20\)) and slightly faster for larger circuits (\(n>20\)).
It is worth noting that only the QCGPU and Qrack simulators can run on non-NVIDIA GPUs, as these two libraries use the OpenCL API for general purpose GPU and accelerator programming, while all the other simulators shown in Fig. 1 are based upon the proprietary CUDA API which can be used with NVIDIA GPUs only. Qrack alone also optionally supports CUDA as an alternative to OpenCL. pyFFTW also runs on virtually any system with a CPU, since it does not support GPU acceleration at all.
## III Approximate simulation
In Sec. II, we focused on the _exact_ simulation of quantum circuits. In this section instead, we consider the _approximate_ simulation of quantum circuits.
Limited cases of exact high-width simulation are already possible in Qrack due to its factorization of subsystems and stabilizer tableau capabilities. However, many realistic circuits still require a peak memory footprint which is close to the footprint of a brute-force simulation. For large circuits (\(n>30\)), this limits Qrack's exact simulation capabilities. Compared to state-of-the-art in tensor network simulations,
Fig. 1: (a) Wall-clock execution time of quantum Fourier transform (QFT) circuits applied to the initial state \(|0\ldots 0\rangle\) and executed with different classical simulators. (b) Repetition of the same benchmark but using the GHZ state defined in Eq. (5) as initial condition. Because of its large amount of entanglement, the GHZ state can be considered as the worst-case initialization scenario for Qrack. For both plots, all candidates were executed on the same Alienware m17 laptop, with Alienware BIOS version 1.16.2 (BIOS overclocking features set to off/default), Ubuntu 22.04 LTS, Linux kernel version 5.19.0-35-generic, one “Intel(R) Core(TM) i9-10980HK CPU @ 2.40GHz,” one “NVIDIA GeForce RTX 3080 Laptop GPU,” and 32 GB of SK hynix 3200 MT/s DDR4 in 2x16 GB row configuration, collected on ISO date 2023-03-24. Candidate release versions were PyQrack 1.4.2 [4], qsimcirq 0.12.1 [26] (in NVIDIA appliance Docker image), Qiskit Aer 0.12.0 [27], Qiskit Aer 0.11.0 (for cusvaer [28] in NVIDIA appliance Docker container), Qulacs 0.5.7.dev78-gc3b28f13 [29], QCGPU 0.1.1 [10], and pyFFTW 0.13.1 [30].
this is a major limitation of Qrack exact simulation methods. This motivated the development of an approximate simulation method designed to trade minimal fidelity loss for maximum reduction of memory and time complexity.
### _Schmidt decomposition rounding parameter approximation_
If we isolate a single qubit \(q\) from the associated complementary set \(\bar{q}\) of \(n-1\) qubits, we can always express the full ket state of the system with the following Schmidt decomposition:
\[|\psi\rangle=\sqrt{1-\epsilon}\,|\varphi\rangle_{q}|\psi\rangle_{\bar{q}}+\sqrt{\epsilon}\,|\varphi^{\perp}\rangle_{q}|\psi^{\perp}\rangle_{\bar{q}}, \tag{6}\]
where \(|\varphi\rangle_{q}\) is a quantum state of the qubit and \(|\varphi^{\perp}\rangle_{q}\) is its _unique_ orthogonal state. Similarly, \(|\psi\rangle_{\bar{q}}\) is a quantum state of all the other qubits \(\bar{q}\) and \(|\psi^{\perp}\rangle_{\bar{q}}\) is an orthogonal state.
Without loss of generality, we can assume \(\epsilon\in[0,0.5]\), such that \(\epsilon\) can be used to quantify the amount of entanglement between the qubit \(q\) and the rest of the system \(\bar{q}\). If \(\epsilon\) is large, the state is highly entangled; if \(\epsilon\) is small, the state is weakly entangled. If \(\epsilon=0\) (up to machine precision) the qubit is fully separable and an exact simulation with a factorized ket representation is possible as discussed in Sec. II. In this section instead we are interested in approximating the state for values of \(\epsilon\) which are small but nonzero.
More precisely, we define the Schmidt decomposition rounding parameter (SDRP) approximation, with threshold parameter \(p\in[0,1]\), as the following non-unitary operation:
\[|\psi\rangle \rightarrow|\varphi\rangle_{q}|\psi\rangle_{\bar{q}},\quad\text {if }\epsilon\leq p/2, \tag{7}\] \[|\psi\rangle \rightarrow|\psi\rangle,\qquad\quad\text{if }\epsilon>p/2.\]
In other words, if the state is weakly entangled, the SDRP approximation projects the state on the closest factorized state corresponding to the dominant term in the Schmidt decomposition. Else, no approximation is applied.
In practice, the way in which Qrack implements the SDRP approximation defined in Eq. (7) is through four geometrical transformations representable in the Bloch sphere (see Fig. 2):
1. Since we have access to the ket state \(|\psi\rangle\), we can easily compute the local expectation values of the \(X_{q}\), \(Y_{q}\) and \(Z_{q}\) Pauli operators associated to the qubit \(q\). Therefore, we have full knowledge of the reduced (mixed) state of \(q\): \[\rho_{q}=\frac{1}{2}[I_{q}+r_{x}X_{q}+r_{y}Y_{q}+r_{z}Z_{q}]\] (8) which is completely characterized by the 3D Bloch-sphere vector \[\mathbf{r}=[r_{x},r_{y},r_{z}]=[\langle X_{q}\rangle,\langle Y_{q}\rangle, \langle Z_{q}\rangle].\] (9) We remark that the Schrodinger-picture simulation approach is crucial for quickly computing \(\mathbf{r}\).
2. Given the Bloch vector \(\mathbf{r}\), we deduce \(\epsilon\) from its modulus, since it is easy to show that \(\epsilon=(1-|\mathbf{r}|)/2\). If \(\epsilon>p/2\), no approximation is applied. If \(\epsilon\leq p/2\), a local unitary is applied to rotate \(\mathbf{r}\) along the direction of the \(|0\rangle\) pole.
3. In this new reference frame, the projection defined in Eq. (7) can be implemented as a simple measurement in the computational basis, post-selected on the \(|0\rangle\) outcome, which yields the normalized state \(|0\rangle_{q}|\psi\rangle_{\bar{q}}\). This measurement-like step is a fundamental capability which must obviously be present in any classical simulator, including Qrack.
4. The state of the qubit is finally rotated back along the direction of the original Bloch vector \(\mathbf{r}\), obtaining the desired final state \(|\varphi\rangle_{q}|\psi\rangle_{\bar{q}}\).
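A minimal numerical sketch of these four steps, acting on a plain NumPy state vector rather than on Qrack's internal representation, could look as follows (function and variable names are our own, and the projection onto the dominant Schmidt branch is carried out directly instead of through the rotate/measure/rotate sequence):

```python
import numpy as np

def sdrp_round(psi, q, n, p):
    """Apply the SDRP step of Eq. (7) to qubit q of an n-qubit ket psi."""
    full = np.moveaxis(psi.reshape([2] * n), q, 0).reshape(2, -1)
    rho = full @ full.conj().T                        # reduced density matrix of q
    bloch = np.array([2 * rho[0, 1].real,             # <X>
                      2 * rho[1, 0].imag,             # <Y>
                      (rho[0, 0] - rho[1, 1]).real])  # <Z>
    eps = (1.0 - np.linalg.norm(bloch)) / 2.0         # entanglement measure epsilon
    if eps > p / 2.0:
        return psi                                    # too entangled: no approximation
    # project onto the dominant Schmidt branch |phi>_q |psi>_qbar and renormalize
    w, v = np.linalg.eigh(rho)
    phi = v[:, np.argmax(w)]
    rest = phi.conj() @ full
    rest /= np.linalg.norm(rest)
    factored = np.outer(phi, rest).reshape([2] * n)
    return np.moveaxis(factored, 0, q).reshape(-1)

# example: a weakly entangled 2-qubit state gets factorized at p = 0.1
theta = 0.1
psi = np.array([np.cos(theta), 0, 0, np.sin(theta)], dtype=complex)
print(np.round(sdrp_round(psi, q=0, n=2, p=0.1), 3))   # prints the factorized |00>
```

For the weakly entangled example above, the discarded weight \(\epsilon=\sin^{2}\theta\) is exactly the fidelity loss that enters the estimator discussed in Sec. III-B.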
Note that, from an abstract point of view, this technique is similar to the core idea of matrix product states (MPS) [32, 33, 34], but it is focused on the particular case in which one of the subsystems in the Schmidt decomposition is a single qubit. (Qrack has also generalized the technique analogously to at least 2-qubit subsystems, but this does not factor into the particular benchmarks presented.) Within the MPS formalism, a common way to recover efficiency of approximate simulation is to represent states as tensors, perform a singular value decomposition, and discard principal components with small Schmidt coefficients (see e.g. [35] for a recent software implementation). The peculiar aspect of the SDRP technique presented in this work is the possibility of applying the same type of Schmidt projection used in MPS _without_ representing states as MPS, but by performing instead a task which is
Fig. 2: Pictorial representation of the SDRP approximation technique. We represent the reduced state of a qubit as a vector in the Bloch sphere (first image) and we check if its length is longer than \(1-p\), for threshold parameter \(p\). If so, we rotate the state along the direction of the \(|0\rangle\) pole (second image). We then post-select the measurement of the \(|0\rangle\) state, extending the length of the Bloch vector to 1 (third image). We finally reverse the original rotation, such that the Bloch vector points along the original axis (fourth image).
elementary for any classical simulator: rotating and measuring a single qubit in the computational basis.
### _Estimating the simulation fidelity_
Let us denote with \(|\psi_{j}^{(in)}\rangle\) and \(|\psi_{j}^{(out)}\rangle\) the simulated ket states evaluated right before and right after the \(j\)th SDRP projection (with \(\epsilon_{j}\leq p\)) applied during the circuit simulation. It is easy to check that, for each individual approximation, the fidelity of the output state with respect to the input is given by:
\[|\langle\psi_{j}^{(out)}|\psi_{j}^{(in)}\rangle|^{2}=1-\epsilon_{j}. \tag{10}\]
This fidelity reduction is due to neglecting the \(\sqrt{\epsilon_{j}}\,|\varphi_{j}^{\perp}\rangle_{q}|\psi_{j}^{\perp}\rangle_{\bar{q}}\) branch of Eq. (6). If all the neglected branches associated with multiple SDRP approximations remained orthogonal to the preserved branch along the full simulation, the fidelity of the final approximated state with respect to the ideal exact state would be
\[\mathcal{F}=\prod_{j}(1-\epsilon_{j}). \tag{11}\]
In practice however, the Schmidt branches associated to different SDRP approximations can have a small overlap with the simulated state, such that Eq. (11) is not an exact formula but, nonetheless, is a very good estimator of the actual fidelity.
To validate the model introduced in Eq. (11), we compared it with the exact fidelity which, by definition, can be computed from the exact simulation of the final state:
\[\mathcal{F}_{\mathrm{exact}}=|\langle\psi_{\mathrm{approx}}|\psi_{\mathrm{ exact}}\rangle|^{2}. \tag{12}\]
For random circuits of limited size, we find a very good agreement between the two quantities. (See Appendix A for more details on the statistical validation of the fidelity estimation model). This fact allows us to efficiently estimate the fidelity of large-scale simulations, without the need of computing (12) which instead would require an exact simulation with huge computational resources.
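In practice the estimator only requires accumulating the \(\epsilon_{j}\) of the applied projections, e.g. (the \(\epsilon_{j}\) values below are made up for illustration):

```python
import numpy as np

# running fidelity estimate of Eq. (11): each SDRP projection contributes (1 - eps_j)
eps_log = [0.003, 0.010, 0.007, 0.021]        # epsilon recorded at each projection
fidelity_estimate = float(np.prod(1.0 - np.array(eps_log)))
print(round(fidelity_estimate, 4))             # approximately 0.96
```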
### _Simulation benchmark: random circuits_
Figure 3 shows the final fidelity estimates for 100 random circuits at each circuit layer depth. In each trial, starting from an SDRP value of \(1\) and decrementing it by \(0.025\) upon each successful completion of a circuit, we used only the fidelity estimate at the minimum attainable rounding parameter \(p\) before out-of-memory failure. These simulations were carried out on a single (80 GB) NVIDIA Tesla A100 GPU. Execution time was not precisely recorded, but the whole data collection for this plot took about 3 days or less.
The obtained results demonstrate that Qrack can run approximate simulations of 54-qubit random circuits up to 10 layers (with exponentially decreasing fidelity). We highlight that, at 7 layers, the estimated average fidelity is \(\approx 4\%\), which is a significant result for a single GPU.
Each "circuit layer" is defined by a round of 3-parameter single-qubit general unitary gates,
\[U(\theta,\phi,\lambda)=\begin{pmatrix}\cos\left(\frac{\theta}{2}\right)&-e^{i \lambda}\sin\left(\frac{\theta}{2}\right)\\ e^{i\phi}\sin\left(\frac{\theta}{2}\right)&e^{i(\phi+\lambda)}\cos\left(\frac {\theta}{2}\right)\end{pmatrix} \tag{13}\]
[36], with variational parameters randomly generated over their full period, applied to every qubit, followed by nearest-neighbor coupler gates from the set [CX/CY/CZ/AX/AY/AZ] (where “A,” as opposed to “C,” indicates that the \(|0\rangle\) control state activates the gate, as opposed to \(|1\rangle\)), applied on random qubits according to the ABCDCDAB pattern deemed hard to simulate in the Sycamore quantum supremacy experiment [23]. It should be noted that our median and mode fidelities appear significantly lower than our reported mean fidelity upon basic inspection of our data supplement (at [https://github.com/unitaryfund/qrack-report](https://github.com/unitaryfund/qrack-report)).
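For reference, the gate of Eq. (13) and its unitarity can be reproduced with a few lines of NumPy (an illustrative snippet; the uniform parameter sampling shown is an assumption matching the description above):

```python
import numpy as np

def u3(theta, phi, lam):
    # general single-qubit unitary of Eq. (13)
    return np.array([[np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
                     [np.exp(1j * phi) * np.sin(theta / 2),
                      np.exp(1j * (phi + lam)) * np.cos(theta / 2)]])

rng = np.random.default_rng(0)
gate = u3(*rng.uniform(0, 2 * np.pi, size=3))   # random parameters on the full period
print(np.allclose(gate.conj().T @ gate, np.eye(2)))   # unitarity check: True
```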
## IV Conclusions
We presented and numerically tested the optimization techniques which are at the basis of the Qrack simulator, many of which appear to be novel among the available set of major quantum computer simulator libraries and frameworks. We run numerical experiments for benchmarking the simulation performances on large circuits with limited computational power.
Our results show that Qrack reaches approximate parity with (or outperforms) all other simulator candidates on exact QFT simulation, even with its hardest initial conditions (see Fig. 1). At high qubit widths, Qrack even outperforms the popular pyFFTW bindings for the (CPU-based) FFTW library, which is historically notable for its DFT
Fig. 3: Achievable fidelity for the simulation of random circuits acting on \(54\) qubits with \(d\) layers on a single GPU. The fidelity is estimated by the empirically-validated model discussed in Sec. III-B averaged over 100 random circuits for each data point. The cloud-compute virtual machine was a Paperspace A100 80GB instance, \(92669188\) KB (\(>88\) GB) general RAM.
performance. An interesting research question, suggested by our QFT benchmarks, is whether quantum-inspired (classical) algorithms for DFT could outperform standard _fast Fourier transform_ (FFT) methods [31, 37], such as the _Cooley-Tukey_[31] algorithm.
Concerning the task of approximate simulation, we gave a detailed description of the SDRP technique, in which a rounding parameter \(p\) can be tuned to increase the simulation efficiency at the cost of reducing the simulation fidelity. By using the SDRP technique, we have achieved \(\approx 4\%\) average fidelity on random circuits acting on 54 qubits with a depth of 7 layers, a performance which is worse than that of the Sycamore quantum supremacy experiment [23] (\(\mathcal{F}=0.2\%\) with 20 layers), but remarkable considering that it was obtained with a single GPU device.
## Acknowledgements
This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing under Award Number DE-SC0020266 as well as by IBM under Sponsored Research Agreement No. W1975810. AM acknowledges support from the PNRR MUR project PE0000023-NQSTI.
|
2307.10087 | An integro-differential model for the spread of diseases | In this study, we present an integro-differential model to simulate the local
spread of infections. The model incorporates a standard
susceptible-infected-recovered (\textit{SIR}-) model enhanced by an integral
kernel, allowing for non-homogeneous mixing between susceptibles and
infectives. We define requirements for the kernel function and derive
analytical results for both the \textit{SIR}- and a reduced
susceptible-infected-susceptible (\textit{SIS}-) model, especially the
uniqueness of solutions.
In order to optimize the balance between disease containment and the social
and political costs associated with lockdown measures, we set up requirements
for the implementation of a control function, and show examples for three
different formulations for the control: continuous and time-dependent,
continuous and space- and time-dependent, and piecewise constant space- and
time-dependent. The latter represents reality more closely, as the control cannot be
updated for every time and location. We found the optimal control values for
all of those setups, which are by nature best for a continuous and space- and
time-dependent control, yet found reasonable results for the discrete setting
as well.
To validate the numerical results of the integro-differential model, we
compare them to an established agent-based model that incorporates social and
other microscopical factors more accurately and thus acts as a benchmark for
the validity of the integro-differential approach. A close match between the
results of both models validates the integro-differential model as an efficient
macroscopic proxy. Since computing an optimal control strategy for agent-based
models is computationally very expensive, yet comparatively cheap for the
integro-differential model, using the proxy model might have interesting
implications for future research. | Moritz Schäfer, Karol Niedzielewski, Thomas Götz, Tyll Krüger | 2023-07-19T15:57:03Z | http://arxiv.org/abs/2307.10087v1 | # An integro-differential model for the spread of diseases
###### Abstract
In this study, we present an integro-differential model to simulate the local spread of infections. The model incorporates a standard susceptible-infected-recovered (_SIR-_) model enhanced by an integral kernel, allowing for non-homogeneous mixing between susceptibles and infectives. We define requirements for the kernel function and derive analytical results for both the _SIR-_ and a reduced susceptible-infected-susceptible (_SIS-_) model, especially the uniqueness of solutions.
In order to optimize the balance between disease containment and the social and political costs associated with lockdown measures, we set up requirements for the implementation of a control function, and show examples for three different formulations of the control: continuous and time-dependent, continuous and space- and time-dependent, and piecewise constant space- and time-dependent. The latter represents reality more closely, as the control cannot be updated for every time and location. We found the optimal control values for all of those setups, which are by nature best for a continuous and space- and time-dependent control, yet found reasonable results for the discrete setting as well.
To validate the numerical results of the integro-differential model, we compare them to an established agent-based model that incorporates social and other microscopical factors more accurately and thus acts as a benchmark for the validity of the integro-differential approach. A close match between the results of both models validates the integro-differential model as an efficient macroscopic proxy. Since computing an optimal control strategy for agent-based models is computationally very expensive, yet comparatively cheap for the integro-differential model, using the proxy model might have interesting implications for future research.
## 1 Introduction
During the last three years, COVID-19 has shown that there is an increasing value in accurate models for the local and global spread of diseases, also giving advice to policy makers on how to deal with them, cf. e.g. Bracher et al. [1, 2], Sherratt et al. [3] and Priesemann et al. [4].
Country- and region-based statistics from many countries, e.g. Germany and Poland, show that regionally contained cases can spread throughout the country in a short period of time, especially during the initial phase(s) of the disease in spring and summer 2020. Lockdowns and other social restrictions were imposed as a result in order to contain the infection numbers, to relieve the strain on the health system, and to reduce the number of severely ill or dead. How the spread of infections can be explained and how well measures actually work are still open or widely discussed questions in many countries.
The local or regional spread of infections has been addressed in many previous works. Kuehn and Molter [5] investigate transport effects on epidemics using two coupled models, a static epidemic network and a dynamical transport network, also with non-local, fractional transport dynamics. They find that transport processes induce additional spreading pathways and thereby lower the epidemic threshold; generalising the process to fractional or non-local dynamics, however, raises the epidemic threshold. In several papers, the local spread of infections is modelled by PDE (partial differential equation) models. Viguerie et al. [6] argue that their geographical model simulations could be used to inform authorities to design effective measures and anticipate the allocation of important medical resources. Wang and Yamamoto [7] provide a forecasting model for COVID-19 using Google mobility data and PDE models, and find acceptable validity of their model by comparison with COVID-19 data. A fractional PDE modelling of the spatial spread of Corona can be found in Logeshwari et al. [8], where they designed a system to predict the outcome of viral spreading in India. Harris and Bodman [9] investigate the spread through a country with different regions of different densities. A diffusion-based and non-interational approach can be found in Berestycki et al. [10]. The authors find that fast diffusion effects along major roads are an important factor in the spread of epidemics like COVID-19 in Italy and HIV in the Democratic Republic of Congo. In another upcoming paper, Schafer and Heidrich [11] analyse the local spread of COVID-19 infections in a German district by another susceptible-exposed-infected-recovered (SEIR) model including PDEs.
Structure of the paper.In this paper, we model the local spread of infections by an integro-differential model. Instead of the typical homogeneous mixing between susceptibles and infectives, a classical susceptible-infected-recovered (_SIR_) model is enhanced by an integral kernel. The kernel function should depend on the spatial distance between two points. We present a proof for the uniqueness of solutions of the model. Lockdowns and other measures are included in our model(s) by a control function, which can be optimised under various assumptions: On the one hand, it is aimed to contain the disease as much as possible; on the other hand, we also consider the social and political costs of a lockdown, especially when case numbers are (comparatively) low, while attention has to be paid to not overload the health capacities and to other problems like Long-COVID or economic problems given large infection numbers. We make use of three different control functions: a time-dependent, a continuous space- and time-dependent, and a piecewise constant space- and time-dependent control. In the following, we define the required target function for the optimisation of the "lockdown" control and present the corresponding Forward-Backward method. In order to validate the numerical results of the integro-differential model, we compare them to those of an established agent-based model in which social factors can be implemented more accurately. The macroscopic outcome of our model is compared to that of the microscopic agent-based model, which we interpret as a kind of 'ground truth'. If the results of both match well enough, we can regard our integro-differential model as a macroscopic proxy model for the computationally expensive agent-based model.
## 2 Integro-differential SIR model
### Model formulation
The basis of our model is the SIR model introduced by Kermack and McKendrick [12] consisting of the compartments \(S\), \(I\), \(R\), which have the following meanings:
* Susceptibles \(S\): Depending on the transmission route, these individuals can become infected with the infectious disease when contact occurs.
* Infected \(I\): These individuals are infected with the disease and infectious. Contact with a susceptible individual can therefore lead to transmission of the disease.
* Recovered \(R\): After surviving an infection, individuals are considered recovered. These individuals can no longer transmit the disease or get infected.
The total number of individuals \(N=S+I+R\) is assumed to be constant. We normalize the three compartments \(S\), \(I\), and \(R\) by dividing by \(N\), resulting in \(s:=\frac{S}{N}\), \(z:=\frac{I}{N}\), \(r:=\frac{R}{N}\) with \(s+z+r=1\) (in order to avoid confusion, we use a different lower case letter \(z\) instead of \(i\)). Following the model by Kermack and McKendrick, we assume the pathogen is transmitted from infected persons to susceptible persons at a time-independent rate \(\beta>0\) and that recovery occurs at a rate \(\gamma>0\), so that infectivity is lost after \(\gamma^{-1}\) days on average. Then, replacing \(s=1-z-r\), the relative _sir_-model reads, for each time point \(t\in[0,T]\subset\mathbb{R}\) and point in space \(x\in[0,1]^{n}\subset\mathbb{R}^{n}\), as follows:
\[\frac{d}{dt}z(t,x)=\beta(1-z-r)z-\gamma z,\qquad z(t=0,x)=z_{0}(x) \tag{1a}\]
\[\frac{d}{dt}r(t,x)=\gamma z,\qquad r(t=0,x)=r_{0}(x) \tag{1b}\]
This means that the disease dynamics at a certain point \(x\) would entirely depend on the initial values \(z_{0}\) and \(r_{0}\) and the parameters \(\beta\) and \(\gamma\). To include interaction between the spatial points, we replace the factor \(z\) in the term \(\beta(1-z-r)z\) by the integral \(\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy\) with a kernel function \(k(t,x-y)\) which depends on the time and on the distance between \(x\) and \(y\):
\[\frac{d}{dt}z(t,x)=\beta(1-z-r)\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy-\gamma z,\qquad z(t=0,x)=z_{0}(x) \tag{2a}\]
\[\frac{d}{dt}r(t,x)=\gamma z,\qquad r(t=0,x)=r_{0}(x) \tag{2b}\]
For the purpose of reasonable modelling of scenarios, the kernel \(k\) should consist of three terms as follows:
* a _space-dependent_ part \(a(x-y)\) which is monotonically decreasing with respect to \(|x-y|\), e.g., an exponential function decreasing with the distance, i.e., \(a(x-y)=ce^{-\delta|x-y|}\). This part can be controlled with
* a _control function_\(u(t)\in\mathcal{U}=C([0,1])\) which represents the effectiveness of non-pharmaceutical interventions (lockdown, school closings, obligation of wearing masks etc.). Here, \(u(t)\equiv 0\) implies no regulations and \(u(t)\equiv 1\) implies total lockdown.
* a _non-adjustable_ part \(k_{0}\) which represents the fraction of transmission or a kind of 'background noise' you cannot control, e.g. household related infections. We also assume that this fraction does not depend on the spatial distance as interactions between distances can be prevented by political or social measures. For a more detailed view on the importance of households, cf. Donges et al. [13].
These considerations lead to this formula for the kernel \(k\):
\[k(t,x-y)=(1-u(t))\cdot a(x-y)+k_{0} \tag{3}\]
For the following, assume that the kernel is independent of time \(t\), i.e., \(u(t)\) is constant over time. This entails no loss of generality, and for the remainder of this section we reduce \(k(t,x-y)\) to \(k(x-y)=a(x-y)+k_{0}\) (absorbing the constant factor \(1-u\) into \(a\)). The following assumptions regarding the interaction kernel \(k\) should be met:
1. \(k\) is continuous.
2. \(k\) is non-negative.
3. \(k(0)=k_{0}>k>0\).
4. \(k\) is monotonically decreasing wrt \(|x-y|\).
5. \(k_{1}:=\left\lVert k\right\rVert_{1}=\int_{0}^{1}k(r)\,dr>0\)
6. \(k_{1}<K=\max_{x\in[0,1]}\int_{0}^{1}k(|x-y|)\,dy\)
Note that in the case of a strictly monotonically decreasing kernel, we get \(K=2\int_{0}^{1/2}k(r)\,dr\).
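A straightforward spatial discretization of model (2) with the kernel (3) can be sketched as follows; the grid, the solver, and all parameter values (e.g. \(c=\delta=50\), \(k_{0}=0.1\), \(\beta=\gamma=0.1\), and a fixed control \(u\equiv 0\)) are assumptions for illustration and do not reproduce the authors' code:

```python
import numpy as np
from scipy.integrate import solve_ivp

M, beta, gamma, k0, u = 100, 0.1, 0.1, 0.1, 0.0
x = (np.arange(M) + 0.5) / M
A = 50.0 * np.exp(-50.0 * np.abs(x[:, None] - x[None, :]))   # a(x - y)
K = (1.0 - u) * A + k0                                        # kernel of Eq. (3)

def rhs(t, y):
    z, r = y[:M], y[M:]
    interaction = (K @ z) / M            # midpoint rule for the integral over y
    dz = beta * (1.0 - z - r) * interaction - gamma * z
    dr = gamma * z
    return np.concatenate([dz, dr])

z0 = np.where(x < 0.9, 1e-5, 1e-4)       # localized outbreak near one boundary
sol = solve_ivp(rhs, (0.0, 400.0), np.concatenate([z0, np.zeros(M)]), rtol=1e-8)
print(float(sol.y[:M, -1].mean()), float(sol.y[M:, -1].mean()))   # mean z and r at T
```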
### Uniqueness of solutions
The existence and uniqueness of equilibria of the integro-differential model is the main question of this subsection. Even for the comparatively 'simple' _SIS_-model, it is not possible to prove the uniqueness of equilibria using classical fixpoint theory (cf. App. A). However, we can find satisfying uniqueness results even for the _SIR_-model using the prevalence. Again, without loss of generality, we will consider the time-independent kernel function, i.e., \(k(x-y)\).
**Lemma 1**.: _For the SIR-model (2), there exists exactly one equilibrium._
Proof.: To proof those numerical findings for the integro-differential SIR-model (1), we will compute its prevalence for \(s_{0}=s(0,x)\approx 1\) and \(r_{0}=r(0,x)\equiv 0\). Reconsider the equation for the susceptibles, i.e.,
\[\frac{ds}{dt}=-s\int_{0}^{1}k(t,x-y)z(t,y)dy. \tag{4}\]
By substituting of the equation \(\frac{dr}{dt}=\gamma z\), we obtain
\[\frac{ds}{dt}=-\frac{s}{\gamma}\int_{0}^{1}k(t,x-y)\,\frac{dr}{dt}\,dy=-\, \frac{s}{\gamma}\,\frac{d}{dt}\int_{0}^{1}k(t,x-y)\,r(y)\,dy. \tag{5}\]
Integrating this over \(t\) from \(0\) to \(\infty\) and using \(s_{0}\approx 1\), it follows that
\[\ln s_{\infty}=-\frac{1}{\gamma}\int_{0}^{1}k_{\infty}(x-y)r_{\infty}(y)\,dy. \tag{6}\]
Now let \(T_{k}\) be the integral operator on the (generalized) ground space \((S,\mu)\), where \(S=[0,1]\) and \(\mu(y)=y\), defined by
\[(T_{k}f)(x)=\int_{0}^{1}k_{\infty}(x-y)\,f(y)\,dy. \tag{7}\]
Together with the necessary condition \(1=s_{\infty}(x)+r_{\infty}(x)\) for all \(x\in[0,1]\), we find
\[r_{\infty}(x)=1-\exp\left(-\frac{1}{\gamma}\int_{0}^{1}k_{\infty}(x-y)\,r_{\infty}(y)\,dy\right)=1-\exp\left(-\tfrac{1}{\gamma}\,T_{k}r_{\infty}(x)\right). \tag{8}\]
This system of nonlinear equations can e.g. be solved numerically. Uniqueness can be shown using the paper of Bollobas-Janson-Riordan [14]: Following their Theorem 6.1, if
\[\left\|T_{k}\right\|:=\sup\{\left\|T_{k}f\right\|_{2}:f\geq 0,\left\|f\right\|_{ 2}\leq 1\}<1, \tag{9}\]
the equation only has the zero solution; if \(1\leq\left\|T_{k}\right\|<\infty\), and \(k_{\infty}\) is irreducible, then the equation has a unique non-zero solution for the prevalence.
This uniqueness result for the prevalence can be transferred to the uniqueness of the solution: Assume there exist two solutions \((s_{1},z_{1},r_{1})\) and \((s_{2},z_{2},r_{2})\) with the same initial conditions, which then have the same prevalence \(r_{\infty}\). Consider the difference functions \(\tilde{z}(t):=(z_{1}-z_{2})(t)\) and \(\tilde{r}(t):=(r_{1}-r_{2})(t)\), which must satisfy \(\tilde{z}(0)=\tilde{r}(0)=0\). Then \(\tilde{z}(t,x)\equiv\tilde{r}(t,x)\equiv 0\), and the two solutions are equal.
As an addition, this provides a natural definition of the basic reproduction number \(\mathcal{R}_{0}\): using the next-generation method [15], we find \(\mathcal{R}_{0}=\frac{\beta}{\gamma}\left\|k\right\|_{2}\), so that it also depends on the kernel function \(k:[0,1]\to\mathbb{R}\).
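The nonlinear prevalence equation (8) mentioned in the proof can be solved numerically, e.g. by a simple fixed-point iteration on a spatial grid; the kernel and parameter values below are assumptions for illustration, and the transmission factor \(\beta\) of model (2) is kept explicit in the exponent:

```python
import numpy as np

M, beta, gamma = 200, 0.1, 0.1
x = (np.arange(M) + 0.5) / M
K = 50.0 * np.exp(-50.0 * np.abs(x[:, None] - x[None, :])) + 0.1   # k_inf(x - y)

r = np.full(M, 0.5)                          # start away from the trivial zero solution
for _ in range(1000):
    r_new = 1.0 - np.exp(-(beta / gamma) * (K @ r) / M)   # fixed-point map of Eq. (8)
    if np.max(np.abs(r_new - r)) < 1e-12:
        break
    r = r_new
print(float(r.min()), float(r.max()))         # non-trivial prevalence profile
```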
## 3 Optimisation
### Time-dependent control
In this article, we restrict our analysis to the case \(n=1\); higher-dimensional models will be introduced in future research. In a first formulation of the optimal control problem, we aim to minimize the total amount of infectives, while keeping the costs, i.e. the control \(u(t)\), as low as possible. In order to maintain convexity of the problem and avoid bang-bang controls due to linearity in \(u\), the cost function term is squared. Also, case numbers should be kept under a certain threshold \(z_{\max}\), otherwise the capacities of the medical infrastructure can be exceeded. This could be modelled either locally, i.e., \(z(t,x)\leq z_{\max,1}\), or globally, i.e., \(\int_{0}^{1}z(t,x)\,dx\leq z_{\max,2}\). If \(z_{\max,1}\leq z_{\max,2}\) for all \(x\), the local constraint implies the global one; this is assumed here for the sake of simplicity. Defining \(\mathcal{U}\) as the set of continuous functions \(u\in\mathcal{U}=\mathcal{C}([0,T])\), we find the following minimization problem:
\[\min_{u(t)\in\mathcal{U}}J(u,z) =\min_{u(t)\in\mathcal{U}}\int_{0}^{T}\int_{0}^{1}z(t,x)\,dxdt+ \frac{\eta}{2}\int_{0}^{T}u^{2}(t)\,dt\] (10) subject to \[0\leq u(t)\leq 1,\] \[z(t,x)\leq z_{\max}.\]
For implementation of the constraint \(z<z_{\max}\), we add a sigmoidal term \(\psi:\mathbb{R}\to\mathbb{R}^{+}\) to the cost functional, which satisfies \(\psi(z)\approx 0\) for \(z<0\) and \(\psi(z)\approx 2\) for \(z>0\) (cf. the concrete choice \(\psi(z)=1+\tanh(1000\,z)\) in Section 5). Using the function \(\psi\left(z-z_{\max}\right)\), case numbers \(z>z_{\max}\) are 'punished' severely. Also, if case numbers are low, we aim to 'punish' larger values of \(u(t)\), because social restrictions are then less accepted by the population and the political costs of implementing harder restrictions increase. We therefore also define a threshold \(z_{\min}\) below which political costs are assumed to be high and add the factor \(\psi\left(z_{\min}-z\right)\) to the control cost, such that control effort at case numbers \(z<z_{\min}\) is also punished severely. This results in the following minimization problem:
\[\min_{u(t)\in\mathcal{U}}J(u,z)= \int_{0}^{T}\int_{0}^{1}z(t,x)\,dxdt+\frac{\eta}{2}\int_{0}^{T}u^ {2}(t)\int_{0}^{1}\left[1+\frac{c_{1}}{2}\psi\left(z_{\min}-z(t,x)\right) \right]\,dx\,dt \tag{11}\] \[+\frac{\omega}{2}\int_{0}^{T}\int_{0}^{1}c_{2}\psi\left(z(t,x)-z _{\max}\right)\,dx\,dt\] subject to \[0\leq u(t)\leq 1.\]
Alternatively, the control \(u\) can also depend on space, i.e., let \(\mathcal{\tilde{U}}\) be the set of continuous functions \(u\in\mathcal{\tilde{U}}=\mathcal{C}([0,T]\times[0,1]\to[0,1])\). Then the minimization problem reads as follows:
\[\min_{u(t)\in\mathcal{U}}J(u,z)= \int_{0}^{T}\int_{0}^{1}z(t,x)\,dxdt+\frac{\eta}{2}\int_{0}^{T} \int_{0}^{1}u^{2}(t,x)\left[1+\frac{c_{1}}{2}\psi\left(z_{\min}-z(t,x)\right) \right]\,dx\,dt \tag{12}\] \[+\frac{\omega}{2}\int_{0}^{T}\int_{0}^{1}c_{2}\psi\left(z(t,x)-z _{\max}\right)\,dx\,dt\] subject to \[0\leq u(t,x)\leq 1.\]
On a discrete level, solving the above minimisation problems might be complicated. On a continuous level, we can introduce the Lagrangian function (see also Lenhart and Workman [16] for further information):
\[\mathcal{L}(z,u,r) = \int_{0}^{T}\int_{0}^{1}\lambda_{1}(t,x)\left[z^{\prime}(t,x)-(1-z( t,x)-r(t,x))\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy+\gamma z(t,x)\right]\,dxdt \tag{13}\] \[+ \int_{0}^{T}\int_{0}^{1}\lambda_{2}(t,x)\,\left[r^{\prime}(t,x)- \gamma z(t,x)\right]dxdt\] \[- J(z,u)\]
We now want to find the stationary points of the partial derivatives of \(\mathcal{L}\) with respect to \(u\), \(z\), and \(r\):
\[\frac{\partial\mathcal{L}}{\partial u} = \int_{0}^{T}\left\{\int_{0}^{1}\lambda_{1}(t,x)\left(1-z(t,x)-r(t,x)\right)\int_{0}^{1}z(t,y)\,a(x-y)\,dy\right. \tag{14a}\] \[\left.-\eta\,u(t)\int_{0}^{1}\left[1+\frac{c_{1}}{2}\psi\left(z_{ \min}-z(t,x)\right)\right]\,dx\right\}\,dt \stackrel{{!}}{{=}}0\] \[\frac{\partial\mathcal{L}}{\partial z} = \int_{0}^{T}\int_{0}^{1}\left\{-\lambda_{1}^{\prime}(t,x)+ \lambda_{1}(t,x)\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy\right.\] (14b) \[\left.-\lambda_{1}(t,x)\left(1-z(t,x)-r(t,x)\right)\int_{0}^{1}k (t,x-y)\,dy+\gamma\lambda_{1}(t,x)\right.\] \[\left.-\gamma\lambda_{2}(t,x)\right.\] \[\left.-1+\frac{c_{1}\eta}{4}u^{2}(t)\cdot\psi^{\prime}\left(z_{ \min}-z(t,x)\right)\,-\frac{c_{2}\,\omega}{2}\psi^{\prime}\left(z(t,x)-z_{ \max}\right)\right\}\,dx\,dt \stackrel{{!}}{{=}}0\] \[\frac{\partial\mathcal{L}}{\partial r} = \int_{0}^{T}\int_{0}^{1}\left\{\lambda_{1}(t,x)\int_{0}^{1}z(t,y) \,k(t,x-y)\,dy\right.\] (14c) \[\left.-\lambda_{2}^{\prime}(t,x)\right\}\,dx\,dt \stackrel{{!}}{{=}}0\]
For the second and third equation we swapped the integrals and performed partial integration with respect to time \(t\). This leads us to the following system:
\[z^{\prime}(t,x)=(1-z(t,x)-r(t,x))\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy-\gamma z(t,x),\qquad z(t=0,x)=z_{0}(x) \tag{15a}\]
\[r^{\prime}(t,x)=\gamma z(t,x),\qquad r(t=0,x)=r_{0}(x) \tag{15b}\]
\[\lambda_{1}^{\prime}(t,x)=\lambda_{1}(t,x)\left[\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy-(1-z(t,x)-r(t,x))\int_{0}^{1}k(t,x-y)\,dy+\gamma\right]-\gamma\lambda_{2}(t,x)-1+\frac{c_{1}\,\eta}{4}\,u^{2}(t)\,\psi^{\prime}\left(z_{\min}-z(t,x)\right)-\frac{c_{2}\,\omega}{2}\,\psi^{\prime}\left(z(t,x)-z_{\max}\right),\qquad\lambda_{1}(T,x)=0 \tag{15c}\]
\[\lambda_{2}^{\prime}(t,x)=\lambda_{1}(t,x)\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy,\qquad\lambda_{2}(T,x)=0 \tag{15d}\]
\[u(t)=\frac{\int_{0}^{1}\lambda_{1}(t,x)\left(1-z(t,x)-r(t,x)\right)\left(\int_{0}^{1}z(t,y)\,a(x-y)\,dy\right)dx}{\eta\int_{0}^{1}\left[1+\frac{c_{1}}{2}\,\psi\left(z_{\min}-z(t,x)\right)\right]dx} \tag{15e}\]
This is the so-called Forward-Backward sweep method, following the approach described in Lenhart and Workman [16]. For convergence and stability results, see also Hackbusch [17]. Starting with an initial guess of the control \(u\) over the entire interval, e.g., \(u(t)\equiv 0.5\) or \(u(t,x)\equiv 0.5\), the forward problem is solved according to the differential equations, yielding a first solution for \(z\) and \(r\). The transversality conditions \(\lambda_{1}(T)=\lambda_{2}(T)=0\) and the values of \(u\), \(z\) and \(r\) are used to solve the backward problem for \(\lambda_{1}\) and \(\lambda_{2}\). Using the results for \(\lambda_{1}\), \(\lambda_{2}\), \(z\), and \(r\), we calculate an update \(\hat{u}\) of the time-dependent control function. The update of \(u(t)\) is done by moving only a fraction \(\sigma\) of the previous \(u_{\mathrm{old}}\) towards \(\hat{u}(t)\):
\[u(t)=(1-\sigma)\,u_{\mathrm{old}}(t)+\sigma\,\hat{u}(t)\qquad\text{for all }t\in[0,T] \tag{16}\]
This procedure is repeated until the norm of two subsequent controls is 'close enough', i.e. \(\|u-u_{\mathrm{old}}\|<\mathrm{TOL}\). In numerical experiments, a choice of \(\sigma=0.1\) provided decent results: the iteration was convergent and the target function decreased monotonically with respect to the iteration.
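The structure of the sweep can be sketched as follows for the time-dependent control of system (15); the sketch uses forward Euler in time, a midpoint rule in space, omits the \(\psi\) penalty terms, and relies on assumed parameter values, so it illustrates the iteration only and is not the authors' implementation:

```python
import numpy as np

M, T, dt, sigma, eta, gamma, k0 = 50, 100.0, 0.1, 0.1, 0.02, 0.1, 0.1
N = int(T / dt)
h = 1.0 / M
x = (np.arange(M) + 0.5) * h
A = 50.0 * np.exp(-50.0 * np.abs(x[:, None] - x[None, :]))      # a(x - y)

def forward(u):
    z = np.empty((N + 1, M)); r = np.empty((N + 1, M))
    z[0] = np.where(x < 0.9, 1e-5, 1e-4); r[0] = 0.0
    for n in range(N):
        K = (1.0 - u[n]) * A + k0                                # kernel of Eq. (3)
        inter = h * (K @ z[n])
        z[n + 1] = z[n] + dt * ((1 - z[n] - r[n]) * inter - gamma * z[n])
        r[n + 1] = r[n] + dt * gamma * z[n]
    return z, r

def backward(u, z, r):
    l1 = np.zeros((N + 1, M)); l2 = np.zeros((N + 1, M))         # lambda_1, lambda_2
    for n in range(N, 0, -1):
        K = (1.0 - u[n - 1]) * A + k0
        inter = h * (K @ z[n]); row = h * K.sum(axis=1)
        dl1 = l1[n] * (inter - (1 - z[n] - r[n]) * row + gamma) - gamma * l2[n] - 1.0
        dl2 = l1[n] * inter
        l1[n - 1] = l1[n] - dt * dl1
        l2[n - 1] = l2[n] - dt * dl2
    return l1, l2

u = np.full(N, 0.5)                                              # initial guess
for _ in range(100):
    z, r = forward(u)
    l1, l2 = backward(u, z, r)
    # control update of Eq. (15e) without the psi factor in the denominator
    u_hat = h * np.sum(l1[:N] * (1 - z[:N] - r[:N]) * (h * (z[:N] @ A)), axis=1) / eta
    u_new = np.clip((1 - sigma) * u + sigma * u_hat, 0.0, 1.0)   # Eq. (16) + box constraint
    if np.linalg.norm(u_new - u) < 1e-6:
        break
    u = u_new
print(u.min(), u.max())
```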
### Space- and time-dependent control
Assume now that \(u\) is also dependent on space, i.e., the kernel function reads as
\[k(t,x-y,x)=(1-u(t,x))\cdot a(x-y)+k_{0}. \tag{17}\]
For this space-dependent formulation, we replace \(u(t)\) by \(u(t,x)\) in the previous equations. Regarding \(\frac{\partial\mathcal{L}}{\partial u}\), this leads to the following formulation:
\[\frac{\partial\mathcal{L}}{\partial u}=\int_{0}^{T}\int_{0}^{1} \left\{\lambda_{1}(t,x)\left(1-z(t,x)-r(t,x)\right)\int_{0}^{1}z(t,y)\,a(x-y )\,dy\right.\] \[\left.-\eta\,u(t,x)\left[1+\frac{c_{1}}{2}\,\psi\left(z_{\min}-z( t,x)\right)\right]\right\}\,dx\,dt \stackrel{{!}}{{=}}0 \tag{18}\]
and while the formulas for \(z\) and \(r\) remain the same, we find
\[\lambda_{1}^{\prime}(t,x)=\lambda_{1}(t,x)\left[\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy-(1-z(t,x)-r(t,x))\int_{0}^{1}k(t,x-y)\,dy+\gamma\right]-\gamma\lambda_{2}(t,x)-1+\frac{c_{1}\,\eta}{4}\,u^{2}(t,x)\,\psi^{\prime}\left(z_{\min}-z(t,x)\right)-\frac{c_{2}\,\omega}{2}\,\psi^{\prime}\left(z(t,x)-z_{\max}\right),\qquad\lambda_{1}(T,x)=0 \tag{19a}\]
\[\lambda_{2}^{\prime}(t,x)=\lambda_{1}(t,x)\int_{0}^{1}z(t,y)\,k(t,x-y)\,dy,\qquad\lambda_{2}(T,x)=0 \tag{19b}\]
\[u(t,x)=\frac{\lambda_{1}(t,x)\left(1-z(t,x)-r(t,x)\right)\int_{0}^{1}z(t,y)\,a(x-y)\,dy}{\eta\left[1+\frac{c_{1}}{2}\,\psi\left(z_{\min}-z(t,x)\right)\right]} \tag{19c}\]
### Space- and time-dependent, discretized control
While the concept of a space- and time-dependent control is certainly reasonable, it is not realistic to design the control in a continuous way. We therefore consider a control function \(u(t,x)\) that is designed as a piecewise constant function in both time and space. The control function \(u(t,x)\) takes different constant values over different rectangular regions. Let's denote the control value within each rectangle as \(u_{ij}\), where \(i\) represents the time interval number and \(j\) represents the spatial interval number.
Mathematically, we can express the piecewise constant control function as follows: Let \(t_{0}<t_{1}<\ldots<t_{n-1}<t_{n}\) be the time instants that define the intervals, and \(x_{0}<x_{1}<\ldots<x_{m-1}<x_{m}\) be the spatial locations that define the intervals. Then
\[u(t,x)= u_{ij}\qquad\text{ if }t\in[t_{i-1},t_{i})\text{ and }x\in[x_{j-1},x_{j}) \tag{20}\]
represents the control value within the corresponding rectangular region for \(i=1\ldots n\) and \(j=1\ldots m\). For the piecewise constant functions, we use the _starting value_ for the time interval \([t_{i-1},t_{i})\), i.e., \(t_{i-1}\), and the _average_ of the space interval \([x_{j-1},x_{j})\). This is then plugged into the spatial model as described in section 3.2.
## 4 Agent-based model
In order to validate the model, we compare the above described integro-differential model to a stochastic, microscopic agent-based model developed at the Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw. Complete details of this model are given in Niedzielewski et al. [18, 19]. In this model, agents have certain states (susceptible, infected, recovered, hospitalized, deceased, etc.) and infection events occur in certain contexts, e.g., on the streets, in workplaces, and several more. A similar comparison can be found in Donges et al. [13], but to ignore in-household transmission we use only single-household contexts. Location space is one-dimensional, with 100 available location points. The single households are distributed uniformly in space. The agents are individually assigned to households and the corresponding street context (only these two types of contexts are in use). When the probability of infection is computed, the infectivity of every street context is taken into account. To allow for control of the diffusion of infected individuals throughout the whole space, we also use the transmission kernel function \(k(t,x-y)\). As a result, the infectivity decreases with the distance between the location of an agent and a street context. Since the ODE-model (1) is a variant of an SIR-model, the agent-based model also uses just the SIR-states and ignores all other states. The agent-based model uses a recovery time for each infected individual that is sampled from an exponential distribution with mean 10 days.
## 5 Numerical Simulations
In this section, the Lagrangian optimization of the integro-differential model as of eqns. (11) and (12) is presented and the numeric results are shown. We denote the initial condition function as follows:
\[z_{0}^{1}(x) \equiv 2\cdot 10^{-5},\] \[z_{0}^{2}(x) =\begin{cases}1\cdot 10^{-5}&x<0.9\\ 1\cdot 10^{-4}&x\geq 0.9\end{cases}.\]
For reasons of comparability, with a choice of 100 spatial grid points almost the same mass of initial infected is used in both variants (the average of \(z_{0}^{2}\) being \(\overline{z}_{0}^{2}=1.9\cdot 10^{-5}\)). We list all eight model simulations in Tab. 1, including the parameter values for the optimal control as of system (11). Furthermore, we choose parameter values of \(c=\delta=50\) (resulting in a kernel-based reproductive number of \(\mathcal{R}_{0}\approx 2\)), \(\beta=\gamma=0.1\), \(c_{1}=c_{2}=1000\), \(z_{\text{min}}=1\cdot 10^{-5}\), and \(z_{\text{max}}=5\cdot 10^{-3}\). For both \(\eta\) and \(\omega\), we choose two different values, which are listed in Tab. 1. Lastly, with respect to the penalization, we use the function \(\psi(z)=1+\tanh\left(1000\,z\right)\) with its derivative \(\psi^{\prime}(z)=1000\,\text{sech}^{2}(1000\,z)\).
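For concreteness, the penalization function and the two initial conditions can be written as follows (a short Python sketch; the uniform grid on \([0,1]\) is our assumed discretization):

```python
import numpy as np

def psi(z, s=1000.0):
    """Penalization psi(z) = 1 + tanh(1000 z)."""
    return 1.0 + np.tanh(s * z)

def dpsi(z, s=1000.0):
    """Derivative psi'(z) = 1000 / cosh^2(1000 z)."""
    return s / np.cosh(s * z) ** 2

def z0_1(x):
    """Homogeneous initial infected fraction (Sims. A)."""
    return 2e-5 * np.ones_like(x)

def z0_2(x):
    """Initial infection concentrated near the right boundary (Sims. B, C, D)."""
    return np.where(x < 0.9, 1e-5, 1e-4)

x = np.linspace(0.0, 1.0, 100)   # 100 spatial grid points
print(z0_2(x).mean())            # approximately 1.9e-5, close to the mass of z0_1
```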
In Sims. A, no space-dependent control is included in the model and the initial values are constant. In Sims. B, again the control is only time-dependent, but we induce an 'infection wave' at one of the boundaries. In Sims. C, we also induce this infection wave at the boundary, but additionally allow a control depending on both space and time. Two different choices for the parameter values \(\eta\) and \(\omega\) as well as a different maximal duration \(T\) are imposed on all of these simulations. In Sims. D, we use a space-time-dependent control, but keep it piecewise constant over 10-day time intervals and 10-cell space intervals to account for a more realistic representation of the control. The results are compared to 10 ABM runs for each scenario, as well as to their mean.
Choosing an arbitrary starting value for \(u(x)\) or \(u(t,x)\), we use the Forward-Backward sweep method as of eqns. (15) and (19) and evaluate the target function in each step. As an example, the convergence of the target function value is presented in Fig. 1.
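The structure of the Forward-Backward sweep can be sketched as follows (Python pseudocode in our own notation; `forward_state`, `backward_adjoint`, and `control_update` stand for numerical integrations of the state system, the adjoint system (19a,b), and the optimality condition (19c), respectively, while `J` evaluates the target function of eqns. (11) and (12)):

```python
def forward_backward_sweep(u0, J, forward_state, backward_adjoint, control_update,
                           relax=0.5, tol=1e-6, max_iter=200):
    """Generic Forward-Backward sweep: iterate state -> adjoint -> control
    until the target function stops improving."""
    u, J_old = u0, float("inf")
    for _ in range(max_iter):
        z, r = forward_state(u)                 # integrate state equations forward in time
        lam1, lam2 = backward_adjoint(z, r, u)  # integrate adjoints (19a,b) backward from T
        u_new = control_update(z, r, lam1)      # optimality condition (19c)
        u = relax * u_new + (1.0 - relax) * u   # damped update for stability
        J_new = J(u, z, r)
        if abs(J_old - J_new) < tol * max(abs(J_new), 1.0):
            break
        J_old = J_new
    return u, J_new
```

The evaluation of \(J(u)\) after each sweep is what produces convergence curves like the one in Fig. 1.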
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} Variant & space-dependent \(u\)? & piecewise constant \(u\)? & \(T\) & \(z_{0}\) & \(\eta\) & \(\omega\) \\ \hline A1 & no & no & 400 & \(z_{0}^{1}(x)\) & 0.02 & 1 \\ A2 & no & no & 800 & \(z_{0}^{1}(x)\) & 0.005 & 0.2 \\ B1 & no & no & 400 & \(z_{0}^{2}(x)\) & 0.02 & 1 \\ B2 & no & no & 800 & \(z_{0}^{2}(x)\) & 0.005 & 0.2 \\ C1 & yes & no & 400 & \(z_{0}^{2}(x)\) & 0.02 & 1 \\ C2 & yes & no & 800 & \(z_{0}^{2}(x)\) & 0.005 & 0.2 \\ D1 & yes & yes & 400 & \(z_{0}^{2}(x)\) & 0.02 & 1 \\ D2 & yes & yes & 800 & \(z_{0}^{2}(x)\) & 0.005 & 0.2 \\ \end{tabular}
\end{table}
Table 1: Listing of all simulations and different parameter values used for optimization of the integro-differential model.
Figure 1: Exemplary behavior of the target function for model A1 and \(u(t)\equiv 1\).
### Simulation Results
#### 5.1.1 Simulation A1

#### 5.1.2 Simulation A2
#### 5.1.3 Simulation B1
Figure 8: Evolution of the control in Simulation B1.

Figure 9: Spatio-temporal evolution of the infected in Simulation B1, on the left for the integro-differential _SIR_-model, on the right for the ABM model.

Figure 10: Difference between the _SIR_-model and the ABM model mean (left) and temporal evolution of the spatial mean in the _SIR_-model and all single runs of the ABM model, as well as their mean (right), in Simulation B1.
#### 5.1.4 Simulation B2
Figure 11: Evolution of the control in Simulation B2.
Figure 12: Spatio-temporal evolution of the infected in Simulation B2, on the left for the integro-differential _SIR_-model, on the right for the ABM model.
Figure 13: Difference between the _SIR_-model and the ABM model mean (left) and temporal evolution of the spatial mean in the _SIR_-model and all single runs of the ABM model, as well as their mean (right), in Simulation B2.
#### 5.1.5 Simulation C1
Figure 14: Evolution of the control in Simulation C1.

Figure 15: Spatio-temporal evolution of the infected in Simulation C1, on the left for the integro-differential _SIR_-model, on the right for the ABM model.

Figure 16: Difference between the _SIR_-model and the ABM model mean (left) and temporal evolution of the spatial mean in the _SIR_-model and all single runs of the ABM model, as well as their mean (right), in Simulation C1.
#### 5.1.6 Simulation C2
Figure 17: Evolution of the control in Simulation C2.

Figure 18: Spatio-temporal evolution of the infected in Simulation C2, on the left for the integro-differential _SIR_-model, on the right for the ABM model.

Figure 19: Difference between the _SIR_-model and the ABM model mean (left) and temporal evolution of the spatial mean in the _SIR_-model and all single runs of the ABM model, as well as their mean (right), in Simulation C2.
#### 5.1.7 Simulation D1
Figure 20: Evolution of the control in Simulation D1.

Figure 21: Spatio-temporal evolution of the infected in Simulation D1, on the left for the integro-differential _SIR_-model, on the right for the ABM model.

Figure 22: Difference between the _SIR_-model and the ABM model mean (left) and temporal evolution of the spatial mean in the _SIR_-model and all single runs of the ABM model, as well as their mean (right), in Simulation D1.
#### 5.1.8 Simulation D2
Figure 23: Evolution of the control in Simulation D2.
Figure 24: Spatio-temporal evolution of the infected in Simulation D2, on the left for the integro-differential _SIR_-model, on the right for the ABM model.
Figure 25: Difference between the _SIR_-model and the ABM model mean (left) and temporal evolution of the spatial mean in the _SIR_-model and all single runs of the ABM model, as well as their mean (right), in Simulation D2.
### Observations for the integro-differential model
In simulation A1 (cf. Figs. 2,3,4), using an only time-dependent control function and homogeneous initial conditions, we see that after light restrictions leading to rising numbers of infectives, a sudden increase in lockdown restrictions causes the numbers to fall. The control rises as late as possible in order not to surpass or get close to \(z_{\max}\), and thereafter remains relatively constant. Rising to levels at or slightly above 0.5, it manages to contain the disease spread, since \(\mathcal{R}_{0}=2\). Later on, the effective reproduction number decreases as the share of recovered individuals, who are assumed not to become infected again in this simplified model, grows. At the end, roughly 10 % of the total population is infected or recovered, spread homogeneously over the whole spatial domain. The drop of the control function towards the end can be explained by the system regulating the epidemic such that the maximally allowed rates are only just avoided at the end. This observation is in fact independent of the chosen duration or simulation. The convergence of the target function \(J(u)\) is shown for Sim. A1 as an example (Fig. 1) and looks similar for all other simulations. The results of Sim. A2 (cf. Figs. 5,6,7) are comparable to Sim. A1, yet due to the different values of \(\eta\) and \(\omega\), the control decreases more quickly, in an almost linear fashion, after rising above 0.5. As a result, the total share of recovered and infectives is roughly 20 % at the end.
In simulations B1 (cf. Figs. 8,9,10) and B2 (cf. Figs. 11,12,13), an only time-dependent control function and an inhomogeneous initial condition are used. We observe similar optimal control functions in Sims. B1 and B2 as in A1 and A2, respectively, resulting in a spatially delimited peak of infections slightly propagating in time. Due to the generally higher infection numbers in Sim. B2, there is another peak at the end of the observed time interval. The share of recovered again reaches values around 10 % for B1, with a peak close to the boundary from which the disease was imported, and 20 % for B2, with homogeneously distributed values across the spatial domain.
Simulations C1 (cf. Figs. 14,15,16) and C2 (cf. Figs. 17,18,19) feature a space-time-dependent control function and an inhomogeneous initial condition. While the spatially averaged behaviour of the control function in C1 and C2 is similar to that in B1 and B2, respectively, a slight spatial 'propagation' of the control is visible. This adaptive behaviour allows the control to never surpass 0.5, resulting in less effort and thus a lower target function value. The share of recovered again reaches values around 10 % for C1, and 20 % for C2, both with homogeneously distributed values across the spatial domain.
Finally, simulations D1 (cf. Figs. 20,21,22) and D2 (cf. Figs. 23,24,25) feature a piecewise constant space-time-dependent control function and an inhomogeneous initial condition. In these simulations, as expected, the control is similar to the one in the continuous simulations C1 and C2. However, using the starting value as the control for the next (10) days causes higher infection rates in the initial phase of the disease, so that globally the control has to be slightly larger than in the continuous simulations. The share of recovered again reaches values around 10 % for D1, and 20 % for D2, yet features significant peaks at both boundaries.
### Comparison with the agent-based model
Tab. 2 lists the values of the target function of all simulations according to eqns. (11) and (12). Additionally, the target function of a model without any control measures, i.e. \(u(t)\equiv 0\) or \(u(t,x)\equiv 0\), is listed, showing significant improvement in the target function for simulations C1 and C2, while D1 and D2 were still reasonably good in reducing the cost function values despite their restrictions.
Comparing the target function values shows that the optimal control reduces the target function relative to \(u\equiv 0\) by a factor that depends on the chosen simulation and on the parameter values of \(\eta\) and \(\omega\). As expected, the best results for \(J(u)\) were found for the space-dependent, yet continuous, control. Comparing the target function values of the _SIR_-model and the ABM model, we see only minor differences in the outcome, mainly in C1 and C2 and there especially close to the boundaries. These can be explained by the stochastic nature of the ABM model, not all features of which can be transferred to the integro-differential model.
\begin{table}
\begin{tabular}{c|c|c|c} Simulation & \(J(u\equiv 0)\) & \(J(u^{*})_{\mathrm{sir}}\) & \(J(u^{*})_{\mathrm{abm}}\) \\ \hline A1 & 132.4 & 31.9 & 29.5 \\ A2 & 32.9 & 13.4 & 12.6 \\ B1 & 132.5 & 40.6 & 39.0 \\ B2 & 32.9 & 19.9 & 19.3 \\ C1 & 132.5 & 10.1 & 12.6 \\ C2 & 32.9 & 8.5 & 9.0 \\ D1 & 132.5 & 19.8 & 21.5 \\ D2 & 32.9 & 12.2 & 12.5 \\ \end{tabular}
\end{table}
Table 2: Target function values for the various simulations, according to eqns. (11) and (12).
## 6 Discussion and Outlook
In this work we have presented an integro-differential _SIR_-model, proved several theoretical properties (including uniqueness of the solution), and provided setups for applying optimal control to the transmission of the disease. The results were compared to the ABM model and are overall in very good agreement. This has an interesting consequence: the _SIR_-model is 'cheap' to compute compared to the computationally 'expensive' ABM model, for which a direct optimal control computation is hardly possible. Thus, by optimization of the _SIR_-model we are now able to find a good proxy for the ABM model. While it will not be possible to reproduce the results perfectly, the averages of both models are very similar and match very well in most simulations except the space- and time-dependent continuous control. The results always remain within the designed range \([z_{\text{min}},z_{\text{max}}]\), such that the healthcare capacities are not overloaded, even in the model with piecewise constant values for \(u\), which is less flexible in reacting to quickly rising infection numbers. The system still remains within the range of stochastic fluctuations of the ABM model.
Future work in this context aims to apply the optimal control proxy to ABM models in multidimensional settings, i.e., a spatially 2D problem, which represents a more realistic approach for entire countries (like Poland in the Warsaw model). Another interesting application of this integro-differential model lies in models with age structure, where the parameter \(x\) is interpreted as age and a discrete contact matrix for age cohorts is transformed into a kernel function. While the integro-differential model can be enhanced and made more realistic by adding more compartments and parameters, e.g., vaccination and household interactions (corresponding to the previously unused \(k_{0}\)) or an _SEIR_-model, it is advisable to keep the model as simple as possible in order to maintain computational efficiency. However, using the parameters and knowledge we have gained from this work, we can implement the ABM model including households, and perform parameter estimation to find reasonable values for \(k_{0}\) in the integro-differential _SIR_-model.
|
2310.15530 | Constraining exotic dark matter models with the dark ages 21-cm signal | The dark ages 21-cm signal is a powerful tool for precision cosmology and
probing new physics. We study two non-standard models: an excess radio
background (ERB) model (possibly generated by dark matter decay) and the
millicharged dark matter (mDM) model. These models were inspired by the
possible EDGES detection of a strong global 21-cm absorption during cosmic
dawn, but more generally they provide a way to anticipate the potential
discovery space. During the dark ages the 21-cm global signal in the ERB model
reaches a saturated form for an amplitude $A_{\rm r}=0.4$, where $A_{\rm r}$ is
the radio background intensity at cosmic dawn relative to the cosmic microwave
background. This amplitude is one-fifth of the minimum required to explain the
EDGES signal, and corresponds to just 0.1% of the observed extragalactic
background; it would give a signal that can be detected at 5.9$\sigma$
significance (compared to $4.1\,\sigma$ for the standard signal) and can be
distinguished from the standard (no ERB) signal at $8.5\,\sigma$, all with a
1,000 hr global signal measurement. The 21-cm power spectrum has potentially
more information, but far greater resources would be required for comparable
constraints. For the mDM model, over a range of viable parameters, the global
signal detection significance would be $4.7-7.2\,\sigma$, and it could be
distinguished from the standard at $2.2-9.3\,\sigma$. With an array of global
signal antennas achieving an effective 100,000 hr integration, the significance
would be $10\,\times$ better. Our analysis helps motivate the development of
lunar and space-based dark ages experiments. | Rajesh Mondal, Rennan Barkana, Anastasia Fialkov | 2023-10-24T05:28:55Z | http://arxiv.org/abs/2310.15530v2 | # Constraining exotic dark matter models with the dark ages 21-cm signal
###### Abstract
The dark ages 21-cm signal is a powerful tool for precision cosmology and probing new physics. We study two non-standard models: an excess radio background (ERB) model (possibly generated by dark matter decay) and the millicharged dark matter (mDM) model. These models were inspired by the possible EDGES detection of a strong global 21-cm absorption during cosmic dawn, but more generally they provide a way to anticipate the potential discovery space. During the dark ages the 21-cm global signal in the ERB model reaches a saturated form for an amplitude \(A_{\rm r}=0.4\), where \(A_{\rm r}\) is the radio background intensity at cosmic dawn relative to the cosmic microwave background. This amplitude is one fifth of the minimum required to explain the EDGES signal, and corresponds to just 0.1% of the observed extragalactic background; it would give a signal that can be detected at 5.9\(\sigma\) significance (compared to 4.1\(\sigma\) for the standard signal) and can be distinguished from the standard (no ERB) signal at 8.5\(\sigma\), all with a 1,000 hour global signal measurement. The 21-cm power spectrum has potentially more information, but far greater resources would be required for comparable constraints. For the mDM model, over a range of viable parameters, the global signal detection significance would be 4.7 - 7.2 \(\sigma\), and it could be distinguished from standard at \(2.2-9.3\,\sigma\). With an array of global signal antennas achieving an effective 100,000 hr integration, the significance would be 10\(\times\) better. Our analysis helps motivate the development of lunar and space-based dark ages experiments.
keywords: methods: statistical - techniques: interferometric - dark ages, reionization, first stars - large-scale structure of Universe - cosmology: observations - cosmology: theory.
## 1 Introduction
The 21-cm signal from neutral hydrogen (H i) is the most promising method for studying a significant time period in the early Universe, from shortly after recombination through cosmic dawn and reionization (Sunyaev & Zeldovich, 1972; Hogan & Rees, 1979; Scott & Rees, 1990). In particular, an era that remains observationally unexplored is the dark ages, the period of time between the epoch of recombination (redshift \(z\sim 1100\)) and the formation of the first luminous objects (\(z\sim 30\)). After recombination, the temperature of the cosmic microwave background (CMB) (\(T_{\rm\gamma}\)) fell as \((1+z)\), while the (kinetic) temperature of the gas (\(T_{\rm K}\)) declined faster, eventually adiabatically as \((1+z)^{2}\). The spin temperature (\(T_{\rm S}\)) was strongly coupled to \(T_{\rm K}\) through collisional coupling until \(z\sim 70\) (Madau et al., 1997). After this time, the collisional coupling of the 21-cm transition became less effective, and \(T_{\rm S}\) began to approach \(T_{\rm\gamma}\). Therefore, during the dark ages, the 21-cm signal from H i is expected to be observable in the CMB spectrum over a wide redshift range of \(30\lesssim z\lesssim 200\). This is because \(T_{\rm S}\) was significantly lower than \(T_{\rm\gamma}\) during this time, which caused the H i to absorb CMB photons.
The dark ages are a critical window into the history of the Universe. Unlike the present Universe, which is characterized by complex astrophysical processes, the dark ages offer a probe of fundamental cosmology. This is because the dark ages were a time when the Universe was still largely homogeneous and isotropic, and the only (currently known) source of radiation was the CMB. The dark ages can be probed effectively using the redshifted H i 21-cm signal, produced when neutral hydrogen atoms absorb or emit photons at a (local) frequency of 1420 MHz. The 21-cm signal can be observed over a range of cosmic times, as each frequency corresponds to a different look-back time. This means that the 21-cm signal can be used to map the evolution of the Universe over time. This signal is also naturally three-dimensional (3D), meaning that it can in principle be used to measure the full spatial distribution of neutral hydrogen. This is in contrast to the CMB, which is a 2D signal. The dark ages can be probed by measuring the global (or mean) signal as well as by measuring the 3D power spectrum over the relevant redshift range. Therefore, the 21-cm signal from the dark ages contains in principle more cosmological information than the CMB (Loeb & Zaldarriaga, 2004; Mondal & Barkana, 2023).
The 21-cm power spectrum during the dark ages is sensitive to the \(\Lambda\)CDM cosmological parameters, particularly on the scale of the baryon acoustic oscillations (BAOs) (Barkana & Loeb, 2005; Mondal & Barkana, 2023). In addition to the absence of complicating astrophysics, the fluctuations are still rather linear during the dark ages, so modeling and interpreting them is simpler than for probes in the more recent Universe. The 21-cm fluctuations probe fluctuations of the baryon density, peculiar velocity (Bharadwaj & Ali, 2004;
Barkana & Loeb 2005b), and baryon temperature (Naoz & Barkana 2005; Barkana & Loeb 2005a). A number of smaller contributions must be included in a precise calculation (Lewis & Challinor 2007; Ali-Haimoud et al. 2014).
The 21-cm signal from the dark ages is a faint radio signal that can only be detected at very low frequencies, below 45 MHz. The Earth's ionosphere heavily distorts and eventually blocks radio waves at these frequencies. This means that it is impossible to observe the 21-cm signal from the dark ages using radio telescopes on Earth. To overcome this challenge, scientists are developing lunar and space-based experiments to observe the 21-cm signal from the dark ages. These experiments are being rapidly developed as part of the international race to return to the moon. Some such experiments include: NCLE1 (Netherlands-China), DAPPER (USA) (Burns et al. 2021), FARSIDE (USA) (Burns et al. 2019), DSL2 (China), FarView (USA) (Burns et al. 2021), SEAMS (India) (Borade et al. 2021), PRATUSH3 (India), LuSEE-Night (USA) (Bale et al. 2023), ALO4 (Europe), and ROLSES5 (USA). We note that going to the Moon could present substantial practical advantages beyond just avoiding the Earth's ionosphere: it would offer a potentially benign environment that is extremely dry and stable, and (on the lunar farside) would block out terrestrial radio-frequency interference.
Footnote 1: [https://doi.org/10.1126/science.aau2004](https://doi.org/10.1126/science.aau2004)
Footnote 2: [https://www.astron.nl/dsl2015](https://www.astron.nl/dsl2015)
Footnote 3: [https://www.rri.res.in/DISTORTION/pratush.html](https://www.rri.res.in/DISTORTION/pratush.html)
Footnote 4: [https://www.astron.nl/dailyimage/main.php?date=20220131](https://www.astron.nl/dailyimage/main.php?date=20220131)
Footnote 5: [https://www.colorado.edu/ness/ness-projects](https://www.colorado.edu/ness/ness-projects)
Given the great potential for precision cosmology and the rapid observational developments, we recently (Mondal & Barkana 2023) studied the use of the 21-cm signal (both the global signal and power spectrum) during the dark ages to constrain cosmological parameters. We showed that measuring the global 21-cm signal for 1,000 hours would allow us to measure a combination of cosmological parameters to within 10% accuracy. A longer integration time would improve this accuracy even further, potentially down to 1% or even better, with a measurement of the cosmic Helium fraction that could exceed CMB measurements by the Planck satellite (Planck Collaboration et al. 2020). It would be significantly harder to achieve precision cosmology with 21-cm fluctuations, as it would require a large collecting area of order 10 km\({}^{2}\). This is much larger than the collecting area of any existing radio telescope, but it is possible for future instruments. With 10 km\({}^{2}\), we would be able to achieve a measurement accuracy that is twice as good as the global case (with a 1,000 hour integration for both). Increasing the collecting area or integration time further could eventually beat the Planck accuracy in some cosmological parameter combinations, the Helium fraction, and the total mass of neutrinos. Thus, if we assume standard cosmology, 21-cm observations from the dark ages could potentially lead to major advances in our understanding of the Universe.
Whenever considering a new window on the Universe, it is important to also anticipate the possible discovery space that lies beyond just standard cosmology. Indeed, there may be exotic (non-standard) models that are allowed by other astrophysical probes and could be detected with 21-cm observations of the dark ages. In general, given the array of observational constraints on cosmic history and on the properties of dark matter, it is not obvious how to construct such models. Fortunately, a number of such models were stimulated by the possible EDGES detection of a strong 21-cm signal during cosmic dawn (Bowman et al. 2018). While disputed at 95% significance by the SARAS experiment (Singh et al. 2022), with further measurements expected to resolve this tension, the tentative EDGES signal has inspired theories that can be probed over a wide range of possible parameters, independently of whether EDGES turns out to be correct.
Specifically, the anomalously strong EDGES trough has two main categories of explanations. One category of explanation is the presence of an excess radio background (ERB) at high redshifts, with an intensity significantly higher than the CMB (Bowman et al. 2018; Feng & Holder 2018; Ewall-Wice et al. 2018; Fialkov & Barkana 2019; Mirocha & Furlanetto 2019; Ewall-Wice et al. 2020). One possibility for such an ERB is an astrophysically-produced radio background, from sources such as active galactic nuclei (AGN, Biermann et al. 2014; Bolgar et al. 2018; Ewall-Wice et al. 2018, 2020; Mebane et al. 2020), or star-forming galaxies (Condon 1992; Jana et al. 2019; Reis et al. 2020) at high redshift. However, in order to have an effect on the signal from the dark ages (as opposed to only cosmic dawn), the ERB must have been formed in the early Universe, during or before the dark ages. In Fialkov & Barkana (2019) we showed that the depth and steepness of the EDGES signal could be explained by such a homogeneous ERB with a synchrotron spectrum. Exotic processes such as dark matter annihilation or superconducting cosmic strings (Fraser et al. 2018; Pospelov et al. 2018; Brandenberger et al. 2019) could give rise to this kind of homogeneous early ERB. Models of an ERB are also motivated by the observation at low frequencies of an excess radio background over that of the CMB by ARCADE2 (Fixsen et al. 2011; Seiffert et al. 2011), confirmed by LWA1 (Dowell & Taylor 2018) in the frequency range \(40-80\) MHz. This observed radio excess may be extragalactic, but it is unclear what fraction of the observed excess originates from Galactic compared to extragalactic sources (e.g., Subrahmanyan & Cowsik 2013). In any case, the observed excess serves as an upper limit for an extragalactic ERB.
The second category of explanations for EDGES is that an additional cooling mechanism cooled the gas faster than just adiabatic cooling due to the cosmic expansion. An additional cooling mechanism has been suggested (Barkana 2018; Berlin et al. 2018; Barkana et al. 2018; Munoz & Loeb 2018; Liu et al. 2019) that involves a non-gravitational interaction between the ordinary matter and the dark matter particles (e.g., via Rutherford-like scattering); this drives down the temperature of the gas leading to the strong observed absorption.
In this paper we consider these two categories, first an homogeneous ERB with a synchrotron spectrum (Fialkov & Barkana 2019) (where this spectrum is motivated by the possibility that the ERB explains part or all of the observed extragalactic radio background), and then the millicharged dark matter (mDM) model (Munoz & Loeb 2018) in which a small fraction of the dark matter particles have a tiny electric charge. Throughout the paper, we assume the Planck+BAO best fit values of cosmological parameters (Planck Collaboration et al. 2020, table 2, last column).
## 2 The 21-cm signal
The brightness temperature of the 21-cm line, relative to the CMB, is given by the following equation:
\[T_{\rm b}=(T_{\rm S}-T_{\gamma})\,\frac{1-e^{-\tau_{21}}}{1+z}\, \tag{1}\]
where \(T_{\gamma}=2.725\,(1+z)\) K. Assuming that the optical depth of the 21-cm transition satisfies \(\tau_{21}\ll 1\), this can be written as:
\[T_{\rm b}=54.0\,{\rm mK}\,\frac{\rho_{\rm HI}}{\bar{\rho}_{\rm H}}\left(\frac{\Omega_{\rm b}h^{2}}{0.02242}\right)\left(\frac{\Omega_{\rm m}h^{2}}{0.1424}\right)^{-\frac{1}{2}}\left(\frac{1+z}{40}\right)^{\frac{1}{2}}\frac{x_{\rm c}}{1+x_{\rm c}}\left(1-\frac{T_{\gamma}}{T_{\rm K}}\right)\, \tag{2}\]
where \(\rho_{\rm HI}\) is the neutral hydrogen density, \(\bar{\rho}_{\rm H}\) is the cosmic mean density of hydrogen, and \(x_{\rm c}\) is the collisional coupling coefficient (Zygelman, 2005). In Fig. 1 we show the evolution of \(T_{\gamma}\) and \(T_{\rm K}\) (cyan-blue and light-orange lines, respectively) as a function of \(\nu\) (and \(z\)). Note that \(T_{\gamma}\) falls as \(1/\nu\) (or \([1+z]\)), while \(T_{\rm K}\) falls faster, eventually (at the lower redshifts) as \(1/\nu^{2}\) (or \([1+z]^{2}\)).
During the dark ages, the spin temperature of the hydrogen atoms is pulled towards the temperature of the gas (\(T_{\rm S}\to T_{\rm K}\)) by atomic collisions, while it is pulled towards the temperature of the CMB (\(T_{\rm S}\to T_{\gamma}\)) by CMB scattering. The relative importance of these two effects depends on the density of the gas. Fig. 2 shows the evolution of \(x_{\rm c}\) as a function of \(\nu\) (or \(z\)). The value of \(x_{\rm c}\) is a measure of the efficiency with which collisions between hydrogen atoms can couple the spin states of the H i atoms into equilibrium with the regular (kinetic) gas temperature. The coupling is strong (and so \(T_{\rm S}\approx T_{\rm K}\)) roughly until the value \(x_{\rm c}=1\) is reached at \(z=72.4\), after which \(T_{\rm S}\) (which is shown in Fig. 1) begins to approach the CMB temperature.
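The competition between the two couplings can be made explicit through the standard inverse-temperature weighting of the spin temperature; substituting it into \(T_{\rm b}\propto(1-T_{\gamma}/T_{\rm S})\) recovers the factor \(\frac{x_{\rm c}}{1+x_{\rm c}}\left(1-\frac{T_{\gamma}}{T_{\rm K}}\right)\) of eq. (2). A minimal sketch:

```python
def spin_temperature(T_K, T_gamma, x_c):
    """Spin temperature from collisional coupling to the gas (strength x_c) and
    radiative coupling to the CMB: 1/T_S = (1/T_gamma + x_c/T_K) / (1 + x_c)."""
    return (1.0 + x_c) / (1.0 / T_gamma + x_c / T_K)

# Illustrative limits (temperatures in K):
print(spin_temperature(10.0, 40.0, 1e3))   # strong coupling: T_S is close to T_K = 10
print(spin_temperature(10.0, 40.0, 1e-3))  # weak coupling:   T_S is close to T_gamma = 40
```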
The sky-averaged 21-cm brightness temperature, as a function of \(\nu\) (or \(z\)), is referred to as the 21-cm global (or mean) signal. Experiments measuring the global signal require a single, well-calibrated antenna. Therefore, they are relatively simple and advantageous to consider as the first step toward detecting the dark ages signal. For the standard set of cosmological parameters, we use the CAMB6 (Lewis & Challinor, 2007; Lewis & Bridle, 2002) cosmological perturbation code to precisely generate the 21-cm global signal7. The 21-cm global signal during the dark ages is always negative (corresponding to absorption relative to the CMB). The 21-cm global signal from the dark ages in the standard cosmological model is shown in comparison with other cases later in this paper (e.g., the black dashed line in Fig. 3). The peak amplitude of the signal is \(40.2\,{\rm mK}\) at \(\nu=16.3\,{\rm MHz}\) (\(z=86\)).
Footnote 6: [http://camb.info](http://camb.info)
Footnote 7: To extract the 21-cm global signal from CAMB, we run CAMB twice, once with temperature units on and once with temperature units off, and take the ratio of the transfer functions in the two cases.
In addition, the dark ages can be probed by measuring the fluctuations in the 21-cm signal at various length scales, i.e., the power spectrum. These fluctuations are mainly due to the fluctuations in the gas density, temperature, and \(x_{\rm c}\). To accurately predict the 21-cm power spectrum, we use CAMB [which includes small additional effects (Lewis & Challinor, 2007; Ali-Haimoud et al., 2014) not included in eq. (2)] and add to it redshift space distortions caused by the line-of-sight component of the peculiar velocity of the gas (Kaiser, 1987; Bharadwaj & Ali, 2004; Barkana & Loeb, 2005b) and the light-cone effect (Barkana & Loeb, 2006; Mondal et al., 2018), as detailed in our previous paper (Mondal & Barkana, 2023) [Note that the Alcock-Paczynski effect (Alcock & Paczynski, 1979; Ali et al., 2005; Nusser, 2005; Barkana, 2006) is not relevant since we do not vary the cosmological parameters in this paper]. The 21-cm power spectrum from the dark ages for the standard cosmological model is shown in comparison with other cases later in this paper (e.g., the dashed lines in Figs. 8 and 9). The power increases initially as the adiabatic expansion cools the gas faster than the CMB, and density fluctuations grow due to gravity. However, eventually the power decreases as the declining density reduces \(x_{\rm c}\). For example, the maximum squared fluctuation \(\Delta^{2}\) at \(k=0.1\,{\rm Mpc}^{-1}\) is \(0.44\,{\rm mK}^{2}\) at \(z=51\). Measuring the dark ages power spectrum is substantially more difficult than measuring the global signal, but it contains potentially much more information (Loeb & Zaldarriaga, 2004). As we have recently shown (Mondal & Barkana, 2023), for standard cosmology the global signal offers a relatively accessible first step to observing the dark ages, with the power spectrum requiring a much greater investment to get started, but offering far greater potential returns. More specifically, a single lunar global antenna can make a novel test of the standard cosmological model, showing whether it can describe the dark ages or if instead there is some surprise in cosmic history. An array of antennas (either global antennas for increased integration time, or an interferometric array) can yield some cosmological parameters (the overall baryon density and the Helium fraction) at an accuracy competitive with Planck, and a very large interferometer can outperform Planck on these parameters as well as the total mass of neutrinos.
Figure 1: The evolution of the CMB temperature \(T_{\gamma}\), gas temperature \(T_{\rm K}\) and the spin temperature \(T_{\rm S}\) (in units of K), as a function of \(\nu\) (or \(z\) as the top \(x\)-axis). The total radio background temperature \(T_{\rm B}\) for the ERB model with \(A_{\rm r}=0.4\) is also shown, along with the corresponding \(T_{\rm S}\).
Figure 2: The evolution of the collisional coupling coefficient \(x_{\rm c}\) as a function of \(\nu\) (or \(z\) as the top \(x\)-axis). This also shows \(x_{\rm c}T_{\gamma}/T_{\rm K}\), which gives the effective coupling in the case with \(T_{\rm R}\) for \(A_{\rm r}=0.4\).
## 3 The excess radio background model
### Global signal
To calculate the global signal for the excess radio background (ERB) model, we use eq. (2) together with the value of \(T_{\rm b}\) from CAMB, in order to extract \(x_{\rm c}\) (which we show in Fig. 2; note that this calculation neglects the residual ionized fraction and other tiny effects). Now, in the presence of a radio background, we change the final factors in eq. (2) from
\[\frac{x_{\rm c}}{1+x_{\rm c}}\left(1-\frac{T_{\gamma}}{T_{\rm K}}\right)\]
to
\[\frac{x_{\rm c}T_{\gamma}/T_{\rm R}}{1+x_{\rm c}T_{\gamma}/T_{\rm R}}\left(1- \frac{T_{\rm R}}{T_{\rm K}}\right)\.\]
Here, \(x_{\rm c}\) is the same value as before, i.e., we use it to denote the value in the absence of the ERB; note that this differs from the notation used in some previous papers. The effective coupling constant in the ERB case is \(x_{\rm c}T_{\gamma}/T_{\rm R}\) (which has often been denoted just \(x_{\rm c}\) in the ERB case). We use this notation in order to show simply and clearly how the ERB changes the 21-cm signal (as opposed to the previous notation which hides part of the ERB effect within the change in \(x_{\rm c}\)). We note that an additional CMB heating mechanism suggested by Venumadhav et al. (2018) would be even more important in the presence of an ERB; however, Meiksin (2021) showed that this mechanism corresponds to a balanced internal energy exchange that does not significantly heat the gas and thus should not be included.
The total radio background at 21 cm at redshift \(z\) (including the CMB plus the ERB) is assumed to be (as in Fialkov & Barkana, 2019):
\[T_{\rm R}=T_{\gamma}\left[1+A_{\rm r}\left(\frac{v_{\rm obs}}{78\ {\rm MHz}} \right)^{\alpha}\right]\, \tag{3}\]
where \(v_{\rm obs}=1420\ {\rm MHz}/(1+z)\). Here the amplitude \(A_{\rm r}\) of the ERB is measured relative to the CMB at an observed frequency of 78 MHz, approximately the center of the tentative EDGES absorption feature. We assume \(\alpha=-2.6\) to match the spectrum of the extragalactic radio background. Fialkov & Barkana (2019) showed that a minimum value of \(A_{\rm r}=1.9\) is required in order to match the EDGES feature, when combined with models covering a wide range of possible astrophysical parameters. The level of the extragalactic radio background (its \(2\sigma\) upper limit) gives an upper limit of 375 on the possible value of \(A_{\rm r}\).
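A minimal sketch of how the ERB enters the global signal, given the standard-model quantities \(x_{\rm c}(z)\) and \(T_{\rm K}(z)\) (assumed here to have been extracted from CAMB as described above); it implements eq. (2) with the replaced coupling and temperature factors, together with eq. (3):

```python
import numpy as np

def T_radio(z, A_r, alpha=-2.6):
    """Total radio background of eq. (3): CMB plus a synchrotron-like ERB."""
    T_gamma = 2.725 * (1.0 + z)
    nu_obs = 1420.0 / (1.0 + z)                      # observed frequency in MHz
    return T_gamma * (1.0 + A_r * (nu_obs / 78.0) ** alpha)

def T21_global_erb(z, x_c, T_K, A_r, Ob_h2=0.02242, Om_h2=0.1424):
    """Global 21-cm signal (in mK) with T_gamma replaced by T_R and the coupling
    rescaled to x_c * T_gamma / T_R, as in the modified version of eq. (2)."""
    T_gamma = 2.725 * (1.0 + z)
    T_R = T_radio(z, A_r)
    x_eff = x_c * T_gamma / T_R                      # effective coupling coefficient
    prefac = 54.0 * (Ob_h2 / 0.02242) * (Om_h2 / 0.1424) ** -0.5 * ((1.0 + z) / 40.0) ** 0.5
    return prefac * x_eff / (1.0 + x_eff) * (1.0 - T_R / T_K)
```

In the limit of a very strong background (\(T_{\rm R}\gg x_{\rm c}T_{\gamma}\) and \(T_{\rm R}\gg T_{\rm K}\)) this expression reduces to the saturated form of eq. (4).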
Fig. 3 shows the size of the global 21-cm signal from the dark ages as a function of \(\nu\) (and \(z\)), for the excess radio background model with various values of \(A_{\rm r}=[0.001,\,0.01,\,0.1,\,0.4,\,375]\). Also shown is the standard case which corresponds to \(A_{\rm r}=0\), i.e., CMB-only and no ERB. The absorption signal increases sharply with \(z\) for the ERB models (hence the \(y\)-axis of the bottom panel is logarithmic, which is unusual for plots of the global signal). The signal also increases with \(A_{\rm r}\), but even \(A_{\rm r}=0.1\) nearly saturates the dark ages signal, i.e., the signal becomes independent of \(A_{\rm r}\) only slightly beyond that value. In Fig. 3, we also show the instrumental noise for integration time \(t_{\rm int}=1,\!000\,{\rm hrs}\) for a bin around each \(\nu\) of width \(\Delta({\rm ln}\,v)=1\). This lets us illustrate the overall signal-to-noise ratio (S/N) in the figure, using a bin size of order the central value. For the Fisher matrix predictions, though, we used 40 frequency (or redshift) bins in the range \(6.56\leq\nu\leq 46.56\), with a bin width of \(\Delta\nu=1\ {\rm MHz}\). We chose the upper end of the frequency range to be \(z\approx 30\), which is the typical redshift where galaxies at cosmic dawn first form in sufficient numbers to significantly affect the 21-cm signal (Reis et al., 2021). As expected, the noise increases sharply with redshift. Indeed, the redshift dependence of the thermal noise and of the saturated radio signal are somewhat similar, by coincidence (see below).
We now label as the saturated ERB signal the case with the maximum \(A_{\rm r}=375\), and refer to the 21-cm brightness temperature in this case as \(T_{\rm b}^{\rm sat}\). We then examine the approach to saturation by showing the fractional difference \([1-T_{\rm b}(A_{\rm r})/T_{\rm b}^{\rm sat}]\), which is always positive, as a function of \(\nu\). Fig. 4 shows this for \(A_{\rm r}=0,\,0.001,\,0.01,\,0.4\), and 1. We find that the maximum value of the fractional difference is 0.1 for \(A_{\rm r}=0.4\), so that this is the value that gives at least 90% saturation throughout the dark ages. Thus, it is a minimum \(A_{\rm r}\) value for being near saturation, which we label \(A_{\rm 90}\).
Before continuing, we use the \(A_{\rm r}=0.4\) case to illustrate how the ERB affects the 21-cm signal. Fig. 1 shows \(T_{\rm S}\) for this ERB case, and Fig. 2 shows the effective collisional coupling coefficient in the same case. On the one hand, \(T_{\rm R}\) in this ERB case is higher than \(T_{\gamma}\), with a relative factor that rises rapidly towards high redshift. On the other hand, the effective coupling is suppressed by the high radio background, which makes it actually decrease with redshift at the high end. As a result, the effective coupling coefficient never
Figure 3: The size of the global 21-cm signal as a function of \(\nu\) for the standard \(\Lambda\)CDM model (black dashed line) and the excess radio model (solid lines) with \(A_{\rm r}=0.001,\,0.01,\,0.1,\,0.4\) and 375. We also show the expected thermal noise for a global signal experiment observing for integration time 1,000 hrs (grey dotted line) for a bin around each \(\nu\) of width \(\Delta({\rm ln}\,v)=1\). The same results are shown twice, with either a standard linear \(y\)-axis (top panel) or a logarithmic \(y\)-axis (bottom panel) for an easier comparison among the models.
even reaches as high as 0.1. The balance between the high radio background and the low effective coupling keeps \(T_{\rm S}\) from coming too close to \(T_{\rm R}\) at high redshifts, and leads to a saturated signal (in the limit of \(T_{\rm R}\rightarrow\infty\), or more specifically \(T_{\rm R}\gg x_{\rm c}T_{\gamma}\) and also \(T_{\rm R}\gg T_{\rm K}\)) with a value of
\[T_{\rm b}^{\rm sat}=-54.0\,{\rm mK}\,\left(\frac{\Omega_{\rm b}h^{2}}{0.02242 }\right)\left(\frac{\Omega_{\rm m}h^{2}}{0.1424}\right)^{-\frac{1}{2}}\left( \frac{1+z}{40}\right)^{\frac{1}{2}}x_{\rm c}\frac{T_{\gamma}}{T_{\rm K}}\, \tag{4}\]
consistent with the dark ages section in Fialkov & Barkana (2019). This global signal turns out to have a spectral shape that is somewhat similar to the foreground, which induces a partial degeneracy between them (see below). We emphasize that this is a coincidence. Indeed, the saturated signal spectrum is unrelated to the ERB spectrum, and is driven mainly by the collisional coupling coefficient \(x_{\rm c}\), which rises roughly as \((1+z)^{3}\) (driven largely by the increasing cosmic density).
When we consider the ability of lunar or space-based global experiments to measure the dark ages signal, we account for foreground removal (in an optimistic scenario) by adding a term in the shape of the synchrotron foreground (i.e., \(A\nu^{-2.6}\) with the amplitude a free parameter). Here, the sky brightness temperature \(T_{\rm sky}=180\times(\nu/180\,{\rm MHz})^{-2.6}\) K (Furlanetto et al., 2006). A contributing component of this shape in the model cannot be distinguished from the foreground. Then, for any signal, we can determine the statistical significance of its detection (i.e., the detection of the difference between the expected signal and a zero signal) as follows. We define the signal as a parameter \(\beta\) times the expected signal (i.e., the expected signal corresponds to \(\beta=1\), and the absence of the signal to \(\beta=0\)). We then fit to the data, using a Fisher analysis to extract the error \(\delta\beta\) in the measurement of \(\beta\) (assuming all cosmological parameters are fixed at their fiducial values as determined by Planck). This tells us the significance of the detection of the signal relative to zero, i.e., assuming Gaussian thermal noise, the detection significance is a number of \(\sigma\) equal to \(1/\delta\beta\). For the estimation of noise in the global signal measurement, we assume a redshift range of 30-200 with a bandwidth of \(\Delta\nu=1\,{\rm MHz}\) and explore three different integration times of \(t_{\rm int}=1,\!000\,{\rm hrs}\), 10,000 hrs and 100,000 hrs. We note that in practice, an array of global antennas can be used to increase the total effective integration time.
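The resulting error \(\delta\beta\) can be obtained from a two-parameter Fisher matrix (the signal amplitude \(\beta\) and the free foreground amplitude). The sketch below assumes the standard radiometer noise per channel, \(\sigma(\nu)=T_{\rm sky}(\nu)/\sqrt{\Delta\nu\,t_{\rm int}}\), with \(T_{\rm sky}\) as given above; the exact noise model of a real instrument may differ.

```python
import numpy as np

def detection_significance(nu_MHz, dT_dbeta_mK, t_int_hr, dnu_MHz=1.0):
    """Significance (in sigma) of measuring the amplitude beta of a template
    dT_dbeta_mK(nu) [mK], marginalized over a synchrotron-shaped foreground term
    A * nu^-2.6 (the optimistic foreground treatment described in the text)."""
    nu = np.asarray(nu_MHz, dtype=float)
    T_sky_mK = 180.0e3 * (nu / 180.0) ** -2.6                # sky temperature in mK
    sigma = T_sky_mK / np.sqrt(dnu_MHz * 1e6 * t_int_hr * 3600.0)
    templates = (np.asarray(dT_dbeta_mK, dtype=float), (nu / 180.0) ** -2.6)
    F = np.zeros((2, 2))
    for a, da in enumerate(templates):
        for b, db in enumerate(templates):
            F[a, b] = np.sum(da * db / sigma ** 2)           # Fisher matrix over channels
    delta_beta = np.sqrt(np.linalg.inv(F)[0, 0])             # marginalized error on beta
    return 1.0 / delta_beta
```

Here `dT_dbeta_mK` is the expected signal when detecting relative to zero, or the difference between the ERB and standard signals when distinguishing the two models.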
We first consider the significance of the detection of the standard global signal, without an ERB. We find that it would be distinguishable from zero at \(4.12\sigma\) for \(t_{\rm int}\) of 1,000 hrs, and \(41.2\sigma\) for 100,000 hrs (see the upper panel of Table 1).
For the ERB models, there are two interesting questions to ask: how well the models can be detected (i.e., distinguished from a zero signal), and how well they can be distinguished from the standard signal (i.e., showing that the signal is anomalous and must correspond to exotic physics); in the latter case, the signal model corresponds to the standard signal plus \(\beta\) times the difference between the ERB model and the standard model. Fig. 5 shows the significance of the detection (i.e., \(1/\delta\beta\)) of the signal relative to the standard signal (the solid curves), and the significance of a detection in general (i.e., relative to zero; dashed curves). Depending on the value of \(A_{\rm r}\), either significance can be higher. The significance of the detection relative to the standard signal increases with increasing \(A_{\rm r}\) (as the ERB signal differs more and more from the standard case), initially going as \(A_{\rm r}\) until it saturates roughly beyond \(A_{\rm 90}\). The significance of the detection relative to zero signal ranges between that for the standard signal (at small \(A_{\rm r}\)) and that for the saturated ERB signal (at large \(A_{\rm r}\)), with a small trough at \(A_{\rm r}=0.0204\); this is the value of \(A_{\rm r}\) with the smallest significance of detecting the ERB signal (relative to zero signal), the significance being \(2.04\sigma\), \(6.46\sigma\), and \(20.4\sigma\), for \(t_{\rm int}=1,\!000\,{\rm hrs}\), 10,000 hrs, and 100,000 hrs, respectively.
The significance of detecting the ERB global signal in these two ways is also listed in Table 2, for various values of \(A_{\rm r}\) and \(t_{\rm int}\). A 1,000hr global experiment can detect the saturated ERB signal at \(6.38\sigma\) significance, and distinguish it from the standard signal at \(9.04\sigma\). We note that both of these are substantially stronger statistical results than the detection of the standard signal itself (\(4.12\sigma\) in this case), due to the greater amplitude of the signal in the ERB case. With \(t_{\rm int}=100,\!000\,{\rm hrs}\), the significance levels would increase \(\times 10\).
There are some subtleties in these results. First, there is the issue
\begin{table}
\begin{tabular}{c c c c c c} \hline & \multicolumn{3}{c}{Integration time} \\ \hline & 1,000 hrs & 10,000 hrs & 100,000 hrs \\ \hline Global signal & 4.12 & 13.0 & 41.2 \\ \hline & \multicolumn{5}{c}{Configuration} \\ \hline & G & A & B & C & D \\ \hline Power spectrum & 3.01 & 6.71 & 66.6 & 81.6 & 690 \\ \hline \end{tabular}
\end{table}
Table 1: The significance (# of \(\sigma\)) of the detection of the standard signal.
Figure 5: The significance (# of \(\sigma\)) of the detection (i.e., \(1/\delta\beta\)) as a function of \(A_{\rm r}\) (maximized over \(z_{\rm r}\); see the text). We show two detection scenarios: distinguishing the ERB global signal from the standard case (solid lines), and detecting it relative to zero signal (dashed lines).
of how it is possible that distinguishing the ERB signal from the standard signal is easier than from zero, in some cases. The answer is the degeneracy with the foreground term; since the ERB signal (most clearly in the saturated case) has a shape versus frequency that is similar to the foreground, it is more difficult to detect it than would be expected just based on its amplitude, while the standard global signal has a shape that differs more clearly from the shape of the foreground. A related subtlety has to do with the redshift range of the fitting. When measuring the ERB signal relative to the standard signal, including the highest redshifts adds more information, but this goes into determining more accurately the foreground term (i.e., the signal component that is degenerate with the foreground) rather than the ERB signal itself; in other words, \(\delta\beta\) is higher due to the stronger degeneracy with the foreground term. Fig. 6 shows the significance of the detection (for ERB detection relative to the standard model) versus the maximum redshift \(z_{\rm r}\), for the saturated case (\(A_{\rm r}=375\)). The maximum for all the curves occurs at \(z_{\rm r}=126\). For example, for \(t_{\rm int}=1,\)000 hrs, the maximum significance is 9.0 \(\sigma\) while the value for \(z_{\rm r}=200\) is 8.4 \(\sigma\). Thus, actually Fig. 5 shows the significance not for \(z_{\rm r}=200\) but rather for the value of \(z_{\rm r}\) that gives the maximum significance in each case. We note that Fig. 6 also doubles as showing the case when the excess background was produced only at redshift \(z_{\rm r}\) (and not before), and we fit to observations only up to that \(z_{\rm r}\). For example, a measurement of the dark ages global 21-cm signal between redshifts 30 and 56 to the precision of thermal noise from a 1,000 hour integration would be able to distinguish the saturated signal from the standard signal at 5\(\sigma\).
Looking again at Fig. 5, we note that the significance for detecting the signal (relative to zero) varies over a limited range. However, the significance for distinguishing the ERB model from the standard cosmological model varies over a wide range, which leads to the question of how small a value of \(A_{\rm r}\) can be distinguished. Table 3 shows the \(A_{\rm r}\) values for which the ERB model can be distinguished from the standard model at various confidence levels. For example, a measurement of the global 21-cm signal to the precision of thermal noise from a 1,000 hour integration would be able to detect a minimum value of \(A_{\rm r}=0.0389\) at 5\(\sigma\). This is lower than the minimum value required to explain EDGES by a factor of 49; it also corresponds to only 0.0104% of the value that would explain the entire extragalactic radio background (what we took as the saturated ERB case). A 100,000 hour integration would detect an \(A_{\rm r}\) as low as 0.00175 at 5\(\sigma\).
### Power spectrum
As discussed earlier, measuring the power spectrum is a much more challenging task than measuring the global signal. However, while the global 21-cm signal from the dark ages is a powerful cosmological probe, it is limited in what it can tell us. The power spectrum, on the other hand, is a much richer dataset that has the potential to reveal a wealth of information about the early Universe. To calculate the power spectrum signal for the ERB model, we note that if the brightness temperature is written as a product of various factors, then taking the logarithm and then the derivative shows that, at linear order, the fractional (relative) perturbation in the temperature is the sum of the fractional perturbation in each of the factors. In particular, using eq. (2), we get (again neglecting fluctuations in the residual ionized fraction and other tiny effects within CAMB):
\[\delta_{\rm Tb}=\delta+\frac{1}{1+x_{\rm c}}\delta_{xc}+\frac{T_{\gamma}}{T_ {\rm K}-T_{\gamma}}\delta_{\rm TK}\, \tag{5}\]
where each \(\delta\) denotes a dimensionless (fractional) perturbation: in the baryon density (\(\delta\)), the collisional coupling coefficient (\(\delta_{xc}\)), and the gas temperature (\(\delta_{\rm TK}\)). At each redshift we extract \(\delta_{xc}\) from this expression using CAMB to get all the other quantities (including \(\delta_{\rm Tb}\)).
Now, in the presence of a radio background, eq. (5) becomes instead:
\[\delta_{\rm Tb}=\delta+\frac{T_{\rm R}}{T_{\rm R}+x_{\rm c}T_{\gamma}}\delta_ {xc}+\frac{T_{\rm R}}{T_{\rm K}-T_{\rm R}}\delta_{\rm TK}. \tag{6}\]
From this we get the monopole 21-cm perturbation in the ERB model. As mentioned in the introduction for the standard case, we then add the effect of line-of-sight velocity gradients, the light-cone effect, and also the effect of the angular resolution of radio interferometers, as in our previous work (Mondal & Barkana, 2023). Henceforth we
\begin{table}
\begin{tabular}{l c c c c} \hline & & \multicolumn{3}{c}{Integration time} \\ \hline & \(A_{\rm r}\) & 1,000 hrs & 10,000 hrs & 100,000 hrs \\ \hline & 0.001 & 3.86 & 12.2 & 38.6 \\ & 0.01 & 2.38 & 7.50 & 23.7 \\ Relative to zero & 0.1 & 4.40 & 13.9 & 44.0 \\ & 0.4 & 5.89 & 18.6 & 58.9 \\ & 375 & 6.38 & 20.2 & 63.8 \\ \hline & 0.001 & 0.292 & 0.922 & 2.92 \\ & 0.01 & 2.22 & 7.02 & 22.2 \\ Relative to standard & 0.1 & 6.91 & 21.8 & 69.1 \\ & 0.4 & 8.45 & 26.7 & 84.5 \\ & 375 & 9.04 & 28.6 & 90.4 \\ \hline \end{tabular}
\end{table}
Table 2: The significance (# of \(\sigma\)) of detecting the ERB global signal relative to zero signal or the standard signal.
\begin{table}
\begin{tabular}{l c c c c c} \hline & & \multicolumn{4}{c}{Detection limit} \\ \hline & Integration time & 1\(\sigma\) & 2\(\sigma\) & 3\(\sigma\) & 5\(\sigma\) \\ \hline & 1,000 hrs & 0.375 & 0.866 & 1.53 & 3.89 \\ \(A_{\rm r}\) [\(10^{-2}\)] & 10,000 hrs & 0.108 & 0.225 & 0.353 & 0.640 \\ & 100,000 hrs & 0.0333 & 0.0673 & 0.102 & 0.175 \\ \hline \end{tabular}
\end{table}
Table 3: The minimum value of \(A_{\rm r}\) (in units of \(10^{-2}\)) that allows the ERB global signal to be distinguished from the standard (\(A_{\rm r}=0\)) case, at various levels of statistical significance.
Figure 6: The significance (# of \(\sigma\)) of the detection (i.e., \(1/\delta\beta\)) for distinguishing the ERB global signal from the standard case, shown as a function of the maximum redshift \(z_{\rm r}\), for the saturated \(A_{\rm r}=375\) case.
consider only the total, spherically-averaged power spectrum of 21-cm brightness fluctuations.
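A sketch of how the linear coefficients change between eqs. (5) and (6), with the fractional perturbations \(\delta\), \(\delta_{xc}\) and \(\delta_{\rm TK}\) taken as given (e.g., extracted from CAMB as described above):

```python
def delta_Tb(delta, delta_xc, delta_TK, x_c, T_K, T_gamma, T_R=None):
    """Fractional 21-cm brightness perturbation: eq. (5) in the standard case
    (T_R=None, i.e. T_R = T_gamma) and eq. (6) in the presence of an ERB."""
    if T_R is None:
        T_R = T_gamma
    coeff_xc = T_R / (T_R + x_c * T_gamma)   # reduces to 1/(1+x_c) when T_R = T_gamma
    coeff_TK = T_R / (T_K - T_R)             # reduces to T_gamma/(T_K-T_gamma) when T_R = T_gamma
    return delta + coeff_xc * delta_xc + coeff_TK * delta_TK
```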
Fig. 7 shows the 21-cm power spectrum as a function of \(\nu\) (or \(z\)), for the standard \(\Lambda\)CDM model and for the ERB model at various values of \(A_{\rm r}\), at \(k=0.1\,{\rm Mpc}^{-1}\). The power spectrum rises with \(A_{\rm r}\) until it saturates for \(A_{\rm r}\geq A_{\rm 90}\). This is similar to the global case (compare Fig. 3), except that the rise is tempered at the highest redshifts (compared to the increasing steepness for the global signal), since the relative fluctuations (in density as well as the other quantities) were smaller at early times. To illustrate plausible measurements of the 21-cm power spectrum from the dark ages, we follow Mondal & Barkana (2023) and assume five observational configurations, which are listed in Table 4. Here G is meant to roughly correspond to the statistical power of the global case with \(t_{\rm int}=1,\!000\,{\rm hrs}\) (in terms of the measurement accuracy of a combination of cosmological parameters; Mondal & Barkana 2023), A doubles the collecting area, B and C each increase one of the array parameters from A by a factor of 10, and D increases both parameters. In Fig. 7, we also show the \(1\sigma\) noise (thermal plus cosmic variance, as in Mondal & Barkana (2023)) for our minimal G configuration (for bins with \(\Delta(\ln\nu)=1\) and \(\Delta(\ln k)=1\)); for the power spectrum, the noise increases with redshift significantly faster than the signal, even in the ERB cases. For the Fisher matrix predictions using the power spectrum, we used 8 frequency (or redshift) bins in the range \(5.81\leq\nu\leq 45.81\) with a bin width of \(\Delta\nu=5\,{\rm MHz}\) and 11 logarithmic \(k\) bins covering the range \(0.00779\leq k<1.91\,{\rm Mpc}^{-1}\) with bin width \(\Delta(\ln k)=0.5\).
Fig. 8 also shows the 21-cm power spectrum, now as a function of wavenumber \(k\) at various redshifts during the dark ages, for the standard \(\Lambda\)CDM model and for the saturated ERB model with \(A_{\rm r}=375\). The shapes of the power spectra are almost the same for the standard and ERB models (roughly following the shape of the density power spectrum), but the amplitude behaves quite differently. We also show in Fig. 8 the \(1\sigma\) noise (thermal plus cosmic variance) for the G configuration, at \(z=75\) and 40. The noise increases rapidly with redshift, and the maximum signal-to-noise ratio (S/N) occurs at the minimum redshift we consider, i.e., \(z=30\).
Fig. 9 again shows the 21-cm power spectrum but now in the other cut in terms of the two variables, i.e., as a function of \(\nu\) (or \(z\)) at various wavenumbers, for the standard \(\Lambda\)CDM model and for the saturated ERB model. Here it is easier to see the difference between the two models. For the standard case, as discussed above, the power spectrum increases initially with time as the amplitude of the signal increases due to the gas cooling faster than the CMB, and as fluctuations increase due to gravity. However, the power spectrum then decreases as the declining density reduces \(x_{\rm c}\). In contrast, for the ERB model, the strong radio background at high redshifts results in the power spectrum decreasing monotonically with time over most of the redshift range. As expected, the power spectrum also increases as we go from large scales to small scales. We also show the \(1\sigma\) noise (thermal plus cosmic variance) for the G configuration, at \(k=0.1\,{\rm Mpc}^{-1}\) and \(1\,{\rm Mpc}^{-1}\).
Before we consider the power spectrum measurements for the ERB case, as in the global case we first consider the detection significance of the standard power spectrum signal (relative to zero signal). The standard power spectra would be distinguishable from zero at \(3.01\sigma\) for the G configuration, going up to \(690\sigma\) for configuration D (see the lower panel of Table 1). In terms of the detection significance, each configuration is comparable to a certain integration time for a global experiment: G [534 hrs], A [2,650 hrs], B [261,000 hrs], C [392,000 hrs], and D [28.0 million hrs]. This demonstrates our conclusion from Mondal & Barkana (2023) that it is more difficult to start with the 21-cm power spectrum (as even the G configuration requires quite a large collecting area), but eventually interferometers can gather far more cosmological information than is plausible for global experiments.
Next we calculate the significance with which the power spectrum of ERB models can be distinguished from the standard cosmological model, or detected (distinguished from zero signal). Fig. 10 shows the significance of the detection (for these two scenarios) as a function of \(A_{\rm r}\), for the various observational configurations. Here the significance in both scenarios monotonically increases with \(A_{\rm r}\), and it is always easier to detect an ERB signal than to distinguish it from the standard
\begin{table}
\begin{tabular}{l c c c c c} \hline & \multicolumn{5}{c}{Configuration} \\ \hline & D & C & B & A & G \\ \hline \(A_{\rm coll}\) [km\({}^{2}\)] & 100 & 100 & 10 & 10 & 5 \\ \(t_{\rm int}\) [hrs] & 10,000 & 1,000 & 10,000 & 1,000 & 1,000 \\ \hline \end{tabular}
\end{table}
Table 4: The 21-cm power spectrum observational configurations in terms of the collecting area \(A_{\rm coll}\) and integration time \(t_{\rm int}\).
Figure 8: The 21-cm power spectrum as a function of wavenumber \(k\) during the dark ages, for the standard \(\Lambda\)CDM model (long dashed lines) and the ERB model (solid lines) with \(A_{\rm r}=375\), at redshifts \(z=\) [150, 125, 75, 50, 40, 30]. We also show the \(1\sigma\) noise (thermal plus cosmic variance) for our G configuration (dotted lines), at \(z=75\) and 40 (for bins with \(\Delta(\ln\nu)=1\) and \(\Delta(\ln k)=1\)).
Figure 7: The 21-cm power spectrum at \(k=0.1\,{\rm Mpc}^{-1}\) as a function of \(\nu\) (or \(z\) as the top \(x\)-axis) for the standard \(\Lambda\)CDM model (black dashed line) and the excess radio model (solid lines) at various \(A_{\rm r}\). We also show the \(1\sigma\) noise (thermal plus cosmic variance) for our G configuration (grey dotted line).
case. For detecting the signal, the significance increases smoothly from the value for the standard case (at low \(A_{\rm r}\)) to that for the saturated ERB case (at high \(A_{\rm r}\)), with a transition occurring over the range of \(A_{\rm r}\sim 0.1-1\). For distinguishing the signal from the standard case, similarly to the global signal, the significance increases roughly as \(A_{\rm r}\) until it saturates beyond \(\sim A_{\rm 90}\).
The significance of detecting the ERB power spectrum in these two ways is also listed in Table 5, for various values of \(A_{\rm r}\) and the various configurations. With the minimal G configuration, the saturated ERB signal can be detected at 8.73\(\sigma\) significance, and distinguished from the standard case at 5.94\(\sigma\). These values are comparable to the 1,000 hour global case (Table 2), and again significantly stronger statistically than the detection of the standard case itself (3.01\(\sigma\) in the same configuration). In the A configuration, the significance of the two types is nearly as good as the 10,000 hour global case, while the B and C configurations exceed the 100,000 hour global case (and D is much better still). Note that configuration C somewhat outperforms B, due to its higher angular resolution. These comparisons between the statistical strengths of the global signal and power spectrum for the ERB model are generally similar to the comparison for the standard cosmological model (Table 1).
For distinguishing the ERB model from the standard case, Table 6 lists the minimum \(A_{\rm r}\) values for various levels of significance, for the various observational configurations. For example, configuration G can detect a minimum value of \(A_{\rm r}=1.06\) at 5\(\sigma\), and the detection threshold can be improved by an order of magnitude in each step of going to A, to B or C, and then to D, which can go down to \(A_{\rm r}=0.000708\). Generally, the global signal is relatively more sensitive to low values of \(A_{\rm r}\) (as opposed to high \(A_{\rm r}\)) than the power spectrum is. This is likely due to the different redshift dependence. Since the thermal noise for the power spectrum measurement goes as the square of the system temperature, \(T_{\rm sys}^{2}\), it increases much faster with redshift than the signal, unlike the global signal case. Therefore, the power spectrum mostly measures lower \(z\), while the global signal can take advantage of higher redshifts, where the ERB signal depends most strongly on the amplitude \(A_{\rm r}\). Of course, the global and fluctuation measurements are observationally independent, so ideally both measurements would be available to provide a useful cross-check.
acoustic oscillations on the 21-cm signal (Barkana, 2018), but this signature is erased by drag at early times throughout the mDM parameter space that remains consistent with observational constraints, particularly from the CMB (Kovetz et al., 2018). It is possible to restore this signature in an interacting millicharged dark matter model (Liu et al., 2019; Barkana et al., 2022), which is more elaborate (adding a long-range interaction between the millicharged part and the rest of the DM) but also viable over a much wider range of parameters.
Here we consider the dark ages global 21-cm signal from the simpler, non-interacting mDM model. The parameters of this model are the fraction of the DM mass density that is millicharged (\(f_{\rm X}\)), the electric charge of the millicharged particles (\(\epsilon\), a fraction of the electron charge \(e\)), and the mass of the millicharged particles (\(m_{\rm X}\)). We consider five different models with parameter values (see Table 7) that roughly span the range that is allowed by current constraints and that can explain the EDGES result (Kovetz et al., 2018).
Fig. 11 shows the size of the global 21-cm signal from the dark ages as a function of \(\nu\) (and \(z\)), for the mDM models considered in this work. We also show the standard case for comparison, and the instrumental noise for integration time \(t_{\rm int}=\)1,000 hrs for a bin around each \(\nu\) of width \(\Delta(\ln\nu)=1\). Unlike the ERB model, the mDM models have a shape versus frequency that is generally similar to that of the standard model, except that the variation with redshift is stronger. In the mDM model, the colder gas has two 21-cm effects: on the one hand, the colder gas (relative to the CMB) tends to produce stronger 21-cm absorption, but on the other hand, colder gas has weaker collisional coupling. At the high-redshift end, where the collisional coupling is above unity (Fig. 2), even lowering it by a small factor has a limited effect, so that the 21-cm global absorption is stronger in the mDM cases than in the standard case. However, at the low-redshift end of the dark ages, the effect on the coupling becomes dominant, and the absorption is actually weaker in mDM than in the standard model (this is reversed later on at cosmic dawn, when the coupling comes from Lyman-\(\alpha\) photons).
As for the ERB model, we calculate two types of statistical significance, for how well each mDM model can be detected (i.e., distinguished from a zero signal), and how well it can be distinguished from the standard signal (i.e., showing that the signal is anomalous and must correspond to exotic physics). As before, we assume observations spanning the redshift range \(30-200\) with bins of \(\Delta\nu=1\). Table 8 lists the results. While the global signal in the mDM models varies faster with frequency than the standard signal, this also brings the mDM models closer at high redshifts to the slope of the thermal noise. As a result, the order in terms of which type of detection is easier, varies among the models. Overall, with \(t_{\rm int}=\) 1,000 hrs the mDM models can be detected at \(4.68-7.16\sigma\) (all higher than the \(4.12\sigma\) for detecting the standard model), and can be distinguished from the standard model at \(2.22-9.26\sigma\).
## 5 Discussion and Conclusions
The redshifted 21-cm signal from the dark ages is a powerful cosmological probe with the potential to constrain cosmology. This has been previously shown under the assumption of the standard cosmology. However, there are various studies on non-standard possibilities during the dark ages, which suggest that exotic models could be easier to detect than the standard case. In this paper, we have studied two non-standard models that are consistent with current constraints and that correspond to exotic dark matter properties that go beyond the cold dark matter model: the excess radio background (ERB) model and the millicharged dark matter (mDM) model. We have investigated the effects of these two non-standard models on the redshifted 21-cm signal from dark ages. Both of these models have been mainly motivated by the tentative EDGES detection, but more generally they provide a useful range of possibilities for anticipating potential discoveries once the window of the dark ages is opened up observationally.
First, we quantified the effects of an ERB on the redshifted 21-cm global signal from the dark ages. We found that the ERB can substantially increase the amplitude of the global signal, depending on the parameter \(A_{\rm r}\) (defined as the \(z=0\) intensity of the ERB relative to the CMB at 78 MHz). We found that the signal becomes saturated (independent of \(A_{\rm r}\)) at high \(A_{\rm r}\), with 90% saturation (during
\begin{table}
\begin{tabular}{l l c c c} \hline & & \multicolumn{3}{c}{Integration time} \\ \hline & Model & 1,000 hrs & 10,000 hrs & 100,000 hrs \\ \hline & A & 4.90 & 15.5 & 49.0 \\ & B & 5.79 & 18.3 & 57.9 \\ Relative to zero & C & 7.16 & 22.6 & 71.6 \\ & D & 4.68 & 14.8 & 46.8 \\ & E & 5.98 & 18.9 & 59.8 \\ \hline & A & 3.55 & 11.2 & 35.5 \\ & B & 5.45 & 17.2 & 54.5 \\ Relative to standard & C & 9.26 & 29.3 & 92.6 \\ & D & 2.22 & 7.01 & 22.2 \\ & E & 6.07 & 19.2 & 60.7 \\ \hline \end{tabular}
\end{table}
Table 8: The significance (# of \(\sigma\)) of detecting the global signal in the mDM models, relative to zero signal or the standard signal.
Figure 11: The size of the global 21-cm signal as a function of \(\nu\) for the standard \(\Lambda\)CDM model and the mDM models (Table 7) considered in this work. We also show the expected thermal noise for a global signal experiment observing for integration time 1,000 hrs (grey dotted line) for a bin around each \(\nu\) of width \(\Delta(\ln\nu)=1\).
\begin{table}
\begin{tabular}{l c c c c c} \hline & \multicolumn{5}{c}{Model} \\ \hline & A & B & C & D & E \\ \hline \(f_{\rm X}\) & 0.004 & 0.004 & 0.004 & 0.001 & 0.001 \\ \(\epsilon\) [\(10^{-4}\) e] & 1.0 & 0.1 & 0.1 & 0.3 & 0.1 \\ \(m_{\rm X}\) [MeV] & 10 & 3 & 1 & 5 & 1 \\ \hline \end{tabular}
\end{table}
Table 7: The mDM models that we use to illustrate our results, in terms of the model parameters \(f_{\rm X}\) (millicharged fraction of the DM), \(\epsilon\) (millicharged electric charge) and \(m_{\rm X}\) (millicharged particle mass).
the dark ages) reached at \(A_{\rm r}=0.4\); this corresponds to 21% of the minimum value (\(A_{\rm r}=1.9\)) required to explain EDGES, and 0.11% of the value that would explain the entire observed extragalactic radio background.
Using Fisher analysis, we forecast the detection significance of the ERB signal. For 1,000 hrs of integration of the global signal, the 90% saturation case can be detected at 5.89\(\sigma\) significance, and it can be distinguished from the standard signal at 8.45\(\sigma\); these are substantially stronger results than the detection of the standard signal itself (4.12\(\sigma\) in this case), due to the greater amplitude of the ERB signal, and accounting (optimistically) for degeneracy with the foreground. A much smaller value of \(A_{\rm r}\) can be distinguished from the standard signal at 5\(\sigma\): \(A_{\rm r}=0.0389\), which is only 2.0% of the minimum value for explaining EDGES, and just 0.010% of the extragalactic radio background. All these results get much stronger if the integration time is increased significantly beyond 1,000 hrs, so that the number of \(\sigma\) is an order of magnitude larger for \(t_{\rm int}=100{,}000\) hrs, which is feasible to achieve with an array of global antennas; with this larger integration time, the amplitude that can be distinguished from the standard signal at 5\(\sigma\) is \(A_{\rm r}=0.00175\), which is \(9.2\times 10^{-4}\) of the minimum value for EDGES, and \(4.7\times 10^{-6}\) of the extragalactic radio background.
We also studied the 21-cm power spectrum in the ERB model. As in the case of the standard model, compared to the global signal it would take a much larger effort to reach significant results for the ERB model with 21-cm fluctuations from the dark ages, but in the long-run, much better results are possible. Similarly to the global signal, the 21-cm power spectrum rises with \(A_{\rm r}\) and becomes saturated at \(A_{\rm r}\gtrsim 0.4\). Here, though, the thermal noise rises with redshift as the square of the system temperature, which is significantly faster than the signal, while the rates are similar for the global signal, allowing the latter to benefit more from the highest redshifts. For our minimal G configuration of a dark ages interferometric array, the 90% saturation case can be detected at 6.73\(\sigma\) significance, and it can be distinguished from the standard signal at 3.98\(\sigma\); these are again substantially stronger results than the detection of the standard signal itself (3.01\(\sigma\) in this case). The value of \(A_{\rm r}\) that can be distinguished from the standard signal at 5\(\sigma\) is \(A_{\rm r}=1.06\), as the power spectrum is less sensitive to low values of \(A_{\rm r}\) compared to the global signal (due to the different redshift behavior). This constraint for the B and C configurations would be comparable to that for a 10,000 hr global experiment, and the D configuration would go down to \(A_{\rm r}=0.000708\), lower by a factor of 2.47 than the value achievable with a 100,000 hr global experiment. The power spectrum is much less effective on this measure compared to the significance of detection (of either the standard signal or the 90% saturated ERB model, relative to zero signal or the standard signal); for the latter, the B and C configurations perform better than 100,000 hr global, and D gives a further order of magnitude improvement in the number of \(\sigma\). Of course, it would be best to pursue both global and power spectrum measurements, as they would be observationally independent and thus provide complementary information and a powerful cross-check.
Finally, we investigated the global signal of the redshifted 21-cm line from the dark ages in the model of gas-DM cooling with mDM. We considered five different models with parameter values that span the region allowed by current constraints and that can also explain the EDGES anomaly. We found that the shape and amplitude of the global signal are significantly different in these models compared to the standard case. As a result, we showed that the detection of the mDM signal is feasible with future dark ages global signal observations, with greater significance than the detection of the standard signal. Also, the mDM models can be distinguished from the standard signal with comparable significance in most cases. For example, the mDM Model C, which has the strongest signal among the models considered, can be distinguished from zero signal with a significance of 7.16\(\sigma\) and from the standard signal with a significance of 9.26\(\sigma\) for 1,000 hrs of integration time.
Overall, we have shown that the exotic dark matter models can have a significant impact on the 21-cm global signal and power spectrum from the dark ages. Our results suggest that future observations of the 21-cm signal will be able to detect or constrain these exotic models, and thus provide valuable insights on fundamental cosmology.
## Acknowledgements
We gratefully acknowledge discussions with David Neufeld and Jens Chluba regarding CMB heating. RM is supported by the Israel Academy of Sciences and Humanities & Council for Higher Education Excellence Fellowship Program for International Postdoctoral Researchers. RM and RB acknowledge the support of the Israel Science Foundation (grant No. 2359/20). AF is supported by Royal Society University Research Fellowship #180523.
## Data Availability
The data underlying this article are available upon request from the corresponding author.
|
2303.12367 | AIIPot: Adaptive Intelligent-Interaction Honeypot for IoT Devices | The proliferation of the Internet of Things (IoT) has raised concerns about
the security of connected devices. There is a need to develop suitable and
cost-efficient methods to identify vulnerabilities in IoT devices in order to
address them before attackers seize opportunities to compromise them. The
deception technique is a prominent approach to improving the security posture
of IoT systems. Honeypot is a popular deception technique that mimics
interaction in real fashion and encourages unauthorised users (attackers) to
launch attacks. Due to the large number and the heterogeneity of IoT devices,
manually crafting the low and high-interaction honeypots is not affordable.
This has forced researchers to seek innovative ways to build honeypots for IoT
devices. In this paper, we propose a honeypot for IoT devices that uses machine
learning techniques to learn and interact with attackers automatically. The
evaluation of the proposed model indicates that our system can improve the
session length with attackers and capture more attacks on the IoT network. | Volviane Saphir Mfogo, Alain Zemkoho, Laurent Njilla, Marcellin Nkenlifack, Charles Kamhoua | 2023-03-22T08:06:41Z | http://arxiv.org/abs/2303.12367v1 | # AIIPot: Adaptive Intelligent-Interaction Honeypot for IoT Devices
###### Abstract
The proliferation of the Internet of Things (IoT) has raised concerns about the security of connected devices. There is a need to develop suitable and cost-efficient methods to identify vulnerabilities in IoT devices in order to address them before attackers seize opportunities to compromise them. The deception technique is a prominent approach to improving the security posture of IoT systems. Honeypot is a popular deception technique that mimics interaction in real fashion and encourages unauthorised users (attackers) to launch attacks. Due to the large number and the heterogeneity of IoT devices, manually crafting the low and high-interaction honeypots is not affordable. This has forced researchers to seek innovative ways to build honeypots for IoT devices. In this paper, we propose a honeypot for IoT devices that uses machine learning techniques to learn and interact with attackers automatically. The evaluation of the proposed model indicates that our system can improve the session length with attackers and capture more attacks on the IoT network.
Honeypot, Internet of Things (IoT) Devices, Machine Learning, Reinforcement Learning.
## I Introduction
The Internet of Things (IoT) has captured the interest of significant service providers, enterprises, and industries in recent years, including Healthcare, Smart Homes, Autonomous Vehicles, Digital Agriculture, and many others. IoT has transformed the way we live and work by allowing physical devices to communicate with one another and with us via the internet. However, with this increased connectivity comes the need for effective communication and security measures to ensure the safety and privacy of sensitive data transmitted through these systems. Unlike traditional computers, IoT devices typically have network interfaces that allow interaction between the physical and virtual worlds. However, they suffer from various vulnerabilities such as weak or hard-coded passwords. Many passwords are easy to guess, publicly available, or cannot be changed, putting them at risk of being compromised easily. Deception technology is a useful approach to improving the security posture of IoT systems. Honeypot is one of the deception methods to discover vulnerabilities that is commonly used by security practitioners. In general, a honeypot mimics interaction in real fashion and encourages unauthorised users (attackers) to launch attacks.
However, IoT vulnerabilities are usually highly dependent on the nature of the device, the firmware version, or even the vendor. As a result, after scanning the network and observing vulnerabilities (like open ports), attackers tend to perform several checks on the remote target device to gather more information about the specific device before launching the attack code (exploit code). This phase is called the pre-check attack step. Figure 1 summarises the life cycle of an IoT attack. A honeypot with a limited level of interaction is therefore not enough to pass these checks and will fail to capture the real attack.
Our goal is to build a honeypot that is able to interact with attackers during the checking phase so as to observe attacks targeting IoT devices effectively and gain access to the attacker's exploit code. In order to achieve our goal, honeypots must be able to pass the attacker's pre-check step. Due to the large number and the heterogeneity of IoT devices, manually crafting low and high-interaction honeypots is not affordable. However, some device checks are simple, such as verifying that the target device's response
Fig. 1: Life cycle of IoT attack.
is correct. Thus, a honeypot that returns a correct response to the received request may bypass such checks. Other checks can be more complex and require more steps before the exploit code is launched.
In this paper, we propose a honeypot for IoT devices based on machine learning that we label _adaptive intelligent-interaction: AIIPot_. AIIPot is a chatbot based on transformer and reinforcement learning models. The chatbot is trained on a dataset of requests/responses in order to choose the response most likely to be expected by an attacker for a specific request at the early stage of the attack. In addition, our honeypot uses reinforcement learning to model the future direction of the conversation with the attacker. With this, AIIPot can respond appropriately to requests without first needing a long online learning period to interact with attackers continuously. AIIPot adapts its interaction with the attacker by broadcasting all new requests arriving at the system to the IoT device network, which yields responses the attacker is likely to expect. This leads to the collection of more data, from which the AIIPot chatbot model then chooses the best response. With these methods, our honeypot returns expected responses to the attacker even if the request is initially not present in the database, thereby increasing the likelihood that the attacker will launch the attack on the honeypot believing that it is a real IoT device. This allows us to extend the session length, detect and monitor attacks, collect information about the tactics and techniques used by attackers, and ultimately help organizations improve their security defenses.
To the best of our knowledge, this paper is the first to build a honeypot for IoT devices based on a transformer chatbot, which uses reinforcement learning to model the future direction of the conversation. In summary, the contributions of this work are as follows:
1. A honeypot based on a transformer model is proposed to capture vulnerabilities on IoT devices.
2. Reinforcement learning concepts are used to model the future direction of the interaction between the honeypot and the attacker.
3. A novel technique to collect a new dataset of the interaction between attackers and IoT devices is proposed.
The rest of the paper is structured as follows: Section II provides the background and related works on IoT honeypots, and Section III presents our approach. Next, we present the evaluation of the proposed honeypot in Section IV and conclude in Section V while discussing possible ideas for future research.
## II Background and Related Works
### _Internet of Things_
The IoT is the vast network of connected physical objects (i.e., things) that exchange data with other devices and systems via the internet. It is characterized by heterogeneous identifiable networking objects (sensors or actuators) advertising their services to assemble semantic-rich applications. The heterogeneity of IoT devices and networks is mainly caused by various manufacturers and (communication) protocols. As a result, they suffer from various vulnerabilities, such as weak or hard-coded passwords, making them easy to compromise.
### _Machine learning for Cybersecurity_
Machine Learning (ML) is an area of artificial intelligence and computer science that uses data and algorithms to mimic how humans learn. It's used in research to create security systems like intrusion detection systems (IDS). Deep Learning is a subfield of machine learning that improves fields such as Natural Language Processing (NLP). Chatbots are NLP apps that deliver automatic responses to consumer enquiries. They can be used to create security mechanisms for IoT cyber deception.
### _Honeypot for Cybersecurity_
Deception falls into six different categories: perturbation, obfuscation, moving target defence, mixing, honey-x, and attacker engagement [1]. A honeypot is a variant of honey-x and is mainly used to deceive attackers from their actual target or to collect information regarding attack patterns. Honeypot is a technology used to capture attacks on IoT devices. Mainly two categories of honeypots exist: _high_ and _low_ interaction. In between low and high, hybrid/medium interaction also exists. In addition to these categories, this paper describes another interaction level, intelligent-interaction honeypots based on machine learning.
### _Related Works_
#### Ii-D1 Low-Interaction Honeypots
Low-interaction honeypots are just emulated services and give the attacker a very limited level of interaction; a popular example is honeyd [2]. IoT low-interaction honeypots emulate a single protocol and/or a specific device, such as U-PoT [3] and ThingPot [4], respectively. Low-interaction honeypots support only some functions of the system, not the entire system; therefore, their fixed behaviour when receiving a non-emulated request makes them limited and easily detectable by the attacker.
#### Ii-D2 High-Interaction Honeypots
High-interaction honeypots are fully-fledged operating systems and use real systems for attackers to interact with. They collect advanced information on cyber attacks by providing systems over which attackers have complete control. SIPHON [8] is an example of a high-interaction IoT honeypot which deploys a physical device as a honeypot. The problems with high-interaction honeypots are their complexity and the time needed for deployment and maintenance. As the number of honeypots increases, scalability decreases, because physical devices and virtual machines consume computational resources.
#### Ii-D3 Intelligent-Interaction Honeypots
The concept of intelligent-interaction honeypots is an interaction with attackers that maximize the likelihood of catching the attacks instead of accurately emulating the behaviour of a specific service or device as in high, low, or hybrid interaction. However,
a few research models stand out as the most versatile, as they emulate full devices and are self-adaptive: IoTCandyJar [5], Chameleon [6], and FirmPot [7]. The intelligent-interaction honeypot was introduced for the first time in 2017 with IoTCandyJar [5].
IoTCandyJar [5] proposed an intelligent-interaction IoT honeypot that can emulate all kinds of IoT devices. For a specific request, IoTCandyJar selects the response the attacker expects from the many responses of IoT devices collected by internet scanning. If the selected response is the expected one, the attackers assume that the honeypot is their target device and send an exploit code. They used a Markov Decision Process (MDP) model to learn from scratch what response an attacker expects. Thus, the problem is that their honeypots take some time until they can respond appropriately to requests. As a result, it took two weeks for the model to learn enough to interact with attackers continuously. Another problem is that the responses collected from different IoT devices by internet scanning are not guaranteed to come from real IoT devices, as some of them could come from other honeypots on the internet.
FirmPot [7] proposed a framework that automatically generates intelligent-interaction honeypots using firmware. This framework collects web interactions by emulating firmware launched in a docker container and learns the correspondence between requests and responses by machine learning. The generated honeypots return the best of the emulated responses to the received request. However, this approach has several limitations, as listed below.
* Firstly, the honeypots generated by this framework do not capture any advanced attacks such as configuration changes. This may be because (1) there is an issue with the honeypot observation location and periods; (2) the Seq2Seq learning model used by the authors does not converge, or the learning model is unable to interact; or (3) in the worst case, the generated honeypots have been detected by attackers, which is a general problem with most existing honeypots.
* Secondly, during the scanning process, the responses collected could be from fake web applications (same as a honeypot).
* Finally, this approach depends heavily on the firmware images, so the vulnerabilities that can be discovered when generating a honeypot depend on the vulnerabilities of the target vendor's firmware.
To overcome all the above limitations, we propose an effective honeypot based on machine learning that responds correctly to attackers at an early stage of the attack. The proposed honeypot for IoT devices uses machine learning techniques to learn and interact with attackers automatically.
## III AIIPot
In this section, we describe in detail our approach to building a honeypot for IoT devices.
### _Overview_
The high-level overview of our approach is shown in figure 2.
Our framework has three components: the _honey-chatbot_, the _req/res database_, and the _request evaluator_. The _honey-chatbot_ module is used to reply to the request of an attacker on the honeypot. The dataset is saved in the req/res database, which stores possible requests that an attacker can send to an IoT device and the corresponding responses that the IoT device can return to the attacker. If a request is not part of the req/res database, it is considered an _out-of-vocabulary_ (OOV) request; we then save that request as a new entry in our _req/res database_ and send it to the _evaluator_ module for security evaluation. The evaluator module evaluates the request's trustworthiness and either broadcasts it to all devices on the local network if the request is trusted1 or sends it to accessible IoT devices on the internet otherwise.
Footnote 1: The request doesn’t contain any exploit code
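A minimal sketch of this request-handling flow is given below. The object and method names (`db.lookup`, `evaluator.is_trusted`, `dev.send`, and so on) are placeholders of our own, assumed to expose the behaviour described above, rather than the actual implementation.

```python
def handle_request(request, db, chatbot, evaluator, local_iot, internet_iot):
    """Route an incoming attacker request through the AIIPot components.

    db        : req/res database with lookup(request) and add(request[, response])
    chatbot   : honey-chatbot that selects the best response for a request
    evaluator : SVM-based module with is_trusted(request) -> bool
    local_iot, internet_iot : iterables of device proxies with send(request)
    """
    if db.lookup(request) is None:                  # out-of-vocabulary request
        db.add(request)                             # record it for future training
        if evaluator.is_trusted(request):           # no exploit code detected
            responses = [dev.send(request) for dev in local_iot]      # broadcast locally
        else:
            responses = [dev.send(request) for dev in internet_iot]   # forward to internet devices
        for resp in responses:
            db.add(request, resp)                   # one entry per (request, response) pair
    return chatbot.select_response(request, db)     # reply with the best candidate
```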
### _Honey-Chatbot_
The Honey-Chatbot module corresponds to a server where the response selection model is deployed. With the help of the req/res database, our honeypot is able to reply to the client with a valid response based on the received request instead of returning a fixed one. In this section, we discuss how to leverage the transformer and the MDP model to optimize the response selection so as to maximize the possibility of capturing attacks.
#### Iii-B1 Honey-Chatbot Overview
For each individual request, the req/res database module could contain at least hundreds or even thousands of responses. All of them are valid responses, but only a few of them are the correct and expected ones. This is because, for a given request, various IoT devices process it under their own logic and generate different responses. The most straightforward example is a request to access the root path of their web service: some devices may reply with the login portal page, others may redirect it to another resource, and the rest may respond with different types of error pages. Therefore, all of the responses in the req/res database are potential candidates as the response to the attacker, but the challenge is to find the one that is expected by the attacker.
**Our approach**: the idea behind our approach is to first train a transformer-based architecture (BERT) model on the
Fig. 2: High-level overview of AIIPot.
dataset present in the req/res database and record the candidate responses that are likely to be expected by an attacker, then choose the most probable one with an MDP model and record the next move from the attacker's side. We assume that if we happen to select the correct one, attackers will believe our honeypot is the vulnerable target IoT device, continue to extend the session length, and eventually send the malicious payload. Every incoming request to the honeypot is forwarded to this module, and the selected response is returned to the client. The core part of the module is the selection engine, which passes the request into the BERT model and fetches the list of potential responses from the model output for the MDP selection. In MDP selection mode, it first locates the state in the graph corresponding to the normalized request and then lets the model select the best response.
#### Iii-B2 Model Formulation
We discuss how we formulate the response selection problem with the transformer-based architecture (BERT) and the MDP model. We assume that whether the client continues the session or performs the attack is simply determined by the response to the previous request. This is a reasonable assumption based on our best knowledge of the existing malware samples.
**Transformer-based architecture (BERT)**: Bidirectional Encoder Representations from Transformers (BERT) [13] belongs to the family of the so-called transfer learning methods, where a model is first pre-trained on general tasks and then fine-tuned on the final target tasks. Transfer learning has been shown to be beneficial in many different tasks. BERT is a very deep model that is pre-trained over large corpora of raw text and then fine-tuned on target annotated data. The building block of BERT is the Transformer [11], an attention-based mechanism that learns contextual relations between words (or sub-words, i.e., word pieces) in a text. BERT provides contextualized embeddings of the words composing a sentence as well as a sentence embedding capturing sentence-level semantics: the pre-training of BERT is designed to capture such information by relying on very large corpora. These can be used as input to further layers to solve sentence classification: this is achieved by adding task-specific layers and by fine-tuning the entire architecture on annotated data. Initially, we create a dictionary that maps the words of a request to numerical values. Next, a word embedding is trained from the numerical request and response vectors \(x_{t}\) and \(y_{t}\), respectively. The vectorized requests and responses are fed into the BERT model. We extend BERT by using task-specific layers, as in the usual BERT fine-tuning. This outputs \(\pi_{\theta}(y_{t}|x_{t})\), the probability of choosing the response \(y_{t}\) given the input request \(x_{t}\). However, choosing the response that maximizes \(\pi_{\theta}(y_{t}|x_{t})\) gives no guarantee that this predicted response will push the attacker to send a new request. Therefore, in order to model the future of the interaction, encouraging the attacker to send a new request at each step until the exploit code is sent, we use an MDP to model the next move of the attacker.
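The sketch below illustrates this kind of request–response scoring with the Hugging Face `transformers` sequence-classification head. The checkpoint name, the two-class label convention, and the thresholding are illustrative assumptions on our part; in practice the model would first be fine-tuned on the req/res database.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def candidate_scores(request, candidate_responses):
    """Score how plausible each candidate response is for the given request.

    Each (request, response) pair is encoded as a sentence pair; the softmax
    probability of the positive class (label 1 here, by assumption) plays the
    role of pi_theta(y_t | x_t) in the text.
    """
    enc = tokenizer([request] * len(candidate_responses), candidate_responses,
                    padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**enc).logits, dim=-1)[:, 1]
    return probs.tolist()

# Keep only candidates that clear the 0.5 threshold used later in the text.
scores = candidate_scores("GET /login.cgi HTTP/1.1",
                          ["HTTP/1.1 200 OK ...", "HTTP/1.1 404 Not Found ..."])
candidates = [s for s in scores if s >= 0.5]
```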
**Markov decision process (MDP)**: MDP is an extension of the standard (unhidden) Markov model. It is a model for sequential decision-making when outcomes are uncertain, such as computing a policy of actions that maximize utility with respect to expected rewards. At each decision epoch, the next state will be determined based on the chosen action through a transition probability function. The mechanism is collectively referred to as reinforcement learning. Reinforcement learning is a mechanism to control and adjust policy when the reward of the current state space is uncertain.
**Problem formulation**: in the standard reinforcement learning model, an agent (attacker) interacts with its environment (honeypot). This interaction takes the form of the agent sensing the environment and, based on this input, choosing an action to perform in the environment. Every reinforcement learning model learns a mapping from situations to actions by trial-and-error interactions with a dynamic environment. The model consists of multiple variables, including decision epochs (\(t\)), states (\(x,s\)), transition probabilities (\(T\)), rewards (\(r\)), actions (\(y\)), a value function (\(V\)), a discount factor (\(\gamma\)), and an estimation error (\(e\)). The basic rule of a reinforcement learning task is the Bellman equation:
\[V^{*}(x_{t})=r(x_{t})+\gamma V^{*}(x_{t+1}). \tag{1}\]
The general update policy can be expressed as
\[\Delta w_{t}=\max_{y}~{}~{}[r(x_{t},y)+\gamma V(x_{t+1})]-V(x_{t}). \tag{2}\]
Our problem is essentially a non-deterministic Markov Decision Process, which means that at each state there exists a transition probability function \(T\) to determine the next state. In other words, our learning policy is a probabilistic trade-off between _exploration_ (replying with responses that have not been used before) and _exploitation_ (replying with responses that have known high rewards). With general value iteration, it is impossible to calculate the necessary integrals without added knowledge or some decision modification. Therefore, we apply Q-learning to solve the problem of having to take the max over a set of integrals. Rather than finding a mapping from states to state values, Q-learning finds a mapping from state/action pairs to values (called Q-values). Instead of having an associated value function, Q-learning makes use of the Q-function. In each state, there is a Q-value associated with each action. The definition of a Q-value is the sum of the reinforcements received when performing the associated action and then following the given policy thereafter. Therefore, in our problem of using Q-learning, the equivalent of the Bellman equation is formalized as
\[Q(x_{t},y_{t})=r(x_{t},y_{t})+\gamma\max_{y_{t+1}}Q(x_{t+1},y_{t+1}), \tag{3}\]
and the updated rule of direct Q-learning is formalized as follows where \(\alpha\) is the learning rate
\[\Delta w_{t}=\alpha\left[r(x_{t},y_{t})+\gamma\max_{y_{t+1}}Q(x_{t+1},y_{t+1},w_{t})-Q(x_{t},y_{t},w_{t})\right]\frac{\partial Q(x_{t},y_{t},w_{t})}{\partial w_{t}}. \tag{4}\]
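A minimal tabular version of this update, which drops the function-approximation weights \(w_{t}\) of Eq. (4) and keeps only the Bellman backup of Eq. (3), could look as follows; the learning rate and discount values are illustrative.

```python
from collections import defaultdict

Q = defaultdict(float)           # Q-values keyed by (state, action)
alpha, gamma = 0.1, 0.9          # learning rate and discount factor (illustrative)

def q_update(s, y, r, s_next, next_actions):
    """One Q-learning backup: the target bootstraps on the best Q-value among
    the candidate responses available in the next state, as in Eq. (3)."""
    best_next = max((Q[(s_next, y2)] for y2 in next_actions), default=0.0)
    target = r + gamma * best_next
    Q[(s, y)] += alpha * (target - Q[(s, y)])
```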
**Reward function**: the reward function \(r:(x_{t},y_{t})\longrightarrow r\) assigns some value \(r\) to being in the state and action pair \((x_{t},y_{t})\).
The goal of the reward is to define the preference of each pair and maximize the final rewards (optimal policy). In our context, the immediate reward \(r(x_{t},y_{t})\) reflects the progress we have made during the interaction process when we choose response \(y_{t}\) to request \(x_{t}\) and move to the next state \(x_{t+1}\). Since the progress can be either negative or positive, the reward function can be negative or positive as well. The heuristic for defining the reward is that if the response matches the target device type expected by the attacker and the attacker launches the attack by sending the exploit code in the next request, the reward must be large and positive. On the contrary, if the response is not an expected one (e.g., it reflects a non-vulnerable device version), the attacker may stop the attack and end the session. This leads to a dead-end state and causes a negative reward. In other words, we reward the responses that could lead us to the final attack packet and punish the ones that lead to a dead-end session. One of our designs is to assign a reward equal to the length of the final session, since we believe that the more requests the attacker sends, the higher the chance that a malicious payload is contained. The standard session length is 2, which means that after we send our response, there is at least one more incoming request from the same IP at the same port. If no further transition is observed, we assign a negative reward to that response. Another alternative reward assignment could be based on whether we receive some known exploit packets or not.
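The session-length reward heuristic can be made concrete as in the sketch below; the exact penalty for dead-end sessions and the bonus for observed exploit packets are illustrative choices of ours, since the text only specifies their signs.

```python
def session_reward(session_length, exploit_observed=False, dead_end_penalty=-1.0):
    """Reward for the response that produced this session outcome.

    session_length  : number of requests observed from the same IP and port,
                      so a value of at least 2 means the attacker continued.
    exploit_observed: set True if a known exploit packet was received.
    """
    if session_length < 2:
        return dead_end_penalty          # attacker gave up: punish this response
    reward = float(session_length)       # longer sessions -> larger positive reward
    if exploit_observed:
        reward += 10.0                   # illustrative bonus, not specified in the text
    return reward
```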
**State and action**: in our case, the state \(x_{t}\) corresponds to the request sent by the attacker together with all the similar existing requests in the req/res database. The actions are characterised by the output responses of the transformer model such that \(\pi(y_{t}|x_{t})\geqslant threshold\). We fix the threshold at \(0.5\). This choice is realistic because if the transformer predicts a response with probability at least \(0.5\), there is a high chance that such a response belongs to the set of responses of a vulnerable IoT device.
**Transition probabilities**: the transition probabilities can be described by the transition function \(T(s,y,s^{\prime})\), where \(y\) is an action taken in the current state \(s\), and \(s^{\prime}\) is some new state. More formally, the transition function \(T(s,y,s^{\prime})\) can be described by the formula
\[P(S_{t}=s^{\prime}|S_{t-1}=s,y_{t}=y)=T(s,y,s^{\prime}). \tag{5}\]
To measure the probability of each combination \((s,a,s^{\prime})\), we deployed the trained BERT model that returns a response from the candidate set and saved the session information to the session table. After running for a period of time, we are able to collect lots of sessions, and we parse each of them to count the occurrence of each combination \((s,a,s^{\prime})\), which is denoted as \(C(s,a,s^{\prime})\). The transition function \(T(s,a,s^{\prime})\) is defined as follows:
\[T(s,a,s^{\prime})=\frac{C(s,a,s^{\prime})}{\sum_{x\in S}C(s,a,x)}. \tag{6}\]
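Estimating the transition function then amounts to normalized counting over the logged sessions, as in Eq. (6); the sketch below assumes each session is stored as a list of (state, action, next-state) triples.

```python
from collections import Counter, defaultdict

def estimate_transitions(sessions):
    """Empirical T(s, a, s') = C(s, a, s') / sum_x C(s, a, x), cf. Eq. (6)."""
    counts = Counter()
    totals = defaultdict(int)
    for session in sessions:
        for (s, a, s_next) in session:
            counts[(s, a, s_next)] += 1
            totals[(s, a)] += 1
    return {(s, a, s_next): c / totals[(s, a)]
            for (s, a, s_next), c in counts.items()}

T = estimate_transitions([[("root", "login_page", "creds_sent")],
                          [("root", "login_page", "session_end")]])
```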
**Online Q-learning algorithm**: we apply the online Q-learning algorithm to select the expected response from the candidate responses output by the BERT model. Based on the Q-learning model, our learning process starts from receiving a request at the \(t_{0}\) decision epoch. Given the request, we pass it into the trained BERT model to select a set of candidate responses. We adopt the \(\epsilon\)-greedy policy for action selection. In particular, we consider the probabilities output by the BERT model for each candidate response as the initial transition functions. Using this policy, we select a random action with probability \(\epsilon\) or, with probability \(1-\epsilon\), the action that gives the maximum reward in the given state. Then we start our Q-learning iteration and update the Q-learning table. When we receive the reinforcement for one state and action pair, \(r(x_{t},y_{t})\), we first back-propagate and update the Q lookup table. According to this, we can make adjustments by removing the responses that end with negative rewards and updating the \(\epsilon\) value. The iteration runs until the model converges. In practice, our model is running online and is updated in real time. Therefore, thanks to the trained BERT model, our model has a high chance of converging.
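Putting these pieces together, the \(\epsilon\)-greedy choice over the BERT candidate set can be sketched as follows; the Q-table is the one from the earlier snippet, and the fixed \(\epsilon\) value stands in for whatever decay schedule is used in practice.

```python
import random

def select_response(state, candidates, Q, epsilon=0.2):
    """Epsilon-greedy choice among candidates with pi_theta(y|x) >= 0.5.

    With probability epsilon we explore a random candidate; otherwise we
    exploit the response with the highest current Q-value for this state.
    """
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda y: Q.get((state, y), 0.0))
```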
### _Req/Res Database and Request Evaluator_
In this section, we describe in detail how we collect the req/res database component of the proposed approach.
The Honey-Chatbot component used the database during the offline training of the transformer model.2
Footnote 2: In an offline machine learning model, the weights and parameters of the model are updated while simultaneously attempting to lower the global cost function using the data used to train the model
We used the dataset provided by [7] to form the baseline of our database. In our database, each entry corresponds to a specific request sent by an attacker and the corresponding response from the IoT device. For new requests, we used the _Request Evaluator_ to evaluate the trusted request before collecting the response corresponding to that specific request. All new entries request/response(s) are saved on the database as shown in Figure 3.
The request evaluator checks whether the request is an exploit code or not. This module is based on a Support Vector Machine (SVM) classifier. We trained an SVM model on the
Fig. 3: Req/res database entries.
NSL-KDD dataset [12] to determine whether a request is an attack or not. Requests classified as attacks are considered untrusted, and those classified as normal are considered trusted. If the request is trusted, then we forward it to our local IoT network as shown in figure 2. Otherwise, the request is forwarded to some accessible IoT devices on the internet. We receive many responses from real IoT devices for one request, and we save them as multiple entries in the database. Figure 4 shows how the new entries are saved in the database.
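A minimal version of such a request evaluator can be built with scikit-learn as sketched below; the feature vectors are random placeholders, since the exact features extracted from NSL-KDD records are not detailed here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: numeric feature vectors derived from NSL-KDD records (random placeholders here);
# y: 1 for attack (untrusted request), 0 for normal (trusted request).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

def is_trusted(request_features):
    """Return True if the SVM classifies the request as normal traffic."""
    return clf.predict([request_features])[0] == 0
```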
## IV Evaluation
**Datasets**: for the offline training of the transformer model, we used the HTTP protocol dataset. Besides the HTTP protocol, preliminary checks also happen on customised IoT protocols; Home Network Administration Protocol (HNAP) is one example. The experimentation was carried out only on the HTTP protocol dataset due to the lack of datasets for the other existing protocols. The initial dataset was provided by [7] and contains about 17,604 entries of requests and the corresponding responses. We also use the NSL-KDD dataset [12] for the request evaluator module.
**Training schedule of the BERT model**: we use the AdamW optimizer for training for 300 epochs with a cosine decay learning rate schedule. The initial learning rate is set to 0.001 and the batch size is 512.
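In PyTorch terms, this schedule corresponds roughly to the setup below; the model object and the number of steps per epoch are placeholders, since they depend on the dataset size and the batch size of 512.

```python
import torch

model = torch.nn.Linear(768, 2)      # placeholder for the fine-tuned BERT classifier
epochs, steps_per_epoch = 300, 100   # steps_per_epoch depends on dataset size / batch size

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs * steps_per_epoch)

# Inside the training loop, call optimizer.step() and then scheduler.step()
# once per batch so the learning rate follows the cosine decay.
```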
**Evaluation metrics**: we assess the performance of our proposed Honey-Chatbot on four metrics: the number of requests captured by the honeypot, the session length, the volume of information sent by an attacker in a single session, and the number of attack types captured by the honeypot.
**Honey-Chatbot deployment**: to evaluate the performance of our proposal, we set up our honeypot on a public server on the Google Cloud Platform (GCP). Due to resource constraints, each method was deployed on the server for \(20\) days only. Table I presents the total number of requests that the server received during that period and the total number of IP addresses observed. For comparison with our proposal, we consider a model where the response is chosen randomly among all responses, as well as the honeypots proposed by [5] and [7].
In 20 days, the proposed Honey-Chatbot captured 6,235 requests from 987 different IP addresses. Figure 5 shows that the proportion of sessions of length 1 is reduced, while sessions of length 7 or more increase compared to the existing honeypots [5] and [7]. This is because the transformer model helps the MDP model converge quickly and capture the attacker's attention with the expected response at each step of the session.
Measuring the interaction length alone is not enough to understand whether this approach is more effective than others for understanding attackers' behaviours. The information collected from these interactions counts more than their length, so measuring the extraction of useful information from attackers is more important. We measured the volume of information sent by the attacker during a session. Figure 6 shows that the greater the interaction length, the greater the volume of information sent by the attacker. This is reasonable because, during a session, the more specific the request is, the greater the size of the information sent.
Based on the classifier model we trained for the request evaluator module, we were able to evaluate the percentage of attacks captured by our proposed honeypot. Figure 7 shows that our honeypot captured about 30% Denial of Service (DoS) attacks and 60% Remote to Local (R2L) attacks, which correspond to attacks such as password guessing for login attempts, database queries, configuration changes, and more. User to Root (U2R) and probing attacks have also been captured. This gives us a good overview of the behaviour of attackers on our system.
## V Conclusion
Due to the large number and heterogeneous variety of IoT devices, building honeypots for IoT devices is very challenging when using traditional methods. However, an attacker tends to
Fig. 4: Requests/response multi-entries.
Fig. 5: Comparison of session length with attackers between honeypots (in percentage). If an attacker sends one request and the honeypot returns one response, and the communication is terminated, the session length is 1. The longer the session length, the more likely an attacker believes the honeypot is a real IoT device. Hence the session length is considered one of the critical indicators of a honeypot’s deception performance and of the effectiveness of the learning process.
perform preliminary checks on the device information before launching an attack. So, if a honeypot does not have a proper interaction mechanism with the attacker at the preliminary check stage, it is extremely hard to capture the complete exploit code. We have proposed an intelligent-interaction honeypot based on machine learning concepts to learn and interact with attackers automatically. Our evaluation indicates that the system can improve the session length with attackers and capture more attacks such as database requests, configuration changes, and login attempts. Our method also helps us to collect more data for future training. However, we observe very few configuration change requests among the interactions with session lengths less than 7. This could be because the honeypot has been discovered to be a non-IoT device. It could also be that the attacker has corrupted our machine-learning model by poisoning the dataset on which the model is trained. In practice, our model runs online and is updated in real time; therefore, it may not converge to the global optimum, which is another possible reason why we observe very few configuration change requests among the interactions with session lengths less than 7.
## Acknowledgment
This research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-21-1-0326. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
The work of the second author is partly supported by the EPSRC grant EP/V049038/1 and the Alan Turing Institute under the EPSRC grant EP/N510129/1.
The first author would like to thank Moeka Yamamoto for providing the first part of the dataset used in the numerical experiments of this work, and also for the useful discussions in relation to this numerical work.
|
2310.05723 | Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement
Learning | Offline pretraining with a static dataset followed by online fine-tuning
(offline-to-online, or OtO) is a paradigm well matched to a real-world RL
deployment process. In this scenario, we aim to find the best-performing policy
within a limited budget of online interactions. Previous work in the OtO
setting has focused on correcting for bias introduced by the policy-constraint
mechanisms of offline RL algorithms. Such constraints keep the learned policy
close to the behavior policy that collected the dataset, but we show this can
unnecessarily limit policy performance if the behavior policy is far from
optimal. Instead, we forgo constraints and frame OtO RL as an exploration
problem that aims to maximize the benefit of online data-collection. We first
study the major online RL exploration methods based on intrinsic rewards and
UCB in the OtO setting, showing that intrinsic rewards add training instability
through reward-function modification, and UCB methods are myopic and it is
unclear which learned-component's ensemble to use for action selection. We then
introduce an algorithm for planning to go out-of-distribution (PTGOOD) that
avoids these issues. PTGOOD uses a non-myopic planning procedure that targets
exploration in relatively high-reward regions of the state-action space
unlikely to be visited by the behavior policy. By leveraging concepts from the
Conditional Entropy Bottleneck, PTGOOD encourages data collected online to
provide new information relevant to improving the final deployment policy
without altering rewards. We show empirically in several continuous control
tasks that PTGOOD significantly improves agent returns during online
fine-tuning and avoids the suboptimal policy convergence that many of our
baselines exhibit in several environments. | Trevor McInroe, Adam Jelley, Stefano V. Albrecht, Amos Storkey | 2023-10-09T13:47:05Z | http://arxiv.org/abs/2310.05723v3 | # Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning
###### Abstract
Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm that is well matched to a real-world RL deployment process: in few real settings would one deploy an offline policy with no test runs and tuning. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected the dataset, but this unnecessarily limits policy performance if the behavior policy is far from optimal. Instead, we forgo policy constraints and frame OtO RL as an exploration problem: we must maximize the benefit of the online data-collection. We study major online RL exploration paradigms, adapting them to work well with the OtO setting. These adapted methods contribute several strong baselines. Also, we introduce an algorithm for **planning to go out of** distribution (PTGOOD), which targets online exploration in relatively high-reward regions of the state-action space unlikely to be visited by the behavior policy. By leveraging concepts from the Conditional Entropy Bottleneck, PTGOOD encourages data collected online to provide new information relevant to improving the final deployment policy. In that way the limited interaction budget is used effectively. We show that PTGOOD significantly improves agent returns during online fine-tuning and finds the optimal policy in as few as 10k online steps in Walker and in as few as 50k in complex control tasks like Humanoid. Also, we find that PTGOOD avoids the suboptimal policy convergence that many of our baselines exhibit in several environments.
## 1 Introduction
In real-world reinforcement learning (RL), there is great value in being able to train an agent offline with a static dataset. But fine-tuning the agent over at least a small number of agent-environment interactions is also key, given the risks in real-world deployment. This offline-to-online (OtO) scenario extends offline RL (also called batch RL (Ernst et al., 2005; Riedmiller, 2005)), which has garnered attention as a framework for learning control from datasets without online interactions (Levine et al., 2020). While offline RL removes the potentially costly data-collection step of traditional RL, the resulting policy may well be suboptimal. This could occur if the offline dataset does not cover all areas of the state-action space relevant to our task or if the policy that collected the dataset was itself suboptimal. Given this risk, those deploying an RL agent in the real world would likely invest in fine-tuning the agent, at least over a small budget of online interactions.
In this study, we view OtO RL as an exploration problem. Because the agent has a limit on its environment interactions, it must choose carefully which state-action pairs to collect during online fine-tuning. This contrasts starkly with prior work in OtO RL, which has focused on correcting for
bias introduced by the policy-constraint mechanisms used in existing offline RL algorithms (Beeson and Montana, 2022; Nakamoto et al., 2023; Luo et al., 2023). Such policy-constraint mechanisms are used during offline training to keep the learned policy close to the behavior policy that collected the offline dataset (e.g., the inclusion of a behavior-cloning term). While these methods can work well offline, they can cause detrimental learning instabilities during online fine-tuning, due to overly-conservative value functions (Nakamoto et al., 2023). Instead, **we do not use these policy-constraint mechanisms at any point**. In doing so, we shift the problem set away from bias correction to data-collection strategy during the online fine-tuning phase.
While exploration is widely studied in the online RL literature, the OtO problem differs from the standard online learning setup in two unique ways. First, the OtO setting greatly constrains the number of online data-collection steps. Second, the online phase in OtO RL benefits from information available from offline pretraining. We wish to leverage the offline dataset and pretraining phase to optimize a deployment policy using a limited number of agent-environment interactions. Given exploration methods have not generally featured in the OtO RL literature, we evaluate the compatibility of major online RL exploration paradigms with the OtO setting. In particular, we analyze intrinsic motivation and upper confidence bound (UCB) exploration. We find that intrinsic-motivation methods can unlearn initializations from offline pretraining, and note that the implementation details of UCB-style methods can affect exploration behavior. Ultimately, we address the issues with these methods leading to several strong baselines for exploration in OtO RL.
From the exploration perspective, the most basic OtO question is: _Which experiences should the agent collect during online fine-tuning such that its returns improve the most in the fewest agent-environment interactions?_ To address this question, we propose an algorithm for **planning to go out of distribution** (PTGOOD) that can be exploited by any existing model-based RL algorithm. PTGOOD uses a learned density of state-action pairs in the offline dataset to collect transitions during online fine-tuning that are out-of-distribution relative to the data in the offline dataset. By targeting such state-action pairs, PTGOOD continually increases the diversity of the information available in the total (offline plus online) data. PTGOOD also targets high-reward state-action pairs by ensuring that exploration guidance does not stray too far from the current-best policy, to ensure _relevance_ of the collected data. We also note that PTGOOD uses the learned density in a non-myopic planning procedure, thereby considering exploration fruitfulness in future steps.
Our experiments demonstrate that PTGOOD consistently and significantly outperforms other exploration baselines in terms of evaluation returns and avoids suboptimal policy convergence, a problem we find with many exploration methods in several environments. In addition, we find that PTGOOD often finds the optimal policy in simpler environments such as Walker in as few as 10k online steps and in as few as 50k in more complex control tasks like Humanoid. Our contributions can be summarized as follows:
* We propose PTGOOD, a non-myopic planning algorithm for OtO exploration that targets high-reward out-of-distribution transitions via an estimate of the regions of the state-action space represented in the offline dataset.
* We systematically study online RL exploration methods, identify compatibility issues with the OtO setting, and propose well-performing baselines that overcome those issues.
* We collect, benchmark, and open-source new offline datasets in addition to the usual D4RL (Fu et al., 2020) datasets, and evaluate PTGOOD and other baselines on these.
## 2 Background
The RL problem usually studies an agent acting within a Markov decision process (MDP) parameterized by the tuple \((\mathcal{S},\mathcal{A},\mathcal{T},R,\gamma)\). \(\mathcal{S},\mathcal{A}\) are the state- and action-spaces, respectively, \(\mathcal{T}(s^{\prime}|s,a)\) is the transition function that describes the distribution over next-states conditioned on the current state and action, \(R(s,a)\) is the reward function, and \(\gamma\in(0,1)\) is the discount factor. The agent acts within the MDP according to its policy \(\pi(a|s)\), which maps states to a distribution over actions. An agent's policy \(\pi\) induces a (discounted) occupancy measure \(\rho_{\pi}(s,a)\), which is the stationary distribution over the \(\mathcal{S}\times\mathcal{A}\) space unique to policy \(\pi\)(Syed et al., 2008; Kang et al., 2018). After executing an action \(a_{t}\) in state \(s_{t}\) at timestep \(t\), the next state is sampled \(s_{t+1}\sim\mathcal{T}(\cdot|s_{t},a_{t})\), the agent receives a reward \(r_{t}=R(s_{t},a_{t})\), and the interaction loop continues. The agent's learning objective is to find a policy that maximizes cumulative discounted returns \(\pi^{*}=\arg\max_{\pi}\mathbb{E}_{\pi}[\sum_{t=1}^{\infty}\gamma^{t-1}R(s_{t},a_{t})]\)
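To make the learning objective concrete, the discounted return of a finite reward sequence can be computed directly from the definition above. This tiny helper is purely illustrative and not part of any algorithm in the paper.

```python
# Illustrative: discounted return of a (finite) reward sequence, with rewards
# indexed from t = 1 as in the objective above.
def discounted_return(rewards, gamma):
    return sum(gamma ** (t - 1) * r for t, r in enumerate(rewards, start=1))

# Example: three steps of reward 1.0 with gamma = 0.9 -> 1 + 0.9 + 0.81 = 2.71
assert abs(discounted_return([1.0, 1.0, 1.0], 0.9) - 2.71) < 1e-9
```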
Model-based RL approaches learn a model of the MDP's transition function \(\hat{\mathcal{T}}\) and reward function \(\hat{R}\), which can then be used to generate rollouts of "imagined" trajectories from a given state \(s_{t}\): \(\tau=(s_{t},a_{t},\hat{r}_{t},\hat{s}_{t+1},\dots)\).
OtO RL assumes access to a dataset of transition tuples \(D_{\pi_{b}}=\{(s,a,r,s^{\prime})_{i}\}_{i=1}^{\left|D_{\pi_{b}}\right|}\) collected by some (potentially) unknown behavior policy \(\pi_{b}\). In addition, the behavior policy's performance can range from that of a random agent to an expert agent, which means that \(D_{\pi_{b}}\) may contain trajectories of highly-suboptimal behavior. The goal in OtO RL is to leverage offline data \(D_{\pi_{b}}\) to determine a policy \(\pi_{o}\) to collect another dataset \(D_{\pi_{o}}\) over a fixed-budget of agent-environment interactions, which are then used together \(D_{\pi_{b}}\cup D_{\pi_{o}}\) to try to find optimal policy \(\pi^{*}\). We need to optimize over both the choice of final policy and the data collection process that leads to that final policy.
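A minimal sketch of this OtO protocol is below. `pretrain_offline` and the agent's `act`/`update` methods are placeholders for whatever offline RL and fine-tuning algorithms are used; they are not part of any specific library or of the paper's implementation.

```python
# Minimal sketch of the offline-to-online protocol described above.
def offline_to_online(env, offline_data, online_budget):
    agent = pretrain_offline(offline_data)              # initial policy from D_{pi_b}
    online_data = []
    state = env.reset()
    for _ in range(online_budget):                      # fixed interaction budget
        action = agent.act(state)                       # data-collection policy pi_o
        next_state, reward, done = env.step(action)
        online_data.append((state, action, reward, next_state))
        agent.update(offline_data + online_data)        # learn from D_{pi_b} U D_{pi_o}
        state = env.reset() if done else next_state
    return agent                                        # deployment policy
```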
## 3 Related Work
**Exploration in RL.** Exploration is a key problem in RL and has been studied extensively in the online setting. Exploration algorithms cover many strategies such as dithering methods like \(\epsilon\)-greedy or randomized value functions (Osband et al., 2016). Intrinsic reward methods leverage prediction error (Pathak et al., 2017; Burda et al., 2019) and count-based rewards (Andoni and Indyk, 2008; Ostrovski et al., 2017) to guide agents towards unseen regions of the state-action space. Upper confidence bound (UCB) methods use uncertainty to guide agent exploration. For example, some algorithms measure uncertainty as disagreement within ensembles of Q-functions (Chen et al., 2017; Lee et al., 2021; Schafer et al., 2023) or transition functions (Shyam et al., 2019; Henaff, 2019; Sekar et al., 2020). In contrast to these methods, PTGOOD uses prior information explicitly by estimating a density of already-collected data and uses this density to plan exploration.
**Offline RL.** Many offline RL methods are designed to constrain the learned policy to be similar to the behavior policy. For example, conservative methods incorporate their policy constraint either via behavior cloning terms (Wu et al., 2019; Peng et al., 2019; Fujimoto and Gu, 2021), restricting the policy-search space (Kumar et al., 2021), restricting the policy's action space (Fujimoto et al., 2019), or incorporating policy-divergence regularization into the critic (Nachum et al., 2019; Kostrikov et al., 2021). On the other hand, pessimistic methods suppress the value of out-of-distribution state-action pairs, disincentivizing the agent from traversing those regions. For example, Kidambi et al. (2020) and Yu et al. (2020) penalize value based on disagreement between transition models, Rigter et al. (2022) use an adversarial world model to generate pessimistic transitions, and (Kumar et al., 2020) penalize the value of actions too different from ones the behavior policy would choose.
**OtO RL.** Some research in the OtO RL setting involves empirical studies of algorithm implementation choices. For example, Lee et al. (2021) develop a replay sampling mechanism to mitigate large errors in bootstrap value function updates, and Ball et al. (2023) study choices like using LayerNorm to reduce value overestimation and sampling proportions between offline and online data. Most previous work in the OtO setting targets over-conservatism induced by a given offline RL algorithm (Beeson and Montana, 2022; Nakamoto et al., 2023; Luo et al., 2023). In contrast, PTGOOD approaches the OtO RL setting as an exploration problem and does not use conservatism or pessimism in any form.
**Control with Expert Demonstrations.** Closely related to OtO RL is learning from demonstration (LFD) (Schaal, 1996). Many LFD methods use a form of behavior cloning on expert or hand-crafted trajectories for policy initialization followed by online fine-tuning with RL operators (Hester et al., 2018; Vecerik et al., 2017; Rajeswaran et al., 2018; Nair et al., 2020; Song et al., 2023). In contrast, we study a setting where the learned policy has **no** prior access to demonstrations from expert or hand-crafted policies.
## 4 Planning to go out of Distribution
Later, in §5, we look at existing intrinsic reward and upper confidence bound (UCB) exploration methods, and adapt them appropriately for the OtO setting. However, these exploration methods are lacking in two respects: (a) UCB methods are myopic and rely on ensemble-based uncertainty to drive exploration, and (b) intrinsic reward methods use a moving-target reward function, which can cause instabilities in value-function training that translate into instabilities in policy training. This
leads us to develop PTGOOD, an approach that overcomes these issues. We introduce PTGOOD first before looking at other comparator exploration methods.
PTGOOD drives exploration by estimating \(\rho_{\pi_{b}}\), the occupancy measure for policy \(\pi_{b}\) (effectively the marginal density over states and actions), and then planning over a tree of imagined rollouts to collect state-action pairs within low-density regions. PTGOOD considers not just a single state-action pair but selects state-action pairs for data collection that are within a few transitions (in terms of \(\mathcal{T}(\cdot)\)) of many other low-likelihood state-action pairs. We posit that data collected during online fine-tuning in the OtO setting should meet two criteria: (1) be non-redundant to data in the offline dataset and (2) be of relatively high reward.
PTGOOD satisfies criterion (1) through the use of the Conditional Entropy Bottleneck (CEB) (Fischer, 2020) to model the density of state-action pairs. PTGOOD satisfies criterion (2) by ensuring that the exploration guidance does not stray too far from the improving policy. This is accomplished by sampling the policy and adding a small amount of noise during the planning process. As the policy updates target high-reward regions in the vicinity of the current policy, exploring "close" to the policy is important. The notion of closeness is explored later in §6.4.
### PTGOOD
PTGOOD can be applied as a complement to any model-based offline RL method. Given a learnt offline policy and dynamics model, PTGOOD plans the data collection process a step at a time to collect the next transition, which then augments the offline data and all data collected so far. The policy can now be updated with the new data. The data-collection planning process can then be repeated as many times as our budget of online interactions allows.
```
0: Dynamics model \(\hat{\mathcal{T}}\), encoder \(e\), marginal \(m\), depth \(d\), width \(w\), state \(s\), policy \(\pi\), noise hyperparameter \(\epsilon\)
1: Sample \(\pi\) with state \(s\), add sampled noise, and repeat \(w\) times
2: Forward-step prediction with \(\hat{\mathcal{T}}\) for each sampled action and store new current states
3: for \(i\) in \(d\) do
4:   Sample \(\pi\) with new current states, add sampled noise, and repeat \(w\) times
5:   Measure rate \(\mathcal{R}\) for each new current state and sampled action
6:   for each new current state do
7:     Compute and store the expected \(\mathcal{R}\) across all \(w\) sampled actions
8:   end for
9:   Forward-step prediction with \(\hat{\mathcal{T}}\) for each sampled action and store new current states
10: end for
11: Sum the stored expected \(\mathcal{R}\) (Step 7) back up the chain of predicted forward steps (Steps 2 & 9) to the original \(w\) sampled actions (Step 1)
12: Return the action \(a\) from the first \(w\) sampled actions (Step 1) with the highest \(\mathcal{R}\) sum (Step 11)
```
**Algorithm 1** PTGOOD Planning Procedure
The planning part of this process is given in Algorithm 1. PTGOOD's planning procedure has a width \(w\) and a depth \(d\). Starting from a given state \(s\), we sample the policy \(w\) times and add a small amount of randomly-sampled Gaussian noise \(\mathcal{N}(0,\epsilon)\) with variance hyperparameter \(\epsilon\) to the actions. Then, the learned dynamics model \(\hat{\mathcal{T}}\) predicts one step forward from state \(s\) for each \(w\) actions, and action sampling is repeated with each new state. The sampling and forward-step process is repeated \(d\) times, forming a tree of possible paths from the original state \(s\). A key part of this algorithm is the subsequent scoring of state-action pairs using the _rate_\(\mathcal{R}\), which is described in the next section.
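The sketch below gives one possible reading of this planning step in Python. `policy`, `dynamics`, and `rate` stand in for the learned policy, the learned transition model, and the CEB rate of Eq. (1); their interfaces are assumptions, not the authors' implementation, and for brevity the sketch expands only one child per node instead of the full \(w\)-ary tree of Algorithm 1.

```python
import numpy as np

def ptgood_plan(state, policy, dynamics, rate, width, depth, eps, rng=None):
    rng = rng or np.random.default_rng()

    def noisy_action(s):
        a = np.asarray(policy(s))
        # eps is the *variance* of the Gaussian exploration noise N(0, eps)
        return a + rng.normal(0.0, np.sqrt(eps), size=a.shape)

    root_actions = [noisy_action(state) for _ in range(width)]   # Step 1
    scores = np.zeros(width)
    frontier = [dynamics(state, a) for a in root_actions]        # Step 2
    for _ in range(depth):                                       # Steps 3-10
        new_frontier = []
        for i, s in enumerate(frontier):
            actions = [noisy_action(s) for _ in range(width)]    # Step 4
            scores[i] += np.mean([rate(s, a) for a in actions])  # Steps 5-7
            new_frontier.append(dynamics(s, actions[0]))         # Step 9 (one branch kept)
        frontier = new_frontier
    return root_actions[int(np.argmax(scores))]                  # Steps 11-12
```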
### The Rate \(\mathcal{R}\) and Modeling \(\rho_{\pi_{b}}\)
The _rate_ is used to measure how out-of-distribution a sample is. Rate has been used successfully in computer vision as a thresholding tool for out-of-distribution detection and has been shown to work well with CEB representations that we use here (Fischer, 2020). To ensure non-redundancy, we wish to collect low-probability state-action pairs according to \(\rho_{\pi_{b}}\). While, in general, we do not have access to \(\rho_{\pi_{b}}\), the offline dataset is filled with its samples. Hence we can model the offline data and use that model to target samples from occupancy measures of policies other than \(\pi_{b}\).
We fit, to convergence, an encoder \(e(z_{X}|x)\) and backward encoder \(b(z_{X^{\prime}}|x^{\prime})\) to a latent space \(Z\), via a standard CEB objective (described further in Appendix D.1, and given Equation 4). We use state-action pairs sampled uniformly at random from the offline dataset for \(x\) and use multiplicative noise drawn from a uniform distribution \(u\sim U(0.99,1.01)\) to form \(x^{\prime}=u\odot x\). Next, we learn a marginal \(m(z_{X})\) of our training data in the representation space of the encoder \(e(\cdot)\) as a mixture of Gaussians. See Appendix D for more details. Given this encoder conditional density \(e\), and marginal \(m\), the _rate_(Alemi et al., 2018, 2018) of a given state-action pair \(x\) is computed as:
\[\mathcal{R}(x)\triangleq\log e(z_{X}|x)-\log m(z_{X}). \tag{1}\]
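A hedged sketch of how Eq. (1) could be evaluated is shown below. It assumes the encoder outputs the mean and diagonal variance of a Gaussian \(e(z|x)\) and that the marginal \(m(z)\) is the fitted mixture of Gaussians mentioned above; these are implementation assumptions, not details stated in the text.

```python
import numpy as np
from scipy.stats import multivariate_normal

def rate(x, encoder, mix_weights, mix_means, mix_covs):
    mu, var = encoder(x)                   # parameters of e(z | x)
    z = mu                                 # score the mean encoding (one could also sample z)
    log_e = multivariate_normal.logpdf(z, mean=mu, cov=np.diag(var))
    log_m = np.logaddexp.reduce([
        np.log(w) + multivariate_normal.logpdf(z, mean=m, cov=c)
        for w, m, c in zip(mix_weights, mix_means, mix_covs)
    ])
    return log_e - log_m                   # R(x) = log e(z_X | x) - log m(z_X)
```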
## 5 Adapting Online Exploration Methods to the OtO Setting
One should question whether a new algorithm such as PTGOOD is really necessary. Aren't existing exploration methods sufficient? In this section, we explore this question and then show experimentally in the following section that PTGOOD overcomes the disadvantages of existing methods.
Motivated by the lack of current OtO exploration algorithms, we now examine intrinsic reward (§5.1) and UCB exploration (§5.2) methods in the OtO setting. We find that offline initializations can be destroyed when the intrinsic rewards introduced during online fine-tuning are too large relative to the true rewards used during offline pretraining. On the other hand, if intrinsic rewards are too small, the guided exploration yields no benefit. We suggest using two agents via the DeRL framework (Schafer et al., 2022). Here, both agents are pretrained offline, and one receives only the true rewards while the other receives intrinsic rewards during online fine-tuning. Also, with UCB methods, we find that the choice of ensemble over which uncertainty is computed changes exploration behavior. Despite the popularity of Q-function ensembles, it is not clear whether collecting data to reduce value uncertainty is better than reducing uncertainty in other learned components, such as transition functions in model-based algorithms. Ultimately, we examine the performance of using different ensembles to drive UCB exploration in our main experiments (§6.3).
### Intrinsic Rewards
Intrinsic-reward methods guide exploration through a reward function that gives a bonus reward for relatively unexplored areas of the state space, the action space, or both. For example, Random Network Distillation (RND) (Burda et al., 2019) trains a network to predict the output of a fixed randomly-initialized network that transforms an incoming state. Here, the prediction error is used as a reward bonus. In this case, prediction error should be relatively high in unseen states, thereby leading the agent to explore unseen areas of the state space. Exploration is impossible during offline pretraining, which means that intrinsic rewards can only accomplish guided exploration during online fine-tuning. This leaves us to use stage-dependent reward functions: one for exploitation during offline pretraining and one for exploration during online fine-tuning.
We hypothesize that the relative magnitudes between the two rewards during online fine-tuning can complicate using intrinsic rewards in the OtO setting. For example, consider a situation where we use the modified reward at timestep \(t\) as the sum of the MDP's true (extrinsic) reward \(r_{t}^{e}\) and a weighted intrinsic reward \(r_{t}^{i}\): \(r_{t}=r_{t}^{e}+\lambda r_{t}^{i}\). If the intrinsic reward is too small relative to the extrinsic reward, we risk the exploration guidance having little-to-no influence on action selection. On the other hand, if the intrinsic reward is too large relative to the extrinsic reward, we risk destroying the initialization of the pretrained critic, which destroys the initialization of the pretrained actor.
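The snippet below illustrates the reward mixing just described with an RND-style bonus (Burda et al., 2019). The network sizes and interfaces are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    def __init__(self, state_dim, feat_dim=64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        for p in self.target.parameters():   # the randomly initialized target stays fixed
            p.requires_grad = False

    def forward(self, state):
        # prediction error is large on unfamiliar states -> larger intrinsic reward
        return (self.predictor(state) - self.target(state)).pow(2).mean(dim=-1)

def mixed_reward(extrinsic, state, rnd, lam):
    # r_t = r^e_t + lambda * r^i_t, as in the text
    return extrinsic + lam * rnd(state).detach()
```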
To test our hypothesis, we evaluate RND agents with \(\lambda\in\{0,0.1,1,10,50\}\) in two environment-dataset combinations. Specifically, we use the Halfcheetah (Random) dataset from D4RL (Fu et al., 2020) and collect our own dataset from the DeepMind Control Suite (Tassa et al., 2018, 2020) in the Walker environment, which we call DMC Walker (Random). Both datasets were collected with behavior policies that select actions uniformly at random.1 All agents are pretrained offline with the true rewards, fine-tuned online over 50k agent-environment interactions with RND intrinsic rewards, and use Model-Based Policy Optimization (MBPO) (Janner et al., 2019) combined with
Soft Actor-Critic (SAC) (Haarnoja et al., 2017) as the base agent.2 Every 1k environment steps, we collect the agents' average undiscounted returns over ten evaluation episodes.
Footnote 2: For more details on agents, see Appendix D
Figure 1 reports the average (bold) \(\pm\) one standard deviation (shaded area) across five seeds. We note that when \(\lambda\) is relatively small in Halfcheetah (Random), the agents perform roughly the same as when no exploration guidance is used (i.e., \(\lambda=0\)). In contrast, a relatively large \(\lambda\) causes the agents to lose their pretrained initialization, as shown by the dramatic drop in evaluation returns at the beginning of online fine-tuning. Our hypothesis is also confirmed in DMC Walker (Random), with the added phenomenon of bi-modal returns across seeds occurring when \(\lambda=0.1\).
We propose using two agents to overcome this issue: one for exploitation and one for exploration. Such a framework has been shown to improve learning stability in Decoupled RL (DeRL) (Schafer et al., 2022). Both agents can be initialized with offline pretraining, but the exploitation agent only receives the MDP's true rewards, while the exploration agent receives the modified rewards during online fine-tuning. We only care about the exploitation agent for evaluation purposes and rely on the exploration agent for data collection. We refer to this agent as RND/DeRL.
### Upper Confidence Bound Exploration
UCB-style algorithms (Auer, 2002) direct exploration on the principle of "optimism in the face of uncertainty". Many recent implementations of this principle use ensembles of Q-functions to select actions \(a_{t}\) at timestep \(t\) according to a mixture of value and uncertainty: \(a_{t}=\operatorname*{arg\,max}_{a}Q_{\text{mean}}(s_{t},a)+\lambda Q_{ \text{std}}(s_{t},a)\) (e.g., Lee et al. (2021), Schafer et al. (2023)). Despite reward (and therefore value) being an important component in RL, it is unclear whether it is better to follow value uncertainty or the uncertainty in another learned component.
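The action-selection rule quoted above can be written as follows for a finite set of candidate actions. `q_ensemble` is assumed to be a list of callables \(Q_k(s,a)\); for the UCB(T) variant evaluated later, the same recipe would be applied to disagreement among transition models instead.

```python
import numpy as np

def ucb_action(state, candidate_actions, q_ensemble, lam):
    q = np.array([[q_k(state, a) for a in candidate_actions] for q_k in q_ensemble])
    score = q.mean(axis=0) + lam * q.std(axis=0)   # Q_mean + lambda * Q_std
    return candidate_actions[int(np.argmax(score))]
```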
In general, model-based RL algorithms have four core learned components that are trained with different prediction targets, learning dynamics, or both. For example, MBPO+SAC trains transition and reward functions via standard supervised learning, value functions with Bellman backups and bootstrapped targets, and policies with value and entropy maximization. Given the aforementioned differences, can we reasonably expect the uncertainty of each component to drive exploration into the same regions of the state-action space?
To answer this question, we first train an MBPO+SAC agent with ensembles of all four previously-mentioned components on the Halfcheetah (Random) dataset and evaluate their uncertainties on 2,500 transition tuples from the Halfcheetah (Expert) dataset. We evaluate the ensembles' uncertainty on a dataset collected by an expert behavior policy, as it is likely to contain out-of-distribution tuples relative to the random dataset, which is where we ultimately care about evaluating uncertainty in the OtO setting. We repeat this exercise with datasets from the Hopper environment from D4RL. If uncertainty is the same across all learned components, we should expect to see a strong positive rank correlation between each pair of ensembles' uncertainty over the expert tuples. Table 1 shows Spearman's rho between the learned components. We color cells in green when \(\rho\geq 0.4\) and in red when \(\rho\leq-0.4\) for ease of reading.
Figure 1: Undiscounted evaluation returns in Halfcheetah (Random) (left) and DMC Walker (Random) (right) for \(\lambda\in\{0,0.1,1,10,50\}\) intrinsic-reward weights throughout online fine-tuning.
We highlight that the rank correlation varies greatly. In some cases, two ensembles agree strongly (e.g., Value and Transition in Halfcheetah); in others, they disagree strongly (e.g., Value and Policy in Hopper) or show no relation (e.g., Transition and Policy in Halfcheetah). There is not necessarily a pattern that holds between the two environments. Hence, swapping learned components into the UCB action-selection equation would likely not result in similar data-collection behavior.
Perhaps, then, the ideal UCB-style algorithm would balance the uncertainty of each learned component. This balancing act is difficult because the range of each function contributing to the uncertainty computation may be significantly different, directly affecting the magnitude of the ensemble-disagreement quantity in the UCB equation. For example, a given environment's reward function may be bound to \([0,1]\), while its action space is bound to \([-1,1]\), and its state space is unbounded. Instead of devising a complex and adaptive balancing scheme in this work, we examine the effects of using different ensembles to drive exploration. Specifically, we evaluate one baseline that uses value-driven UCB (UCB(Q)) and one that uses dynamics-driven UCB (UCB(T)).
## 6 Experiments
In our experiments, we aim to answer the following questions: (1) Can PTGOOD improve agent evaluation returns within the given agent-environment interaction budget? (2) How important is guided exploration to agent evaluation returns during online fine-tuning? (3) Are the policy-constraint mechanisms that are important in the purely-offline setting important in the OtO setting?
### Baselines
We carefully design baselines that reflect prominent categories of exploration strategies in RL. Also, we tune each of our baselines on a per-environment per-dataset basis and report results for the best-performing hyperparameters for each method. See Appendix A for more details and results. Unless otherwise noted, all algorithms use MBPO+SAC as the core model-based RL algorithm.
First, we use a baseline we call **RND/DeRL** that combines RND-based intrinsic rewards with two agents via the DeRL framework described in §5.1. We train the RND predictor using the offline dataset before online fine-tuning begins and periodically update the predictor's weights throughout the fine-tuning process. Second, we use baselines we call **UCB(Q)** and **UCB(T)**. The former uses the uncertainty from an ensemble of Q-functions, and the latter from an ensemble of transition functions in the UCB action-selection equation described in §5.2. Third, we use a **Naive** agent that does not differentiate between offline and online training and does not use any exploration guidance but instead simply samples its policy to choose actions. The Naive agent contextualizes the added benefit of guided exploration. Fourth, we evaluate **Cal-QL** (Nakamoto et al., 2023), a model-free algorithm designed specifically for the OtO setting that is built on top of CQL (Kumar et al., 2020), a pessimistic offline RL algorithm. Cal-QL was designed to correct for instabilities during online fine-tuning induced by CQL's policy constraint. Finally, we contextualize the benefit of offline pretraining with **Scratch**, an agent that is only trained online but still has access to the offline dataset. None of the agents except for Cal-QL use conservatism or pessimism of any form during any stage of training. See Appendix D for architecture and hyperparameter details along with full implementation details for PTGOOD.
Halfcheetah:

|  | Reward | Value | Transition | Policy |
| --- | --- | --- | --- | --- |
| Reward |  | -0.26 | 0.20 | 0.15 |
| Value |  |  | **0.55** | **-0.41** |
| Transition |  |  |  | 0.08 |

Hopper:

|  | Reward | Value | Transition | Policy |
| --- | --- | --- | --- | --- |
| Reward |  | -0.13 | 0.54 | 0.33 |
| Value |  |  | **-0.57** | **-0.67** |
| Transition |  |  |  | **0.53** |

Table 1: Pair-wise rank correlation (Spearman’s Rho) between different ensembles’ uncertainty in Halfcheetah (top) and Hopper (bottom). We color cells in green when \(\rho\geq 0.4\) and in red when \(\rho\leq-0.4\) for ease of reading.
### Environments and Datasets
We evaluate PTGOOD and our baselines on a set of environment-dataset combinations that satisfy two criteria: (a) it must not be possible for current algorithms to learn an optimal policy during the offline pretraining phase, and (b) we must be able to surpass a random agent during offline pretraining. If criterion (a) is violated, there is no need for online fine-tuning. If criterion (b) is violated, then the offline pretraining phase is not useful, and training from scratch online would be unlikely to be beaten. We use datasets in the Halfcheetah and Hopper environments from the D4RL study. Additionally, we collect our own datasets from environments not represented in D4RL, including Ant, Humanoid, and the Walker task from the DeepMind Control Suite (DMC). The datasets that we collect follow the same dataset design principles of D4RL. See Appendix C for more details on our environments and datasets.
### OtO Results
For each environment-dataset combination, we first pretrain agents offline to convergence and then fine-tune online for 50k environment steps across five seeds. Every 1k environment steps, we collect undiscounted returns across 10 evaluation episodes. Reporting comparative results between RL algorithms is a complex problem (Patterson et al., 2023); therefore, we present results across various views and mediums. Table 2 shows the average \(\pm\) one standard deviation of evaluation returns at the 50k online-steps mark with the highest returns bolded. We highlight in blue when the highest returns are statistically significantly different via a two-sided Welch's t-test. Figure 11 displays undiscounted evaluation return curves for all algorithms in all environment-dataset combinations across the 50k online fine-tuning steps. Figure 12 displays undiscounted evaluation return curves in all five training runs for the best and second-best performing algorithms in each environment-dataset combination.
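The significance test mentioned above is a standard Welch's t-test on per-seed returns. The arrays below are placeholder values, not the paper's data, and the 0.05 threshold is an assumption; the paper does not state the level used.

```python
import numpy as np
from scipy import stats

returns_a = np.array([10.1, 9.7, 10.4, 10.0, 9.9])   # per-seed returns, method A (illustrative)
returns_b = np.array([8.2, 9.0, 8.5, 8.9, 8.4])      # per-seed returns, method B (illustrative)
t_stat, p_value = stats.ttest_ind(returns_a, returns_b, equal_var=False)  # two-sided Welch's t-test
print(p_value < 0.05)
```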
First, we answer question (1) in the affirmative by highlighting that PTGOOD consistently provides the strongest performance across all environment-dataset combinations. Table 2 shows that PTGOOD provides the highest returns in \(7/7\) environment-dataset combinations, which are statistically significant in \(5/7\). Also, Figure 11 shows that PTGOOD is generally stable relative to other baselines (e.g., RND/DeRL in Halfcheetah (Random)). We also note that PTGOOD tends to avoid the premature policy convergence that other methods sometimes exhibit (e.g., DMC Walker (Random), DMC Walker (Medium Replay), and Hopper (Random) in Figure 12). See Appendix E for more analysis. Also, aside from higher returns after training has finished, PTGOOD often outperforms other baselines during the middle portions of fine-tuning (e.g., Halfcheetah (Random) and Ant (Medium Replay) in Figure 12).
Second, we address question (2). We note that the Naive method is a strong baseline across all environment-dataset combinations that we tested. Additionally, we highlight that the Naive baseline outperforms some guided-exploration baselines on occasion (e.g., RND/DeRL in Halfcheetah (Random) and UCB(T) in Ant (Medium Replay)). These results suggest that certain types of exploration are not universally helpful in OtO RL.
Third, we answer question (3) by observing Cal-QL results in Table 2 and training curves in Figure 11. We note that Cal-QL consistently performs poorly. This is unsurprising because Cal-QL's base algorithm encourages the learned policy to remain close to the behavior policy. Due to our environment-dataset selection criteria, the behavior policies are highly suboptimal, which makes conservatism and pessimism a poor choice here. Also, for the most part, our tuned baselines do not experience an initial performance collapse during online fine-tuning, as seen in Figure 11. This is in contrast to prior OtO work that focuses on bias correction due to the policy-constraint mechanisms in offline RL algorithms (e.g., Figure 2 in Nakamoto et al. (2023)). These results suggest that avoiding pessimistic and conservative methods may be a sensible choice in the OtO RL setting.

| Dataset | PTGOOD | Naive | RND/DeRL | Scratch | UCB(Q) | UCB(T) | Cal-QL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Halfcheetah (R) | **8867 ± 88** | 7434 ± 782 | 6782 ± 2013 | 7248 ± 814 | 7300 ± 861 | 8170 ± 513 | -317 ± 122 |
| DMC Walker (R) | **897 ± 53** | 736 ± 40 | 677 ± 63 | 668 ± 88 | 740 ± 50 | 811 ± 68 | 45 ± 4 |
| Hopper (R) | **3246 ± 123** | 1576 ± 880 | 1818 ± 786 | 1231 ± 648 | 2037 ± 382 | 2251 ± 830 | 57 ± 39 |
| Ant (R) | **5624 ± 235** | 4663 ± 626 | 5258 ± 191 | 3702 ± 901 | 5290 ± 272 | 5022 ± 299 | -310 ± 575 |
| DMC Walker (MR) | **953 ± 6** | 732 ± 21 | 700 ± 164 | 778 ± 93 | 783 ± 75 | 772 ± 93 | 106 ± 57 |
| Ant (MR) | **5866 ± 114** | 4973 ± 337 | 4836 ± 695 | 4777 ± 1085 | 5328 ± 224 | 4508 ± 1364 | 990 ± 864 |
| Humanoid (MR) | **15050 ± 878** | 11706 ± 3403 | 1953 ± 1199 | 10723 ± 3903 | 13183 ± 885 | 12079 ± 2461 | 381 ± 174 |

Table 2: Average ± one standard deviation of undiscounted evaluation returns after 50k environment steps of online fine-tuning. Highest returns per environment-dataset combination bolded. Statistical significance is shown with blue highlight. (R) denotes Random datasets, and (MR) denotes Medium Replay datasets.
Finally, we note that neither UCB type is consistently better than the other. Additionally, in some environment-dataset combinations, either method is outperformed by the Naive baseline (e.g., in Halfcheetah (Random) for UCB(Q) and Ant (Medium Replay) for UCB(T)). This evidence, when combined with our experiment in SS5.2, suggests that further research in multi-ensemble UCB exploration could prove fruitful.
### Planning Noise
Key to our algorithm is exploring both unknown and high-reward regions of the state-action space. Instead of targeting high-reward state-action pairs with a Q-function value estimate, PTGOOD remains "close" to the improving policy by adding a small amount of noise to actions during the planning process. Using noise instead of explicit value estimation has computational benefits (see Appendix B) and does not rely on values that may be overestimated due to distributional shift (Fujimoto et al., 2018, 2019).
The meanings of "far" and "close" in the context of action selection are likely to be environment-dependent. As such, we perform a sweep over \(\epsilon\) values in various environment-dataset combinations. Figure 2 shows the average \(\pm\) one standard deviation of undiscounted evaluation returns for Halfcheetah (Random) and DMC Walker (Medium Replay) for various noise levels. We note that there is an optimal noise hyperparameter in either environment. If \(\epsilon\) is too small, evaluation returns degrade slightly due to the reduced exploration. Also, if \(\epsilon\) grows too large, PTGOOD's exploration strays too far from the improving policy and may become close to random exploration, which produces significantly reduced evaluation returns.
## 7 Conclusion
In this work, we introduced PTGOOD, a complement for model-based RL algorithms for exploration in the OtO setting. PTGOOD uses an estimate of the behavior policy's occupancy measure within a non-myopic planner to target high-reward state-action pairs unrepresented in the offline dataset. Also, we examined major online RL exploration paradigms, identified their compatibility issues with the OtO setting, and ultimately produced several strong baselines. We demonstrated that PTGOOD consistently provides the highest returns and avoids suboptimal policy convergence across our benchmark environments. PTGOOD could be improved further with adaptive noise in the planning process, which could account for state-dependent exploration noise or action-space characteristics (e.g., different joint types in musculoskeletal control).
Figure 2: Average (bold line) \(\pm\) one standard deviation (shaded area) of evaluation returns for the noise experiment in Halfcheetah (Random) (left) and DMC Walker (Medium Replay) (right). |
2307.08810 | Operator Guidance Informed by AI-Augmented Simulations | This paper will present a multi-fidelity, data-adaptive approach with a Long
Short-Term Memory (LSTM) neural network to estimate ship response statistics in
bimodal, bidirectional seas. The study will employ a fast low-fidelity,
volume-based tool SimpleCode and a higher-fidelity tool known as the Large
Amplitude Motion Program (LAMP). SimpleCode and LAMP data were generated by
common bi-modal, bi-directional sea conditions in the North Atlantic as
training data. After training an LSTM network with LAMP ship motion response
data, a sample route was traversed and randomly sampled historical weather was
input into SimpleCode and the LSTM network, and compared against the higher
fidelity results. | Samuel J. Edwards, Michael Levine | 2023-07-17T19:56:09Z | http://arxiv.org/abs/2307.08810v1 | **Operator Guidance Informed by AI-Augmented Simulations**
## Abstract
_This paper will present a multi-fidelity, data-adaptive approach with a Long Short-Term Memory (LSTM) neural network to estimate ship response statistics in bimodal, bidirectional seas. The study will employ a fast low-fidelity, volume-based tool SimpleCode and a higher-fidelity tool known as the Large Amplitude Motion Program (LAMP). SimpleCode and LAMP data were generated by common bi-modal, bi-directional sea conditions in the North Atlantic as training data. After training an LSTM network with LAMP ship motion response data, a sample route was traversed and randomly sampled historical weather was input into SimpleCode and the LSTM network, and compared against the higher fidelity results._
## 1 Introduction
The safety of a ship and its crew in heavy weather and rough sea conditions demands proper operational guidance. Operational guidance is provided in the form of selection of speeds and headings, and is generally based on accessing ship motion response predictions from a pre-computed database or look-up table for a given condition. Operational guidance is an important consideration in the survival of a ship and has been the focus of many International Maritime Organization (IMO) publications, _IMO (1995), IMO (2007), IMO (2020)_. Recommendations for ship-specific operational guidance have been developed and discussed in the interim guidelines of the Second Generation Intact Stability by IMO, _IMO (2020)_. While these guidelines are certainly useful in design and at sea, they are not comprehensive.
The ocean environment is random and complex. Consequently a pre-computed database cannot completely capture all ocean conditions potentially encountered. Accordingly, a computationally feasible approach is needed to estimate ship responses for a range of conditions.
A simplified approach for ship motion response predictions typically assumes a unidirectional seaway with a unimodal wave spectrum. However, realistic seaways typically encompass both wind and swell components that can be delineated in terms of wave directionality and modal frequencies. Bi-directionality and bimodal spectra are common wave characteristics that are suitable for consideration in predictive ship response models.
Multi-directionality has been considered in _Yano et al. (2019)_, where wave radar data were invoked to generate a wave spectrum in simulations of a Ropax ship. By Grim's effective wave and a reduced-order roll equation, the maximum roll angle was estimated at various ship headings and multiple metacentric heights for the given directional wave spectrum. While the maximum roll angle is a useful metric, other seakeeping and structural response parameters are necessary for a more comprehensive investigation of extreme motions and loads.
In a recent effort, _Levine et al. (2022),_ described a data-driven model to evaluate predicted ship motions in unidirectional waves with a unimodal spectrum. In this study, data-adaptive Long Short-Term
Memory (LSTM) neural networks were investigated as part of a multi-fidelity approach incorporating the Large Amplitude Motion Program (LAMP), _Shin et al. (2003)_, and a reduced-order model known as SimpleCode. An initial assessment of this multi-fidelity approach focused on prediction of ship motion responses in waves. LSTM networks were trained and tested with LAMP simulations as a target, and SimpleCode simulations and wave time-series as inputs. LSTM networks improved the fidelity of SimpleCode seakeeping predictions relative to LAMP, while retaining the computational efficiency of a reduced-order model. The study was expanded in _Howard et al. (2022)_ to a limited set of bimodal, bidirectional wave combinations.
In this paper, this data-adaptive approach employing LAMP, SimpleCode and LSTM neural networks is evaluated for prediction of ship motions in bimodal, bidirectional seas during a simulated voyage across the North Atlantic. Simulations are performed based on the David Taylor Model Basin (DTMB) 5415, _Moelgaard (2000)_, in the most common combinations of primary and secondary wave conditions observed in the North Atlantic. Then, random observations of primary and secondary sea states along a prescribed journey are used as input into SimpleCode, LAMP, and the LSTM framework and the seakeeping statistics and time series are compared.
## 2 Methodology
### SimpleCode and LAMP
SimpleCode is a reduced-order seakeeping simulation tool that can quickly produce reasonable results, _Smith et al. (2019)_. One of the key simplifications in SimpleCode is in the local variation of wave pressure, where the hydrostatic and Froude-Krylov equations can use volume integrals instead of integrating over the surface of the ship, _Weems and Window (2013)_. With pre-computed Bonjean curves, instantaneous submerged volume and geometric center, the sectional hydrostatic and Froude-Krylov forces can be computed efficiently.
LAMP is a higher fidelity simulation tool that considers the 6-DOF forces and moments acting on the ship implemented by a 4th order Runge-Kutta solver in the time domain, _Shin et al. (2003)_. Central to the code is the solution to the 3-D wave-body interaction problem. The perturbation velocity potential is solved over the mean wetted hull surface. Hydrostatic and Froude-Krylov forces are solved over the instantaneous wetted hull surface. Within LAMP, the nonlinearities considered in the solution can be altered through how the model is represented mathematically, e.g., including body non-linear hydrodynamics or large lateral motions. The version of LAMP in the current application was LAMP-3. In LAMP-3, approximate body non-linear hydrodynamics with large lateral motions are accounted for. LAMP has effectively estimated motions comparable to model tests, _Lin et al. (2007)_.
LAMP is computationally intensive relative to reduced-order SimpleCode. LAMP-3 can run at nearly real-time, though some parameters such as number of wave frequency components, free surface panel definition, and hull offsets can be adjusted and increase the computational effort. For example, generation of a single realization of a 30 minute epoch entails approximately 30 minutes of computational time. In contrast with the same number of frequency components, SimpleCode can run on the order of 5,000 independent realizations of 30 minute epoch data in 30 minutes of computational time, _Smith (2019)_.
From the perspective of fidelity, SimpleCode can produce an approximation to LAMP predictions when tuned radiation and diffraction forces are included, _Weems and Belenky (2018), Pipiras et al. (2022)_. However, a fidelity gap still exists that can be potentially addressed using a data-adaptive machine learning method, _Levine et al. (2022), Howard et al. (2022)_. In this paper, this method is applied to bimodal and bidirectional waves.
### Long Short-Term Memory
An LSTM network, _Hochreiter and Schmidhuber (1997),_ is a type of recurrent neural network, which incorporates both short and long-term memory based on data-adaptive learning for estimation of a function. These memory effects are stored in weight matrices, which along with other operations, transform input matrices to target output matrices. The following set of equations show the operations that occur in a LSTM layer.
\[f_{1}=\sigma\big{(}W_{f_{1}}x^{[t]}+U_{f_{1}}h^{[t-1]}+b_{f_{1}}\big{)} \tag{1}\]
\[f_{2}=\sigma\big{(}W_{f_{2}}x^{[t]}+U_{f_{2}}h^{[t-1]}+b_{f_{2}}\big{)} \tag{2}\]
\[f_{3}=tanh\big{(}W_{f_{3}}x^{[t]}+U_{f_{3}}h^{[t-1]}+b_{f_{3}}\big{)} \tag{3}\]
\[f_{4}=\sigma\big{(}W_{f_{4}}x^{[t]}+U_{f_{4}}h^{[t-1]}+b_{f_{4}}\big{)} \tag{4}\]
\[c^{[t]}=f_{1}\bigcirc c^{[t-1]}+f_{2}\bigcirc f_{3} \tag{5}\]
\[h^{[t]}=f_{4}\bigcirc\tanh\big{(}c^{[t]}\big{)} \tag{6}\]
Here, \(W\) and \(U\) are weight matrices, \(b\) are the bias vectors, \(x^{[t]}\) is the input vector at time \(t\), \(h^{[t]}\) is the hidden state vector at time \(t\), \(c^{[t]}\) is the cell state vector at time \(t\), \(\sigma\) is the sigmoid function, _tanh()_ is the hyperbolic tangent function, and \(\bigcirc\) represents the Hadamard product. The output or target at time \(t\) is equal to the hidden state vector at time \(t\), \(h^{[t]}\). The weight matrices and bias vectors are progressively adapted during the training process to minimize the specified loss between the training data and test data. Mean-squared error is the loss function used to quantify the error between the training and test sets. The formula for the mean-squared error (MSE) is given in the following equation:
\[MSE=\frac{1}{N}\sum_{i=1}^{N}\big{(}y_{T}(t_{i})-y_{L}(t_{i})\big{)}^{2} \tag{7}\]
Here, \(N\) is the number of points in the time series; \(y\) is the response matrix of the time series for heave, roll, and pitch; subscript \(T\) is the target time series, subscript \(L\) is the LSTM produced time series; and \(t_{i}\) is the _i-th_ time instant in the time series.
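For concreteness, the gate equations (1)-(6) above translate directly into a single LSTM time step as follows. This NumPy transcription is purely illustrative; parameter shapes and storage conventions are assumptions, not the implementation used in the study.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b are dicts keyed by gate: "f1" (forget), "f2" (input),
    # "f3" (candidate), "f4" (output), matching Eqs. (1)-(4).
    f1 = sigmoid(W["f1"] @ x_t + U["f1"] @ h_prev + b["f1"])
    f2 = sigmoid(W["f2"] @ x_t + U["f2"] @ h_prev + b["f2"])
    f3 = np.tanh(W["f3"] @ x_t + U["f3"] @ h_prev + b["f3"])
    f4 = sigmoid(W["f4"] @ x_t + U["f4"] @ h_prev + b["f4"])
    c_t = f1 * c_prev + f2 * f3          # Eq. (5), Hadamard products
    h_t = f4 * np.tanh(c_t)              # Eq. (6); h_t is the layer output at time t
    return h_t, c_t
```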
The current framework encompassed three consecutive LSTM layers followed by a dense layer to transform the input into the target output. LSTM inputs were heave, roll, and pitch responses provided by SimpleCode as well as the wave elevation at the center of gravity of the ship at the ordered speed and heading and the corresponding slope of the wave field in the x- and y-directions to account for the bi-directional seas. The target time series were the heave, roll, and pitch motions predicted by LAMP in 6-DOF simulations. For accurate, higher-fidelity prediction of these degrees of freedom (heave, roll, and pitch), the horizontal degrees of freedom (surge, sway, and yaw) were included in the LAMP simulations. Subsequent training of the LSTM network specifically focused on the relevant target outputs for the heave, roll, and pitch motions. A series of LSTM networks are trained on a given set of bimodal data and then tested on different bimodal systems sampled across the North Atlantic voyage.
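An illustrative Keras version of this network is sketched below: three stacked LSTM layers followed by a dense output layer, trained with a mean-squared-error loss (Eq. 7). The six input channels (SimpleCode heave, roll, pitch, wave elevation at the CG, and wave slopes in x and y) and three outputs (LAMP heave, roll, pitch) follow the text, and the hidden size of 150 follows Table II; the choice of framework and everything else is an assumption, as the paper does not state its implementation.

```python
import tensorflow as tf

def build_lstm_corrector(n_timesteps=18000, n_features=6, hidden=150):
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(hidden, return_sequences=True,
                             input_shape=(n_timesteps, n_features)),
        tf.keras.layers.LSTM(hidden, return_sequences=True),
        tf.keras.layers.LSTM(hidden, return_sequences=True),
        tf.keras.layers.Dense(3),   # heave, roll, pitch at every time step
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```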
### Experimental Set-up
Simulations were performed with the DTMB 5415, _Moelgaard (2000)_. A rendering of the Model 5415 is in Figure 1.
The basic parameters of the full-scale DTMB 5415 are in Table I.
To determine the scope of training data environmental conditions, the 100 most historically observed combinations of primary and secondary seas in the North Atlantic during the month of December were used. The historical records were sourced from Wavewatch III hindcasts run by the US National Oceanic and Atmospheric Administration, _Tolman (2009)_. The area defined as the North Atlantic is in Figure 2.
Fig.2: Bounds considered as “North Atlantic” to sample historical wave parameters.

| Parameter | Value |
| --- | --- |
| Lwl [m] | 142.0 |
| B [m] | 19.06 |
| T [m] | 6.51 |
| Displacement [t] | 9,156.38 |
| KG, from BL [m] | 7.71 |
| LCG, from FP [m] | 72.1 |

Table I: Primary parameters of the full-scale DTMB 5415.

For this study, primary spectra characterizing wind-generated waves and secondary spectra characterizing swell were formed from ITTC (International Tow Tank Conference) spectra, _ITTC (2014)_. In addition, a ship speed of 10.0 kts was considered along with primary relative wave headings from 0 to 330 degrees in 30 degree increments. In this paper, 0 degrees is defined as head seas and 180 degrees is following seas. Secondary spectra wave directions were determined based on the most probable difference between the primary and secondary sea directions.
SimpleCode was set to run in the 3-DOF (heave, roll, and pitch,) configuration while LAMP was configured to run in 6-DOF with a Proportional-Integral-Differential (PID) rudder controller. The difference in configurations between LAMP and SimpleCode can result in different global positions within respective runs due to SimpleCode being restricted in sway and yaw while the controller in LAMP attempts to keep the ordered heading but ultimately includes variations in position. As a result, the simulated ships experience different wave elevations and forces at the center of gravity. The distinction in experienced waves consequently causes phase shifts between the SimpleCode and LAMP time series that may increase as the simulation progresses and affect the LSTM performance. A 6-DOF version of SimpleCode with a similar PID controller to LAMP would mitigate the difference in experienced waves and likely improve performance. In the current structure, each realization was 1,920 seconds (including an initial 120 second wave ramp-up) with a time step length of 0.05 seconds. In total, 5 realizations were generated for each of the 2,400 combinations of conditions in both SimpleCode and LAMP for training, validation, and testing.
LSTM networks were then trained on SimpleCode and LAMP simulations of the most commonly observed North Atlantic bimodal sea states in December. A network was trained for each of the 12 primary headings. From the 12,000 runs, 50 were randomly selected from each primary relative wave heading for training for each network, 25 were randomly selected for validation, and 25 were selected for testing. Table II details the hyperparameters in the training process.
Table II. Defining hyperparameters of the LSTM framework.

| Hyperparameter | Value |
| --- | --- |
| Time steps, N | 18,000 |
| Time resolution factor | 9 |
| Hidden state size | 150 |
| Number of LSTM layers | 3 |
Once the LSTM network was trained, a Great Circle route between Norfolk, USA and Bergen, Norway was generated. Random samples of primary and secondary seas based on the same historical wave database histogram during the month of December were generated along the Great Circle path. The simulation length for both LAMP and SimpleCode was the same as in the training stage. For each observation along the generated path, five simulations were run in SimpleCode and LAMP for a constant ship speed of 10 knots. The SimpleCode time series were then standardized by the training data statistics and run through the trained LSTM network. Histograms of the standard deviation of heave, roll, and pitch along the path for SimpleCode, the LSTM-corrected time series, and LAMP were compared. Additionally, the time series from SimpleCode, the LSTM-correction, and LAMP in the sea state with the largest significant wave height were compared.
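A minimal sketch of this evaluation loop is given below. `run_simplecode` and `lstm_model` stand in for the reduced-order simulator and the trained network, and the mean/std arguments are the standardization statistics from the training stage; all interfaces here are assumptions for illustration only.

```python
import numpy as np

def corrected_motion_std(sea_state, run_simplecode, lstm_model,
                         input_mean, input_std, target_mean, target_std,
                         n_realizations=5):
    stds = []
    for _ in range(n_realizations):
        simplecode_ts = run_simplecode(sea_state)               # (timesteps, input features)
        standardized = (simplecode_ts - input_mean) / input_std
        corrected = lstm_model.predict(standardized[None, ...])[0]
        corrected = corrected * target_std + target_mean        # back to physical units
        stds.append(corrected.std(axis=0))                      # heave, roll, pitch std devs
    return np.mean(stds, axis=0)
```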
### LSTM Training
As mentioned in Section 2.3, the training and validation realizations for each network were randomly selected from the 100 most common combinations of primary and secondary significant wave height, primary and secondary modal period, and difference in direction between primary and secondary seas. The training success varied for each of the networks, which were partitioned by primary relative wave direction, as a result of variation in secondary sea heading. Prior to training, the SimpleCode input and LAMP target realizations were standardized by the training data statistics, e.g., heave, roll, and pitch standard deviations and means. The training and validation error plot for the primary relative wave direction of 330 degrees over 100 training epochs is in Figure 3.
The loss in Figure 3 is the total mean square error between the standardized LSTM output and LAMP for heave, roll, and pitch.
The training and validation set sizes were limited by a single 4-GB Graphical Processing Unit (GPU) and by the length (19,200 points) of each realization. This limitation was the primary reason for randomly sampling realizations with different parameter combinations so that bias could be reduced without losing accuracy. Still, the inability to include more combinations of primary and secondary relative wave directions by increasing the training and validation set sizes did affect overall performance, especially in cases where parameter combinations varied significantly from the training data set. However, the LSTM still provided an improvement in estimation of statistics compared to SimpleCode.
### North Atlantic Journey Statistical Comparison
The primary goal of this study was to generate statistics for pertinent seakeeping motions in bimodal, bidirectional seaways common in the winter in the North Atlantic for operational guidance. An example journey was plotted from Norfolk, USA to Bergen, Norway. The route was set through half-degree latitude and longitude grids for which weather data had been collected and was set to be the shortest distance possible, or a Great Circle route. The route is in Figure 4.
Figure 3: Training and validation error for a relative primary wave direction of 330 degrees over 100 epochs.
Over the route, primary and secondary wave parameters were randomly selected from historical weather observations from each latitude-longitude grid and the corresponding conditions were run in SimpleCode, LAMP, and through the trained LSTM networks. Although the random selection of observations does not model the inherent dependence between wave parameters of adjacent/nearby grids, the imposed variation here provides a reasonable initial test of the LSTM networks. The standard deviation from each simulation was estimated over five 30-minute realizations. The standard deviations from heave, roll, and pitch were tabulated, binned, and counted over the example voyage to provide a summary of the journey. Kernel density estimations of the standard deviation probability density functions (pdfs) for each considered degree of freedom are in Figures 5-7.
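The kernel density estimation used for these pdfs can be reproduced with standard tools, as sketched below. `roll_stds` stands in for the collection of roll standard deviations gathered along the route; the values generated here are placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
roll_stds = rng.gamma(shape=2.0, scale=1.0, size=200)   # placeholder samples
kde = gaussian_kde(roll_stds)
grid = np.linspace(roll_stds.min(), roll_stds.max(), 200)
pdf = kde(grid)                                         # estimated probability density
```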
Fig.4: Adjusted great circle route between Norfolk, USA and Bergen, Norway.
Fig.5: Heave standard deviation pdfs from SimpleCode, the LSTM networks, and LAMP.
In each degree of freedom, SimpleCode generally over-predicts LAMP, while the LSTM networks improve the estimation but generally under-estimate LAMP. In SimpleCode, the larger, more varied responses are likely due to the hydrodynamic forces being concentrated strictly on the three modeled degrees of freedom. For example, all lateral forces acting on the hull in SimpleCode can only go into producing roll, when in reality some of the produced movement goes into sway or even yaw. But since SimpleCode is constrained in the lateral frame in oblique seas, roll is over-estimated and varies widely. The LSTM improves upon SimpleCode in roll by not only moving the most probable standard deviation closer to the most probable LAMP standard deviation, but also by reducing the variation in standard deviation. The estimation of heave and pitch by SimpleCode is generally more accurate but is still larger than the LAMP estimates. Again, the LSTM mostly captures the peak behaviour seen in the LAMP observations and the reduction in variation compared to SimpleCode, but generally leads to under-prediction.
In Figures 5-7, the LSTM provided an improvement compared to SimpleCode with respect to LAMP in capturing the seakeeping summary. To further test the LSTM, the results from the worst conditions, defined as the largest primary and secondary significant wave heights resulting from the random selection of conditions over the journey, were compared to LAMP and SimpleCode. In Figures 8-10, the time series from heave, roll, and pitch from SimpleCode, the LSTM networks, and LAMP are compared for a primary significant wave height of 3.0 meters, a primary wave period of 6.5 seconds, a primary wave direction of 240 degrees, a secondary wave height of 1.5 meters, a secondary wave period of 11.5 seconds, and a secondary wave direction of 330 degrees.
Fig.6: Roll standard deviation pdfs from SimpleCode, the LSTM networks, and LAMP.
Fig.7: Pitch standard deviation pdfs from SimpleCode, the LSTM networks, and LAMP.
In Figures 8-10, the LSTM improves in level of response and phasing compared to SimpleCode. The standard deviations predicted by the LSTM for these selected time series were also closer to the LAMP values than the SimpleCode predictions. In roll, the absolute percentage error of the LSTM estimated standard deviation compared to LAMP was 34.7%, but it is a significant improvement over the 193% error in the SimpleCode roll standard deviation. The LSTM also generally captured the LAMP peak, which is centered in each time series snippet, in each degree of freedom in both location and magnitude.
Fig.8: Heave time series snippet centered about the largest LAMP heave response.
Fig.9: Roll time series snippet centered about the largest LAMP roll response.
Fig.10: Pitch time series snippet centered about the largest LAMP pitch response.
## 4 Conclusion
Accurate predictions of the ship motion statistics are vital for operational guidance. When considering bimodal, bidirectional sea spectra, estimates of the ship response are not simple. Furthermore, generating response statistic lookup tables as functions of the many combinations of parameters sourced from high-fidelity simulations is computationally prohibitive. A data-driven approach to capture the high-fidelity response while taking advantage of lower-fidelity tools can provide a potential answer.
In this paper, LSTM neural networks were trained to improve low-fidelity, 3-DOF hydrodynamic simulations in bimodal, bidirectional seas run by SimpleCode to estimate the 3-DOF seakeeping motions of interest sourced from higher-fidelity 6-DOF hydrodynamic simulations in the same bimodal, bidirectional seas run in LAMP. Networks were trained with the most common combinations of wave parameters in the North Atlantic in December. The networks provided improved estimates of the statistics relative to LAMP compared to SimpleCode. Incorporation of the inherent dependence between the wave parameters of adjacent/nearby grids could help demonstrate the utility of fast SimpleCode-LSTM seakeeping predictions along realistic routes for path planning.
While the current LSTM framework improved upon SimpleCode, the flexibility and accuracy could be increased by expanding the size of the training and validation sets. Running the training on larger or multiple GPUs or a more optimized set-up would allow for more variation in training and validation data. These more comprehensive training and validation sets would not only improve the network by increasing the amount of data, but also increase its general flexibility.
Additionally, a future 6-DOF SimpleCode could enhance the performance of the LSTM. The fuller accounting of force distribution could potentially allow the LSTM to focus on other improvements to reduced-order SimpleCode predictions.
## 5 Acknowledgements
The work described in this paper has been partially funded by the Office of Naval Research (ONR) under Dr. Woei-Min Lin. The work has also been funded by the Department of Defense SMART SEED Grant. The authors would also like to thank Dr. Kenneth Weems for assistance with SimpleCode and LAMP.
|
2301.05062 | Tracr: Compiled Transformers as a Laboratory for Interpretability | We show how to "compile" human-readable programs into standard decoder-only
transformer models. Our compiler, Tracr, generates models with known structure.
This structure can be used to design experiments. For example, we use it to
study "superposition" in transformers that execute multi-step algorithms.
Additionally, the known structure of Tracr-compiled models can serve as
ground-truth for evaluating interpretability methods. Commonly, because the
"programs" learned by transformers are unknown it is unclear whether an
interpretation succeeded. We demonstrate our approach by implementing and
examining programs including computing token frequencies, sorting, and
parenthesis checking. We provide an open-source implementation of Tracr at
https://github.com/google-deepmind/tracr. | David Lindner, János Kramár, Sebastian Farquhar, Matthew Rahtz, Thomas McGrath, Vladimir Mikulik | 2023-01-12T14:59:19Z | http://arxiv.org/abs/2301.05062v5 | [
###### Abstract
Interpretability research aims to build tools for understanding machine learning (ML) models. However, such tools are inherently hard to evaluate because we do not have ground truth information about how ML models actually work. In this work, we propose to build transformer models _manually_ as a testbed for interpretability research. We introduce Tracr, a "compiler" for translating human-readable programs into weights of a transformer model. Tracr takes code written in RASP, a domain-specific language (Weiss et al., 2021), and translates it into weights for a standard, decoder-only, GPT-like transformer architecture. We use Tracr to create a range of ground truth transformers that implement programs including computing token frequencies, sorting, and Dyck-n parenthesis checking, among others. We study the resulting models and discuss how this approach can accelerate interpretability research. To enable the broader research community to explore and use compiled models, we provide an open-source implementation of Tracr at [https://github.com/deepmind/tracr](https://github.com/deepmind/tracr).
Interpretability, Transformers, Language Models, RASP, Tracr
## 1 Introduction
As deep learning models are becoming more capable and increasingly deployed in production, improving our ability to understand how they make decisions is crucial.
_Mechanistic interpretability_ aims to achieve this by reverse engineering neural networks and producing _mechanistic_ explanations of the algorithms a model implements. This approach has achieved success in convolutional neural networks for image classification. Cammarata et al. (2020) explain a range of specific circuits in InceptionV1 (Szegedy et al., 2015), including curve detectors, high-low frequency detectors, and neurons detecting more high-level concepts such as dogs or cars. Elhage et al. (2021) and Wang et al. (2022) achieve early success in interpreting transformer language models using similar methods.
Despite this success, the toolbox of approaches for generating mechanistic explanations remains small and poorly understood. Part of the difficulty is that evaluating mechanistic explanations requires creativity and effort by researchers. It is difficult to evaluate how well an explanation tracks the actual mechanism used by the model when all our knowledge of the mechanism comes from the explanation itself. Without access to ground truth about the proposed mechanism, we must verify the methods used to study it in some other way.
The standard approach for evaluating mechanistic explanations combines evidence from many ad-hoc experiments (e.g., Olah et al. (2020) and Olsson et al. (2022)). However, since this is expensive to do, many methods are only evaluated in toy models (e.g., Elhage et al. (2022)) or on a handful of nontrivial circuits in real models (e.g., Chan et al. (2022)). Systematic evaluation in nontrivial settings is usually intractable as it requires a lot of researcher time.

Figure 1: Tracr allows us to create models that implement a known mechanism. We can then compare this mechanism to explanations an interpretability tool produces.
The situation is analogous to trying to invent a microscope lens without ever being able to point it at familiar, well-understood shapes. Through careful reasoning and experimentation, we might notice regularities in the tiny world seen through the lens, and begin to trust findings made with it; but if we could look through the lens at something we already understand, we would recognise its optical properties and correct its flaws.
We propose to directly tackle the absence of ground truth explanations by "compiling" human readable code to weights of a neural network. In this report, we present Tracr, a proof-of-concept implementation of such a compiler. Using this approach, we can create models which perform nontrivial computation with a known implementation. We can then evaluate interpretability tools by applying them to compiled models and comparing the resulting explanation to the ground truth.
Imagine we want to evaluate a method for locating specific knowledge in transformer models, such as "causal tracing" (Meng et al., 2022). In real language models, it can be challenging to check its correctness: the method might point out a location in the model, but we can't easily independently verify its claim, since no trusted procedure for establishing such facts about models in the wild exists yet. With Tracr we can construct models that encode some information in a specific location and check if our method correctly locates it. We can further explore special cases, such as information stored redundantly in different places.
In this work, we focus on transformer models (Vaswani et al., 2017) and use RASP, a domain-specific programming language for describing transformer computations (Weiss et al., 2021). We develop an approach to compile RASP programs to the weights of a transformer model by combining hand-coded and fully interpretable model components. We further propose a method that uses gradient descent to compress the compiled models to make them more efficient and realistic.
More specifically, in this report, we:
* Describe a modified version of the RASP programming language better suited for being compiled to model weights (Section 3.2) and discuss some limitations of the RASP programming model.
* Introduce Tracr, a "compiler" for translating RASP programs into transformer model weights (Section 3.4). To describe Tracr, we also introduce craft, its intermediate representation for expressing linear algebra operations using named basis directions (Section 3.3).
* Showcase several transformer models obtained by using Tracr (Section 4).
* Propose an optimization procedure to "compress" the compiled models and make them more efficient and realistic (Section 5). We analyse models compressed this way, demonstrating superposition (Elhage et al., 2022).
* Discuss potential applications and limitations of Tracr and how compiled models can help to accelerate interpretability research (Section 6).
* Provide an open-source implementation of Tracr ([https://github.com/deepmind/tracr](https://github.com/deepmind/tracr)).
## 2 Background
Before describing Tracr, let us recap the transformer architecture and the RASP programming language.
### Transformer Models
A transformer model consists of alternating _multi-headed attention_ (MHA) and _multi-layer perceptron_ (MLP) layers with residual connections.
Multi-headed attention (Vaswani et al., 2017) computes attention maps on sequences of length \(N\). A single attention head \(i\) first computes an attention pattern
\[A^{i}=\text{softmax}\left((xW_{Q}^{i})(xW_{K}^{i})^{T}/\sqrt{d_{k}}\right)\in \mathbb{R}^{N\times N}\]
for some input \(x\in\mathbb{R}^{N\times d}\), where \(W_{Q}^{i},W_{K}^{i}\in\mathbb{R}^{d\times d_{k}}\) are learnable parameters. Usually, we call the entries of \((xW_{K}^{i})\) _keys_, and the entries of \((xW_{Q}^{i})\) _queries_. _Multi-headed_ attention combines \(H\) attention heads by computing
\[\text{MHA}(x)=\text{Concat}\left[A^{1}(xW_{V}^{1}),\dots,A^{H}(xW_{V}^{H}) \right]W_{O}\]
where \(W_{V}^{i}\in\mathbb{R}^{d\times d_{v}}\) and \(W_{O}\in\mathbb{R}^{Hd_{v}\times d}\) are another set of learnable parameters. We commonly call the entries of \((xW_{V}^{i})\)_values_.
The MLP layers in transformer models compute \(\text{MLP}(x)=\sigma(xW_{1})W_{2}\) where \(W_{1}\in\mathbb{R}^{d\times h}\), \(W_{2}\in\mathbb{R}^{h\times d}\) are learnable weights, and \(\sigma\) is a non-linear function, often the Gaussian Error Linear Unit (GeLU; Hendrycks and Gimpel, 2016). For simplicity we use the Rectified Linear Unit (ReLU; Agarap, 2018).
In this paper, we focus on decoder-only transformers with the popular GPT architecture (Radford et al., 2018), which consists of alternating blocks of MHA, MLP, and layer normalization (Ba et al., 2016). The input to the model is the sum of a learned embedding of a sequence of input tokens and a positional embedding. The model is trained to predict the next token using gradient descent.
### Transformer Circuits
We adopt the _circuits_ view of transformers, introduced by Elhage et al. (2021). This view (1) focuses on the transformer being a residual stream architecture and (2) introduces an alternative parameterisation for attention operations. Both make it easier to reason about the computation done by transformers and will help us when assembling transformers manually.
**The residual stream view.** Transformers have residual connections at each attention and MLP layer. Elhage et al. (2021) consider the residual connections a core feature of the architecture and describe
Figure 2: An example RASP program (left) that computes the fraction of previous “x” tokens at each position of the input. Tracr compiles this program to a transformer model. We show the full residual stream of the compiled model at each layer for the input sequence “xxx” (right). Attn 1 is a no-op, MLP 1 computes the indicator variable is_x, Attn 2 implements the select-aggregate operation to compute frac_prevs, and MLP 2 is a no-op again. Section 4 discusses this and other examples in more detail.
the model in terms of a _residual stream_ that each layer reads from and writes to in sequence. The residual stream acts as a type of memory that earlier layers can use to pass information to later layers.
**Parameterising attention as \(W_{QK}\) and \(W_{OV}\).** Following Elhage et al. (2021), we parameterise an attention head by two (low-rank) matrices \(W_{QK}{}^{i}=W_{Q}^{i}(W_{K}^{i})^{T}/\sqrt{d_{k}}\in\mathbb{R}^{d\times d}\) and \(W_{OV}{}^{i}=W_{V}^{i}W_{O}^{i}\in\mathbb{R}^{d\times d}\) where we split \(W_{O}\) into different heads, such that \(W_{O}=[W_{O}^{1},\ldots,W_{O}^{H}]\), where each \(W_{O}^{i}\in\mathbb{R}^{d_{v}\times d}\). We can then write MHA as
\[A^{i}=\text{softmax}\left(xW_{QK}{}^{i}x^{T}\right)\qquad\qquad\text{MHA}(x)= \sum_{i=1}^{H}A^{i}xW_{OV}{}^{i}\]
Importantly, we can think of MHA as summing over the outputs of \(H\) independent attention heads, each parameterised by low-rank matrices \(W_{QK}\) and \(W_{OV}\). \(W_{QK}\) acts as a bilinear operator reading from the residual stream, and \(W_{OV}\) is a linear operator both reading from and writing to the residual stream. The softmax is the only nonlinearity in an attention head.
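The two parameterisations are algebraically equivalent, which is easy to verify numerically. The following NumPy sketch checks that summing per-head \(A^{i}xW_{OV}{}^{i}\) terms reproduces the standard concatenate-and-project formulation; the dimensions and random weights are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, dk, dv, H = 5, 16, 8, 8, 2      # sequence length, model dim, head dims, number of heads

x = rng.normal(size=(N, d))
Wq = [rng.normal(size=(d, dk)) for _ in range(H)]
Wk = [rng.normal(size=(d, dk)) for _ in range(H)]
Wv = [rng.normal(size=(d, dv)) for _ in range(H)]
Wo = rng.normal(size=(H * dv, d))
Wo_heads = np.split(Wo, H, axis=0)     # per-head slices of W_O

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Standard parameterisation: concatenate head outputs, then project with W_O.
A = [softmax((x @ Wq[i]) @ (x @ Wk[i]).T / np.sqrt(dk)) for i in range(H)]
mha_standard = np.concatenate([A[i] @ (x @ Wv[i]) for i in range(H)], axis=-1) @ Wo

# Circuits parameterisation: sum over heads of A_i x W_OV_i with low-rank W_QK, W_OV.
W_qk = [Wq[i] @ Wk[i].T / np.sqrt(dk) for i in range(H)]
W_ov = [Wv[i] @ Wo_heads[i] for i in range(H)]
mha_circuits = sum(softmax(x @ W_qk[i] @ x.T) @ x @ W_ov[i] for i in range(H))

assert np.allclose(mha_standard, mha_circuits)
```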
### The RASP Programming Language
We build on the _Restricted Access Sequence Processing Language_ (RASP), a domain-specific language for expressing transformer computations. Weiss et al. (2021) propose RASP as a computational model to describe transformers and provide an interpreter for RASP code. We are primarily interested in compiling actual transformer models. In this section, we review the main features of RASP; for a more detailed description, refer to Weiss et al. (2021).
A RASP program can be seen as a computational graph, with each node taking on a particular value when evaluating the entire graph on a given input token sequence. We usually refer to programs by the node at the tip of the graph, with the nodes it depends on left implicit. There are two basic node types, _sequence operations_ and _selectors_, and two types of RASP operations, _elementwise operations_ and _select-aggregate operations_.
**Sequence operations.** A sequence operation (s-op) represents sequences of values during evaluation. tokens and indices are built-in primitive s-ops that return a sequence of input tokens or their indices, respectively. For example: tokens("hello") = \([\text{h},\text{e},\text{l},\text{l},\text{o}]\), and indices("hello") = \([0,1,2,3,4]\). S-ops roughly correspond to the state of the residual stream in transformers.
**Elementwise operations.** RASP allows arbitrary elementwise operations on s-ops. For example, we can compute (3*indices)("hello") = \([0,3,6,9,12]\). Elementwise operations roughly correspond to MLP layers in transformers.
**Select-aggregate operations.** To move information between token positions, RASP provides _select-aggregate_ operations which roughly correspond to attention in transformers. A _selector_ has a graph dependency on two s-ops and evaluates on inputs of length \(N\) to a binary matrix of size \(N\times N\). To create a selector, the select operation takes two s-ops and a boolean predicate \(p(x,y)\). For example:
\[\text{select(indices,}[1,0,2],<)("\text{abc"})=\begin{bmatrix}1&0&0\\ 0&0&0\\ 1&1&0\end{bmatrix}.\]
Here, \(p(x,y)=x<y\), where \(x\) comes from indices, and \(y\) comes from the constant s-op \([1,0,2]\).
The aggregate operation takes as input a selector and an s-op, and produces an s-op that averages
the value of the s-op weighted by the selection matrix. For example:
\[\text{aggregate}\left(\begin{bmatrix}1&0&0\\ 0&0&0\\ 1&1&0\end{bmatrix},\;[10,20,30]\right)=[10,0,15].\]
A selector roughly corresponds to an attention pattern in a transformer. Together a select-aggregate operation roughly corresponds to an attention head in transformers.
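As a concrete illustration of these semantics, the following plain-Python sketch implements select and aggregate directly on lists and reproduces the two examples above. It is a reference interpretation of the operations, not Tracr's implementation.

```python
def select(keys, queries, predicate):
    """Evaluate a RASP selector: entry [i][j] is predicate(key_j, query_i)."""
    return [[1 if predicate(k, q) else 0 for k in keys] for q in queries]

def aggregate(selector, sop):
    """Average the s-op values selected in each row (0 if a row selects nothing)."""
    out = []
    for row in selector:
        selected = [v for s, v in zip(row, sop) if s]
        out.append(sum(selected) / len(selected) if selected else 0)
    return out

indices = [0, 1, 2]
sel = select(indices, [1, 0, 2], lambda x, y: x < y)
print(sel)                           # [[1, 0, 0], [0, 0, 0], [1, 1, 0]]
print(aggregate(sel, [10, 20, 30]))  # [10.0, 0, 15.0]
```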
## 3 Tracr: A Transformer Compiler for RASP
To introduce Tracr, we first describe how RASP maps to the transformer architecture (Section 3.1) and propose a few modifications to RASP that make this mapping more straightforward (Section 3.2). Next, we introduce craft, our "assembly language" for transformer models (Section 3.3). Finally, we describe how Tracr translates RASP programs to transformer weights (Section 3.4).
Appendix A contains some more technical details, and we provide a full open-source implementation of Tracr at [https://github.com/deepmind/tracr](https://github.com/deepmind/tracr).
### Mapping RASP to Transformers
RASP provides a computational model of transformers. For the most part, we can map RASP operations directly to the components of a transformer model.
**Embeddings.** The built-in s-ops tokens and indices correspond to a transformer's token and position embeddings. For example, we can embed the tokens and positions as categorical variables in orthogonal subspaces of the embedding space.
**MLP layers.** Any elementwise operation in RASP can be approximately computed by an MLP layer simply because MLPs can approximate any function with accuracy depending on the width and depth of the MLP (Hornik et al., 1989).
**Attention layers.** RASP's select-aggregate operations map to the attention layers in transformer models. The post-softmax attention pattern needs to match the selection matrix for all inputs to implement a given selector. So, given a large enough key/query-dimension, an attention head can implement an arbitrary binary attention pattern using its \(W_{QK}\) matrix. The \(W_{OV}\) matrix of the attention head can then implement the aggregate operation.
### Modifications to RASP
While we can map RASP operations to transformers, we need to make a few modifications to the RASP language to allow translating it to model weights.
**Disallow arbitrary selector combinations.** RASP allows selectors to be combined using boolean operations; however, there is no natural analogue for this in real transformers. Combining selectors with different input variables is particularly problematic. For example, in RASP we can define a selector
select(a, b, ==) and select(c, d, ==)
using four s-ops a,b,c, and d. However, a real attention pattern only has two input vector spaces. There is no straightforward and efficient construction for representing arbitrary compositions of selectors (Appendix C). Because of this, we restrict RASP to selectors with only two input variables. In practice, this limitation turns out not to be severe. In particular, we were able to implement programs to solve all tasks described by Weiss et al. (2021).
**Encoding annotations.** A compiled model needs to pass information between layers. In a transformer, it is natural to do this via the residual stream. However, we have to decide how to represent information in the residual stream. For simplicity, we only use two encodings: categorical and numerical. We encode categorical variables as one-hot vectors in a dedicated subspace of the residual stream. We encode numerical variables as the magnitude of a dedicated one-dimensional subspace of the residual stream. Categorical encoding is generally less efficient when numerical encoding is possible, but some aggregate operations only work with one type of encoding. For instance, aggregate can compute a mean across token positions, which is not natural with attention on a one-hot encoded subspace but straightforward with a numerical one. However, numerically-encoded data is generally harder to work with, requiring a decoding step.
We require each s-op to be either categorical or numerical and augment RASP with the ability to annotate s-ops with the desired encoding. By default, we assume s-ops are categorical.
**Beginning of sequence token.** Transformers often assume any input sequence to start with a dedicated "beginning of sequence" token (BOS). We make the BOS token mandatory in RASP because it is crucial when implementing arbitrary attention patterns. In particular, RASP allows selectors that can produce all-zero rows; this is convenient when programming in RASP, but the softmax makes this behaviour impossible in a real attention head. In these situations, we use the BOS token as a "default" position to attend to: it is attended to iff no other token is. This allows the non-BOS part of the sequence to emulate the intended RASP behaviour. In our case, this choice comes from practical considerations; but, interestingly, real models sometimes show similar behaviour (e.g., see Elhage et al., 2021).
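A small numerical sketch makes both points concrete: with one-hot encoded query and key values, a scaled \(W_{QK}\) realises the selector from the example in Section 2.3 as a near-hard attention pattern, and a BOS position with an intermediate score catches the rows in which the selector picks nothing. The scale, the BOS score, and the tiny vocabulary are illustrative choices rather than the constants Tracr uses.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

C = 100.0                           # large scale -> near-hard attention
key_vals = np.array([0, 1, 2])      # the s-op "indices" on a length-3 input
query_vals = np.array([1, 0, 2])    # the constant s-op [1, 0, 2]
n_vals = 3

# W_QK over one-hot value subspaces: entry [a, b] scores query value a against key value b,
# here for the predicate p(x, y) = x < y with x the key value and y the query value.
W_qk = C * np.array([[1.0 if b < a else 0.0 for b in range(n_vals)]
                     for a in range(n_vals)])

Q = np.eye(n_vals)[query_vals]      # one-hot queries, shape (3, 3)
K = np.eye(n_vals)[key_vals]        # one-hot keys

# Prepend a BOS key with an intermediate score, so empty rows attend to BOS only.
logits = Q @ W_qk @ K.T
logits = np.concatenate([np.full((3, 1), C / 2), logits], axis=1)
A = softmax(logits)
print(np.round(A, 2))
# Rows with selected keys put (nearly) all mass on them, split evenly when several are
# selected (matching aggregate's averaging); the empty middle row attends only to BOS.
```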
### craft: An Assembly Language for Transformers
If RASP is the high-level language we compile, craft is our "assembly language", offering slightly more abstraction than operating on pure weight matrices.
craft represents vector spaces with labelled basis dimensions and operations on them. This allows us to define projections or other linear operations in terms of basis direction labels. Importantly, craft abstracts away the need to keep track of padding in weight matrices.
We implement a transformer in craft that sticks closely to the transformer circuits view provided by Elhage et al. (2021). In particular, the residual stream is a vector space \(R\) with a basis. An attention head can be defined using a bilinear operator \(W_{QK}:Q\times K\rightarrow\mathbb{R}\) and a linear operator \(W_{OV}:V\to O\), where \(Q,K,V,O\subset R\) are the vector spaces that reuse the same basis. craft then handles the projection of these operators up to \(R\times R\rightarrow\mathbb{R}\) and \(R\to R\), which corresponds to adding the requisite padding.
In practice, we first independently translate each RASP computation into a craft component, then assign components to layers, and finally construct the residual stream space \(R\), ensuring that all information needed at a given layer in the model is embedded by previous layers.
Moreover, craft models are independent of concrete transformer implementations. A craft model can be translated into weights of any standard GPT-like transformer implementation.
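To give a flavour of what working with named basis directions looks like, the toy sketch below tracks basis labels for a small residual stream and builds the padding (embedding) matrix for a subspace automatically. It only mimics the idea behind craft; it is not the actual craft API.

```python
import numpy as np

class NamedVectorSpace:
    """A vector space whose dimensions are addressed by string labels."""
    def __init__(self, labels):
        self.labels = list(labels)
        self.index = {label: i for i, label in enumerate(self.labels)}

    def vector(self, **components):
        v = np.zeros(len(self.labels))
        for label, value in components.items():
            v[self.index[label]] = value
        return v

    def embed_matrix(self, subspace):
        """Matrix mapping a subspace (by label) into this space, adding zero padding."""
        E = np.zeros((len(subspace.labels), len(self.labels)))
        for i, label in enumerate(subspace.labels):
            E[i, self.index[label]] = 1.0
        return E

residual = NamedVectorSpace(["tokens:x", "tokens:y", "is_x", "frac_prevs"])
mlp_out = NamedVectorSpace(["is_x"])
E = mlp_out.embed_matrix(residual)          # 1 x 4, writes is_x into the residual stream
# Residual stream with tokens:x set and is_x written into its own direction.
print(residual.vector(**{"tokens:x": 1.0}) + np.array([1.0]) @ E)
```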
Figure 3: Tracr translates RASP to craft and then to model weights, analogous to how programming languages are first translated to assembly then to machine code.
### Compiler Overview
We are now ready to describe Tracr in detail. Tracr comes with an implementation of RASP embedded in Python. This allows us to write RASP programs in Python and makes it easier to provide annotations, such as variable encodings. In Tracr, a RASP program is a data structure that is incrementally constructed by passing in dependencies to each operation. We also do a few basic simplifications of RASP programs at this stage. For example, we combine consecutive elementwise operations into a single s-op.
Tracr translates RASP programs to transformer weights in six steps:
1. Construct a computational graph.
2. Infer s-op input and output values.
3. Independently translate s-ops to craft components.
4. Assign components to layers.
5. Construct craft model.
6. Assemble transformer weights.
Let us go through these step by step. Figure 4 gives a schematic overview using an example program.
**1. Construct a computational graph.** First, we trace the whole program to create a directed graph representing the computation. The graph has source nodes representing tokens and indices and a sink node for the output s-op.
**2. Infer s-op values.** For each s-op, we need to decide how to embed it in the residual stream. To use categorical encodings, we need to know which values an s-op can take. All nodes have a finite set of output values because computations are deterministic, and we have a finite input vocabulary and context size. Therefore, in the second step, we traverse the graph and annotate each node with its possible outputs. This annotation uses simple heuristics that ensure we find a superset of the values an s-op will take, though, sometimes, an output set can contain values that the s-op never takes in practice.
**3. Independently translate s-ops.** Next, we consider each node in the computational graph independently and translate it into a craft component. Elementwise operations become MLP blocks, and select-aggregate operations become attention blocks. We use a library of manually engineered MLP and attention blocks to approximate arbitrary functions for numerical and categorical inputs and outputs. MLPs with categorical inputs and outputs function as lookup tables. MLPs with numerical inputs and outputs use an explicit construction based on the universal function approximation theorem. For attention layers, we translate a selector into the \(W_{QK}\) operator and the corresponding aggregate operation into the \(W_{OV}\) operator. We only support attention with categorical inputs. For more details on the MLP and attention blocks, see Appendix A.

Figure 4: Schematic overview of how Tracr compiles the frac_prevs program from Figure 2 with an input vocabulary (“x”, “y”) and context size 3. (a) shows the computational graph with value annotations after step 2 of the compilation. (b) shows how is_x and frac_prevs are translated to model components independently in step 3. (c) shows the assembled model, which has two no-op components because model blocks always need to have one attention and one MLP layer.
**4. Assign components to layers.** To construct a transformer model, we need to allocate all craft components in the computational graph to layers. Ideally, we want to find the smallest model to perform the desired computation. We can generally formulate this as a combinatorial optimization problem with several constraints: the transformer architecture has alternating attention and MLP layers, and all computations that depend on each other need to be in the correct order. For scope reasons, we solve this with a heuristic. First, we compute the longest path from the input to a given node. This path length is an upper bound for the layer number to which we can allocate the node. Then we apply additional heuristics to combine layers with blocks that we can compute in parallel. This approach returns a correct but sometimes suboptimal layer allocation.
**5. Construct a craft model.** We construct the residual stream space as the direct sum of all model components' input and output spaces. In other words, we embed each s-op in its own orthogonal subspace, which is reserved for its sole use throughout the entire network. Now, we can traverse the computational graph in the order determined by the layer allocation and stack the components to obtain a full transformer represented in craft.
**6. Assemble transformer weights.** Finally, we translate the craft representation of the model into concrete model weights. First, we combine parallel MLP layers into a single layer and parallel attention heads into a single layer. In attention layers, we then split up the \(W_{QK}\) and \(W_{OV}\) matrices into \(W_{q}\), \(W_{k}\), \(W_{o}\), \(W_{v}\) weight matrices. Finally, we adjust the shapes of all weights and connect them to our transformer architecture. We can then infer the model configuration (depth, layer width, residual stream size, etc.) to fit the elements we have created.
We base our transformer implementation on the example decoder-only transformer from Haiku (Hennigan et al., 2020), notably removing the layer norms. Extending Tracr to support any other transformer implementation is straightforward by reimplementing only step 6.
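For reference, compiling and running a program with the open-source implementation looks roughly like the sketch below, here for the frac_prevs program from Figure 2. The module paths, function names, and argument names follow the public repository but should be treated as assumptions about the API rather than an authoritative reference.

```python
from tracr.rasp import rasp
from tracr.compiler import compiling

def make_frac_prevs(bools):
    """Fraction of positions up to and including each one at which `bools` is 1 (cf. Figure 2)."""
    prevs = rasp.Select(rasp.indices, rasp.indices, rasp.Comparison.LEQ)
    return rasp.numerical(rasp.Aggregate(prevs, bools, default=0))

is_x = rasp.numerical(rasp.Map(lambda t: 1 if t == "x" else 0, rasp.tokens))
program = make_frac_prevs(is_x)

model = compiling.compile_rasp_to_model(
    program,
    vocab={"x", "y"},
    max_seq_len=5,
    compiler_bos="BOS",
)
out = model.apply(["BOS", "x", "y", "x"])
print(out.decoded)  # fractions of "x" among the tokens so far (the BOS position may differ)
```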
## 4 Exploring Compiled Transformers
Having described Tracr, we are now ready to start compiling models. In this section, we walk through two example programs to illustrate how the compiled models work. Appendix D contains more examples. Overall, we were able to compile RASP programs for all the tasks described in Weiss et al. (2021), though we had to modify a few of the programs to only use features supported by Tracr.
### Example 1: Counting tokens
Figure 2 shows our primary running example, the frac_prevs program, that computes the fraction of previous "x" tokens. It uses one MLP layer and one attention head. However, because our model architecture always starts with an attention layer, the compiled model has four layers, with the first and last layers being no-ops.
The frac_prevs model has a 14 dimensional residual stream, but it uses 12 out of these for the input embeddings. The computation uses two numerical variables which correspond to the remaining two dimensions. The input embeddings have a few special dimensions. tokens:bos is the beginning of sequence token which we need to implement arbitrary attention patterns (cf. Section 3.2), and one is an input dimension that is fixed to 1. The model uses this dimension as a constant, e.g., to add a bias in MLP layers.
### Example 2: Sorting
As a second example, let us consider sorting a sequence of numbers. Figure 5 shows a sort_unique program that sorts a sequence of unique tokens.
The program computes the target position of each token by using the selector_width primitive in RASP, which computes the number of elements with the value 1 in each row of a selector. selector_width can be implemented in terms of other RASP operations (Weiss et al., 2021), but not using our variant of RASP, so we treat it as a primitive that compiles directly to an attention and MLP layer (here Attn 1 and MLP 1). See Appendix A for more details.
Weiss et al. (2021) propose a sort program that can handle duplicates (cf. their Figure 13). However, that implementation uses a selector
smaller = select(tokens, tokens, <) or (select(key, key, ==) and select(indices, indices, <)) to treat duplicates, which is not supported by Tracr (see Section 3.2). In Appendix D, we provide an alternative implementation of sort that handles duplicates by adding a small multiple of indices to the keys and then applying sort_unique.
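A sketch of the sort_unique program in the embedded RASP dialect is shown below: the target position of each token is the number of strictly smaller keys, and a second select-aggregate moves each token to that position. As before, the exact function and argument names are assumptions based on the open-source library.

```python
from tracr.rasp import rasp
from tracr.compiler import compiling

def make_sort_unique(vals, keys):
    """Sort `vals` by `keys`, assuming all keys are distinct (cf. Figure 5)."""
    smaller = rasp.Select(keys, keys, rasp.Comparison.LT)
    target_pos = rasp.SelectorWidth(smaller)            # count of strictly smaller keys
    sel_new = rasp.Select(target_pos, rasp.indices, rasp.Comparison.EQ)
    return rasp.Aggregate(sel_new, vals)

model = compiling.compile_rasp_to_model(
    make_sort_unique(rasp.tokens, rasp.tokens),
    vocab={1, 2, 3, 4},
    max_seq_len=5,
    compiler_bos="BOS",
)
print(model.apply(["BOS", 3, 1, 4, 2]).decoded)         # expected: sorted tokens after BOS
```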
### More examples
Tracr can compile a wide range of RASP programs. In Appendix D, we discuss a few more examples, leading up to checking balanced parentheses (_Dyck-n_). Our open-source Tracr implementation contains a library of even more example programs to compile.
## 5 Compressing Compiled Transformers
Tracr models can be sparse and inefficient because they reserve an orthogonal subspace of the residual stream for each s-op. In this section, we propose an experimental approach for "compressing" the resulting models and making them more efficient. This feature is presented as preliminary work and is not yet provided in the Tracr library. Here, we present two case studies of compressing compiled models.
Figure 5: RASP program that sorts a sequence of numbers without duplicates. Attn 1 and MLP 1 implement the selector_width primitive (cf. Appendix A) which the program uses to compute the target position for each token. Attn 2 moves the tokens to the desired position, and MLP 2 is a no-op.
In addition to making Tracr models more efficient, the compressed models allow us to study how real neural networks might compress \(D\) features into a representation space with fewer than \(D\) dimensions. This phenomenon is called _superposition_ (Elhage et al., 2022); however, to our knowledge, it has not been studied in models deeper than two layers.
### Gradient Descent Based Compression
We use a single linear projection \(W\in\mathbb{R}^{D\times d}\) to compress the disentangled residual stream with size \(D\) to a smaller space with dimension \(d<D\). We modify the model to apply \(W^{T}\) whenever it reads from and \(W\) whenever it writes to the residual stream (see Figure 6). We freeze the weights of all layers and train only \(W\) using stochastic gradient descent (SGD).
Since vanilla Tracr models are sparse and have orthogonal features, this process can be viewed as learning the projection from a "hypothetical disentangled model" to the "observed model" described by Elhage et al. (2022).
We want the compressed model to minimise loss under the constraint that it implements the same computation as the original model. To achieve this, we train \(W\) to minimise \(\mathbb{E}_{X}[\mathcal{L}(W,x)]\), where
\[\mathcal{L}(W,x) =\mathcal{L}_{\text{out}}(W,x)+\mathcal{L}_{\text{layer}}(W,x)\] \[\mathcal{L}_{\text{out}} =\text{loss}(f(x),\hat{f}_{W}(x))\] \[\mathcal{L}_{\text{layer}} =\sum_{\text{layer }i}(h_{i}(x)-\hat{h}_{W,i}(x))^{2}\]
where \(f(x)\) is the output of the compiled model for input \(x\), \(\hat{f}_{W}(x)\) is the output of the compressed model, and \(h_{i}(x)\) and \(\hat{h}_{W,i}(x)\) are the output vectors at layer \(i\) of the respective models.
For categorical outputs, \(\mathcal{L}_{\text{out}}\) is the softmax cross-entropy loss, whereas, for numerical outputs, it is the mean-squared error. \(\mathcal{L}_{\text{layer}}\) is a regularization term that incentivises the compressed model to match the per-layer outputs of the original model. To minimise this loss, the compressed model will have to approximate the computation of the original model but with a smaller residual stream.
We could set up this compression in other ways. For example, we could use a different projection at each layer, use different matrices for embedding and unembedding, or modify weights other than \(W\) when compressing the model. These design choices come with a tradeoff between making the model more expressible and potentially more realistic and enforcing the ground truth computation. For simplicity, we use a shared \(W\) for embedding/unembedding at every layer, and we already observe a rich structure in models compressed with this procedure.
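The sketch below shows the overall training setup in JAX: a frozen stack of layers stands in for the compiled model, a single matrix \(W\) is used to write to (and, transposed, read from) a \(d\)-dimensional residual stream, and only \(W\) receives gradients. The stand-in layers, the row-vector read/write convention, and the equal weighting of the two loss terms are illustrative assumptions rather than the exact setup used in our experiments.

```python
import jax
import jax.numpy as jnp

D, d, n_layers = 14, 6, 4
key = jax.random.PRNGKey(0)
frozen = [jax.random.normal(k, (D, D)) / jnp.sqrt(D)
          for k in jax.random.split(key, n_layers)]

def compiled_forward(x):
    """Frozen model on the full D-dim residual stream; returns per-layer outputs."""
    outs = []
    for M in frozen:
        x = x + jax.nn.relu(x @ M)       # stand-in for an attention/MLP block
        outs.append(x)
    return outs

def compressed_forward(x, W):
    """Same frozen layers, but the residual stream lives in d dims via W."""
    z = x @ W                             # embed the input (write with W)
    outs = []
    for M in frozen:
        h = z @ W.T                       # read with W^T back to D dims
        z = z + jax.nn.relu(h @ M) @ W    # layer update, written back with W
        outs.append(z @ W.T)              # decode for comparison with the frozen model
    return outs

def loss(W, x):
    ref, cmp = compiled_forward(x), compressed_forward(x, W)
    l_out = jnp.mean((ref[-1] - cmp[-1]) ** 2)                      # numerical-output case
    l_layer = sum(jnp.mean((r - c) ** 2) for r, c in zip(ref, cmp))  # per-layer regulariser
    return l_out + l_layer

W = jax.random.normal(jax.random.PRNGKey(1), (D, d)) / jnp.sqrt(D)
x = jax.random.normal(jax.random.PRNGKey(2), (8, D))    # a batch of residual-stream inputs
grad = jax.grad(loss)(W, x)
W = W - 1e-2 * grad                                     # one SGD step on W only
```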
Figure 6: Training setup for compressing a compiled transformer model. At each layer, we use the same matrix \(W\in\mathbb{R}^{D\times d}\) to embed the disentangled \(D\)-dimensional residual stream to \(d\leq D\) dimensions. We freeze the layer weights and only train \(W\) to compress the model.
Appendix B contains more details on the training setup, hyperparameters, and resources used.
### What does the compression learn?
As our first case study, Figure 7 shows the example model from Figure 2, which computes the fraction of "x" tokens. By learning an embedding matrix \(W\), we can reduce the residual dimension from \(D=14\) to \(d=6\) without hurting performance. Once we reduce \(d\) further, the model's performance starts to suffer.
To understand the compression better, we can study how \(W\) embeds the original \(D\) features in \(d<D\) dimensions. We can only do this because we started with a compiled model with known features. Figure 8 shows \(W^{T}W\) for compressing the model to \(d=8\). We can compare this to using principal component analysis (PCA) to compress the model. To interpret the results, we need to use our knowledge of the algorithm the model implements. The input tokens:x and the variables is_x and frac_prevs are crucial for computing the fraction of tokens that are "x", and we find that these variables mostly get separate dimensions in the compressed residual stream. The other input tokens stored in tokens:a, tokens:b, tokens:c are not necessary for solving the task, and so they are discarded in the compressed model. Other variables, such as the indices embeddings, are stored in non-orthogonal dimensions in the compressed space. This is consistent with existing findings on superposition as the indices embeddings are sparse and do not occur together (Elhage et al., 2022).
However, some of our results go beyond previous work on superposition. For example, Tracr models often have multiple variables that depend on each other and encode shared information. In our running example is_x is an indicator variable that essentially contains the same information as the
input dimension tokens:x.1 In Figure 8, we see that the embeddings of is_x and tokens:x share part of the embedding space. Intuitively, this occurs because the variables encode similar information.
Footnote 1: They are not exactly the same because is_x is only populated in a later layer. But, if is_x = 1, then tokens:x = 1.
In preliminary experiments, we found that shared information between variables seems to influence how superposition occurs. For example, varying the data distribution to have two variables share more or less information changes the correlation patterns between embedded features. Prior models of superposition do not explain this effect, and we leave fully understanding it for future work.
### Do the compressed models still implement the same computation?
Even if the compressed models successfully achieve a low loss, we need to check if they implement the same computation as the compiled models, or else we would no longer know the ground truth mechanisms the models implement. To this end, we evaluate the average cosine similarity between the output at each layer of the two models.
For the compressed frac_prevs model, the cosine similarity is close to 1, which implies that the compressed model is consistent with the compiled model (up to differences in norm).2
Footnote 2: In categorical tasks the compressed model is encouraged to output vectors with a large norm due to the output softmax. We found that this can sometimes lead to the norm of the outputs at intermediate layers also changing even though the cosine similarity is 1.
However, in other cases, the cosine similarity stays below 1 even as the compressed model gets close to 100% in accuracy. As an example, Figure 9 shows results from compressing the sort_unique model. Here, the compressed model achieves almost perfect accuracy on the task, but the average cosine similarity of the outputs at individual layers stays around 0.8. This suggests that the compressed model solves the tasks differently from the original compiled model.
By inspecting the models' outputs at each layer, we can attribute the error to the target_pos variable. In the Tracr model, target_pos is encoded categorically, with a dimension allocated per position. However, the compressed model only uses one of these dimensions. This suggests that the compressed model moves the tokens to the target position with a numerical encoding of the target position rather than a categorical encoding. During training, this reduces the output loss at the cost of increasing the layer output regulariser.

Figure 9: We compress the sort_unique program (Figure 5). The two plots on the right show that the compressed model achieves nearly perfect accuracy, but the layer outputs of the compressed model are different from the original compiled model. The left plot shows the average layer outputs of the compiled model, the compressed model, and the squared error between both. The source of the error is that the compressed model seems to learn to use a different (numerical) encoding for the target_pos variable.
This case shows that even in this fairly restrictive compression setup, the compressed model can learn a different computation to be more efficient. This is both encouraging and problematic: it is evidence that we can achieve meaningful compression with a simple approach; however, even in this restrictive setting, the compressed model is not guaranteed to be faithful to the original RASP program, undermining the value provided by the compiler as a source of ground truth.
Overall, using SGD on top of compiled models seems promising to make them more efficient and naturalistic. We hope that future work can make this training setup more robust and that we can ultimately fully integrate it in a future version of Tracr.
## 6 Discussion
We provide an open-source implementation of Tracr because we think it has many potential applications in interpretability research. In this section, we discuss applications we see for Tracr and compiled transformers more generally and reflect on the current limitations of Tracr and how they can be addressed.
### Applications of compiled models in interpretability research
Compilers like Tracr allow researchers to set up controlled experiments that test specific hypotheses about the computational structure of transformers. In this way, it acts as a laboratory for research in interpretability, enabling research that might otherwise be intractable.
**Test cases for interpretability tools.** Compiled models serve as a natural foundation for testing the faithfulness (Jacovi and Goldberg, 2020) of an explanation, and provide a way to falsify (Leavitt and Morcos, 2020) the explanations given by interpretability techniques. Ultimately, they could be used to build libraries of test cases for interpretability tools, which could in turn enable quantitative evaluation metrics. For example, Meng et al. (2022) propose a method to locate factual knowledge in transformers. Tracr could allow us to test what this or similar methods can locate in a range of models implementing different algorithms, contextualising its result in real models.
**Replacing model components.** Another way to evaluate our understanding of how a model works is to replace parts of the model with hand-coded components. For example, Nanda and Lieberum (2022) test their understanding of how a transformer implements modular addition by replacing components of the model with their own idealised implementation and find that this can _increase_ downstream performance, which is strong evidence that the proposed explanation is correct. While Tracr compiles an algorithm into a full transformer model, it could be adapted to only compile part of a model to replace part of a trained model. This could make it easier to evaluate our understanding of a large model.
**Understanding model phenomena and developing new techniques.** Beyond evaluation, compiled models can be used as a testbed for studying circuits-level phenomena and developing new approaches for interpreting transformer models. For example, in Section 5 we successfully induced superposition in compressed Tracr models. Future work could analyse superposition in Tracr models, extending previous work in toy models (Elhage et al., 2022; Scherlis et al., 2022). In particular, Tracr allows studying how the structure of computation implemented by a model affects which features will be
stored in superposition. One goal for this line of research could be to predict how a specific Tracr model will be compressed, which features will be stored in superposition and how. A complementary approach is to try reversing the superposition induced by a compression procedure, e.g., using ideas from compressed sensing and dictionary learning (Aharon et al., 2006; Donoho, 2006).
### Limitations of RASP and Tracr
RASP and Tracr are limited in terms of expressivity, efficiency and realism compared to real transformer models. Many of these limitations could be overcome in future versions of Tracr.
**Expressivity.** RASP is designed for algorithmic tasks that map an input sequence to a discrete output sequence. However, current language models usually map a sequence of input tokens to a probability distribution over the next token. Circuits in real models often consist of components that increase or decrease the probability of some tokens based on previous tokens (Wang et al., 2022). RASP, and hence Tracr, cannot model such "probabilistic" computation, but could potentially be extended to support it. RASP only uses binary attention patterns, which inherently limits the range of algorithms it can implement (Merrill et al., 2022). A way to extend RASP to support numeric attention patterns is discussed in Weiss et al. (2021).
**Efficiency.** Tracr models store all variables in orthogonal subspaces of the residual stream. Even if a variable is only used in part of the computation, Tracr reserves a subspace of the residual stream for it in all layers of the model. Real models use a more compressed representation and likely reuse dimensions for multiple features. Improved versions of the compression procedure discussed in Section 5 could address this limitation, as would using a constraint optimisation solver instead of a heuristic for layer allocation.
**Realism.** Tracr constructs layers from hand-coded parameter matrices. This is both unrealistic and inefficient, but could be addressed by learning the layers in isolation, then assembling them into a full model manually. Similarly, instead of manually splitting the \(W_{QK}\) and \(W_{OV}\) matrices, matrix factorisation could be used to get more efficient solutions. Also, Tracr models align their features with the computational basis. This is unrealistic, and makes the resulting models easy to interpret just by inspecting the residual stream activations. Rotating the basis of the compiled model is a straightforward way to address this if obfuscation is needed; compression would be an even more comprehensive approach.
While all of these issues could be overcome in a more sophisticated compiler, there are fundamental limitations on the role compiled models can play. Compiled models are an intermediate step between very simple toy models and real learned models. They help us understand ideas and methods, but results in compiled models do not necessarily generalise to real models. Compared with real models, compiled models will always be simpler. For example, we will likely never compile full-fledged language models. Compiled models will be more likely to be intepretable (e.g., the axis-aligned orthogonal residual stream bases in Tracr), and more likely to fit into existing paradigms for thinking about transformers. When using them to evaluate interpretability tools, we should be careful to make sure that the tools do not exploit this, treating such evaluations as a minimum bar rather than a full validation of a technique. Conversely, some methods might conceivably rely on features present in real models but not in compiled models.
## 7 Conclusion
In this work, we proposed manually constructing neural network weights and using them to develop and evaluate new interpretability tools. To this end, we developed Tracr, a tool for compiling
human-readable code to the weights of a transformer model.
We outlined our vision for the use of compiled models in interpretability, and there may be other potential applications of Tracr within and beyond interpretability research. We are looking forward to seeing other researchers use it, and we hope studying compiled models will help to increase our understanding of neural networks.
## Acknowledgements
We thank Avraham Ruderman, Jackie Kay, Michela Paganini, Tom Lieberum, and Geoffrey Irving for valuable discussions, Victoria Krakovna and Marlene Staib for collaborating on early experiments with compiling RASP, and Chris Olah and Tristan Hume for feedback on an early draft of this paper. We thank the LessWrong user "Gurkenglas" for pointing out a mistake in an earlier draft of Appendix C.
## Author Contributions
VM proposed the initial idea for Tracr and wrote our RASP implementation. DL, VM, JK and MR designed and developed Tracr. DL designed, implemented, and ran the compression experiments in Section 5. MR wrote documentation and led the open-sourcing process. JK derived the theoretical results in Appendix C. TM and VM advised on research direction. DL and VM wrote the manuscript. DL led the project.
|
2305.17191 | MT-SLVR: Multi-Task Self-Supervised Learning for Transformation
In(Variant) Representations | Contrastive self-supervised learning has gained attention for its ability to
create high-quality representations from large unlabelled data sets. A key
reason that these powerful features enable data-efficient learning of
downstream tasks is that they provide augmentation invariance, which is often a
useful inductive bias. However, the amount and type of invariances preferred is
not known apriori, and varies across different downstream tasks. We therefore
propose a multi-task self-supervised framework (MT-SLVR) that learns both
variant and invariant features in a parameter-efficient manner. Our multi-task
representation provides a strong and flexible feature that benefits diverse
downstream tasks. We evaluate our approach on few-shot classification tasks
drawn from a variety of audio domains and demonstrate improved classification
performance on all of them | Calum Heggan, Tim Hospedales, Sam Budgett, Mehrdad Yaghoobi | 2023-05-29T09:10:50Z | http://arxiv.org/abs/2305.17191v2 | # MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations
###### Abstract
Contrastive self-supervised learning has gained attention for its ability to create high-quality representations from large unlabelled data sets. A key reason that these powerful features enable data-efficient learning of downstream tasks is that they provide augmentation invariance, which is often a useful inductive bias. However, the amount and type of invariances preferred is not known apriori, and varies across different downstream tasks. We therefore propose a multi-task self-supervised framework (MT-SLVR) that learns both variant and invariant features in a parameter-efficient manner. Our multi-task representation provides a strong and flexible feature that benefits diverse downstream tasks. We evaluate our approach on few-shot classification tasks drawn from a variety of audio domains and demonstrate improved classification performance on all of them.
Calum Heggan\({}^{1}\), Tim Hospedales \({}^{2}\), Sam Budgett\({}^{3}\), Mehrdad Yaghoobi\({}^{1}\)\({}^{1}\) School Of Engineering, University of Edinburgh, Scotland,
\({}^{2}\) School Of Informatics, University of Edinburgh, Scotland,
\({}^{3}\) Thales UK RTI
[email protected], [email protected], [email protected], [email protected]
**Index Terms**: few-shot, multi-task, augmentation-invariance, speech classification
## 1 Introduction
Few-shot learning, which aims to learn with limited data, has become increasingly popular in response to the lack of large labelled datasets for many practical applications. Models trained using self-supervision (where a deep neural network (DNN) is trained with pseudo-labels that define pre-text tasks) have demonstrated strong success on few-shot learning tasks, with contrastive objectives among the most successful. Contrastive methods' efficacy is attributed to learning an inductive bias in the form of invariances to applied augmentations [1, 2]. For example, affine transformation invariance is typically useful for object category recognition, where pose is a nuisance factor [1]. However, the ideal type and degree of invariance is not known apriori, and varies across downstream tasks. So, contrastively trained invariant features do not provide a _one size fits all_ solution [2, 3, 4]. For example, a model learned to be pitch-shift invariant [5] would likely fail a task which relies on pitch sensitivity features. To learn a model which can successfully solve various downstream tasks, we require a feature representation with both invariant and transformation-sensitive properties.
We propose a parameter-efficient multi-task learning framework to address this limitation of existing contrastive learners. We simultaneously learn a contrastive objective (to learn augmentation invariances) and a transformation prediction objective (to learn augmentation sensitivity), thus providing a more flexible feature for downstream tasks. Our contributions include: 1) A novel multi-task learning framework; 2) A parameter-efficient solution to multi-task learning based on task-agnostic and task-specific features; 3) Evaluation of few-shot classification over 10 datasets, spanning audio and speech domains; 4) Analysis of learnt invariance strength and its relation to performance. Code can be found here.
## 2 Self-Supervision for Few-Shot Classification
A common goal of using self-supervision is to learn a powerful data representation without the need for large corpuses of labelled training data. This representation can then be used for other downstream tasks, where it can either be fine-tuned, using some labelled data from the target domain, or left as a static feature extractor. This type of approach is a particularly strong candidate for use in few-shot learning, where training a model from scratch for the task is difficult due to limited amount of labelled examples.
This use case of self-supervision is utilised in our work. In particular, we use pre-trained self-supervised models (used as static feature extractors) and a linear classifier in order to solve few-shot classification tasks.
Such few-shot problems can be formalised in terms of a support set \(\mathcal{S}\) containing a few training samples per class and a query set \(\mathcal{Q}\) containing test samples. These tasks are typically expressed as N-Way K-Shot tasks, with N being the number of classes and K being the number of examples per class. More formally, the task components look like:
\[\mathcal{S}=\{\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right),\ldots,\left( x_{\mathcal{M}},y_{\mathcal{M}}\right)\} \tag{1}\]
\[\mathcal{Q}=\{\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right),\ldots,\left( x_{\mathcal{L}},y_{\mathcal{L}}\right)\} \tag{2}\]
where each example \(\left(x,y\right)\) consists of an input \(\mathbf{x}\in\mathbb{R}^{D}\) and a class label \(\mathbf{y}\in\{1,\ldots,N\}\), with \(\mathcal{M}\) and \(\mathcal{L}\) being the total number of support and query examples respectively.
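Concretely, evaluation with a frozen encoder reduces to fitting a linear classifier on the support features and scoring it on the query features, as in the minimal sketch below. The random features stand in for the pre-trained backbone's outputs, and the use of logistic regression as the linear classifier is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

N, K, n_query, feat_dim = 5, 1, 15, 1000     # a 5-way 1-shot task
rng = np.random.default_rng(0)

support_feats = rng.normal(size=(N * K, feat_dim))        # f_theta(x) for the support set
support_labels = np.repeat(np.arange(N), K)
query_feats = rng.normal(size=(N * n_query, feat_dim))    # f_theta(x) for the query set
query_labels = np.repeat(np.arange(N), n_query)

clf = LogisticRegression(max_iter=1000).fit(support_feats, support_labels)
accuracy = clf.score(query_feats, query_labels)           # ~1/N here, since features are random
```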
## 3 Related work
**Self-Supervised Learning:** Since self-supervised learning is a large topic [6, 7, 8], we focus on relevant trends for brevity. One key trend is the success of methods which utilise augmentations for learning, including many contrastive methods [1, 9, 10, 11] as well as predictive ones [12]. These approaches learn invariances or sensitivity to applied augmentations, respectively. Audio-specialised contrastive variants include COLA [13], CLAR [5], and the work by Fonseca et al. [14]. In this work, we focus on SimCLR [1], SimSiam [9] and a custom transformation prediction framework. In the SimCLR/SimSiam methods, augmentation pipelines generate multiple 'views' of each data point and the DNN is trained to map them to a similar area in the feature space, allowing the model to learn an augmentation-invariant representation. SimCLR [1] and SimSiam [9] are distinct in a few ways; SimCLR uses implicit negative sampling and a temperature-scaled cross-entropy loss, while SimSiam only uses positive-pair contributions and optimizes for cosine similarity. Utilising the same augmentation pipelines as described above, transformation prediction algorithms instead try to predict how or if specific augmentations
have been applied to input samples [12]. Unlike contrastive learning, algorithms with this objective learn sensitivity to augmentations. Since multiple augmentations can be applied to each input, we implement a multi-label TP model, where each augmentation is predicted independently.
**Few-Shot Classification for Audio & Speech:** Currently, only a handful of works exist investigating the few-shot learning regime for acoustic data [15, 16, 17]. Within these, few-shot speech classification, especially over different types of speech (language, accent, emotion etc) is heavily underrepresented. We make extensive use of the MetaAudio [17] benchmark due to its publicly available codebase. Additionally, we propose an extension to MetaAudio, including 3 new speech datasets suitable for few-shot classification [18, 19, 20].
**Multi-Task Learning & Invariances:** Most highly related to this work are others which deal with multi-task learning and/or the study of invariances/equivariances learnt by self-supervision. In particular, our work relates to: [2], which showed that different computer vision tasks benefit from different (in)variances; HyperSimCLR [4], which demonstrated that a hypernetwork can adapt a representation to the (in)variances needed for downstream tasks; and AugSelf [3] that also investigates co-learning contrastive and predictive self-supervision in computer vision. Our work differentiates itself in a few key ways, including: a parameter-efficient solution to multi-task learning via the use of adapters [21]; the application to acoustic data; the extent and complexity of applied augmentations; and the diversity of downstream tasks considered. Other related works include those which investigate multi-task learning in the audio domain, such as PASE [22].
## 4 Mt-Slvr
Motivated by the intuition that solely learning invariances to augmentations may be suboptimal for specific downstream tasks, we propose to co-learn opposing objectives. Specifically, we learn a feature space using both contrastive and predictive self-supervision. We name our approach **MT-SLVR** (**M**ulti-**T**ask Self-Supervised **L**earning for Transformation In/(**V**ariant) **R**epresentations). We conjecture that _different downstream tasks benefit from different type and strength of invariance, and that providing both augmentation sensitive and invariant features will lead to superior performance_.
**Objective:** We write \(t_{\phi}=(t_{\phi}^{aug})_{aug\in\mathcal{A}}\) for an applied augmentation pipeline, where \(t_{\phi}\) is a composition of individual augmentations \(t_{\phi}^{aug}\) with parametrisations \(\phi\), and \(\mathcal{A}\) is the set of augmentations used during training (e.g. \(\mathcal{A}=\{\)Pitch Shift, Fade\(\}\)). For our contrastive component (\(\mathcal{L}_{Cont}\)), we calculate loss in the same manner as the original works [1, 9]. For the predictive component, we propose a Multi-Label Augmentation Prediction (MLAP) framework, where augmentations are independently predicted for input samples. Formally, given a base feature extractor \(f_{\theta}\), a multi-layer MLP for transformation prediction \(\psi_{\theta}\), the Binary Cross-Entropy loss (BCE), and augmented samples \(v_{1}=t_{\phi_{1}}(x)\) and \(v_{2}=t_{\phi_{2}}(x)\), our predictive loss is defined as:
\[\mathcal{L}_{MLAP}(x)=\sum_{aug\in\mathcal{A}}BCE\left(\psi_{\theta}(f_{ \theta}(v_{1}),f_{\theta}(v_{2})),y_{aug}\right) \tag{3}\]
where
\[y_{aug}=\mathbb{I}(t_{\phi}^{aug}(x)) \tag{4}\]
and \(\mathbb{I}\) is the indicator function, taking the value 0 if \(t_{\phi}^{aug}\) has not been applied to \(x\), and 1 if it has. For a given \(x\), the sampled augmentation pipelines generating \(v_{1}\) and \(v_{2}\) consist of the same type and ordering of augmentations but do not share augmentation-specific parameters. This is done to keep alignment with the original SimCLR [1] and SimSiam [9] works, which also make this restriction. The total objective for the multi-task problem can be expressed as:
\[\mathcal{L}_{Total}=\mathcal{L}_{Cont}+\lambda\cdot\mathcal{L}_{MLAP} \tag{5}\]
where \(\lambda\) is a hyperparameter that balances the individual losses. Optimising for this total objective encourages the shared extractor \(f_{\theta}\) to learn both augmentation-invariant and augmentation-sensitive features.
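As a concrete illustration, the following PyTorch-style sketch implements the predictive component of Eq. (3). The two-layer MLP standing in for \(\psi_{\theta}\), its hidden size, and the use of a mean rather than a sum over augmentations are illustrative assumptions, not the authors' exact implementation.

```
import torch
import torch.nn as nn

class MLAPLoss(nn.Module):
    """Multi-label augmentation prediction: one binary logit per augmentation."""
    def __init__(self, feat_dim, num_augmentations):
        super().__init__()
        # stand-in for psi_theta: predicts, for every augmentation in A,
        # whether it was applied to the input pair
        self.predictor = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_augmentations),
        )
        # BCEWithLogitsLoss averages over augmentations; Eq. (3) sums,
        # which differs only by a constant factor
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, z1, z2, aug_labels):
        # z1, z2: features f_theta(v1), f_theta(v2) of the two augmented views
        # aug_labels: 0/1 indicators y_aug of shape (batch, num_augmentations)
        logits = self.predictor(torch.cat([z1, z2], dim=-1))
        return self.bce(logits, aug_labels.float())

# total objective of Eq. (5): loss = contrastive_loss + lam * mlap_loss(z1, z2, y)
```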
**Architecture:** We propose jointly optimising the objectives by utilising both task-specific and task-agnostic features within the neural network. More formally, we introduce the notation \(\theta_{s}\), \(\theta_{0}\) and \(\theta_{1}\) to represent shared, contrastive-specific and predictive-specific parameters, respectively. The objectives for our multi-task approach are then:
\[\mathcal{L}_{Cont}(x;f_{\theta_{s}},f_{\theta_{0}}) \tag{6}\]
\[\mathcal{L}_{MLAP}(x;f_{\theta_{s}},f_{\theta_{1}}) \tag{7}\]
where the task-specific parameters are defined by architectural changes made to assist multi-task learning. In particular, we employ two such changes: 1) splitting the final output layer of the network such that each task corresponds to the outputs of half of the final-layer neurons; and 2) fitting residual or batch-normalisation adapters throughout the model, as in [21]. We use adapters in the same way as proposed in the original work, where lightweight modules are added around residual blocks. These modules take the form:
\[g(x;\alpha)=x+\alpha*x \tag{8}\]
where \(\alpha\) can be either a batch-normalisation or a 1x1 convolutional layer. Although lightweight, the included adapters do influence the parameter count. As a multiplier relative to the base model, models fitted with adapters have the following parameter counts: Batch Normalisation (BN) \(\approx 1\times\), Series Adapters (Series) \(1.2\times\), Parallel Adapters (Parallel) \(1.2\times\).
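A minimal sketch of the adapter module of Eq. (8) is given below; the choice of a 1x1 convolution versus a batch-normalisation layer for \(\alpha\), and how the module is attached to each ResNet block, follow [21] only in spirit, so the class names and placement are assumptions.

```
import torch.nn as nn

class ConvAdapter(nn.Module):
    """g(x; alpha) = x + alpha * x, with alpha a lightweight 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.alpha = nn.Conv2d(channels, channels, kernel_size=1, bias=False)

    def forward(self, x):
        return x + self.alpha(x)

class BNAdapter(nn.Module):
    """Batch-normalisation variant: alpha is a task-specific BN layer."""
    def __init__(self, channels):
        super().__init__()
        self.alpha = nn.BatchNorm2d(channels)

    def forward(self, x):
        return x + self.alpha(x)
```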
**Augmentations:** We use the augmentations and corresponding parameters from CLAR [5], see Table 1. These encompass seven temporal or frequency-based augmentations. For sampling, we place no restrictions on the number of augmentations per sample, nor in which order they appear, except that at least one augmentation must be present. Each augmentation (except for the first) is activated with its own Bernoulli probability, allowing cases in which all augmentations are present.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Augmentation & Shorthand & Parameter & Value(s) \\ \hline Pitch Shift & PS & Min / Max Transpose Semitones & -15 / 15 \\ \hline \multirow{2}{*}{Fade} & \multirow{2}{*}{FD} & Shape & Lin, Log, Exp \\ & & Max In / Out Ratio & 0.5 / 0.5 \\ \hline \multirow{2}{*}{White Noise} & \multirow{2}{*}{WN} & Min / Max SNR in dB & 3 / 30 \\ & & Min / Max f-Decay & -1 / 0 \\ \hline \multirow{2}{*}{Mixed Noise} & \multirow{2}{*}{MN} & Min / Max SNR in dB & 3 / 30 \\ & & Min / Max f-Decay & -2 / 2 \\ \hline Time Masking & TM & Max Mask Ratio & 0.125 \\ \hline Time Shift & TS\({}^{1}\) & Min / Max Shift Ratio & 0.5 \\ \hline Time Stretch & TS\({}^{2}\) & Min / Max Stretch Factor & 0.5 / 1.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of augmentations used, along with their respective parameters. We introduce shorthand for later use.
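A sketch of the sample-wise pipeline construction described in the Augmentations paragraph above; the augmentation objects, their `name` attribute, and the single activation probability `p` are illustrative assumptions (the text allows each augmentation its own Bernoulli probability).

```
import random

def sample_pipeline(augmentations, p=0.5):
    """Random order, at least one augmentation active, the rest Bernoulli-activated."""
    order = random.sample(augmentations, k=len(augmentations))
    active = [order[0]] + [a for a in order[1:] if random.random() < p]
    labels = {a.name: int(a in active) for a in augmentations}
    return active, labels

def apply_pipeline(x, active):
    # each augmentation re-draws its own parameters on every call, so the two
    # views v1, v2 share augmentation types and order but not parameters
    for aug in active:
        x = aug(x)
    return x
```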
## 5 Setup
**Pre-Training**: Our pre-training pipeline consists of two distinct parts, self-supervised learning on the balanced training subset of the popular AudioSet [23] (containing \(\approx\) 60\(hrs\) of audio), and hyperparameter optimisation based on average performance over the validation splits of the MetaAudio benchmark [17]. More specifically, we selected learning rates for each approach by comparing the average rank of trained models on tasks drawn from MetaAudio. Learning rates tested were between 1x10\({}^{-6}\) and 1x10\({}^{-2}\). Rates selected were 1x10\({}^{-4}\) for baselines and 0.5x10\({}^{-4}\) for multi-task approaches. All included models were trained for 1,000 epochs on the ResNet-18 backbone (with a final dense output of 1,000), using the Adam [31] optimiser. We generate sample-wise augmentations, where 1 to 7 augmentations (see Table 1) are selected and applied in a random order. Models were trained on a mix of RTX GPUs and on average took 30 hrs to complete.
**Data Processing:** Like other works [5, 13] we utilise a 2-d 3-channel spectrogram-based representation for input to the model. For pre-training, augmentations are applied before this conversion. For variable length sets at evaluation time, we utilise fixed length splitting and majority voting for classification, as described in [17].
**Few-Shot Classification:** We evaluate our models on few-shot classification tasks drawn from a variety of datasets. Within our selection, we consider both the general audio and speech domains. For general audio, we make use of the MetaAudio [17] benchmark, while for speech we source additional datasets [18, 19, 20]. For those included in MetaAudio, we use the test split presented by the original work, while for our own speech datasets, we utilise all classes for testing. We detail all of these datasets in Table 2. Following the methodology from [32], we freeze our learnt ResNet-18 backbone after pre-training (hence no fine-tuning) and solve tasks using a per few-shot task linear classifier. More specifically, we use a log-loss instantiation of the SGDClassifier as provided in sklearn [33]. For models which have multiple heads, we concatenate features before input to the classifier. Performance on each downstream dataset is reported as the average 5-way 1-shot task performance, \(\pm\) the 95% Confidence Interval (CI), taken over 10,000 tasks.
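The per-task linear readout can be sketched as below, assuming the frozen backbone's features have already been extracted; the `SGDClassifier` settings other than the log-loss are illustrative defaults.

```
from sklearn.linear_model import SGDClassifier

def solve_fewshot_task(support_feats, support_labels, query_feats):
    """Fit a linear classifier on the 5-way 1-shot support set and label the queries.
    For multi-head models, features from all heads are concatenated beforehand."""
    clf = SGDClassifier(loss="log_loss", max_iter=1000)  # called 'log' in older sklearn
    clf.fit(support_feats, support_labels)
    return clf.predict(query_feats)
```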
**Competitors:** We compare the following methods: Contrastive learning only [1, 9]; Multi-label transformation Predictive learning only; MT-Simple denoting our multi-task loss on a simple ResNet backbone; MT-Split denoting a ResNet backbone split at the final layer with one loss applied to each branch; MT-{BN, Series, Parallel} denoting a parameter-efficient multi-task split with shared ResNet blocks and task-specific BN, Series, or Parallel adapters. We note that we exclude Wav2Vec [34] and other Contrastive Predictive Coding (CPC) based methods from our comparison as they do not explicitly learn either augmentation invariances or variances, and hence fall out of scope of our research question.
**Invariance Analysis:** We also analyse our model in terms of measuring the learned augmentation (in)variance of the multi-task learned representation. We follow the work by Ericsson et al. [2] by utilising the Mahalanobis distance between our original training samples and their transformed counterparts. Like in [2], given a feature extractor \(f\) with feature space covariance \(\Sigma\), a transformation \(t_{\phi}^{aug}\) whose parameters belong to a set of all possible \(\phi\in\Phi\), and a dataset \(\mathcal{D}\), we measure strength of invariance as:
\[M_{f}^{T_{\phi}^{aug}}\left(\mathcal{D}\right)=\frac{1}{\left|\mathcal{D} \right|\left|\Phi\right|}\sum_{x\in\mathcal{D}}\sum_{\phi\in\Phi}m_{f}^{t_{ \phi}^{aug}}\left(x\right) \tag{9}\]
where
\[m_{f}^{t_{\phi}^{aug}}\left(x\right)=\sqrt{\left(f(x)-f\left(v\right)\right) \mathbf{\Sigma}^{-1}\left(f(x)-f\left(v\right)\right)^{T}} \tag{10}\]
and \(v\) is the transformed input sample \(t_{\phi}^{aug}(x)\). A feature extractor with zero total Mahalanobis distance between the original input samples and their transformed counterparts is perfectly invariant, while values greater represent increasing sensitivity.
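A numpy sketch of Eqs. (9)-(10) on pre-computed features; estimating \(\Sigma\) from the clean features and adding a small ridge term for invertibility are assumptions not specified in the text.

```
import numpy as np

def invariance_strength(feats_clean, feats_aug, eps=1e-6):
    """Mean Mahalanobis distance between f(x) and f(t(x)); 0 = perfectly invariant.
    feats_clean, feats_aug: arrays of shape (num_samples, feat_dim)."""
    cov = np.cov(feats_clean, rowvar=False) + eps * np.eye(feats_clean.shape[1])
    cov_inv = np.linalg.inv(cov)
    diffs = feats_clean - feats_aug
    dists = np.sqrt(np.einsum("nd,de,ne->n", diffs, cov_inv, diffs))
    return dists.mean()
```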
## 6 Results
### Few-Shot Learning Results
Across experiments (see Tables 3 and 4), we observe strong improvements over both baselines (contrastive only and predictive only) across all datasets. Ranked, the top 3 consist of the batch-normalisation, series and parallel adapters, followed by a mix of the others. Notably, the naive multi-task approach (MT-Simple), where all features are shared between tasks, and the split-branch counterpart (MT-Split) both yield worse results (SimCLR) or only marginal improvements (SimSiam) over the baseline contrastive approaches. This shows that a richer multi-task architecture is necessary, and our parallel adapter approach provides this. We also observe some differences between the contrastive methods used. Specifically, for SimCLR we observe a much higher spread of top-ranking methods, while for SimSiam the parallel adapter method performs best in 9/10 cases, typically with much larger margins between it and the next best. We also observe that out of all 10 datasets, our absolute top performances in 8/10 are from SimCLR-based methods.
### Invariance Analysis
To illustrate what (in)variances our framework has learned, we measure the distance between original and augmented samples (Sec 5) for our training set. The results in Tab. 5 show a few key trends. In particular, we note that: 1) Different heads of our
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Name & Setting & No. Classes & No. Samples & Format & Sample Length \\ \hline Balanced AudioSet [23] & Mixed & 527 & 20,550 & Fixed & 10s \\ \hline ESC-50 [24] * & Environmental & 50 & 2,000 & Fixed & 5s \\ NSynth [25] * & Instrumentation & 1,006 & 305,978 & Fixed & 4s \\ FSDKaggle18 [26] * & Mixed & 41 & 11,073 & Variable & 0.3s - 30s \\ Watkins Marine Mammal Sounds [27] * & Marine Mammals & 32 & 1,698 & Variable & 0.1s - 150s \\ BirdClef 2020 (Pruned) [28] * & Bird Song & 715 & 63,364 & Variable & 3s - 180s \\ \hline VoxCeleb1 [29] * & Speaker & 1,251 & 153,516 & Variable & 3s - 180s \\ SpeechCommands V2 [30] * & Keyword & 35 & 105,829 & Fixed & 1s \\ Crema-D [18] & Emotion & 6 & 7,442 & Variable & 1s - 5s \\ Speech Accent Archive [19] & Accent & 122 & 2,060 & Variable & 17s - 110s \\ Common Voice v12 Delta [20] & Language & 88 & 256,243 & Variable & 5s - 30s \\ \hline \hline \end{tabular}
\end{table}
Table 2: High-level details of all datasets considered. Split into environmental sounds (TOP) and different types of speech (BOTTOM). Included datasets originating from MetaAudio are marked with *.
multi-task approaches do indeed learn significantly different degrees of invariance to applied augmentations; and 2) On average, even the simple multi-task approaches decrease invariance strength compared to the contrastive baseline. Interestingly, we observe that the naive multi-task baselines (MT-Simple, MT-Split) do not successfully learn distinct invariances in either case, which may explain their weaker performance relative to the other proposed approaches. We do not see a clear trend where a larger difference in invariance strength between heads is predictive of final performance ranking. For example, the series adapter has the largest invariance strength difference, but does not rank first for either contrastive framework. Thus, although diverse (in)variance strength is important in providing a flexible representation, there is a more complex relationship that still needs to be understood. Finally, we expand our analysis by considering the average weight norms learned for each of the multi-task heads by our linear classifier for a representative set of datasets in Tab. 6. Our results illustrate that across different downstream tasks, the relative importance of contrastive versus predictive heads varies. This illustrates why the presence of both is advantageous for the numerical results in Tab. 3, and shows how downstream tasks can easily tune the degree of importance attributed to each feature by learning the linear combination, removing the need for human intervention at either the pre-training or downstream-task steps.
## 7 Conclusion & Future Work
We considered the idea that different downstream tasks may prefer different degrees of (in)variance in a pre-trained representation. Leveraging this insight, we developed a novel multi-task learner that exploits both contrastive and predictive learning, providing both augmentation-invariant and augmentation-sensitive features. To this end, we developed a novel multi-task architecture that provides both sets of features by sharing most parameters and exploiting compact task-specific adapters. Our analysis showed that this multi-task architecture indeed learns substantially different invariances with each head. Each downstream task, learning a linear combination of these features, is free to select its own operating point on the (in)variance spectrum, reducing the need for specific pre-train to downstream-task tuning. We evaluated our approach on a diverse suite of few-shot classification tasks from a total of 10 audio and speech datasets and two contrastive learners (SimSiam and SimCLR). The results showed that our multi-task features improve on pure contrastive learning and provide the best performance in nearly all cases. In particular, we highlight that SimCLR with parallel adapters performed best on average. This work showed that multi-task learning produces more general features. This will enable faster adaptation to diverse downstream applications where little labelled data is available, such as voice recognition, speaker identification and emotion detection.
## 8 Acknowledgement
This work is supported by the Engineering and Physical Sciences Research Council of the UK (EPSRC) Grant number EP/S000631/1 and the UK MOD University Defence Research Collaboration (UDRC) in Signal Processing, EPSRC iCASE account EP/V519674/1 and Thales UK Ltd.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline Model (\(f_{\theta}\)) & Head & ESC-50 & NSynth & BirdClef & Crema-D & SAA & C-Voice \\ \hline \multirow{2}{*}{MT-Split} & C & 0.43 & 0.41 & 0.40 & 0.46 & 0.41 & 0.36 \\ & P & 0.57 & 0.59 & 0.60 & 0.54 & 0.59 & 0.64 \\ \hline \multirow{2}{*}{MT-BN} & C & 0.41 & 0.39 & 0.41 & 0.43 & 0.40 & 0.37 \\ & P & 0.59 & 0.61 & 0.59 & 0.57 & 0.60 & 0.63 \\ \hline \multirow{2}{*}{MT-Series} & C & 0.39 & 0.33 & 0.36 & 0.39 & 0.37 & 0.30 \\ & P & 0.61 & 0.67 & 0.64 & 0.61 & 0.63 & 0.70 \\ \hline \multirow{2}{*}{MT-Parallel} & C & 0.41 & 0.37 & 0.36 & & 0.38 & 0.31 \\ & P & 0.59 & 0.63 & 0.64 & 0.82 & 0.62 & 0.69 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Average linear classifier feature weight for the (P)redictive and (C)ontrastive heads in multi-task **SimCLR**.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c|c} \hline \hline Model (\(f_{\theta}\)) & ESC-50 & NSynth & Kaggle18 & Watkins & BirdClef & VoxCeleb & SCv2 & Crema-D & SAA & C-Voice & Avg Rank \\ \hline \hline \end{tabular}
\end{table}
Table 4: 5-Way _1-Shot Performance Comparison between SimSiam methods. We compare SimSiam on its own (Baseline), Multi-Task Learning with no or simple tricks (MT-Simple / Split), and Multi-Task with adapters (MT-BN / Series / Parallel)._
\begin{table}
\begin{tabular}{l|c c c c c c c c c c|c} \hline \hline Model (\(f_{\theta}\)) & ESC-50 & NSynth & Kaggle18 & Watkins & BirdClef & VoxCeleb & SCv2 & Crema-D & SAA & C-Voice & Avg Rank \\ \hline \hline \end{tabular}
\end{table}
Table 3: 5-Way _1-Shot Performance Comparison between SimCLR methods, with the same baselines and multi-task variants as Table 4._
2310.03352 | Tractable Bounding of Counterfactual Queries by Knowledge Compilation | We discuss the problem of bounding partially identifiable queries, such as
counterfactuals, in Pearlian structural causal models. A recently proposed
iterated EM scheme yields an inner approximation of those bounds by sampling
the initialisation parameters. Such a method requires multiple (Bayesian
network) queries over models sharing the same structural equations and
topology, but different exogenous probabilities. This setup makes a compilation
of the underlying model to an arithmetic circuit advantageous, thus inducing a
sizeable inferential speed-up. We show how a single symbolic knowledge
compilation allows us to obtain the circuit structure with symbolic parameters
to be replaced by their actual values when computing the different queries. We
also discuss parallelisation techniques to further speed up the bound
computation. Experiments against standard Bayesian network inference show clear
computational advantages with up to an order of magnitude of speed-up. | David Huber, Yizuo Chen, Alessandro Antonucci, Adnan Darwiche, Marco Zaffalon | 2023-10-05T07:10:40Z | http://arxiv.org/abs/2310.03352v1 | # Tractable Bounding of Counterfactual Queries by Knowledge Compilation
###### Abstract
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models. A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters. Such a method requires multiple (Bayesian network) queries over models sharing the same structural equations and topology, but different exogenous probabilities. This setup makes a compilation of the underlying model to an _arithmetic circuit_ advantageous, thus inducing a sizeable inferential speed-up. We show how a single _symbolic_ knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values when computing the different queries. We also discuss parallelisation techniques to further speed up the bound computation. Experiments against standard Bayesian network inference show clear computational advantages with up to an order of magnitude of speed-up.
## 1 Introduction
Causal inference is an important direction for modern AI. Following Pearl's _ladder of causation_(Bareinboim et al., 2022), observational data are sufficient to compute correlational queries, while answering interventional queries requires an additional structure such as the causal graph and dedicated computational schemes such as the popular _do calculus_(Pearl, 2009). Moving further into counterfactual inference requires the full specification of the underlying causal model, including the structural equations and the exogenous parameters. While the equations might be available (or sampled), the exogenous parameters are typically latent and unavailable. Most counterfactuals are therefore _partially identifiable_ and only bounds are obtained for the corresponding queries (Shpitser and Pearl, 2007).
Despite the hardness of the task (Zaffalon et al., 2021), approximate bounding schemes exist. These include polynomial programming (Duarte et al., 2021), credal networks inference (Zaffalon et al., 2020), sampling (Zhang et al., 2022), and EM (Zaffalon et al., 2021). The latter, in particular, allows us to derive credible intervals while reducing the bounds' computation to iterated (Bayesian network) inferences in a fully specified structural causal model. Such a method requires multiple queries over models sharing the same structural equations but different exogenous probabilities.
Tractable _arithmetic circuits_(e.g., Darwiche (2022b)) offer a graphical formalism to represent generative probabilistic models and compute standard inferential tasks in linear time by a circuit traversal. The ACE library1 allows Bayesian network compilation to arithmetic circuits with state-of-the-art performances (Agrawal et al., 2021).
Footnote 1: [http://reasoning.cs.ucla.edu/ace](http://reasoning.cs.ucla.edu/ace).
The goal of this paper is to adopt the above compilation strategy to achieve a sizeable inferential speed-up in the computation of bounds for counterfactual queries. In particular, we consider a _symbolic_ knowledge compilation as in Darwiche (2022a) to obtain the circuit structure with the symbolic parameters to be replaced by their actual values when computing the different queries (Sect. 3). We also present parallelisation techniques to further speed up the bound computation. Experiments based on ACE against standard Bayesian network algorithms report computational speed-ups up to an order of magnitude (Sect. 4). This contribution appears to be the first application of knowledge compilation to counterfactual inference. A discussion on the outlooks of these strategies is in Sect. 5.
## 2 Notation and Basics
Variable \(X\) takes values from a finite set \(\Omega_{X}\); \(\theta_{X}\) is a probability mass function (PMF) over \(X\); \(\theta_{x}\) denotes the probability of \(X=x\); and \(\lambda_{x}\) is the indicator function of that event.
**Bayesian Networks (BNs).** Given variables \(Y\) and \(X\), a conditional probability table (CPT) \(\theta_{Y|X}\) is a collection of PMFs over \(Y\) indexed by the values of \(X\). Given a joint variable \(\mathbf{X}:=(X_{1},\ldots,X_{n})\) and a directed acyclic graph \(\mathcal{G}\) with nodes in a one-to-one correspondence with the variables in \(\mathbf{X}\), a BN is a collection of CPTs \(\mathbf{\theta}:=\{\theta_{X_{i}|\mathrm{Pa}_{X_{i}}}\}_{i=1}^{n}\), where \(\mathrm{Pa}_{X_{i}}\) denotes the _parents_ of \(X_{i}\) according to \(\mathcal{G}\) (see, e.g., Fig. 1). A BN induces a PMF \(\theta_{\mathbf{X}}\) s.t. \(\theta_{\mathbf{x}}=\prod_{i=1}^{n}\theta_{x_{i}|\mathrm{pa}_{X_{i}}}\) for each \(\mathbf{x}\in\Omega_{\mathbf{X}}\).
**Arithmetic Circuits (ACs).** We can express the joint PMF of a BN as a multi-linear function of the CPT parameters, i.e., \(\theta_{\mathbf{x}}=\sum_{\mathbf{x}^{\prime}\in\Omega_{\mathbf{X}}}\prod_{i}\theta_{x_{i}^{\prime}|\mathrm{pa}^{\prime}_{X_{i}}}\lambda_{x_{i}^{\prime}}\), where each indicator \(\lambda_{x_{i}^{\prime}}\) is set to one if \(x_{i}^{\prime}=x_{i}\) and to zero otherwise. Such an exponential-size representation becomes more compact by exploiting the BN conditional independence relations induced by \(\mathcal{G}\) and consequently moving the sums inside the products. The representation might be even more compact if different CPT parameters take the same value. Common examples are context-specific independence relations and CPTs implementing deterministic relations through _degenerate_ (i.e., 0/1) values only. Such functions are graphically depicted as ACs composed of leaves, annotated by CPT probabilities and indicator functions, and inner nodes containing sums and multiplications (e.g., Fig. 2). Those ACs are called _tractable_, as they allow answering some queries in linear time through feed-forward passes on the circuit structure. A number of _compilation_ algorithms have been proposed to build compact AC representations of BNs.
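A minimal sketch of the linear-time feed-forward evaluation of an AC is given below; the node encoding is an assumption for illustration and does not reflect the ACE data structures.

```
def evaluate_ac(nodes, leaf_values):
    """Bottom-up pass over an AC whose nodes are listed in topological order.
    nodes: list of ('leaf', key) | ('+', child_ids) | ('*', child_ids).
    leaf_values: maps a leaf key to its numeric value (a CPT parameter theta,
    or an indicator lambda set according to the evidence)."""
    val = [0.0] * len(nodes)
    for i, (kind, arg) in enumerate(nodes):
        if kind == 'leaf':
            val[i] = leaf_values[arg]
        elif kind == '+':
            val[i] = sum(val[c] for c in arg)
        else:  # '*'
            prod = 1.0
            for c in arg:
                prod *= val[c]
            val[i] = prod
    return val[-1]  # the root is the last node in topological order
```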
**Structural Causal Models (SCMs).** A _structural equation_ (SE) \(f\) associated with variable \(Y\) and based on the input variable(s) \(X\) is a surjective function \(f:\Omega_{X}\to\Omega_{Y}\) that determines the value of \(Y\) from that of \(X\). Given two joint variables \(\mathbf{U}\) and \(\mathbf{V}\), called respectively _exogenous_ and _endogenous_, a collection of SEs \(\{f_{V}\}_{V\in\mathbf{V}}\) such that, for each \(V\in\mathbf{V}\), the input variables of \(f_{V}\) are in \((\mathbf{U},\mathbf{V})\), is called a _partially specified_ SCM (PSCM). A PSCM induces a directed graph \(\mathcal{G}\) with nodes in correspondence with the variables in \((\mathbf{U},\mathbf{V})\) and such that there is an arc between two variables if and only if the first variable is an input variable for the SE of the second (e.g., Fig. 3). We focus on _semi-Markovian_ PSCMs, i.e., those PSCMs that lead to acyclic graphs. A _fully specified_ SCM (FSCM) is just a PSCM \(M\) paired with a collection of marginal PMFs, one for each exogenous variable. As SEs induce (degenerate) CPTs, an FSCM defines a BN over \((\mathbf{U},\mathbf{V})\) based on \(\mathcal{G}\).
**Causal Queries in FSCMs.** BN algorithms allow computing inferences in FSCMs. This is trivially the case for observational queries involving joint or conditional states of the endogenous variables. For interventional queries, this can also be done provided that the SEs of the intervened variables are replaced by constant maps pointing to the selected state. For counterfactual queries, where the same variable may be observed as well as subject to intervention, albeit in distinct _worlds_, we use auxiliary structures where different copies of the endogenous variables and their SEs are considered in each world. Han et al. (2023) provides a precise characterisation of the computational complexity of those inferences in terms of treewidth.
**Partially Identifiable Causal Queries in PSCMs.** FSCMs are rarely available. Considering a PSCM specification together with a dataset \(\mathcal{D}\) of endogenous observations represents a more common setup. This is not critical for observational queries: a BN over the endogenous variables can be obtained by deriving its graph from that of the PSCM and the CPTs from \(\mathcal{D}\) (Tian, 2002). Interventional queries can possibly be reduced to observational queries by the _do calculus_ (Pearl, 2009). If this is not possible, we say that the query is only _partially identifiable_. In those cases, a characterisation is still provided by the bounds spanned by the values of the query computed for all the FSCMs consistent with the PSCM and the endogenous BN (e.g., Zaffalon et al. (2020)). Counterfactual queries are very often only partially identifiable.
Figure 1: A BN over two Boolean variables.
Figure 3: A FSCM over two endogenous (black) and two exogenous variables (grey nodes).
## 3 Tractable Bounding of Counterfactuals
Bounding partially identifiable queries PSCMs is an NP-hard problem even on polytrees (Zaffalon et al., 2021; Theorem 2).
Zhang et al. (2022) have proposed a Bayesian sampling procedure that eventually approximates the bounds via credible intervals. The sampling is query-driven; new queries will require new sampling. The accuracy of the approximation is unclear in general as a systematic experimental analysis is missing.
**EM Approach.** The algorithm proposed by Zaffalon et al. (2021) samples the initialisation of the exogenous chances, which are used to start an EM scheme returning a compatible FSCM specification. Alg. 1 depicts a single EM run. The interval spanned by the values of the query computed on the FSCMs returned by the EM for each run provides an (inner) approximation of the expectation bounds. This approach is 'agnostic' w.r.t. the query. It aims at reconstructing the uncertainty related to the exogenous variables (via sets of probabilities). Once this is done, different (counterfactual) queries will use the same sets of probabilities to compute the desired bounds; no further sampling is needed.
```
1:\(t\gets 0\)
2:while\(P(\mathcal{D}|\{\theta_{U}^{t+1}\}_{U\in\boldsymbol{U}})\geq P(\mathcal{D}|\{\theta_{U}^{t}\}_{U\in\boldsymbol{U}})\)do
3:for\(U\in\boldsymbol{U}\)do
4:\(\theta_{U}^{t+1}\leftarrow|\mathcal{D}|^{-1}\sum_{\boldsymbol{v}\in\mathcal{D }}\theta_{U|\boldsymbol{v}}^{t}\)
5:endfor
6:\(t\gets t+1\)
7:endwhile
```
**Algorithm 1** In a PSCM paired with an endogenous dataset \(\mathcal{D}\), given a random initialisation \(\{\theta_{U}^{(0)}\}_{U\in\boldsymbol{U}}\) in input, the algorithm returns the exogenous chances \(\{\theta_{U}\}_{U\in\boldsymbol{U}}\) obtained after likelihood convergence.
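A sketch of a single EM run of Alg. 1 is given below, assuming two placeholder callables for the inference machinery: `posterior(thetas, U, v)` returning the vector \(\theta_{U|\boldsymbol{v}}\) under the current exogenous chances, and `likelihood(thetas)` returning \(P(\mathcal{D}\mid\cdot)\); both would be served by a BN engine or, as proposed here, by the compiled circuit.

```
def em_run(exo_vars, data, init_thetas, posterior, likelihood, max_iter=500):
    """One EM run of Alg. 1 from a given random initialisation of the exogenous chances."""
    thetas = {U: list(init_thetas[U]) for U in exo_vars}
    best_ll = likelihood(thetas)
    for _ in range(max_iter):
        new_thetas = {}
        for U in exo_vars:
            # line 4: average the posteriors theta_{U|v} over the endogenous records
            posts = [posterior(thetas, U, v) for v in data]
            new_thetas[U] = [sum(p[u] for p in posts) / len(data)
                             for u in range(len(thetas[U]))]
        new_ll = likelihood(new_thetas)
        if new_ll < best_ll:        # stop once the likelihood no longer improves (line 2)
            break
        thetas, best_ll = new_thetas, new_ll
    return thetas
```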
An approximate bounding scheme based on Alg. 1 may suffer from two potential bottlenecks: (i) an insufficient number of runs leading to a poor inner bound approximation; (ii) the time needed by the FSCM inferences required by the exogenous queries (line 4) and the likelihood evaluation (line 2).
Regarding (i), Zaffalon et al. (2022) derived a characterisation of the accuracy of the bounds in terms of credible intervals, and the EM scheme has been proven to yield accurate bounds with relatively few runs.
Here we address instead (ii) by first noticing that the queries needed by Alg. 1 are computed on different FSCMs based on the same PSCM, thus having possibly different exogenous chances, but always the same endogenous CPTs implementing the SEs of the PSCM. This is true for the models corresponding to different time steps \(t\), but also when different exogenous initialisations are considered in input. In practice the algorithm requires the computation of inferences in different BNs having the same CPTs for the non-root nodes, but different marginal PMFs on the root nodes. This simple remark suggests the use of AC compilation to achieve faster inferences.
**Symbolic Knowledge Compilation.** Consider the AC compilation of two BNs over the same variables and with the same graph but different CPT parameters. Suppose these parameters, separately for each BN, have no repeated values. In that case, the compiler minimises the size of the ACs by only exploiting the independence relations induced by the BN graph. As these are the same for the two BNs, the two ACs returned by the compiler should share the same inner nodes and the same indicators on the leaves while differing only on the chances in the leaves.
This fact allows for a _symbolic_ compilation achieved by regarding the chances in the leaves as symbolic parameters to be replaced by their actual values during an inferential computation. Compilers can quickly implement symbolic compilation by replacing the BN parameters with unique numerical identifiers to be eventually retrieved in the AC returned by the compiler.
Returning to the queries of interest for the EM scheme, we can regard PSCM compilation as a symbolic compilation achieved by treating the exogenous PMFs as parameters. In contrast, the endogenous CPTs, implementing the SEs and remaining the same for all the models, are treated as constant numerical values. The degenerate nature of the CPTs can be exploited by the compiler to achieve smaller ACs and hence faster inferences (e.g., with the FSCM in Fig. 3 as input, ACE returns an AC with 96 arcs if the determinism of the CPTs is not exploited and 23 arcs otherwise). After the PSCM symbolic compilation, the AC of each FSCM required by Alg. 1 is obtained in linear time (w.r.t. the AC size) by replacing the parameters of the _symbolic_ AC with the actual values in the particular FSCM.
The queries required by Alg. 1 are, for each \(\boldsymbol{v}\in\mathcal{D}\), the computation of endogenous marginal \(\theta_{\boldsymbol{v}}\) and the exogenous posterior \(\theta_{u|\boldsymbol{v}}\), to be computed for each \(U\in\boldsymbol{U}\) and \(u\in\Omega_{U}\). We therefore focus on the computation of the joint query \(\theta_{u,\boldsymbol{v}}\) for each \(U\in\boldsymbol{U}\), \(u\in\Omega_{U}\) and \(\boldsymbol{v}\in\mathcal{D}\). This is performed in linear time by a bottom-up traversal of the AC after instantiating the indicators of the variables in \(\boldsymbol{V}\) and \(U\).
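The reuse of the symbolic circuit across FSCMs can be sketched as follows, using the bottom-up `evaluate_ac` pass from Sect. 2; the leaf annotation scheme is an assumption for illustration.

```
def query_joint(ac_nodes, leaf_spec, exo_thetas, u_var, u_state, v_record):
    """Compute theta_{u, v} on the circuit of the current FSCM.
    leaf_spec: leaf key -> ('theta', U, state) for a symbolic exogenous parameter,
               ('lambda', X, state) for an indicator, or ('const', value) for the
               fixed 0/1 entries of the endogenous (deterministic) CPTs."""
    leaf_values = {}
    for key, spec in leaf_spec.items():
        if spec[0] == 'theta':                  # plug in this run's exogenous chance
            _, U, s = spec
            leaf_values[key] = exo_thetas[U][s]
        elif spec[0] == 'lambda':               # clamp indicators to the evidence
            _, X, s = spec
            if X == u_var:
                leaf_values[key] = 1.0 if s == u_state else 0.0
            elif X in v_record:
                leaf_values[key] = 1.0 if s == v_record[X] else 0.0
            else:
                leaf_values[key] = 1.0          # unobserved variables are summed out
        else:                                   # constant entry of a deterministic CPT
            leaf_values[key] = spec[1]
    return evaluate_ac(ac_nodes, leaf_values)
```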
**Parallelisation.** Alg. 1 allows for a straightforward parallelisation at the run level. A more sophisticated parallelisation can be based on _c-components_ (Tian, 2002). In a PSCM, a c-component is a set of variables connected through undirected paths consisting solely of exogenous-to-endogenous arcs. For each c-component, we define a subgraph consisting of the nodes in the c-component and its
direct parents, with all other variables and edges removed. The corresponding sub-model might yield the chances of the exogenous variables in the c-component through Alg. 1. The procedure can be executed in parallel, separately for each c-component.
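A sketch of extracting c-components with a union-find pass over the exogenous-to-endogenous arcs of the PSCM graph; the graph representation is an assumption for illustration.

```
def c_components(exo_vars, endo_vars, arcs):
    """Group variables connected through undirected paths of U -> V arcs.
    arcs: iterable of (parent, child) pairs of the PSCM graph."""
    parent = {x: x for x in list(exo_vars) + list(endo_vars)}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]       # path halving
            x = parent[x]
        return x

    for u, v in arcs:
        if u in exo_vars and v in endo_vars:    # only exogenous-to-endogenous arcs merge
            parent[find(u)] = find(v)

    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())
```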
## 4 Experiments
To evaluate the benefits of the proposed AC approach when running the EM scheme in Alg. 1, we compare the AC execution times against those based on standard BN inference for a synthetic benchmark of \(335\) PSCMs. The PSCM graphs have a random topology (Erdos-Renyi sampling), the number of nodes ranges between \(5\) and \(21\) (avg. \(9.9\)), and the number of root nodes (i.e., exogenous variables) between \(2\) and \(10\) (avg. \(4.5\)). All the endogenous variables are binary, while the cardinalities of the exogenous ones range between \(3\) and \(256\) (avg. \(29.6\)). Each PSCM comes with a dataset of endogenous observations of size between \(1,000\) and \(5,000\) records obtained by sampling a compatible FSCM. The benchmark and the code used for the simulations are available in a dedicated repository.2
Footnote 2: anonymous.4open.science/r/uai-E5D7.
The code is built on top of CREDICI3 [Cabanas et al., 2020], a Java library implementing the EM scheme and embedding a BN inference engine. Here we consider inferences based on variable elimination with the min-fill heuristic. The symbolic compilation is instead developed within the Java/C++ ACE compiler (see Footnote 1). The experiments are run on a dual 2.20GHz Intel(R) Xeon(R) Silver 4214 CPU Dell PowerEdge R540 server running Ubuntu 20.04.6 LTS. All the experiments are performed using a fixed seed for the random initialisation and, as expected, resulted in the exact same set of PSCMs.
Footnote 3: github.com/idsia/credici.
For each PSCM, we perform \(200\) runs of \(500\) iterations. We set a timeout to \(15\) minutes for each experiment. The BN approach based on the whole model often reaches this limit. Thus, as a baseline for the BN approach, we consider the faster BNC approach based on queries in the sub-BNs associated with the model c-components. The number of c-components for the benchmark models ranges between \(1\) and \(10\) (avg. \(4.2\)). The parallelisation of BNC over the different components is denoted instead as BNP. We similarly denote as ACC the method based on the (symbolic) compilation of the sub-BNs and as ACP its parallelisation. The overall execution times (in hours) on the whole benchmark for the four methods are \(T_{\mathrm{BNC}}=17.0\), \(T_{\mathrm{BNP}}=7.3\), \(T_{\mathrm{ACC}}=2.4\), and \(T_{\mathrm{ACP}}=1.3\). This clearly shows the advantage of the (symbolic) knowledge compilation.
A deeper analysis is provided by computing, separately for each PSCM, the ratio between the EM execution time of a particular approach and that of BNC. Fig. 4 shows the boxplots of the different approaches. In practice, using ACs makes the bounding of the counterfactual queries one order of magnitude faster. Note also that we considered PSCM of bounded size (\(\leq 21\) nodes) just to permit a comparison against the BN approaches, which cannot handle bigger networks in reasonable time limits.
## 5 Conclusions
In this study, we have investigated the potential of knowledge compilation within the framework of partially identifiable queries, such as counterfactuals, in structural causal models. We have assumed that structural equations are given together with a dataset of endogenous observations. From these we reconstruct the uncertainty about the exogenous variables with sets of probabilities.
The advantages of using knowledge compilation appear clear: the new approach leads to one order of magnitude speed-up compared to pre-existing models based on Bayesian nets.
As future work we intend to use the knowledge compilation approach to execute the EM scheme in very large models along two dimensions: the size of the network as well as the cardinality of the exogenous variables. The latter is, in particular, an important factor in representing general 'canonical' specifications of PSCMs. These specifications dispense with the requirement of providing structural equations as input: a causal graph together with endogenous data would suffice to compute counterfactual inferences.
We also intend to explore more in-depth problems with network structures with large treewidth that may thus be intractable by variable elimination.
Figure 4: Runtime savings w.r.t. BNC. |
2309.01426 | A Unified Framework for Guiding Generative AI with Wireless Perception
in Resource Constrained Mobile Edge Networks | With the significant advancements in artificial intelligence (AI)
technologies and powerful computational capabilities, generative AI (GAI) has
become a pivotal digital content generation technique for offering superior
digital services. However, directing GAI towards desired outputs still suffer
the inherent instability of the AI model. In this paper, we design a novel
framework that utilizes wireless perception to guide GAI (WiPe-GAI) for
providing digital content generation service, i.e., AI-generated content
(AIGC), in resource-constrained mobile edge networks. Specifically, we first
propose a new sequential multi-scale perception (SMSP) algorithm to predict
user skeleton based on the channel state information (CSI) extracted from
wireless signals. This prediction then guides GAI to provide users with AIGC,
such as virtual character generation. To ensure the efficient operation of the
proposed framework in resource constrained networks, we further design a
pricing-based incentive mechanism and introduce a diffusion model based
approach to generate an optimal pricing strategy for the service provisioning.
The strategy maximizes the user's utility while enhancing the participation of
the virtual service provider (VSP) in AIGC provision. The experimental results
demonstrate the effectiveness of the designed framework in terms of skeleton
prediction and optimal pricing strategy generation comparing with other
existing solutions. | Jiacheng Wang, Hongyang Du, Dusit Niyato, Jiawen Kang, Zehui Xiong, Deepu Rajan, Shiwen Mao, Xuemin, Shen | 2023-09-04T08:18:35Z | http://arxiv.org/abs/2309.01426v1 | A Unified Framework for Guiding Generative AI with Wireless Perception in Resource Constrained Mobile Edge Networks
###### Abstract
With the significant advancements in artificial intelligence (AI) technologies and powerful computational capabilities, generative AI (GAI) has become a pivotal digital content generation technique for offering superior digital services. However, directing GAI towards desired outputs still suffers from the inherent instability of the AI model. In this paper, we design a novel framework that utilizes wireless perception to guide GAI (WiPe-GAI) for providing digital content generation services, i.e., AI-generated content (AIGC), in resource-constrained mobile edge networks. Specifically, we first propose a new sequential multi-scale perception (SMSP) algorithm to predict the user skeleton based on the channel state information (CSI) extracted from wireless signals. This prediction then guides GAI to provide users with AIGC, such as virtual character generation. To ensure the efficient operation of the proposed framework in resource constrained networks, we further design a pricing-based incentive mechanism and introduce a diffusion-model-based approach to generate an optimal pricing strategy for the service provisioning. The strategy maximizes the user's utility while enhancing the participation of the virtual service provider (VSP) in AIGC provision. The experimental results demonstrate the effectiveness of the designed framework in terms of skeleton prediction and optimal pricing strategy generation compared with other existing solutions.
Wireless perception, AI-generated content, resource allocation, quality of service
## I Introduction
In recent years, the accelerated proliferation of diverse user data, advancements in hardware devices, and the evolution of AI models have catalyzed the rapid progression of generative artificial intelligence (GAI) technology [1]. As a result, artificial intelligence-generated content (AIGC) and its associated applications have attracted considerable attention [2]. Major technological giants, such as Microsoft and Google, invest heavily in creating their own exclusive GAI models, with the objective of offering users a more comprehensive digital service [3]. A representative work is OpenAI's ChatGPT, which achieves notable breakthroughs in emulating humans in text processing tasks. For instance, ChatGPT is capable of not only executing grammar error detection and refinement, but also generating text and code, and performing content retrieval operations [4]. Beyond text processing, the powerful capabilities of GAI have also been unleashed in the realm of image and video generation. For instance, Stable Diffusion can generate images based on users' descriptions (i.e., prompts), as well as process images according to users' instructions, including style modifications and rectification of missing pixels and other visual imperfections [5].
In comparison to the conventional generation methods, GAI exhibits two salient advantages. First, GAI has a superior productivity, capable of generating digital content quickly in accordance with user directives. For example, stable diffusion model [6] can generate a high-definition image within seconds, which is challenging to accomplish by a traditional user based generation method. Second, AIGC exhibits greater diversity, manifested in two aspects [7]. The first aspect pertains to the richness of the generated content. Owing to the randomness of the seed in AI models, GAI's outputs can vary significantly even with identical instructions. For example, the diffusion model can generate entirely different images with the same input prompt, thus offering users a broader range of choices. The second aspect is the multimodal presentation format, which allows AIGC to be delivered in various forms such as text, images, videos, and even audio [8]. This makes AIGC highly adaptable, catering to a range of applications. Due to these aforementioned benefits, GAI has emerged as the critical engine for creating digital content, playing an indispensable role in our progression towards a more immersive and interactive next-generation Internet [9].
Despite the significant advancements, several challenges still need to be tackled for practical applications. _First, the inherent instability of AI models makes it difficult to meet users' needs, especially when generating digital content directly related to users themselves_[6]. For example, in augmented reality (AR) applications, such as virtual game and shopping, the virtual service providers (VSPs) use the GAI technology to create virtual characters for users. However, due to the randomness of seeds in AI model and the difficulty of conveying information through prompts about users' posture to the AI model, the generated characters may not align accurately with the actual user. As a result, users may generate |
2302.03143 | Sparsification of Monotone $k$-Submodular Functions of Low Curvature | Pioneered by Benczur and Karger for cuts in graphs [STOC'96], sparsification
is a fundamental topic with wide-ranging applications that has been studied,
e.g., for graphs and hypergraphs, in a combinatorial and a spectral setting,
and with additive and multiplicate error bounds. Rafiey and Yoshida recently
considered sparsification of decomposable submodular functions [AAAI'22]. We
extend their work by presenting an efficient algorithm for a sparsifier for
monotone $k$-submodular functions of low curvature. | Jannik Kudla, Stanislav Živný | 2023-02-06T22:15:07Z | 2023-02-06T22:15:07Z | http://arxiv.org/abs/2302.03143v1 | # Sparsification of Monotone \(k\)-Submodular Functions of Low Curvature
###### Abstract
Pioneered by Benczur and Karger for cuts in graphs [STOC'96], sparsification is a fundamental topic with wide-ranging applications that has been studied, e. g., for graphs and hypergraphs, in a combinatorial and a spectral setting, and with additive and multiplicative error bounds. Rafiey and Yoshida recently considered sparsification of decomposable submodular functions [AAAI'22]. We extend their work by presenting an efficient algorithm for a sparsifier for monotone \(k\)-submodular functions of low curvature.
## 1 Introduction
The idea of "sparsifying a graph" (i. e., reducing the number of edges) while preserving the value of all cuts goes back to the influential paper [6]. The original motivation was to speed up algorithms for cut problems and graph problems more generally. This concept turned out to be very influential, with several generalisations and extensions from graph cuts [5, 7, 2] to sketching [1, 2], sparsifiers for cuts in hypergraphs [25, 31], spectral sparsification [42, 41, 40, 18, 27, 39, 24], sparsification of other predicates [14], and additive sparsification [4].
The cut function of a graph is an important example of a submodular function, which we define now. Let \(E\) be a finite set. A (set) function \(F:2^{E}\to\mathbb{R}\) defined on subsets of \(E\) is called _submodular_ if
\[F(S\cap T)+F(S\cup T)\ \leq\ F(S)+F(T)\qquad\forall S,T\subseteq E. \tag{1}\]
Submodularity is a fundamental concept in combinatorial optimisation, with applications across computer science and economics [30, 45, 38, 16]. An equivalent definition of submodular functions captures the idea of _diminishing returns_.
\[F(T\cup\{e\})-F(T)\ \leq\ F(S\cup\{e\})-F(S)\qquad\forall S\subseteq T\subseteq E,e\in E\setminus T. \tag{2}\]
A set function \(F\) is _decomposable_ if \(F=\sum_{i=1}^{N}f_{i}\), where \(f_{i}:2^{E}\to\mathbb{R}\) for each \(i\in[N]=\{1,\ldots,N\}\) and \(E\) is a finite set of size \(n=|E|\). The cut function in a graph is an example
of a decomposable submodular function, in which the number \(N\) of individual functions is equal to the number of edges in the graph. (This is true even if the graph is directed and with nonnegative edge weights.)
Rafiey and Yoshida [34] considered the following natural sparsification problem for \(F\).1 Given tolerance parameters \(\varepsilon,\delta\in(0,1)\), find a vector \(w\in\mathbb{R}^{N}\), called an \(\varepsilon\)-_sparsifier_ (or just a _sparsifier_), such that the function \(F^{\prime}=\sum_{i=1}^{N}w_{i}f_{i}\) satisfies, with probability at least \(1-\delta\),
Footnote 1: Each \(f_{i}\) is represented by an oracle that returns, for any \(S\subseteq E\), the value \(f_{i}(S)\).
\[(1-\varepsilon)F^{\prime}(S)\ \leq\ F(S)\ \leq\ (1+\varepsilon)F^{\prime}(S) \qquad\forall S\subseteq E, \tag{3}\]
and \(\mathsf{size}(w)\), the set of nonzero entries of \(w\), is as small as possible. The idea in [34] is, for each \(i\in[N]\), to sample function \(f_{i}\) with probability \(\kappa_{i}\) proportional to the ratio
\[p_{i}=\max_{\begin{subarray}{c}S\subseteq E\\ F(S)\neq 0\end{subarray}}\frac{f_{i}(S)}{F(S)}. \tag{4}\]
If \(f_{i}\) is sampled, i. e., if it is decided that \(f_{i}\) shall be part of the sparsifier, it is included in the sparsifier with weight \(1/\kappa_{i}\), making its expected weight equal to \(\mathbb{E}\left[w_{i}\right]=\kappa\cdot 1/\kappa_{i}=1\) - its weight in the initial decomposition. In statistical terms, the sampling procedure is _unbiased_. The authors of [34] showed the following.
**Theorem 1** ([34]).: _Let \(F=\sum_{i=1}^{N}f_{i}\), where each \(f_{i}:2^{E}\to\mathbb{R}\) is submodular. For every \(\varepsilon,\delta\in(0,1)\) there is a vector \(w\in\mathbb{R}^{N}\) such that_
1. \(\mathbb{P}\left[w\text{ is an $\varepsilon$-sparsifier}\right]\geq 1-\delta\)_;_
2. \(\mathbb{E}\left[\mathsf{size}(w)\right]=\mathcal{O}\left(\frac{n}{ \varepsilon^{2}}\sum_{i=1}^{N}p_{i}\right)\)_, where_ \(p_{i}=\max_{\begin{subarray}{c}S\subseteq E\\ F(S)\neq 0\end{subarray}}\frac{f_{i}(S)}{F(S)}\)_._
Computing and in many interesting cases even approximating the \(p_{i}\)'s is by far the hardest step on the way to constructing a sparsifier. We shall refer to the \(p_{i}\)'s as the _peak contributions_ - since \(p_{i}\) describes, on a scale from \(0\) to \(1\), the maximum contribution of \(f_{i}\) to \(F\) when a set \(S\subseteq E\) is chosen in favour of \(f_{i}\).
Let \(F=\sum_{i=1}^{N}f_{i}\) be as in Theorem 1, i. e., with all \(f_{i}\)'s submodular. Let \(\left|\mathsf{EX}(\mathcal{B}(f_{i}))\right|\) be the number of extreme points in the base polyhedron of \(f_{i}\)[16], and let \(B=\max_{i\in[N]}\left|\mathsf{EX}(\mathcal{B}(f_{i}))\right|\) (cf. Appendix B.1 for precise definitions). The authors of [34] claim that
\[\sum_{i=1}^{N}p_{i}\ \leq\ Bn, \tag{5}\]
which implies by the virtue of Theorem 1 the existence of a sparsifier of expected size \(\mathcal{O}(\frac{Bn^{2}}{\varepsilon^{2}})\). As we will see later, this only holds if the \(f_{i}\)'s are monotone. Using an \(\mathcal{O}(\sqrt{n})\)-approximation of the peak contributions using the ellipsoid method [3], it is then established in [34] that if all \(f_{i}\)'s are not only submodular but also monotone, a sparsifier of expected size \(\mathcal{O}(\frac{Bn^{2.5}\log n}{\varepsilon^{2}})\) can be found in randomised polynomial time, assuming (5) holds. Here a function \(F:2^{E}\to\mathbb{R}\) is called _monotone_ if \(F(S)\leq F(T)\) for any \(S\subseteq T\subseteq E\).
**Contributions.** As our main contribution, we provide a sparsification algorithm for decomposable monotone \(k\)-submodular functions of low curvature. As a starting point, we observe in Section 2 (and prove in Appendix A) that the sampling algorithm from [34] used to prove Theorem 1 is largely independent of submodularity, leading to a more general sparsification algorithm for decomposable functions. Along the way, we establish a concentration bound revealing that it is very unlikely that the resulting sparsifier exceeds \((3/2)\)-times the expected size. In detail, consider a finite domain \(\mathcal{D}\), which is the power set \(\mathcal{D}=2^{E}\) in the case of set functions. Further suppose that \(F:\mathcal{D}\to\mathbb{R}\) is decomposable as \(F=\sum_{i=1}^{N}f_{i}\), where \(f_{i}:\mathcal{D}\to\mathbb{R}\) for each \(i\in[N]\).2
Footnote 2: Each \(f_{i}\) is represented by an evaluation oracle that takes time \(\mathcal{O}(\mathrm{EO}_{i})\) to return \(f_{i}(S)\) for any \(S\in\mathcal{D}\).
**Theorem 2** (Informal version of Theorem 3).: _Let \(F=\sum_{i=1}^{N}f_{i}\), where \(f_{i}:\mathcal{D}\to\mathbb{R}\). For every \(\varepsilon,\delta\in(0,1)\) there is a vector \(w\in\mathbb{R}^{N}\) such that_
1. \(\mathbb{P}\left[w\text{ is an $\varepsilon$-sparsifier}\right]\geq 1-\delta\)_;_
2. \(\mathbb{E}\left[\mathsf{size}(w)\right]=\mathcal{O}\left(\frac{\log|\mathcal{ D}|+\log\frac{1}{\delta}}{\varepsilon^{2}}\sum_{i=1}^{N}p_{i}\right)\)_, where_ \(p_{i}=\max_{\begin{subarray}{c}A\in\mathcal{D}\\ F(A)\neq 0\end{subarray}}\frac{f_{i}(A)}{F(A)}\)_;_
3. \(\mathbb{P}\left[\mathsf{size}(w)\leq\frac{3}{2}\mathbb{E}\left[\mathsf{size} (w)\right]\right]\geq 1-4\varepsilon^{2}\)_._
As our primary contribution, we use Theorem 2 to give in Section 3 a sparsifier for decomposable monotone \(k\)-submodular functions of low curvature. As our secondary contribution, we clarify certain results on sparsification of submodular functions from [34]. Firstly, we show that Inequality (5) claimed in [34] is incorrect by giving a counterexample. However, we show that Inequality (5) holds under the additional assumption of monotonicity. This is done in Appendix B. Secondly, in Appendix C we give a sparsifier for a class of decomposable monotone submodular functions of bounded arity.
For a natural number \(k\geq 1\), a function \(F:(k+1)^{E}\to\mathbb{R}\) defined on \(k\)-tuples of pairwise disjoint subsets of \(E\) is called \(k\)-submodular if \(F\) satisfies inequalities similar to the submodularity inequality given in Inequality (1). In detail, let \(\mathbf{A}=(A_{1},\ldots,A_{k})\in(k+1)^{E}\) be a \(k\)-tuple of pairwise disjoint subsets of \(E\), and similarly for \(\mathbf{B}=(B_{1},\ldots,B_{k})\in(k+1)^{E}\). Then, \(F:(k+1)^{E}\to\mathbb{R}\) is called \(k\)_-submodular_ if
\[f(\mathbf{A}\cap\mathbf{B})+f(\mathbf{A}\sqcup\mathbf{B})\ \leq\ f( \mathbf{A})+f(\mathbf{B}), \tag{6}\]
where
\[\mathbf{A}\cap\mathbf{B}\ =\ (A_{1}\cap B_{1},\ldots,A_{k}\cap B_{k}), \tag{7}\]
and
\[\mathbf{A}\sqcup\mathbf{B}\ =\ ((A_{1}\cup B_{1})\setminus\bigcup_{i\in\{2,\ldots,k\}}(A_{i}\cup B_{i}),\ldots,(A_{k}\cup B_{k})\setminus\bigcup_{i\in\{1,\ldots,k-1\}}(A_{i}\cup B_{i})). \tag{8}\]
Under this definition, \(1\)-submodularity corresponds exactly to the standard notion of submodularity for set functions as defined in Inequality (1), and similarly \(2\)-submodularity corresponds to _bisubmodularity_[8, 9]. The class of \(k\)-submodular functions was introduced in [19] and played an important role in the study of so-called finite-valued CSPs [20, 26].
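The operations in Eqs. (7)-(8) can be made concrete with the following sketch on \(k\)-tuples of Python sets, which also allows checking Inequality (6) by brute force on small ground sets; the helper names are ours.

```
def meet(A, B):
    """Componentwise intersection of two k-tuples of sets (Eq. 7)."""
    return tuple(a & b for a, b in zip(A, B))

def join(A, B):
    """Componentwise union with contested elements removed (Eq. 8)."""
    unions = [a | b for a, b in zip(A, B)]
    k = len(unions)
    return tuple(unions[j] - set().union(*(unions[i] for i in range(k) if i != j))
                 for j in range(k))

def is_k_submodular(f, tuples, tol=1e-12):
    """Check Inequality (6) over all pairs of k-tuples in `tuples`."""
    return all(f(meet(A, B)) + f(join(A, B)) <= f(A) + f(B) + tol
               for A in tuples for B in tuples)
```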
While minimising 1-submodular functions [37, 22] and 2-submodular functions [17] given by evaluation oracles can be done efficiently [21], the complexity of the minimisation problem of \(k\)-submodular functions is open for \(k\geq 3\). On the other hand, the approximability of the maximisation problem is well understood for \(k\)-submodular functions [47, 23, 32], also for the monotone case under cardinality constraint [35], in the streaming model [13], and other variants [44, 43, 33].
The definition of monotonicity for submodular functions gracefully extends to \(k\)-submodular functions: \(F:(k+1)^{E}\to\mathbb{R}\) is called _monotone_ if \(F(\mathbf{A})\leq F(\mathbf{B})\) for all \(\mathbf{A}=(A_{1},\ldots,A_{k})\in(k+1)^{E}\) and \(\mathbf{B}=(B_{1},\ldots,B_{k})\in(k+1)^{E}\) with \(A_{i}\subseteq B_{i}\) for every \(i\in[k]\).
An important concept studied in the context of submodular functions is that of _bounded curvature_[12]. For a monotone submodular function \(F:2^{E}\to\mathbb{R}_{\geq 0}\), the _curvature_ (also called _total curvature_ in [46]) \(c_{F}\) of \(F\) is defined by
\[c_{F}\ =\ 1-\min_{S\subseteq E,e\in E\setminus S}\frac{\Delta_{e}F(S)}{ \Delta_{e}F(\emptyset)}, \tag{9}\]
where \(\Delta_{e}f(S)\) denotes the marginal gain of \(e\) with respect to \(S\), i. e.,
\[\Delta_{e}F(S)\ =\ F(S\cup\{e\})-F(S). \tag{10}\]
In other words, the curvature compares the marginal gain of adding an element of the ground set to an arbitrary set with the marginal gain of adding it to the empty set. Note that \(c_{F}\in[0,1]\), with the upper bound following from Inequality (2). Also, \(c_{F}=0\) holds precisely when \(F\) is modular, i. e., when Inequality (1) (equivalently, Inequality (2)) holds with equality. We say that \(F\) has _low curvature_ if \(c_{F}<1\). Intuitively, the curvature \(c_{F}\) represents "how much the function curves". The notion of curvature was extended from submodular to \(k\)-submodular functions in [36], cf. also [28]. In order to define it, we first need to introduce the notion of marginal values for \(k\)-submodular functions, which is a natural generalisation of the \(k=1\) case. Let \(F:(k+1)^{E}\to\mathbb{R}\) be a \(k\)-submodular function. For \(\mathbf{A}=(A_{1},\ldots,A_{k})\), \(i\in[k]\), and \(e\in E\setminus\cup_{j\in[k]}A_{j}\), we define the marginal gain of \(e\) with respect to \(\mathbf{A}\) and \(i\) as
\[\Delta_{e,i}F(\mathbf{A})\ =\ F(A_{1},\ldots,A_{i-1},A_{i}\cup\{e\},A_{i+1}, \ldots,A_{k})-F(\mathbf{A}). \tag{11}\]
Then, the curvature \(c_{F}\) of \(F\) is defined as
\[c_{F}\ =\ 1-\min_{i\in[k],e\in E,\mathbf{A}\in(k+1)^{E\setminus\{e\}}} \frac{\Delta_{e,i}F(\mathbf{A})}{\Delta_{e,i}F(\emptyset)}. \tag{12}\]
As before, we say that \(F\) has _low curvature_ if \(c_{F}<1\).
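For small ground sets, the curvature of Eq. (12) can be computed by brute-force enumeration, as in the following sketch (exponential in \(|E|\), so purely illustrative); \(F\) is assumed to be a callable on \(k\)-tuples of disjoint sets, and elements with zero marginal gain on the empty tuple are skipped by convention.

```
from itertools import product

def marginal_gain(F, A, e, i):
    """Delta_{e,i} F(A): gain of adding e to the i-th component of the k-tuple A."""
    B = tuple(A[j] | {e} if j == i else A[j] for j in range(len(A)))
    return F(B) - F(A)

def curvature(F, E, k):
    """Brute-force c_F of Eq. (12); reduces to Eq. (9) when k = 1."""
    E = list(E)
    empty = tuple(frozenset() for _ in range(k))
    ratio = 1.0
    for e in E:
        rest = [x for x in E if x != e]
        for i in range(k):
            gain_empty = marginal_gain(F, empty, e, i)
            if gain_empty == 0:
                continue
            # enumerate all k-tuples A of pairwise disjoint subsets of E \ {e}
            for labels in product(range(k + 1), repeat=len(rest)):
                A = tuple(frozenset(x for x, l in zip(rest, labels) if l == j + 1)
                          for j in range(k))
                ratio = min(ratio, marginal_gain(F, A, e, i) / gain_empty)
    return 1.0 - ratio
```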
As our main contribution, we will show that under the assumption of monotonicity and low curvature one can efficiently approximate the peak contributions, leading to an efficient execution of the sampling algorithm from Section 2. Apart from being technically non-trivial, we also see our work as a conceptual contribution to the area of sparsification by exploring more general settings than previous works.
## 2 The Core Algorithm
In this section, we will describe the core of all our sparsification algorithms - a randomised sampling routine initially described by Rafiey and Yoshida [34] for decomposable submodular
functions. As alluded to in Section 1, we observe that it is largely independent of submodularity, leading to a more general sparsification algorithm for decomposable functions. Most of the presented material follows closely Section 3 in [34] and details are deferred to Appendix A.
The algorithm we present here constructs an \(\varepsilon\)-sparsifier for any decomposable function \(F=\sum_{i=1}^{N}f_{i}:\mathcal{D}\to\mathbb{R}\) probabilistically. As in [34], it relies on sampling functions with probabilities proportional to the ratios, for each \(i\in[N]\),
\[p_{i}=\max_{\begin{subarray}{c}A\in\mathcal{D}\\ F(A)\neq 0\end{subarray}}\frac{f_{i}(A)}{F(A)}. \tag{13}\]
The procedure with all details is given in Algorithm 1.
```
0: Function \(F=f_{1}+\cdots+f_{N}\) with \(f_{i}:\mathcal{D}\to\mathbb{R}\) given by evaluation oracles; error tolerance parameters \(\varepsilon,\delta\in(0,1)\)
0: Vector \(w\in\mathbb{R}^{N}\) such that
   * \(\mathbb{P}\left[w\text{ is an }\varepsilon\text{-sparsifier}\right]\geq 1-\delta\);
   * \(\mathbb{E}\left[\mathsf{size}(w)\right]=\mathcal{O}\left(\frac{\log|\mathcal{D}|+\log\frac{1}{\delta}}{\varepsilon^{2}}\sum_{i=1}^{N}p_{i}\right)\), where \(p_{i}=\max_{\begin{subarray}{c}A\in\mathcal{D}\\ F(A)\neq 0\end{subarray}}\frac{f_{i}(A)}{F(A)}\);
   * \(\mathbb{P}\left[\mathsf{size}(w)\leq\frac{3}{2}\mathbb{E}\left[\mathsf{size}(w)\right]\right]\geq 1-4\varepsilon^{2}\).
1:\(w\leftarrow(0,\ldots,0)\)
2:\(\kappa\gets 3\log\Big{(}\frac{2|\mathcal{D}|}{\delta}\Big{)}/\varepsilon^{2}\)
3:for\(i=1,\ldots,N\)do
4:\(p_{i}\leftarrow\max_{\begin{subarray}{c}A\in\mathcal{D}\\ F(A)\neq 0\end{subarray}}\frac{f_{i}(A)}{F(A)}\)\(\triangleright\) compute peak contribution (here: naively)
5:\(\kappa_{i}\leftarrow\min\{1,\kappa p_{i}\}\)\(\triangleright\) cap at \(1\) as \(\kappa_{i}\) is a probability
6:\(w_{i}\leftarrow\begin{cases}1/\kappa_{i}&\text{with probability }\kappa_{i}\\ 0&\text{with probability }1-\kappa_{i}\end{cases}\)\(\triangleright\) sample weight of \(f_{i}\)
7:endfor
8:return\(w\)
```
**Algorithm 1** The Core Sparsification Algorithm
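For illustration, here is a compact Python sketch of the sampling loop of Algorithm 1 (the identifiers are chosen for this sketch and are not part of any library). It takes the peak contributions, or any upper bounds on them in the spirit of Corollary 6 below, as input, since computing or approximating them is precisely the expensive step addressed in the remainder of the paper.

```python
import math
import random

def core_sparsifier(p_hat, domain_size, eps, delta, rng=random.Random(0)):
    """Sampling loop of Algorithm 1.

    p_hat[i] is (an upper bound on) the peak contribution
    max_{A in D, F(A) != 0} f_i(A) / F(A); domain_size is |D|.
    Returns the weight vector w; the sparsifier is A -> sum_i w[i] * f_i(A).
    """
    # natural logarithm; the write-up does not fix the base of the log
    kappa = 3.0 * math.log(2.0 * domain_size / delta) / eps ** 2
    w = []
    for p in p_hat:
        kappa_i = min(1.0, kappa * p)      # cap at 1, as kappa_i is a probability
        keep = rng.random() < kappa_i      # keep f_i with probability kappa_i
        w.append(1.0 / kappa_i if keep else 0.0)
    return w

def sparsifier_size(w):
    return sum(1 for wi in w if wi != 0.0)

if __name__ == "__main__":
    # N = 1000 summands over a domain of size 2**20, each with peak contribution 1/N
    # (the value obtained when every summand contributes equally).
    N = 1000
    w = core_sparsifier([1.0 / N] * N, domain_size=2 ** 20, eps=0.5, delta=0.1)
    print(sparsifier_size(w), "of", N, "summands kept")
```

In this toy run, about \(\kappa\approx 200\) of the \(1000\) summands are kept in expectation, in line with Theorem 3 (ii).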
**Theorem 3**.: _Algorithm 1 outputs a vector \(w\in\mathbb{R}^{N}\) such that_
1. \(\mathbb{P}\left[w\text{ is an }\varepsilon\text{-sparsifier}\right]\geq 1-\delta\)_;_
2. \(\mathbb{E}\left[\mathsf{size}(w)\right]=\mathcal{O}\left(\frac{\log|\mathcal{ D}|+\log\frac{1}{\delta}}{\varepsilon^{2}}\sum_{i=1}^{N}p_{i}\right)\)_, where_ \(p_{i}=\max_{\begin{subarray}{c}A\in\mathcal{D}\\ F(A)\neq 0\end{subarray}}\frac{f_{i}(A)}{F(A)}\)_;_
3. \(\mathbb{P}\left[\mathsf{size}(w)\leq\frac{3}{2}\mathbb{E}\left[\mathsf{size} (w)\right]\right]\geq 1-4\varepsilon^{2}\)_._
**Remark 4**.: Algorithm 1 can be invoked with \(\delta=\mathcal{O}(1/n^{c})\) so that it yields an \(\varepsilon\)-sparsifier with high probability. This only influences the running time by a constant factor \(c\) because of the dependence on \(\log\frac{1}{\delta}\).
**Remark 5**.: If the size of the sparsifier is of primary interest, running Algorithm 1 several times and taking the smallest vector \(w\) (with respect to \(\mathsf{size}(w)\)) leads to a procedure that, for any fixed \(\varepsilon>0\), returns a sparsifier of size \(\mathcal{O}\left(\frac{\log|\mathcal{D}|+\log\frac{1}{\delta}}{\varepsilon^{2}}\sum_{i=1}^{N}p_{i}\right)\) after a logarithmic number
of iterations. This is a consequence of Theorem 3 (iii). Notice that it might be necessary to choose \(\delta\) appropriately to also guarantee that the solution indeed is an \(\varepsilon\)-sparsifier with high probability.
**Corollary 6**.: _In the setting of Algorithm 1, let \(\widehat{p}_{1},\ldots,\widehat{p}_{N}\in\mathbb{R}_{\geq 0}\) satisfy \(\widehat{p}_{i}\geq p_{i}\) for all \(i\in[N]\). If Algorithm 1 is executed with the \(\widehat{p}_{i}\)'s instead of \(p_{i}=\max_{\begin{subarray}{c}A\in\mathcal{D}\\ F(A)\neq 0\end{subarray}}\frac{f_{i}(A)}{F(A)}\) in line 4, it returns a vector \(w\in\mathbb{R}^{N}\) such that_
1. \(\mathbb{P}\left[w\text{ is an $\varepsilon$-sparsifier}\right]\geq 1-\delta\)_;_
2. \(\mathbb{E}\left[\mathsf{size}(w)\right]=\mathcal{O}\left(\frac{\log|\mathcal{ D}|+\log\frac{1}{\delta}}{\varepsilon^{2}}\sum_{i=1}^{N}\widehat{p}_{i}\right)\)_;_
3. \(\mathbb{P}\left[\mathsf{size}(w)\leq\frac{3}{2}\mathbb{E}\left[\mathsf{size} (w)\right]\right]\geq 1-4\varepsilon^{2}\)_._
Note that Corollary 6 implies that upper bounds \(\widehat{p}_{i}\) within a constant factor of the \(p_{i}\)'s already do the job, leading to the same asymptotic bounds. However, in the general setting of functions \(f_{1},\ldots,f_{N}:\mathcal{D}\to\mathbb{R}\), there is no way of obtaining such approximations much faster than computing the exact \(p_{i}\)'s. Even in the case where all \(f_{i}\)'s are submodular, the \(p_{i}\)'s are hard to approximate.
**Remark 7**.: In general, the best upper bound we know on the peak contributions is \(p_{i}\leq 1\). Thus, Corollary 6 tells us that it is correct to invoke Algorithm 1 with \(\widehat{p}_{i}=1\) for all \(i\in[N]\). Since \(\kappa>1\) for \(\varepsilon\in(0,1)\), we then have \(\kappa_{i}=\min\{1,\kappa\widehat{p}_{i}\}=1\). This results in the initial decomposition being returned unchanged, i. e., Algorithm 1 essentially computes nothing - a sparsifier is not for free!
**Remark 8**.: There are various ways to implement Algorithm 1, leading to different running time bounds. The representation of the functions involved plays a key role here. In the most general scenario where no further assumptions are made, it is reasonable to assume each \(f_{i}\) is represented by an evaluation oracle with response time \(\mathcal{O}(\mathsf{EO}_{i})\). We may further assume an additional oracle for \(F\) with response time \(\mathcal{O}(\mathsf{EO}_{\Sigma})\) - it is in fact the case that, in many applications, \(F(S)\) can be computed in a much faster way than by adding up all \(f_{i}(S)\) for \(i\in[N]\).
The main work to be done for Algorithm 1 to successfully construct a (small) sparsifier is the computation or approximation of the peak contributions. In the most general setting, we are required to compute \(p_{i}\) by iterating through all \(A\in\mathcal{D}\), which takes time at least \(\Omega(|\mathcal{D}|)\). Hence, the running time of a naive implementation is in \(\mathcal{O}\left(N|\mathcal{D}|\sum_{i=1}^{N}\mathsf{EO}_{i}\right)\). The main contribution of our work is to show how to approximate \(p_{i}\)'s efficiently for interesting cases, most notably for monotone \(k\)-submodular functions of low curvature.
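For reference, the naive computation of a single peak contribution can be stated in two lines of Python (assuming `domain` is an explicit iterable over all of \(\mathcal{D}\) and `f_i`, `F` are evaluation oracles; the identifiers are chosen for this sketch); its cost is exactly the \(\Omega(|\mathcal{D}|)\) bottleneck discussed above.

```python
def peak_contribution_naive(f_i, F, domain):
    """p_i = max over A in D with F(A) != 0 of f_i(A) / F(A); needs Theta(|D|) oracle calls."""
    return max(f_i(A) / F(A) for A in domain if F(A) != 0)
```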
## 3 Monotone \(k\)-Submodular Functions of Low Curvature
Let \(F:(k+1)^{E}\to\mathbb{R}_{\geq 0}\) be decomposable as \(F=\sum_{i=1}^{N}f_{i}\) such that each \(f_{i}:(k+1)^{E}\to\mathbb{R}_{\geq 0}\) is a non-negative monotone \(k\)-submodular function of low curvature. It follows from the definitions that \(F\) is also non-negative, monotone, \(k\)-submodular, and of low curvature.
We will show how to approximate the peak contributions \(p_{i}=\max_{\mathbf{A}\in(k+1)^{E}}\frac{f_{i}(\mathbf{A})}{F(\mathbf{A})}\).
To this end, it suffices to approximate \(\max_{\mathbf{A}\in(k+1)^{E}}\frac{f(\mathbf{A})}{g(\mathbf{A})}\) for two monotone \(k\)-submodular functions \(f,g:(k+1)^{E}\to\mathbb{R}_{\geq 0}\) of low curvature. Given \(\mathbf{A}=(A_{1},\ldots,A_{k})\in(k+1)^{E}\), we
define
\[S_{f}(\mathbf{A}):=\sum_{i=1}^{k}\sum_{e\in A_{i}}\Delta_{e,i}\left(f\mid\varnothing,\ldots,\varnothing\right) \tag{14}\]
and
\[S_{g}(\mathbf{A}):=\sum_{i=1}^{k}\sum_{e\in A_{i}}\Delta_{e,i}\left(g\mid \varnothing,\ldots,\varnothing\right). \tag{15}\]
It turns out that \(\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}(\mathbf{A})+ g(\varnothing,\ldots,\varnothing)}\) approximates \(\frac{f(\mathbf{A})}{g(\mathbf{A})}\) well.
**Lemma 9**.: _Let \(\mathbf{A}\in(k+1)^{E}\) be a \((1-\varepsilon)\)-approximate maximiser of \(\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}(\mathbf{A})+g(\varnothing,\ldots,\varnothing)}\). Then_
\[\frac{f(\mathbf{A})}{g(\mathbf{A})}\geq(1-\varepsilon)(1-c_{f})(1-c_{g})\frac {f(\mathbf{A}^{*})}{g(\mathbf{A}^{*})}\]
_for any \(\mathbf{A}^{*}\in(k+1)^{E}\)._
Setting \(\varepsilon=1/2\) in Lemma 9 gives a \(\frac{1}{2}(1-c_{f})(1-c_{g})\)-approximation, which is a constant factor if \(c_{f}\) and \(c_{g}\) are considered constants. It remains to describe how \(\max_{\mathbf{A}\in(k+1)^{E}}\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}(\mathbf{A})+g(\varnothing,\ldots,\varnothing)}\) can be approximated up to a factor of \(1-\varepsilon\) (or \(1/2\) specifically in our use case). This is done by a reduction to the modular-ratio-max problem,3 defined below, for which we design a fully polynomial-time approximation scheme (FPTAS).
Footnote 3: Note that we allow \(A\) and \(B\) to be equal to zero, while the \(x_{i}\)'s and \(y_{i}\)'s are strictly positive. This is a somewhat technical requirement to avoid division by zero.
**MODULAR-RATIO-MAX**

**Given:** \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\in\mathbb{R}_{>0}\) and \(A,B\in\mathbb{R}_{\geq 0}\).

**Want:** An index set \(\varnothing\neq I\subseteq[n]\) such that \(\varrho(I):=\frac{A+\sum_{i\in I}x_{i}}{B+\sum_{i\in I}y_{i}}\) is maximal.
The reduction to modular-ratio-max is now easy to describe. Recall that
\[\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}(\mathbf{A})+ g(\varnothing,\ldots,\varnothing)}=\frac{A+\sum_{i=1}^{k}\sum_{e\in A_{i}} \Delta_{e,i}\left(f\mid\varnothing,\ldots,\varnothing\right)}{B+\sum_{i=1}^{k }\sum_{e\in A_{i}}\Delta_{e,i}\left(g\mid\varnothing,\ldots,\varnothing\right)} \tag{16}\]
with \(A:=f(\varnothing,\ldots,\varnothing)\) and \(B:=g(\varnothing,\ldots,\varnothing)\). If we number the pairs \((e,i)\in E\times[k]\) in some arbitrary order \((e_{1},i_{1}),\ldots,(e_{nk},i_{nk})\), we find that maximising \(\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}(\mathbf{A})+ g(\varnothing,\ldots,\varnothing)}\) is the same as maximising
\[\frac{A+\sum_{\ell\in I}\Delta_{e_{\ell},i_{\ell}}\left(f\mid\varnothing, \ldots,\varnothing\right)}{B+\sum_{\ell\in I}\Delta_{e_{\ell},i_{\ell}}\left( g\mid\varnothing,\ldots,\varnothing\right)} \tag{17}\]
over all index sets \(\varnothing\neq I\subseteq[nk]\) (the \(I=\varnothing\) case can be checked manually or ignored as it corresponds to \(\mathbf{A}=(\varnothing,\ldots,\varnothing)\)). Since \(f\) and \(g\) are monotone, marginal gains are always non-negative, so \(\Delta_{e_{\ell},i_{\ell}}\left(f\mid\varnothing,\ldots,\varnothing\right)\geq 0\) and \(\Delta_{e_{\ell},i_{\ell}}\left(g\mid\varnothing,\ldots,\varnothing\right)\geq 0\) for all \(\ell\in[nk]\). To satisfy the strict positivity as required in the definition of modular-ratio-max, we can drop the marginal gains that are equal to \(0\). This will only shrink the problem.
Summing up, the algorithm to compute \(p_{i}=\max_{\mathbf{A}\in(k+1)^{E}}\frac{f_{i}(\mathbf{A})}{F(\mathbf{A})}\) can be outlined as follows:
* Compute a \((1/2)\)-approximation \(\mathbf{A}^{*}\) to \(\max_{\mathbf{A}\in(k+1)^{E}}\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots, \varnothing)}{S_{g}(\mathbf{A})+g(\varnothing,\ldots,\varnothing)}\) via the modular-ratio-max reduction and Algorithm 2.
* Let \(\widehat{p}_{i}:=\frac{2}{(1-c_{f})(1-c_{g})}\frac{f(\mathbf{A}^{*})}{g( \mathbf{A}^{*})}\).
It is guaranteed that \(\widehat{p}_{i}\geq p_{i}\) by Lemma 9. Moreover, \(\widehat{p}_{i}=\mathcal{O}(1)\cdot p_{i}\), so we get a sparsifier of an expected size that matches the existence result by applying the core algorithm. Furthermore, the algorithm runs in polynomial time, as we will see in Lemma 10.
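A possible way to glue the two bullet points above together in code is sketched below (all identifiers are chosen for this sketch). It assumes evaluation oracles for \(f\) and \(g\), curvature bounds \(c_{f},c_{g}<1\), and a routine `modular_ratio_max` implementing the FPTAS of Algorithm 2; a Python sketch of that routine is given after Algorithm 2 below. For simplicity, the reconstruction of \(\mathbf{A}^{*}\) assumes that the returned index set selects each element for at most one part, mirroring the identification of \(\mathbf{A}\) with its set of (element, part) pairs used in the reduction.

```python
def base_gains(h, E, k):
    """Marginal gains Delta_{e,i} h(empty, ..., empty) for all pairs (e, i)."""
    empty = tuple(frozenset() for _ in range(k))
    gains = {}
    for e in E:
        for i in range(k):
            singleton = tuple(frozenset([e]) if j == i else frozenset() for j in range(k))
            gains[(e, i)] = h(singleton) - h(empty)
    return gains

def approx_peak_contribution(f, g, E, k, c_f, c_g, modular_ratio_max):
    """Return p_hat >= max_A f(A)/g(A), within a constant factor, via Lemma 9."""
    empty = tuple(frozenset() for _ in range(k))
    gf, gg = base_gains(f, E, k), base_gains(g, E, k)
    # Drop pairs with a zero marginal gain, as described in the reduction;
    # we assume at least one pair with positive gains remains.
    pairs = [p for p in gf if gf[p] > 0 and gg[p] > 0]
    xs = [gf[p] for p in pairs]
    ys = [gg[p] for p in pairs]
    # 1/2-approximate maximiser of (f(empty) + sum x_i) / (g(empty) + sum y_i).
    I = modular_ratio_max(xs, ys, A=f(empty), B=g(empty), eps=0.5)
    chosen = [pairs[idx] for idx in I]
    A_star = tuple(frozenset(e for (e, i) in chosen if i == part) for part in range(k))
    return 2.0 / ((1.0 - c_f) * (1.0 - c_g)) * f(A_star) / g(A_star)
```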
### FPTAS
If \(A=B=0\), modular-ratio-max has a very simple solution: We can just take \(I=\{i\}\) for an index \(i\) that maximises \(x_{i}/y_{i}\). However, this is not optimal in general as the following example shows. Let \(A=1\), \(B=100\) and \(x_{1}=2\), \(y_{1}=3\), \(x_{2}=1\), \(y_{2}=1\). Clearly, the ratio \(x_{i}/y_{i}\) is maximised for \(i=2\), leading to an overall \(\varrho\)-value of
\[\varrho(\{2\})=\frac{1+1}{100+1}=\frac{2}{101}\]
as opposed to
\[\varrho(\{1,2\})=\frac{1+2+1}{100+3+1}=\frac{4}{104},\]
which is clearly larger. In fact, it is not hard to see that the maximiser of \(x_{i}/y_{i}\) does not even provide a constant-factor approximation; it may end up arbitrarily bad compared to an optimal solution. This indicates that we need to do something else.
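The failure can be made quantitative with a small parametric family of instances (constructed here purely for illustration): index \(2\) always maximises \(x_{i}/y_{i}\), yet \(\varrho(\{2\})\) tends to \(0\) while \(\varrho(\{1,2\})\) stays close to \(1/2\) as \(T\) grows.

```python
def rho(I, xs, ys, A, B):
    return (A + sum(xs[i] for i in I)) / (B + sum(ys[i] for i in I))

for T in [10, 1000, 100000]:
    A, B = 1.0, float(T)
    xs, ys = [float(T), 1.01], [float(T), 1.0]   # x_2 / y_2 = 1.01 > x_1 / y_1 = 1
    print(T, rho([1], xs, ys, A, B), rho([0, 1], xs, ys, A, B))
```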
The solution is an FPTAS based on binary search, outlined in Algorithm 2. This is possible because we can easily solve the associated decision problem: Given a target value \(\lambda\in\mathbb{R}\), does there exist an index set \(\varnothing\neq I\subseteq[n]\) such that \(\varrho(I)\geq\lambda\)?
The decision problem is simplified by algebraic equivalence transformations:
\[\varrho(I)\geq\lambda\iff\frac{A+\sum_{i\in I}x_{i}}{B+\sum_{i\in I}y_{i}} \geq\lambda\iff A-B\lambda+\sum_{i\in I}\left(x_{i}-\lambda y_{i}\right)\geq 0\]
To see if the last expression is non-negative for any non-empty index set \(I\), we consider the indices in non-increasing order of the quantities \(x_{i}-\lambda y_{i}\). We have to take the first index in this order (as \(I\neq\varnothing\) is required) and will then take all remaining indices \(i\) for which \(x_{i}-\lambda y_{i}\) is positive. This maximises the LHS over all \(I\neq\varnothing\). If this maximum is non-negative, we know that \(\varrho(I)\geq\lambda\) by the above equivalences.
Let \(m:=\min\{x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\}\) and \(M:=\max\{x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\}\). For any non-empty index set \(I\), we always have
\[\varrho(I)=\frac{A+\sum_{i\in I}x_{i}}{B+\sum_{i\in I}y_{i}}\geq\frac{A+m}{B+ nM} \tag{18}\]
as well as
\[\varrho(I)=\frac{A+\sum_{i\in I}x_{i}}{B+\sum_{i\in I}y_{i}}\leq\frac{A+nM}{B+ m}, \tag{19}\]
so we can initialise the binary search with \(\varrho^{-}=\frac{A+m}{B+nM}\) and \(\varrho^{+}=\frac{A+nM}{B+m}\). Once the interval \([\varrho^{-},\varrho^{+}]\) has length at most \(\varepsilon\frac{A+m}{B+nM}\), we know that the multiplicative error is at most \(\varepsilon\). Since the interval size halves in each step, this point is reached after no more than \(\left\lceil\log\frac{1}{\varepsilon}+2\left(\log n+\log\frac{M}{m}\right)\right\rceil\) iterations, as the following lemma shows.
**Lemma 10**.: _The binary search in Algorithm 2 terminates in \(k:=\left\lceil\log\frac{1}{\varepsilon}+2\left(\log n+\log\frac{M}{m}\right)\right\rceil\) iterations. Moreover, the final set \(I\) satisfies \(\varrho(I)\geq(1-\varepsilon)\varrho(I^{*})\) for any \(\varnothing\neq I^{*}\subseteq[n]\)._
Proof.: To see the iteration bound, we note that \(|\varrho^{+}-\varrho^{-}|\) shrinks by a factor of \(2\) in each iteration. Thus, after \(k\) iterations, it holds
\[\left|\varrho^{+}-\varrho^{-}\right|\leq\frac{1}{2^{k}}\left|\frac{A+nM}{B+m} -\frac{A+m}{B+nM}\right|\leq\frac{1}{2^{k}}\frac{A+nM}{B+m}.\]
We want this to be \(\leq\varepsilon\frac{A+m}{B+nM}\), which is equivalent to
\[\frac{1}{2^{k}}\frac{A+nM}{B+m}\leq\varepsilon\frac{A+m}{B+nM}\iff 2^{k}\geq\frac{1}{\varepsilon}\frac{A+nM}{A+m}\frac{B+nM}{B+m}. \tag{20}\]
Since \(nM\geq m\), we have \(\frac{A+nM}{A+m}\leq\frac{nM}{m}\) as well as \(\frac{B+nM}{B+m}\leq\frac{nM}{m}\), hence
\[\frac{1}{\varepsilon}\frac{A+nM}{A+m}\frac{B+nM}{B+m}\leq\frac{1}{\varepsilon} \left(\frac{nM}{m}\right)^{2},\]
so it suffices to satisfy \(2^{k}\geq\frac{1}{\varepsilon}\left(\frac{nM}{m}\right)^{2}\) in order for Equation (20) to hold. This is indeed satisfied for any \(k\geq\log\frac{1}{\varepsilon}+2\left(\log n+\log\frac{M}{m}\right)\), showing the iteration bound.
For the error bound, let \(\varnothing\neq I^{*}\subseteq[n]\) be arbitrary. By Equation (18), we know that \(\varrho(I^{*})\geq\frac{A+m}{B+nM}\). Moreover, the binary search preserves two invariants:
1. \(\varrho^{-}\leq\varrho(I)\leq\varrho^{+}\) for the set \(I\) in Algorithm 2, and
2. There is no set \(\varnothing\neq I^{\prime}\subseteq[n]\) with \(\varrho(I^{\prime})>\varrho^{+}\).
Combining both and the fact that \(|\varrho^{+}-\varrho^{-}|\leq\varepsilon\frac{A+m}{B+nM}\) at termination, we conclude
\[\varrho(I)\geq\varrho^{-}=\varrho^{+}-\left(\varrho^{+}-\varrho^{-}\right) \geq\varrho^{+}-\varepsilon\frac{A+m}{B+nM}\geq\varrho(I^{*})-\varepsilon \varrho(I^{*})=(1-\varepsilon)\varrho(I^{*}),\]
where we also exploited \(\varrho(I^{*})\geq\frac{A+m}{B+nM}\) (Equation (18)) and we used invariant 2, which gives \(\varrho(I^{*})\leq\varrho^{+}\).
We remark that, if all input numbers are integers, we get an iteration bound of
\[k\leq\left\lceil\log\frac{1}{\varepsilon}+2\left(\log n+\log U\right)\right\rceil =\mathcal{O}\left(\log\frac{1}{\varepsilon}+\log n+\log U\right),\]
where \(U\) is the largest number that occurs as part of the input. We have now solved modular-ratio-max.
```
0:modular-ratio-max instance \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\in\mathbb{R}_{>0}\) and \(A,B\in\mathbb{R}_{\geq 0}\); error tolerance \(\varepsilon>0\).
0: Index set \(\varnothing\neq I\subseteq[n]\) such that \(\varrho(I)\geq(1-\varepsilon)\varrho(I^{*})\) for all \(\varnothing\neq I^{*}\subseteq[n]\).
1:procedurecheck(\(\lambda\))
2: Sort indices such that \(x_{i_{1}}-\lambda y_{i_{1}}\geq x_{i_{2}}-\lambda y_{i_{2}}\geq\cdots\geq x_{i_ {n}}-\lambda y_{i_{n}}\)
3:\(I\leftarrow\{i_{1}\}\)
4:\(S\leftarrow(A-\lambda B)+(x_{i_{1}}-\lambda y_{i_{1}})\)
5:for\(\ell=2\)to\(n\)do
6:if\(x_{i_{\ell}}-\lambda y_{i_{\ell}}>0\)then
7:\(I\gets I\cup\{i_{\ell}\}\)
8:\(S\gets S+(x_{i_{\ell}}-\lambda y_{i_{\ell}})\)
9:endif
10:endfor
11:return\(\begin{cases}I&\text{if }S\geq 0\\ \bot&\text{otherwise}\end{cases}\)
12:endprocedure
13:\(I\leftarrow\{1\}\)\(\triangleright\) initialise with arbitrary feasible solution
14:\(m\leftarrow\min\left\{x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\right\}\)
15:\(M\leftarrow\max\left\{x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\right\}\)
16:\(\varrho^{-}\leftarrow\frac{A+m}{B+nM}\)
17:\(\varrho^{+}\leftarrow\frac{A+nM}{B+m}\)
18:while\(|\varrho^{+}-\varrho^{-}|>\varepsilon\frac{A+m}{B+nM}\)do\(\triangleright\) binary search till multiplicative error is \(\leq\varepsilon\)
19:\(\lambda\leftarrow\frac{1}{2}\left(\varrho^{-}+\varrho^{+}\right)\)
20:\(I_{\lambda}\leftarrow\textsc{check}(\lambda)\)
21:if\(I_{\lambda}=\bot\)then
22:\(\varrho^{+}\leftarrow\lambda\)
23:else
24:\(I\gets I_{\lambda}\)
25:\(\varrho^{-}\leftarrow\lambda\)
26:endif
27:endwhile
28:return\(I\)
```
**Algorithm 2** FPTAS for modular-ratio-max
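A direct Python transcription of Algorithm 2 could look as follows (function and variable names are chosen for this sketch); on the small instance from the beginning of this subsection it returns both indices, matching the observation that \(\varrho(\{1,2\})\) is optimal there.

```python
def modular_ratio_max(xs, ys, A, B, eps):
    """FPTAS for modular-ratio-max: a non-empty index set I (0-based) with
    rho(I) >= (1 - eps) * rho(I*) for every non-empty I*."""
    n = len(xs)
    assert n >= 1 and all(x > 0 for x in xs) and all(y > 0 for y in ys)

    def check(lam):
        """Return some non-empty I with rho(I) >= lam, or None if there is none."""
        order = sorted(range(n), key=lambda i: xs[i] - lam * ys[i], reverse=True)
        I = [order[0]]                                  # I must be non-empty
        S = (A - lam * B) + (xs[order[0]] - lam * ys[order[0]])
        for i in order[1:]:
            if xs[i] - lam * ys[i] > 0:
                I.append(i)
                S += xs[i] - lam * ys[i]
        return I if S >= 0 else None

    m, M = min(xs + ys), max(xs + ys)
    lo, hi = (A + m) / (B + n * M), (A + n * M) / (B + m)   # bounds (18) and (19)
    I = [0]                                                 # arbitrary feasible solution
    while hi - lo > eps * (A + m) / (B + n * M):            # until multiplicative error <= eps
        lam = (lo + hi) / 2.0
        I_lam = check(lam)
        if I_lam is None:
            hi = lam
        else:
            I, lo = I_lam, lam
    return I

if __name__ == "__main__":
    # The instance A = 1, B = 100, x = (2, 1), y = (3, 1) discussed above.
    print(sorted(modular_ratio_max([2.0, 1.0], [3.0, 1.0], A=1.0, B=100.0, eps=0.01)))  # [0, 1]
```

By Lemma 10, the number of binary-search iterations is logarithmic in \(1/\varepsilon\), \(n\), and \(M/m\), and each call to `check` costs one sort, so the sketch runs in polynomial time.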
### Proof of Lemma 9
In the rest of this section, we will prove Lemma 9. But first, we need a helpful lemma.
**Lemma 11**.: _Let \(f,g:(k+1)^{E}\to\mathbb{R}_{\geq 0}\) be monotone \(k\)-submodular of low curvature. Then_
\[\frac{(1-c_{f})S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}( \mathbf{A})+g(\varnothing,\ldots,\varnothing)}\leq\frac{f(\mathbf{A})}{g( \mathbf{A})}\leq\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{(1- c_{g})S_{g}(\mathbf{A})+g(\varnothing,\ldots,\varnothing)}\]
_for all \(\mathbf{A}\in(k+1)^{E}\)._
Proof.: Fix \(\mathbf{A}\in(k+1)^{E}\) and label the elements of \(E\) in such a way that \(A_{i}=\left\{e_{1}^{(i)},\ldots,e_{a_{i}}^{(i)}\right\}\) for each \(1\leq i\leq k\), where \(a_{i}=|A_{i}|\). Now,
\[f(\mathbf{A})-f(\varnothing,\ldots,\varnothing)=\sum_{i=1}^{k}\sum_{j=1}^{a_{i} }\Delta_{e_{j}^{(i)},i}\left(f\mid A_{1},\ldots,A_{i-1},\{e_{1}^{(i)},\ldots,e _{j-1}^{(i)}\},\varnothing,\ldots,\varnothing\right).\]
The \(\Delta_{e_{j}^{(i)},i}\left(f\mid A_{1},\ldots,A_{i-1},\{e_{1}^{(i)},\ldots,e _{j-1}^{(i)}\},\varnothing,\ldots,\varnothing\right)\) terms can be estimated in both directions. By the diminishing returns property, we know that
\[\Delta_{e_{j}^{(i)},i}\left(f\mid A_{1},\ldots,A_{i-1},\{e_{1}^{(i)},\ldots,e _{j-1}^{(i)}\},\varnothing,\ldots,\varnothing\right)\leq\Delta_{e_{j}^{(i)},i} \left(f\mid\varnothing,\ldots,\varnothing\right),\]
while we conclude
\[\Delta_{e_{j}^{(i)},i}\left(f\mid A_{1},\ldots,A_{i-1},\{e_{1}^{(i)},\ldots,e _{j-1}^{(i)}\},\varnothing,\ldots,\varnothing\right)\geq(1-c_{f})\Delta_{e_{j}^ {(i)},i}\left(f\mid\varnothing,\ldots,\varnothing\right)\]
from the curvature of \(f\). Since
\[S_{f}(\mathbf{A}):=\sum_{i=1}^{k}\sum_{j=1}^{a_{i}}\Delta_{e_{j}^{(i)},i} \left(f\mid\varnothing,\ldots,\varnothing\right),\]
we see that \((1-c_{f})S_{f}(\mathbf{A})\leq f(\mathbf{A})-f(\varnothing,\ldots,\varnothing)\leq S_{f}(\mathbf{A})\) after combining both inequalities. Analogously, we derive \((1-c_{g})S_{g}(\mathbf{A})\leq g(\mathbf{A})-g(\varnothing,\ldots,\varnothing)\leq S_{g}(\mathbf{A})\) for \(g\). Next,
\[\frac{f(\mathbf{A})}{g(\mathbf{A})}=\frac{f(\mathbf{A})-f(\varnothing,\ldots, \varnothing)+f(\varnothing,\ldots,\varnothing)}{g(\mathbf{A})-g(\varnothing, \ldots,\varnothing)+g(\varnothing,\ldots,\varnothing)}\leq\frac{S_{f}( \mathbf{A})+f(\varnothing,\ldots,\varnothing)}{(1-c_{g})S_{g}(\mathbf{A})+g( \varnothing,\ldots,\varnothing)}\]
and
\[\frac{f(\mathbf{A})}{g(\mathbf{A})}=\frac{f(\mathbf{A})-f(\varnothing,\ldots,\varnothing)+f(\varnothing,\ldots,\varnothing)}{g(\mathbf{A})-g(\varnothing, \ldots,\varnothing)+g(\varnothing,\ldots,\varnothing)}\geq\frac{(1-c_{f})S_{f}( \mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}(\mathbf{A})+g(\varnothing, \ldots,\varnothing)},\]
squeezing \(\frac{f(\mathbf{A})}{g(\mathbf{A})}\) between \(\frac{(1-c_{f})S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}( \mathbf{A})+g(\varnothing,\ldots,\varnothing)}\) and \(\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{(1-c_{g})S_{g}( \mathbf{A})+g(\varnothing,\ldots,\varnothing)}\).
**Lemma** (Lemma 9 restated).: _Let \(\mathbf{A}\in(k+1)^{E}\) be a \((1-\varepsilon)\)-approximate maximiser of \(\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}(\mathbf{A})+g(\varnothing,\ldots,\varnothing)}\). Then_
\[\frac{f(\mathbf{A})}{g(\mathbf{A})}\geq(1-\varepsilon)(1-c_{f})(1-c_{g}) \frac{f(\mathbf{A}^{*})}{g(\mathbf{A}^{*})}\]
_for any \(\mathbf{A}^{*}\in(k+1)^{E}\)._
Proof.: Note that \(\frac{f(\mathbf{A})}{g(\mathbf{A})}\) and \(\frac{f(\mathbf{A}^{*})}{g(\mathbf{A}^{*})}\) are included in the ranges stated by Lemma 11, i. e.,
\[\frac{(1-c_{f})S_{f}(\mathbf{A}^{*})+f(\varnothing,\ldots,\varnothing)}{S_{g}( \mathbf{A}^{*})+g(\varnothing,\ldots,\varnothing)}\leq\frac{f(\mathbf{A}^{*})} {g(\mathbf{A}^{*})}\leq\frac{S_{f}(\mathbf{A}^{*})+f(\varnothing,\ldots, \varnothing)}{(1-c_{g})S_{g}(\mathbf{A}^{*})+g(\varnothing,\ldots,\varnothing)}\]
and
\[\frac{(1-c_{f})S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{S_{g}( \mathbf{A})+g(\varnothing,\ldots,\varnothing)}\leq\frac{f(\mathbf{A})}{g( \mathbf{A})}\leq\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots,\varnothing)}{(1- c_{g})S_{g}(\mathbf{A})+g(\varnothing,\ldots,\varnothing)}.\]
Combining this with the fact that \(\mathbf{A}\) is a \((1-\varepsilon)\)-approximate maximiser and the non-negativity of \(f\) and \(g\), we conclude that
\[\frac{f(\mathbf{A})}{g(\mathbf{A})} \geq\frac{(1-c_{f})S_{f}(\mathbf{A})+f(\varnothing,\ldots, \varnothing)}{S_{g}(\mathbf{A})+g(\varnothing,\ldots,\varnothing)}\] \[\geq(1-c_{f})\frac{S_{f}(\mathbf{A})+f(\varnothing,\ldots, \varnothing)}{S_{g}(\mathbf{A})+g(\varnothing,\ldots,\varnothing)}\] \[\geq(1-\varepsilon)(1-c_{f})(1-c_{g})\frac{S_{f}(\mathbf{A}^{*})+ f(\varnothing,\ldots,\varnothing)}{(1-c_{g})S_{g}(\mathbf{A}^{*})+g(\varnothing, \ldots,\varnothing)}\] \[\geq(1-\varepsilon)(1-c_{f})(1-c_{g})\frac{f(\mathbf{A}^{*})}{g( \mathbf{A}^{*})}.\]
|
2308.16273 | Identifiable specializations for ODE models | The parameter identifiability problem for a dynamical system is to determine
whether the parameters of the system can be found from data for the outputs of
the system. Verifying whether the parameters are identifiable is a necessary
first step before a meaningful parameter estimation can take place.
Non-identifiability occurs in practical models. To reparametrize a model to
achieve identifiability is a challenge. The existing approaches have been shown
to be useful for many important examples. However, these approaches are either
limited to linear models and scaling parametrizations or are not guaranteed to
find a reparametrization even if it exists. In the present paper, we prove that
there always exists a locally identifiable model with the same input-output
behaviour as the original one obtained from a given one by a partial
specialization of the parameters. As an extra feature of our approach, the
resulting (at least) locally identifiable reparameterization has the same
shape: the monomials in the new state variables in the new model are formed in
the same way as in the original model. Furthermore, we give a sufficient
observability condition for the existence of a state space transformation from
the original model to the new one. Our proof is constructive and can be
translated to an algorithm, which we illustrate by several examples. | Alexey Ovchinnikov, Anand Pillay, Gleb Pogudin, Thomas Scanlon | 2023-08-30T19:02:26Z | http://arxiv.org/abs/2308.16273v2 | # Identifiable specializations for ODE models
###### Abstract
The parameter identifiability problem for a dynamical system is to determine whether the parameters of the system can be found from data for the outputs of the system. Verifying whether the parameters are identifiable is a necessary first step before a meaningful parameter estimation can take place. Non-identifiability occurs in practical models. To reparametrize a model to achieve identifiability is a challenge. The existing approaches have been shown to be useful for many important examples. However, these approaches are either limited to linear models and scaling parametrizations or are not guaranteed to find a reparametrization even if it exists. In the present paper, we prove that there always exists a locally identifiable model with the same input-output behaviour as the original one obtained from a given one by a partial specialization of the parameters. Furthermore, we give a sufficient observability condition for the existence of a state space transformation from the original model to the new one. Our proof is constructive and can be translated to an algorithm, which we illustrate by several examples.
+
Footnote †: journal: to a journal for peer review
## 1 Introduction
### _Motivation_
Scientists and engineers often model a process under investigation using a parametric ODE, which is a system of ordinary differential equations involving some unspecified parametric constants. The unknown parameters are usually determined (identified) from the input and measured output data. However, for some parametric ODEs, due to their intrinsic structure, it might not be possible to identify the parameters uniquely from the input and measured data (even noise-free). Therefore, while designing a model, it is crucial to make sure that the parametric ODE model is identifiable.
If the initially designed model has non-identifiable parameters, the next natural step would be to find another model with the same input-output behavior but with all parameters identifiable. This is a problem we study in the present paper.
### _Prior work_
There exist efficient algorithms for finding reparametrizations of specific form such as scaling transformations [10] or linear reparametrizations [12, 18]. More refined results have been obtained for scaling reparametrizations of linear compartmental models [1, 16]. Several approaches have been proposed for producing locally identifiable reparametrizations [7, 13, 15] which succeed in finding nontrivial parametrizations for models from the literature but are not guaranteed to produce a reparametrization if it exists. The existence of identifiable reparametrisation was also not completely understood: Sussmann's theorem [25] guarantees the existence of an identifiable model with the same input-output behaviour at the cost of allowing models to be defined on a manifold, in other words, if we allow to replace ODEs with differential-algebraic equations.
### _Our contribution_
We prove that it is, in fact, always possible to replace the original ODE system, by partially specializing the parameters, with another one that has the same input-output behaviour but all parameters locally identifiable; in particular, the shape of the system does not change (Theorem 1). We also give an example showing that this statement is not true for global identifiability (see Section IV-A). Under an additional observability condition, we also show that there exists a state space transformation between the original model and the new one (Theorem 1). Our proofs are constructive and can be directly translated into algorithms, which we showcase by several examples (Section IV). Our Maple code for these examples can be found in [20].
## 2 Main result
### _Preliminaries and setup_
In what follows, \(\bar{x}\) means that \(\bar{x}=(x_{1},\ldots,x_{n})\) is a tuple of length \(n\); we will write expressions like \(\bar{x}\in A\) to mean that \(x_{i}\in A\) for each component \(x_{i}\) of \(\bar{x}\). Our main object will be an ODE system
\[\Sigma(\bar{\alpha}):=\begin{cases}\bar{x}^{\prime}(t)=\bar{f}(\bar{x}(t),\bar{\alpha},\bar{u}(t))\\ \bar{y}(t)=\bar{g}(\bar{x}(t),\bar{\alpha},\bar{u}(t)),\end{cases} \tag{1}\]
where \(\bar{\alpha}\) is a vector of scalar parameters, \(\bar{x}(t)\), \(\bar{y}(t)\), and \(\bar{u}(t)\) are the state, output, and input functions, respectively (in what follows we will omit the dependence on \(t\) for brevity). We will focus on rational ODE models, that is, the case of \(\bar{f}\) and \(\bar{g}\) being tuples of rational functions over \(\mathbb{Q}\) (or \(\mathbb{C}\)).
To formally define the main property of interest, input-output identifiability (IO-identifiability), we will introduce some notation from algebra.
1. A _differential ring_ \((R,{}^{\prime})\) is a commutative ring with a derivation \({}^{\prime}:R\to R\), that is, a map such that, for all \(a,b\in R\), \((a+b)^{\prime}=a^{\prime}+b^{\prime}\) and \((ab)^{\prime}=a^{\prime}b+ab^{\prime}\).
2. The _ring of differential polynomials_ in the variables \(x_{1},\ldots,x_{n}\) over a field \(K\) is the ring \(K[x_{j}^{(i)}\mid i\geqslant 0\), \(1\leqslant j\leqslant n]\) with a derivation defined on the ring by \((x_{j}^{(i)})^{\prime}:=x_{j}^{(i+1)}\). This differential ring is denoted by \(K\{x_{1},\ldots,x_{n}\}\).
3. An ideal \(I\) of a differential ring \((R,{}^{\prime})\) is called a _differential ideal_ if, for all \(a\in I\), \(a^{\prime}\in I\). For \(F\subset R\), the smallest differential ideal containing the set \(F\) is denoted by \([F]\).
4. For an ideal \(I\) and element \(a\) in a ring \(R\), we denote \(I\colon a^{\infty}=\{r\in R\mid\exists\ell\colon a^{\ell}r\in I\}\). This set is also an ideal in \(R\).
5. Given \(\Sigma\) as in (1), we define the differential ideal of \(\Sigma\) as \[I_{\Sigma}:=[Q\bar{x}^{\prime}-Q\bar{f},Q\bar{y}-Q\bar{g}]:Q^{\infty}\subset \mathbb{Q}(\bar{\alpha})\{\bar{x},\bar{y},\bar{u}\},\] where \(Q\) is the common denominator of \(\bar{f}\) and \(\bar{g}\). The relations between the inputs and outputs of the system can be found by intersecting this ideal with the corresponding subring: \[I_{\Sigma}\cap\mathbb{Q}(\bar{\alpha})\{\bar{y},\bar{u}\}.\] (2)
6. For a field \(K\), let \(\overline{K}\) denote the algebraic closure of \(K\).
Roughly speaking, input-output identifiability is the property that a parameter can be determined from the inputs and outputs using the IO-equations. Here is a precise formal definition:
**Definition 1** (IO-identifiability).:
1. The smallest field \(k\) such that \(\mathbb{Q}\subset k\subset\mathbb{Q}(\bar{\alpha})\) and \(I_{\Sigma}\cap\mathbb{Q}(\bar{\alpha})\{\bar{y},\bar{u}\}\) is generated (as an ideal or, equivalently, as a differential ideal) by \(I_{\Sigma}\cap k\{\bar{y},\bar{u}\}\) is called _the field of IO-identifiable functions_.
2. We call \(h\in\mathbb{Q}(\bar{\alpha})\) _IO-identifiable_ if \(h\in k\). We also call \(h\in\mathbb{Q}(\bar{\alpha})\)_locally IO-identifiable_ if \(h\) is in the algebraic closure of the field \(k\).
3. The _IO-equations_ are defined as the monic characteristic presentation of the differential ideal \(I_{\Sigma}\cap\mathbb{Q}(\bar{\alpha})\{\bar{y},\bar{u}\}\) (see [21, Definition 6 and Section 5.2] for more details). For a fixed differential ranking, such a monic characteristic presentation is unique [3, Theorem 3].
In many cases, IO-identifiability is equivalent to identifiability. See, e.g., a rigorously written definition of identifiability [8, Definition 2.5], [21, Section 4] for a sufficient condition for the equivalence, and [21, Examples 2.6 and 2.7] for simple examples of non-equivalence. Additionally, it turns out that IO-identifiability is equivalent to multi-experimental identifiability [19, Theorem 19]. Finally, several software packages check IO-identifiability [2, 14, 17, 23, 24] and find all IO-identifiable functions of parameters [11].
**Remark 1**.: Some authors prefer to work only with differential fields \((K,{}^{\prime})\) containing the field of complex numbers \(\mathbb{C}\) as a subfield of the field of constants, that is, the field of elements \(a\) of \(K\) satisfying \(a^{\prime}=0\). With this convention, the field of IO-identifiable functions is taken to be the smallest field \(k\) for which \(\mathbb{C}\subset k\subset\mathbb{C}(\bar{\alpha})\). For computational reasons, we prefer to work over \(\mathbb{Q}\) instead of \(\mathbb{C}\). All of what we discuss in this paper may be generalized to the case of \(\mathbb{C}\) as the base with no essential changes to the arguments.
### _Statement of the main result_
**Theorem 1**.: _Let \(\bar{\beta}\in\mathbb{Q}(\bar{\alpha})\) be any generating set of the field of IO-identifiable functions of an ODE system \(\Sigma(\bar{\alpha})\) as in (1). Then there exists a tuple \(\widetilde{\alpha}\in\overline{\mathbb{Q}(\bar{\alpha})}\) of the same length as \(\bar{\alpha}\) such that_
* _the entries of_ \(\widetilde{\alpha}\) _are locally IO-identifiable;_
* _the system_ \(\Sigma(\widetilde{\alpha})\) _has the same input-output equations as the original_ \(\Sigma(\bar{\alpha})\)_._
_Furthermore, if the sum of the orders with respect to \(\bar{y}\) of the input-output equations is equal to the dimension of the model (\(n\) in the notation of (1)), the state variables of \(\Sigma(\widetilde{\alpha})\) can be expressed as algebraic functions of \(\bar{x}\) and \(\bar{\alpha}\)._
### _On the existence of the state transformation_
Let us make some remarks about the second part of the theorem.
**Remark 2** (On the last condition and observability).: The condition that the sum of the orders of input-output equations be equal to the dimension of the system is, in fact, equivalent to the fact that all the states are locally observable if the parameters are assumed to be _known_ (can be deduced from [9, Corollary 4.11] and [8, Proposition 3.4]). In particular, this restriction is significantly milder than observability of all the states.
**Example 1** (Non-existence of the state transformation).: _Let us give an example showing that the condition for the orders of IO-equations summing up to \(n\) cannot be removed. Consider the system_
\[\begin{cases}x_{1}^{\prime}=\alpha_{1},\\ x_{2}^{\prime}=\frac{x_{2}}{\alpha_{2}},\\ y=x_{1}.\end{cases}\]
_The IO-equation of this model is \(y^{\prime}-\alpha_{1}=0\), so only \(\alpha_{1}\) is (locally) IO-identifiable. Then any specialization described in Theorem 1 will be of the form_
\[\begin{cases}w_{1}^{\prime}=\alpha_{1},\\ w_{2}^{\prime}=f(\alpha_{1})w_{2},\\ y=w_{1}.\end{cases}\]
_for some nonzero algebraic function \(f\in\overline{\mathbb{Q}(\alpha_{1})}\). Any solution of the new system will be of the form_
\[\left(\alpha_{1}t+c_{1},\ c_{2}e^{f(\alpha_{1})t}\right),\]
_while any solution of the original system was_
\[\left(\alpha_{1}t+c_{3},\ c_{4}e^{\alpha_{2}t}\right).\]
_The existence of an algebraic state space transformation as in the theorem would imply that \(e^{f(\alpha_{1})t}\) is algebraic over \(\mathbb{Q}(\alpha_{1},e^{\alpha_{2}t})\), which is not the case._
**Remark 3** (On possible preprocessing).: While not all the models satisfy the condition of the second part of the theorem, one can use the approach from [6, Section 3] (see [22, Section 3.2] for the case with inputs) by constructing a realization of the input-output equation of the model of minimal dimension. The corresponding system (5) from our proof will be nonsingular in this case, providing a coordinate change. After that, Theorem 1 can be applied. Note that since the dimension of the model changes under this transformation, it is not possible to preserve the shape as in Theorem 1. We give an example of such preprocessing in Section E.
## 3 Constructive proof of Theorem 1
We break down the proof into several **steps**, each of which can be viewed as a step in an algorithm to solve the problem:
1. We first fix \(\bar{\beta}=(\beta_{1},\ldots,\beta_{N})\) as in the statement of the theorem and write them explicitly in terms of \(\bar{\alpha}\): \[\beta_{1}=p_{1}(\bar{\alpha}),\ \ldots,\ \beta_{N}=p_{N}(\bar{\alpha})\] for rational functions \(p_{1},\ldots,p_{N}\in\mathbb{Q}(\bar{\alpha})\).
2. Let \(n\) be the order of the original ODE system, that is, the length of the tuple \(\bar{x}\). Let \(E(\bar{\alpha},\bar{u})(\bar{y})=0\) be the input-output equations with respect to any ranking on \(\bar{y}\). Then the orders of \(\bar{y}\) and \(\bar{u}\) in \(E\) do not exceed \(n\). Using Lie derivatives, we can write \(\bar{y},\ldots,\bar{y}^{(m)}\) as rational functions \(R_{0},\ldots,R_{m}\) in \(\bar{x},\bar{\alpha},\bar{u},\ldots,\bar{u}^{(m)}\) for any \(m\). Therefore, for all \(m\geqslant n\), the derivatives \(\bar{y},\ldots,\bar{y}^{(m)}\) are algebraically dependent over \(\mathbb{Q}(\bar{\alpha})\langle\bar{u}\rangle\), where \(F\langle a\rangle\) denotes the differential field generated by \(a\) over a differential field \(F\), that is, \(F(a,a^{\prime},a^{\prime\prime},\ldots)\). Furthermore, by [8, Lemma 3.18], we have \[\mathbb{Q}(\bar{\alpha})\langle\bar{y},\bar{u}\rangle=\mathbb{Q}(\bar{\alpha},\bar{y},\bar{y}^{\prime},\ldots,\bar{y}^{(m)})\langle\bar{u}\rangle.\] (3) Let \(M\) be the Jacobian matrix of \(\bar{R}:=R_{0},\ldots,R_{n}\) with respect to \(\bar{x}\). Then, by the Jacobian criterion together with (3), its rank \(r\) will be equal to \[\operatorname{trdeg}_{\mathbb{Q}(\bar{\alpha})\langle\bar{u}\rangle}\mathbb{Q}(\bar{\alpha})\langle\bar{u}\rangle(\bar{R})=\operatorname{trdeg}_{\mathbb{Q}(\bar{\alpha})\langle\bar{u}\rangle}\mathbb{Q}(\bar{\alpha})\langle\bar{u}\rangle\langle\bar{y}\rangle.\] Let \(D\) be the determinant of a nonsingular \(r\times r\) submatrix of \(M\). We consider \(D\) as a rational function in \(\bar{x}\), \(\bar{u}\), \(\bar{u}^{\prime},\ldots,\bar{u}^{(m)}\), and take any nonzero coefficient of its numerator, which we denote by \(D_{0}(\bar{\alpha})\). We will use this coefficient in the next step.
3. We now form a system of algebraic equations and inequations in \(\widetilde{\alpha}\) over \(\mathbb{Q}(\bar{\beta})\): \[\begin{cases}\beta_{1}=p_{1}(\widetilde{\alpha}),\\ \vdots\\ \beta_{N}=p_{N}(\widetilde{\alpha}),\\ D_{0}(\widetilde{\alpha})\cdot C(\widetilde{\alpha})\neq 0,\end{cases}\] (4) where \(C\) is the common denominator of all coefficients from \(\mathbb{Q}(\bar{\alpha})\) in (1). It has a solution \(\widetilde{\alpha}=\bar{\alpha}\) and, thus, by Hilbert's Nullstellensatz, has a solution \(\widetilde{\alpha}\) in the algebraic closure of the ground field, \(\overline{\mathbb{Q}(\bar{\beta})}\). For this solution, we have \(\bar{\beta}\in\mathbb{Q}(\widetilde{\alpha})\) and \(\widetilde{\alpha}\in\overline{\mathbb{Q}(\bar{\beta})}\) by construction. Furthermore, since the rank of \(M|_{\bar{\alpha}=\widetilde{\alpha}}\) is also \(r\), the specialized ODE system has the same input-output equations. Indeed, the specialization is possible because \(C(\widetilde{\alpha})\neq 0\). Moreover, by [9, Corollary 4.11], \(\operatorname{trdeg}_{\mathbb{Q}(\bar{\alpha})\langle\bar{u}\rangle}(\mathbb{Q}(\bar{\alpha})\langle\bar{u},\bar{y}\rangle)\) is the sum of the orders of the input-output equations with respect to \(\bar{y}\), and it is equal to \(r\). Since, under the specialization to \(\widetilde{\alpha}\), the matrix rank is preserved (as shown above), the sum of the orders of the input-output equations of (1) after specializing \(\bar{\alpha}\to\widetilde{\alpha}\) is also \(r\).
4. Finally, let \(\bar{w}\) denote the state variables of \(\Sigma(\widetilde{\alpha})\) and assume that the sum of the orders of input-output equations is equal to \(n\). We will now see how \(\bar{w}\) can be obtained from the original \(\bar{x}\). Consider the irreducible affine variety \(V\) defined by the input-output equations in the space with coordinates \(\bar{y},\ldots,\bar{y}^{(m)}\) over the field \(\mathbb{Q}(\bar{\beta})\langle\bar{u}\rangle\). The dimension of this variety is equal to the sum of the orders of the IO-equations [9, Corollary 4.11], and thus it is equal to \(n\). The equalities \[\bar{y}^{(i)}=R_{i}(\widetilde{\alpha},\bar{w},\bar{u}),\ \ i=0,\ldots,m,\] define a dominant rational map \(\psi\) from the affine \(n\)-space with coordinates \(\bar{w}\) to \(V\) over the field \(\overline{\mathbb{Q}(\bar{\beta})\langle\bar{u}\rangle}=\overline{\mathbb{Q}(\widetilde{\alpha})}\langle\bar{u}\rangle\). Consider the field \(F:=\overline{\mathbb{Q}(\bar{\beta})}\langle\bar{u}\rangle(V)=\overline{\mathbb{Q}(\widetilde{\alpha})}\langle\bar{u}\rangle(V)\) and the generic point \(\bar{Y}=(Y_{0},\ldots,Y_{m})\) of \(V\) in \(F\). Since the morphism \(\psi\) is dominant and \(\bar{Y}\) is generic, the system of equations \(\psi(\bar{w})=\bar{Y}\) in the variables \(\bar{w}\) has a solution in \(\bar{F}\). On the other hand, the field \[\overline{\mathbb{Q}(\bar{\alpha})}\langle\bar{u}\rangle\left(R_{0}(\bar{\alpha},\bar{x},\bar{u}),\ldots,R_{m}(\bar{\alpha},\bar{x},\bar{u})\right)\] is isomorphic to the extension of \(F\) by \[R_{i}(\bar{\alpha},\bar{x},\bar{u})\mapsto Y_{i},\ \ 0\leqslant i\leqslant m.\] Denote this isomorphism by \(\varphi\). The desired correspondence between \(\bar{w}\) and \(\bar{x}\) can now be found by applying \(\varphi^{-1}\) to a solution of \(\psi(\bar{w})=\bar{Y}\). This way, we get algebraic functions \(\bar{w}=\bar{w}(\bar{\alpha},\bar{x},\bar{u},\bar{u}^{\prime},\ldots)\) such that \[R_{i}(\widetilde{\alpha},\bar{w},\bar{u})=R_{i}(\bar{\alpha},\bar{x},\bar{u})\ \ \ \ \text{for}\ i=0,\ldots,m.\] (5) We will now show that \(w^{\prime}_{i}\) is indeed equal to \(f_{i}(\bar{w},\widetilde{\alpha},\bar{u})\). By differentiating (5), we establish that \(w^{\prime}_{1},\ldots,w^{\prime}_{n}\) satisfy
the following linear system (cf. [6, Section 3] and [22, Section 3.2]):
\[\sum_{j=1}^{n}w_{j}^{\prime}\frac{\partial R_{i}(\widetilde{\alpha},\bar{w},\bar{u})}{\partial w_{j}}=\sum_{j=1}^{n}f_{j}(\bar{x},\bar{\alpha}, \bar{u})\frac{\partial R_{i}(\bar{\alpha},\bar{x},\bar{u})}{\partial x_{j}}+\\ +\sum_{j=1}^{\ell}u_{j}^{\prime}\left(\frac{\partial R_{i}(\bar{ \alpha},\bar{x},\bar{u})}{\partial u_{j}}-\frac{\partial R_{i}(\widetilde{ \alpha},\bar{w},\bar{u})}{\partial u_{j}}\right)\]
for \(i=0,\ldots,m\). Since \(\dim V=n\), the rank of the matrix of this linear system is also \(n\), so it has a unique solution. On the other hand, \(f_{1}(\bar{w},\widetilde{\alpha},\bar{u}),\ldots,f_{n}(\bar{w},\widetilde{\alpha},\bar{u})\) is a solution of this system by construction of the \(R_{i}\)'s. Therefore,
\[w_{i}^{\prime}=f_{i}(\bar{w},\widetilde{\alpha},\bar{u}),\ \ 1\leqslant i \leqslant n,\]
for the found algebraic functions \(\bar{w}=\bar{w}(\bar{\alpha},\bar{x},\bar{u},\bar{u}^{\prime},\ldots)\).
Finally, we will show that \(\bar{w}(\bar{\alpha},\bar{x},\bar{u},\bar{u}^{\prime},\ldots)\) in fact does not depend on \(\bar{u}\) (or any of its derivatives). Let \(h\) be the largest order of \(\bar{u}\) occurring in these expressions, say \(u_{1}^{(h)}\) occurs in \(w_{1}\). Then \(w_{1}^{\prime}\) will depend nontrivially on \(u_{1}^{(h+1)}\), while \(f_{1}(\bar{w},\widetilde{\alpha},\bar{u})\) does not, which is a contradiction.
To carry this out computationally, we solve the system of rational equations (5) for \(\bar{w}\) over \(\mathbb{Q}(\bar{\alpha},\bar{x})(\bar{u})\). This can be done, for instance, by computing a Gröbner basis of the numerators of (5) with an elimination monomial ordering \(w_{i}>\bar{x}\) for each \(i\), \(1\leqslant i\leqslant n\), over \(\mathbb{Q}(\bar{\alpha})\langle\bar{u}\rangle\) to find a polynomial equation \(P(w_{i},\bar{x},\bar{\alpha},\bar{u})=0\) for each \(i\) (see Sections C, D, and E for concrete examples).
**Remark 4**.: The equations given by \(p_{1},\ldots,p_{N}\) in (4) can be very complicated. Our Maple code ([https://github.com/pogudingleb/AllIdentifiable](https://github.com/pogudingleb/AllIdentifiable)), which is based on [19], can significantly simplify the system by applying the functions FieldToIdeal and then FilterGenerators to \(p_{1}(\bar{\alpha}),\ldots,p_{N}(\bar{\alpha})\).
## IV Examples
In this section, we will illustrate the approach
* by a series of simple examples that explain what can and cannot be achieved in principle and how the algorithm works in practice as well as
* by two systems from modeling, Lotka-Volterra and a chemical reaction network system.
### _Not possible to achieve global identifiability with any method_
We will begin by showing that an identifiable reparametrization does not always exist1 (cf. [5, Theorem 3.5]). For this, consider the system
Footnote 1: We are grateful to Sebastian Falkensteiner and Rafael Sendra for pointing out a mistake in a previous version of this example and offering a correction.
\[\begin{cases}x^{\prime}=\frac{\alpha_{2}}{2\alpha_{1}}(x^{2}+1),\\ y=\frac{2x}{\alpha_{2}(1+x^{2})}\end{cases} \tag{6}\]
The corresponding input-output equation is
\[{\alpha_{1}}^{2}(y^{\prime})^{2}+{\alpha_{2}}^{2}y^{2}-1=0, \tag{7}\]
and so the field of input-output identifiable functions is \(\mathbb{Q}(\alpha_{1}^{2},\alpha_{2}^{2})\), and so neither \(\alpha_{1}\) nor \(\alpha_{2}\) are globally IO-identifiable. As in Theorem 1, we set \(\beta_{1}=\alpha_{1}^{2}\) and \(\beta_{2}=\alpha_{2}^{2}\). If system (6) had had a reparametrization over \(\mathbb{Q}(\beta_{1},\beta_{2})\) (and so had been IO-identifiable), then the curve \(C\) (ellipse) defined by
\[\beta_{1}x^{2}+\beta_{2}y^{2}=1\]
would have had a rational parametrization over \(\mathbb{Q}(\beta_{1},\beta_{2})\) but it does not. If it had had a parametrization over \(\mathbb{Q}(\beta_{1},\beta_{2})\), it would have had a point
\[(x_{0},y_{0})=(q_{1}(\beta_{1},\beta_{2}),q_{2}(\beta_{1},\beta_{2}))\in \mathbb{Q}(\beta_{1},\beta_{2}).\]
We write both \(q_{1}\) and \(q_{2}\) as Laurent series in \(\beta_{1}\) over the field \(\mathbb{Q}(\beta_{2})\). Then \(\beta_{1}q_{1}^{2}\) is a Laurent series of odd order. Since \(\beta_{2}q_{2}^{2}\) and \(1\) are of even order, their orders must be equal and the dominating terms must cancel. Hence,
\[q_{2}=c_{0}(\beta_{2})+\text{O}(\beta_{1}),\]
so the constant term of \(1-\beta_{2}q_{2}^{2}\) is \(1-\beta_{2}c_{0}(\beta_{2})^{2}\). This cannot be equal to zero because \(1/\beta_{2}\) is not a square, so we arrive at a contradiction.
### _Not possible to achieve global identifiability with this method_
Consider the system
\[\begin{cases}x_{1}^{\prime}=ax_{1},\\ x_{2}^{\prime}=bx_{2},\\ y=x_{1}+x_{2},\end{cases} \tag{8}\]
and so \(\bar{x}=(x_{1},x_{2})\), \(\bar{y}=y\), and \(\bar{\alpha}=(a,b)\). There is no \(\bar{u}\). The input-output equation is
\[y^{\prime\prime}-(a+b)y^{\prime}+ab\cdot y=0. \tag{9}\]
Therefore, \(\bar{\beta}=(a+b,a\cdot b)\) and the identifiable functions are \(K:=\mathbb{Q}(a+b,a\cdot b)\), and so \(a\) and \(b\) are algebraic of degree \(2\) over \(K\) and therefore only locally identifiable. For \(i=0,1,2\), we will compute \(y^{(i)}\) as a function \(R_{i}(x_{1},x_{2},a,b)\):
\[\begin{split} y&=R_{0}(x_{1},x_{2},a,b)=x_{1}+x_{2},\\ y^{\prime}&=R_{1}(x_{1},x_{2},a,b)=x_{1}^{\prime}+x_{2}^{ \prime}=ax_{1}+bx_{2},\\ y^{\prime\prime}&=R_{2}(x_{1},x_{2},a,b)=x_{1}^{\prime \prime}+x_{2}^{\prime\prime}=a^{2}x_{1}+b^{2}x_{2},\end{split} \tag{10}\]
The Jacobian with respect to \(\bar{x}\) is
\[M=\begin{pmatrix}1&1\\ a&b\\ a^{2}&b^{2}\end{pmatrix}\]
Let \(r=\operatorname{rank}M=2\) and \(D=\det\begin{pmatrix}1&1\\ a&b\end{pmatrix}=b-a\) be a non-singular maximal minor of \(M\). Considering \(D\) as a rational function in \(\bar{x}\), we pick a (the only, in fact) non-zero coefficient \(D_{0}(\bar{\alpha})\) of its numerator as \(b-a\).
\[\begin{cases}a+b=\widetilde{\alpha}_{1}+\widetilde{\alpha}_{2}\\ a\cdot b=\widetilde{\alpha}_{1}\cdot\widetilde{\alpha}_{2}\\ \widetilde{\alpha}_{1}-\widetilde{\alpha}_{2}\neq 0,\end{cases}\]
which has two solutions: \((\widetilde{\alpha}_{1}=a,\widetilde{\alpha}_{2}=b)\) and \((\widetilde{\alpha}_{1}=b,\widetilde{\alpha}_{2}=a)\), neither of which makes the resulting locally identifiable model globally identifiable.
However, for example, the following reparametrization makes the model globally identifiable:
\[\begin{cases}w_{1}=x_{1}+x_{2},\\ w_{2}=ax_{1}+bx_{2},\end{cases}\]
resulting in this reparametrized ODE system:
\[\begin{cases}w_{1}^{\prime}=w_{2},\\ w_{2}^{\prime}=(a+b)\cdot w_{2}-a\cdot b\cdot w_{1},\\ y=w_{1},\end{cases}\]
whose IO-equation is also (9).
### _Choosing different solutions of (4)_
Consider the system
\[\begin{cases}x_{1}^{\prime}=ax_{2},\\ x_{2}^{\prime}=bx_{1},\\ y=x_{1},\end{cases} \tag{11}\]
so \(\bar{x}=(x_{1},x_{2})\), \(\bar{y}=(y)\), \(\bar{\alpha}=(a,b)\), and we have no \(\bar{u}\). The input-output equation is
\[y^{\prime\prime}-ab\cdot y=0. \tag{12}\]
Therefore, \(\bar{\beta}=(ab)\) and \(ab\) is globally identifiable, but neither \(a\) nor \(b\) is identifiable. Let us begin by computing the Lie derivatives of \(\bar{y}\). We have
\[y =R_{0}(x_{1},x_{2},a,b)=x_{1},\] \[y^{\prime} =R_{1}(x_{1},x_{2},a,b)=x_{1}^{\prime}=ax_{2},\] (13) \[y^{\prime\prime} =R_{2}(x_{1},x_{2},a,b)=x_{1}^{\prime\prime}=ax_{2}^{\prime}=abx_{1}.\]
Following the proof of Theorem 1, we have
\[\beta_{1}=ab=p_{1}(\bar{\alpha})=p_{1}(a,b).\]
We then find the Jacobian of (13) with respect to \(\bar{x}\):
\[M=\begin{pmatrix}1&0\\ 0&a\\ ab&0\end{pmatrix}.\]
Then \(r:=\operatorname{rank}M=2=\operatorname{trdeg}\mathbb{Q}(a,b)(\bar{y})/ \mathbb{Q}(a,b)\). Let \(D=\det\begin{pmatrix}1&0\\ 0&a\end{pmatrix}=a\), a non-singular maximal minor of \(M\). Considering \(D\) as a rational function of \(\bar{x}\), we pick a non-zero coefficient \(D_{0}(\bar{\alpha})\) of its numerator as \(a\). We now consider the following system of equations and inequations over \(\mathbb{Q}(\bar{\beta})\):
\[\begin{cases}ab=\widetilde{\alpha}_{1}\cdot\widetilde{\alpha}_{2},\\ \widetilde{\alpha}_{1}\neq 0,\end{cases}\]
which has infinitely many solutions (over the algebraic closure of \(\mathbb{Q}(ab)\)), including this one: \(\widetilde{\alpha}_{1}=ab\), \(\widetilde{\alpha}_{2}=1\). Specializing (11), we obtain the reparametrized system as:
\[\begin{cases}w_{1}^{\prime}=\beta_{1}w_{2},\\ w_{2}^{\prime}=w_{1},\\ y=w_{1},\end{cases}\]
whose input-output equation is still (12), and the corresponding change of variables, calculated by equating the old and new Lie derivatives, is
\[w_{1}=x_{1},\\ w_{2}=\frac{x_{2}}{b},\]
which is a scaling reparametrization. We could choose a different solution for \(\widetilde{\alpha}_{1},\widetilde{\alpha}_{2}\), for example, \(\widetilde{\alpha}_{1}=1,\widetilde{\alpha}_{2}=ab\) which would yield
\[w_{1}^{\prime}=w_{2},\ \ w_{2}^{\prime}=\beta_{1}w_{1},\ \ y=w_{1}.\]
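The computations of this subsection are easy to reproduce in a computer algebra system. The short script below is an illustrative SymPy sketch (it is not the Maple code of [20], and it assumes SymPy is available); it recomputes the Lie derivatives (13), the Jacobian and its rank, and recovers the change of variables for the specialization \(\widetilde{\alpha}_{1}=ab\), \(\widetilde{\alpha}_{2}=1\).

```python
import sympy as sp

x1, x2, a, b, w1, w2, beta1 = sp.symbols('x1 x2 a b w1 w2 beta1')

# System (11): x1' = a*x2, x2' = b*x1, y = x1 (no inputs u).
f = {x1: a * x2, x2: b * x1}

def lie_derivative(expr, field):
    """Time derivative of expr along the vector field given as {state: rhs}."""
    return sum(sp.diff(expr, v) * rhs for v, rhs in field.items())

R0 = x1
R1 = sp.expand(lie_derivative(R0, f))      # a*x2
R2 = sp.expand(lie_derivative(R1, f))      # a*b*x1
M = sp.Matrix([R0, R1, R2]).jacobian([x1, x2])
print(M, M.rank())                          # rank 2, i.e. r = 2

# Specialized system with alpha~ = (a*b, 1): w1' = beta1*w2, w2' = w1, y = w1.
f_new = {w1: beta1 * w2, w2: w1}
S0 = w1
S1 = lie_derivative(S0, f_new)
S2 = lie_derivative(S1, f_new)
# Equate old and new Lie derivatives (with beta1 = a*b) and solve for w1, w2.
sol = sp.solve([sp.Eq(S0.subs(beta1, a * b), R0),
                sp.Eq(S1.subs(beta1, a * b), R1)], [w1, w2], dict=True)
print(sol)                                  # [{w1: x1, w2: x2/b}]
# Sanity check: the second Lie derivatives then agree as well.
print(sp.simplify(S2.subs(beta1, a * b).subs(sol[0]) - R2))  # 0
```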
### _Lotka-Volterra example_
Consider the system
\[\begin{cases}x_{1}^{\prime}=ax_{1}-bx_{1}x_{2},\\ x_{2}^{\prime}=-cx_{2}+dx_{1}x_{2},\\ y=x_{1}\end{cases} \tag{14}\]
with two state variables \(\bar{x}=(x_{1},x_{2})\), four parameters \(\bar{\alpha}=(a,b,c,d)\), and one output \(\bar{y}=y\). The input-output equation is
\[yy^{\prime\prime}-y^{\prime 2}-dy^{2}y^{\prime}+cyy^{\prime}+ady^{3}-acy^{2}=\\ yy^{\prime\prime}-y^{\prime 2}-y(c-dy)(ay-y^{\prime})=0. \tag{15}\]
So, we have that the field of IO-identifiable functions is \(\mathbb{Q}(d,c,ad,ac)=\mathbb{Q}(a,c,d)\). The Lie derivatives of the \(y\)-variable are as follows:
\[y =x_{1},\] \[y^{\prime} =-bx_{1}x_{2}+ax_{1}, \tag{16}\] \[y^{\prime\prime} =-bdx_{1}^{2}x_{2}+b^{2}x_{1}x_{2}^{2}+(bc-2ab)x_{1}x_{2}+a^{2}x_{1}\]
Following the proof of Theorem 1, we define
\[\beta_{1}=p_{1}(\bar{\alpha})=a,\ \ \beta_{2}=p_{2}(\bar{\alpha})=c,\ \ \beta_{3}=p_{3}(\bar{\alpha})=d.\]
The Jacobian of (16) w.r.t. \(\bar{x}\) is
\[M=\begin{pmatrix}1&0\\ a-bx_{2}&-bx_{1}\\ b^{2}x_{2}^{2}-2b(dx_{1}+a-c/2)x_{2}+a^{2}&-bx_{1}(dx_{1}-2bx_{2}+2a-c)\\ \end{pmatrix}.\]
Then
\[D=\det\begin{pmatrix}1&0\\ a-bx_{2}&-bx_{1}\\ \end{pmatrix}=-bx_{1}\]
is a maximal non-zero minor of \(M\). Considering \(D\) as a rational function of \(\bar{x}\), we pick a non-zero coefficient \(D_{0}(\bar{\alpha})\) of its numerator as \(-b\). We now arrive at the following system in \(\widetilde{\alpha}\) over \(\mathbb{Q}(\bar{\beta})\):
\[\begin{cases}a=\widetilde{\alpha}_{1},\\ c=\widetilde{\alpha}_{3},\\ d=\widetilde{\alpha}_{4},\\ -\widetilde{\alpha}_{2}\neq 0\end{cases},\]
and we pick the following solution (out of infinitely many solutions): \(\widetilde{\alpha}_{1}=a,\widetilde{\alpha}_{2}=1,\widetilde{\alpha}_{3}=c, \widetilde{\alpha}_{4}=d\), which results in the following reparametrized system of equations
\[\begin{cases}w_{1}^{\prime}=aw_{1}-w_{1}w_{2},\\ w_{2}^{\prime}=-cw_{2}+dw_{1}w_{2},\\ y=w_{1}\end{cases}\]
The corresponding change of variables, obtained by equating old and new Lie derivatives, is
\[w_{1}=x_{1},\] \[w_{2}=bx_{2},\]
which is a scaling reparametrization in this case.
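As a quick consistency check, the following short SymPy snippet (again an illustration, assuming SymPy is available) verifies symbolically that the left-hand side of the input-output equation (15) vanishes along the trajectories of (14).

```python
import sympy as sp

x1, x2, a, b, c, d = sp.symbols('x1 x2 a b c d')
f = {x1: a * x1 - b * x1 * x2, x2: -c * x2 + d * x1 * x2}

def lie(expr):
    return sum(sp.diff(expr, v) * rhs for v, rhs in f.items())

y = x1
y1, y2 = lie(y), lie(lie(y))
io = y * y2 - y1 ** 2 - d * y ** 2 * y1 + c * y * y1 + a * d * y ** 3 - a * c * y ** 2
print(sp.simplify(io))   # 0, i.e. equation (15) holds along trajectories
```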
### _Chemical reaction network_
Consider the following ODE model originating from a chemical reaction network, cf. [4, system (2.3)]
\[\begin{cases}X^{\prime}=k_{2}\cdot(A_{UX}+2A_{XX}+A_{XU})-k_{1}\cdot X\cdot(A _{UX}+A_{XU}+2A_{UU}),\\ A_{UU}^{\prime}=k_{2}\cdot(A_{UX}+A_{XU})-2k_{1}\cdot X\cdot A_{UU},\\ A_{UX}^{\prime}=k_{1}\cdot X\cdot(A_{UU}-A_{UX})+k_{2}\cdot(A_{XX}-A_{UX}), \\ A_{XX}^{\prime}=k_{1}\cdot X\cdot(A_{UX}+A_{XU})-2\cdot k_{2}\cdot A_{XX}, \\ A_{XU}^{\prime}=k_{1}\cdot X\cdot(A_{UU}-A_{XU})+k_{2}\cdot(A_{XX}-A_{XU}), \\ y=X.\end{cases}\]
We have \(\bar{x}=(X,A_{UU},A_{UX},A_{XX},A_{XU})\), \(\bar{\alpha}=(k_{1},k_{2})\), \(\bar{y}=(y)\), and there is no \(\bar{u}\). A computation shows that
\[y^{\prime}y^{\prime\prime\prime}-(y^{\prime\prime})^{2}+2k_{1}(y^{\prime})^{ 3}=0 \tag{17}\]
is the input-output equation, and so \(\bar{\beta}=(k_{1})\) generates the field of IO-identifiable parameters. Note that the order of the input-output equation is less than the dimension of the system, so we will first perform a preprocessing reduction as in Remark 3.
The Lie derivatives are as follows:
\[\begin{split} y&=X\\ y^{\prime}=k_{2}(A_{UX}+2A_{XX}+A_{XU})\\ &\quad-k_{1}X(A_{UX}+2A_{UU}+A_{XU})\\ y^{\prime\prime}&=-y^{\prime}\cdot(k_{1}X+2k_{1}A_{UU}+k_{1}A_{UX} +k_{1}A_{XU}+k_{2})\\ y^{\prime\prime\prime}&=y^{\prime}\cdot((8Xk_{1}^{2}+4k_{1}(k_{1}A _{UX}+k_{1}A_{XU}+k_{2}))A_{UU}\\ &\quad+(4k_{1}^{2}A_{UX}+4k_{1}^{2}A_{XU}+2k_{1}k_{2})X-4k_{1}k_{2}A_ {XX}+k_{2}^{2}\\ &\quad+k_{1}^{2}(A_{UX}^{2}+2A_{UX}A_{XU}+A_{XU}^{2}+4A_{UU}^{2}+X^ {2})).\end{split} \tag{18}\]
We will now search for a three-dimensional system in only three variables \(w_{1},w_{2},w_{3}\) having the same input-output equation. First we set up the desired Lie derivatives for this new system by replacing the original variables with arbitrary linear forms in \(w_{1},w_{2},w_{3}\), say:
\[X=w_{1},\ A_{UU}=w_{2},\ A_{UX}=w_{3},\ A_{XX}=A_{XU}=0.\]
We apply this substitution to (18), equate the results before and after the substitution, and solve for \(w_{1},w_{2},w_{3}\) (similarly to (5)). We get
\[w_{1}=X,\ w_{2}=A_{UU}-A_{XX},\ w_{3}=A_{UX}+2A_{XX}+A_{XU}. \tag{19}\]
Then we can set up a linear system on \(w_{1}^{\prime},w_{2}^{\prime},w_{3}^{\prime}\) as in [6, Section 3] and find the reduced model:
\[\begin{cases}w_{1}^{\prime}=k_{2}w_{3}-k_{1}w_{1}(w_{3}+2w_{2}),\\ w_{2}^{\prime}=-k_{1}w_{1}(w_{3}+2w_{2})+k_{2}w_{3},\\ w_{3}^{\prime}=k_{1}w_{1}(w_{3}+2w_{2})-k_{2}w_{3}.\end{cases} \tag{20}\]
To this model, we can apply both parts of Theorem 1. We still have the same IO-equation (17), so \(\beta_{1}=k_{1}\). The first three Lie derivatives
\[\begin{split} y&=w_{1},\\ y^{\prime}&=k_{2}w_{3}-k_{1}w_{1}(w_{3}+2w_{2}),\\ y^{\prime\prime}&=-y^{\prime}(k_{1}(w_{1}+2w_{2}+w_{3})+k_{2})\end{split}\]
have a nonsingular Jacobian, and one of the coefficients of its determinant is \(k_{1}k_{2}^{2}\), so we set up a system
\[\begin{cases}k_{1}=\widetilde{\alpha}_{1},\\ \widetilde{\alpha}_{1}\widetilde{\alpha}_{2}\neq 0.\end{cases}\]
We take a solution \(\widetilde{\alpha}_{1}=k_{1}\) and \(\widetilde{\alpha}_{2}=1\) in \(\overline{\mathbb{Q}(\bar{\beta})}=\overline{\mathbb{Q}(k_{1})}\) and, substituting this solution into (20), obtain
\[\begin{cases}v_{1}^{\prime}=v_{3}-k_{1}v_{1}(v_{3}+2v_{2}),\\ v_{2}^{\prime}=-k_{1}v_{1}(v_{3}+2v_{2})+v_{3},\\ v_{3}^{\prime}=k_{1}v_{1}(v_{3}+2v_{2})-v_{3}.\end{cases}\]
The corresponding state transformation obtained at the last step of the proof of Theorem 1 is
\[v_{1} =w_{1},\quad v_{3}=k_{2}(w_{1}+w_{3})-w_{1},\] \[v_{2} =((-w_{1}-w_{3})k_{2}+w_{1}+2w_{2}+w_{3})/2+\frac{k_{2}-1}{2k_{1}}.\]
The overall state transformation between the new and original system can be obtained by composing this with (19):
\[v_{1} =X,\] \[v_{2} =X+2A_{UU}+A_{UX}+A_{XU}\] \[\quad-(X+A_{UX}+A_{XU}+2A_{XX})k_{2}+\frac{k_{2}-1}{2k_{1}},\] \[v_{3} =\frac{X+A_{UX}+A_{XU}+2A_{XX}}{k_{2}}-X.\]
## Acknowledgments
We are grateful to the CCiS at CUNY Queens College for the computational resources and to Julio Banga, Sebastian Falkenstein, Gemma Massonis, Nikki Meshkat, Rafael Sendra and Alejandro Villaverde for useful discussions.
|
2310.02058 | Extending the capabilities of vectorial ptychography to
circular-polarizing materials such as cholesteric liquid crystals | The problem of imaging materials with circular polarization properties is
discussed within the framework of vectorial ptychography. We demonstrate, both
theoretically and numerically, that using linear polarizations to investigate
such materials compromises the unicity of the solution provided by this
computational method. To overcome this limitation, an improved measurement
approach is proposed, which involves specific combinations of elliptical
polarizations. The effectiveness of this strategy is demonstrated by numerical
simulations and experimental measurements on cholesteric liquid crystals films,
which possess unique polarization properties. With the help of Pauli matrices
algebra, our results highlight the technique's ability to discern between
different types of circular polarizers, uniform vs. non-uniform, and determine
their handedness. | Patrick Ferrand, Michel Mitov | 2023-10-03T14:01:20Z | http://arxiv.org/abs/2310.02058v1 | Extending the capabilities of vectorial ptychography to circular-polarizing materials such as cholesteric liquid crystals
###### Abstract
The problem of imaging materials with circular polarization properties is discussed within the framework of vectorial ptychography. We demonstrate, both theoretically and numerically, that using linear polarizations to investigate such materials compromises the unicity of the solution provided by this computational method. To overcome this limitation, an improved measurement approach is proposed, which involves specific combinations of elliptical polarizations. The effectiveness of this strategy is demonstrated by numerical simulations and experimental measurements on cholesteric liquid crystals films, which possess unique polarization properties. With the help of Pauli matrices algebra, our results highlight the technique's ability to discern between different types of circular polarizers, uniform vs. non-uniform, and determine their handedness.
## 1 Introduction
Vectorial ptychography is a recent imaging technique that uses phase retrieval algorithms to provide quantitative maps of Jones matrices [1]. It is used in optical microscopy to study specimens that strongly affect the phase and polarization of transmitted light, and it has a robust reference-free experimental scheme [2]. This variant of optical ptychography [3] has been successfully applied to various materials, ranging from natural substances like biomineral calcareous shells [4, 5] or biological tissues [6] to advanced optical components such as holographic polarization-controlled metasurfaces [7]. In previous studies, vectorial ptychography has relied on linear polarization states for both illumination and detection [2]. While this approach has been effective in addressing various challenging situations, it may have limitations when dealing with materials that exhibit strong circular-polarization properties. Such materials are often encountered in chiral molecular assemblies, which can make reconstruction difficult.
This letter presents an extension to the capabilities of vectorial ptychography, demonstrating how it can be applied to circular-polarizing materials. Initially, we provide theoretical evidence of the underdetermination introduced by linear polarizations when investigating such materials. Subsequently, we propose an enhanced measurement scheme that relies on combinations of elliptical polarizations. We validate this approach through numerical simulations, where the results are analyzed with the help of Pauli matrices algebra. Finally, experimental vectorial ptychography measurements are conducted on cholesteric liquid crystal (CLC) films, which exhibit specific polarization properties.
## 2 Theory
### Principle of vectorial ptychography
In vectorial ptychography, the recorded intensity at the \(j\)-th scanning position is represented as the square modulus of the far field
\[I_{jkl}(\mathbf{q})=\left|\mathcal{F}\left(\psi_{jkl}(\mathbf{r})\right)\right| ^{2} \tag{1}\]
where \(\psi_{jkl}\) denotes the scalar exit field after analysis for the \(k\)-th polarized probe and \(l\)-th polarization analysis [1]. Here \(\mathcal{F}\) denotes a propagation operator, while \(\mathbf{r}\) and \(\mathbf{q}\) are the direct and reciprocal space coordinates, respectively. The exit field can be expressed as
\[\psi_{jkl}(\mathbf{r})=\mathbf{h}_{l}^{\dagger}\mathbf{J}(\mathbf{r}-\mathbf{r}_{j})\mathbf{p}_{k}(\mathbf{r}) \tag{2}\]
where \(\mathbf{h}_{l}\) is the polarization analysis operator, the superscript \(\dagger\) denotes the transpose operation, \(\mathbf{J}(\mathbf{r}-\mathbf{r}_{j})\) represents the Jones matrix map of the laterally shifted investigated object, and \(\mathbf{p}_{k}(\mathbf{r})\) corresponds to the vectorial field distribution of the polarized illumination probe.
The success of the iterative ptychography reconstruction depends on the changes observed in the recorded intensity patterns \(I_{jkl}(\mathbf{q})\) during spatial (\(j\)) and polarization (\(k,l\)) scanning [2]. Previous works have mainly focused on linear polarizations for both the illumination (at angle \(\alpha_{k}\)) and analysis (at angle \(\theta_{l}\)), given by
\[\mathbf{p}_{k}^{\mathrm{lin}}\propto\begin{bmatrix}\cos\alpha_{k}\\ \sin\alpha_{k}\end{bmatrix}\quad\text{and}\quad\mathbf{h}_{l}^{\mathrm{lin}} \propto\begin{bmatrix}\cos\theta_{l}\\ \sin\theta_{l}\end{bmatrix}. \tag{3}\]
While this measurement scheme has been employed to investigate a wide range of optical properties [4], the specific case of circular-polarizing materials has not been studied.
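To make the notation concrete, the forward model of Eqs. 1-3 can be written down in a few lines of NumPy. The sketch below is purely illustrative: the array shapes, the Gaussian probe, the single scan position, and the isotropic test object are assumptions made for the example and do not reproduce the actual reconstruction code.

```python
# Illustrative forward model (Eqs. 1-3): exit field after polarization analysis,
# then far-field intensity using an FFT as the propagation operator.
import numpy as np

N = 128
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

# Scalar Gaussian probe and a linearly polarized illumination/analysis pair (Eq. 3)
alpha, theta = np.deg2rad(60.0), np.deg2rad(0.0)
probe_amp = np.exp(-(xx**2 + yy**2) / (2 * 20.0**2))
p = probe_amp[..., None] * np.array([np.cos(alpha), np.sin(alpha)])   # (N, N, 2)
h = np.array([np.cos(theta), np.sin(theta)])                          # analysis state

# Jones matrix map J(r); here a trivial isotropic transmittance for the sketch
J = np.zeros((N, N, 2, 2), dtype=complex)
J[..., 0, 0] = J[..., 1, 1] = 0.8

# Exit field psi(r) = h^T J(r - r_j) p(r) (Eq. 2), at a single scan position r_j = 0
exit_vec = np.einsum('...ij,...j->...i', J, p)
psi = np.einsum('i,...i->...', h, exit_vec)

# Recorded far-field intensity (Eq. 1)
I = np.abs(np.fft.fftshift(np.fft.fft2(psi)))**2
print(I.shape, I.max())
```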
### Circular-polarizing materials
Circular-polarizing materials possess the property of transforming incident light into circularly polarized light, in transmission and/or in reflection. Additionally, if a circular polarization of a particular handedness (left or right) can be transmitted with the same handedness, the circular polarizer is referred to as homogeneous [8]. In this study, it is important to note that we will adopt a convention for which the term "left" refers to a polarized field that circulates in a counterclockwise direction when observed from the detector's viewpoint.
### Linear polarization scheme
For the sake of simplicity, let us consider a material that behaves as a homogeneous left-circular polarizer in transmission, along with transmittance properties \(T(\mathbf{r})\) that are independent of polarization. This material can be described by the following Jones matrix,
\[\mathbf{J}(\mathbf{r})=\frac{1}{2}T(\mathbf{r})\begin{bmatrix}1&-i\\ +i&1\end{bmatrix}. \tag{4}\]
Under a linear polarization scheme (Eq. 3), it can be demonstrated that the exit field, as given by Eq. 2, becomes
\[\psi^{\text{lin}}_{jkl}(\mathbf{r})\propto T(\mathbf{r}-\mathbf{r}_{j})e^{i(\theta_{l}-\alpha_{k})}. \tag{5}\]
It is important to note that the polarization properties contribute to the exit field solely as a spatially homogeneous phase factor in Eq. 5. Consequently, the recorded intensity remains unchanged according to Eq. 1, regardless of the combination of polarizations employed. In a more general context, it can be shown that different types of circular-polarizing materials, whether left or right, homogeneous or inhomogeneous, would yield the same set of diffracted intensities \(I_{jkl}(\mathbf{q})\) and, therefore, would be indistinguishable using this measurement scheme, compromising the unicity of the ptychographic reconstruction.
### Improved polarization scheme
The effective resolution of this ambiguity can be achieved by utilizing a wider range of polarizations. By substituting the operators \(\mathbf{p}_{k}\) and \(\mathbf{h}_{l}\) with the following expressions
\[\mathbf{p}_{k}^{\text{ell}}\propto\begin{bmatrix}\cos\alpha_{k}\\ -i\sin\alpha_{k}\end{bmatrix}\quad\text{and}\quad\mathbf{h}_{l}^{\text{ell}} \propto\begin{bmatrix}\cos\theta_{l}\\ -i\sin\theta_{l}\end{bmatrix}, \tag{6}\]
which correspond to general elliptical polarizations with an azimuth of \(0^{\circ}\) or \(90^{\circ}\) and ellipticities defined by \(\tan\alpha_{k}\) and \(\tan\theta_{l}\), respectively, the resulting exit field described by Eq. 2, in the example considered earlier, becomes
\[\psi^{\text{ell}}_{jkl}(\mathbf{r})\propto T(\mathbf{r}-\mathbf{r}_{j})(\cos\alpha_{k}- \sin\alpha_{k})(\cos\theta_{l}+\sin\theta_{l}). \tag{7}\]
As will be illustrated later, the concept can be easily grasped by considering the Poincaré sphere representation of polarizations. It involves investigating multiple points on a meridian of the sphere for both illumination and analysis, covering circular, elliptical, and linear polarizations. This allows us to overcome the barrier imposed by linear polarizations, which are confined to the sphere's equator. This modification introduces a clear signature of the circular polarization properties, resulting in a change in the amplitude of the exit field as given by Eq. 7, depending on the specific combination of polarizations. Furthermore, it can be demonstrated that the amplitude factors in Eq. 7 take different algebraic forms, facilitating the unequivocal identification of the type of circular polarizer. The relevance of this approach has been tested on both numerical and experimental datasets.
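The two exit-field expressions above can be recovered symbolically. The following SymPy sketch (illustrative only; \(T\) is treated as a constant scalar and the analysis is applied as a plain transpose) confirms that, for the polarizer of Eq. 4, the modulus of the exit field in Eq. 5 is independent of \(\alpha_{k}\) and \(\theta_{l}\), whereas the elliptical scheme reproduces the amplitude factors of Eq. 7.

```python
# Symbolic check of Eqs. (5) and (7) for the homogeneous left-circular polarizer of Eq. (4).
import sympy as sp

T, alpha, theta = sp.symbols('T alpha theta', real=True, positive=True)
J = T / 2 * sp.Matrix([[1, -sp.I], [sp.I, 1]])            # Eq. (4)

def exit_field(p, h):
    return sp.expand((h.T * J * p)[0])                    # Eq. (2), transpose analysis

# Linear scheme (Eq. 3): |psi|^2 = T^2/4, with no dependence on alpha or theta
p_lin = sp.Matrix([sp.cos(alpha), sp.sin(alpha)])
h_lin = sp.Matrix([sp.cos(theta), sp.sin(theta)])
psi_lin = exit_field(p_lin, h_lin)
assert sp.simplify(psi_lin * sp.conjugate(psi_lin) - T**2 / 4) == 0

# Elliptical scheme (Eq. 6): the amplitude now depends on the combination used
p_ell = sp.Matrix([sp.cos(alpha), -sp.I * sp.sin(alpha)])
h_ell = sp.Matrix([sp.cos(theta), -sp.I * sp.sin(theta)])
psi_ell = exit_field(p_ell, h_ell)
target = T / 2 * (sp.cos(alpha) - sp.sin(alpha)) * (sp.cos(theta) + sp.sin(theta))
assert sp.simplify(psi_ell - target) == 0                 # Eq. (7) recovered
print("Eqs. (5) and (7) verified")
```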
## 3 Materials and methods
### Numerical simulations
Numerical datasets were generated to simulate an experiment of vectorial ptychography at a wavelength \(\lambda=635\) nm using \(50\)-\(\upmu\)m-diameter cropped Gaussian-shaped illumination probes under which an object is raster-scanned with a step size of \(7\) \(\upmu\)m in both directions, resulting in an \(11\times 11\) grid. Far-field intensity patterns were computed as if they were captured at an infinite distance, within a numerical aperture of \(0.3\), and recorded on a camera sensor with dimensions of \(122\times 122\) pixels. To replicate the effects of shot noise on the sensor, a Poisson random number generator was employed. Typically, each frame had a total photon count of \(10^{6}\). The datasets were processed using the vectorial ptychographic iterative algorithm detailed in a previous publication [1]. The algorithm employed random distributions (modulus and phase) for the initial guesses and was run for \(30\) iterations with known probes.
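As an illustration of this noise model, each noiseless frame can be scaled to the stated photon budget and passed through a Poisson random number generator; the normalization convention used below is an assumption made for the sketch.

```python
# Shot-noise sketch: impose a total photon budget, then draw Poisson counts per pixel.
import numpy as np

rng = np.random.default_rng(1)

def add_shot_noise(frame, total_photons=1e6):
    frame = np.asarray(frame, dtype=float)
    scaled = frame / frame.sum() * total_photons   # impose the total photon budget
    return rng.poisson(scaled)

noisy = add_shot_noise(np.ones((122, 122)))        # 122 x 122 detector, as above
print(noisy.sum())                                 # close to the 1e6 photon budget
```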
### CLC films
Cholesteric liquid crystals (CLC) films were experimentally investigated at \(\lambda=635\) nm. These materials are characterized by a helical structure of pitch \(p\) that produces a specific optical response for wavelengths within a bandwidth centered at \(A_{0}=\hbar p\), at normal incidence, where \(\bar{n}\) is the mean refractive index. The reflected light is circularly polarized. When unpolarized light is incident on a CLC, a maximum of \(50\%\) of light is reflected (matching the helix handedness), and \(50\%\) of the light is transmitted (circularly polarized with the inverse handedness) [9]. In the following, this case will be referred to as "Bragg film", in analogy with X-ray diffraction. On the contrary, if the wavelength \(\lambda\) is outside the bandwidth, the polarization rule is ineffective and the material will be referred to as "off-Bragg". Wacker-Chemie GmbH provided us with CLCs polysiloxane-based oligomers. Details regarding their chemical and physical properties, as well as relevant references, are given in a prior paper [10]. Their glass transition temperature \(T_{\text{g}}\) ranges from \(40\) to \(55\)degC. The helix is left-handed. Thin films were produced between two plain glass substrates and annealed at \(140\)degC in their viscous state. They can be vitrified by quenching below \(T_{\text{g}}\), and their cholesteric structure is preserved at normal temperature in a solid state. The Silicon-Green chemical was used to fabricate the off-Bragg film (half height bandwidth \(430\)-\(500\) nm). It was annealed for \(90\) min without the top substrate, resulting in a blue-shift in the reflection band [11]. The thickness measures \(2.6\pm 0.1\) um. The Bragg film (half height bandwidth \(550\)-\(680\) nm) is a \(13\):\(87\) wt. % mixture of Silicon-Blue and Silicon-Red compounds. It was annealed for \(10\) min with both substrates present. The thickness measures \(15\pm 2\) um.
### Vectorial ptychography measurements
Measurements were carried out at \(\lambda=635\) nm on an optical setup described previously [2], adapted here by inserting two quarter waveplates properly oriented, one before the object, one after, as shown in Fig. S1 in Supplement 1. This modification
provides a simple solution to upgrade the linear polarization scheme to the improved one proposed in this work. Thus, the mechanical control of the polarization, which formerly set the angles of the linear polarizations (Eqs. 3), now sets the polarization ellipticities (Eqs. 6). Measurements were carried out with an illumination probe of effective diameter \(100\) \(\upmu\)m. The far-field was collected through a numerical aperture of 0.4 and recorded by a camera of effective dimensions \(320\times 240\) pixels. Object reconstructions were performed by means of 500 iterations of a conjugate-gradient algorithm described in a previous work [12], allowing the joint estimation of the three probes together with the Jones maps of the object, with a pixel size of about \(0.73\times 0.97\) \(\upmu\)m\({}^{2}\).
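The effect of the added quarter waveplates can be summarized with a short Jones-calculus sketch: assuming a retarder of the form \(\mathrm{diag}(1,-i)\) with a horizontal fast axis (a convention chosen for illustration, global phases dropped), each rotatable linear polarization of Eq. 3 is converted into exactly one of the elliptical states of Eq. 6.

```python
# Jones-calculus sketch: a quarter waveplate (fast axis horizontal, assumed convention
# diag(1, -i)) turns the linear states of Eq. (3) into the elliptical states of Eq. (6).
import numpy as np

def linear(angle_rad):
    """Jones vector of a linear polarization at the given azimuth (Eq. 3)."""
    return np.array([np.cos(angle_rad), np.sin(angle_rad)])

QWP = np.diag([1, -1j])    # quarter waveplate, fast axis at 0 degrees

for deg in (0, 15, 45, 105):
    a = np.deg2rad(deg)
    print(deg, QWP @ linear(a))   # equals [cos(a), -i sin(a)], i.e. a state of Eq. (6)
```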
## 4 Results and discussion
We performed a simulation using an object described by the Jones matrix presented in Eq. 4. The transmittance properties \(T(\mathbf{r})\) of the object corresponded to a uniform region intersected by vertical and horizontal stripes with lower transmittance. The resulting Jones maps are depicted in Fig. 1a. To better identify the presence of circular polarization properties while preserving all information, we expanded them as a sum
\[\mathbf{J}(\mathbf{r})=\sum_{n=0}^{3}C_{n}(\mathbf{r})\,\boldsymbol{\sigma}_{ n}. \tag{8}\]
Here, \(\boldsymbol{\sigma}_{0}\) denotes the identity matrix, and \(\boldsymbol{\sigma}_{1},\boldsymbol{\sigma}_{2}\), and \(\boldsymbol{\sigma}_{3}\) represent the Pauli matrices [8]
\[\boldsymbol{\sigma}_{1}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix};\,\boldsymbol{\sigma}_{2}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix};\,\boldsymbol{\sigma}_{3}=\begin{bmatrix}0&-i\\ i&0\end{bmatrix}. \tag{9}\]
The corresponding maps of the four complex Pauli coefficients \(C_{n}(\mathbf{r})\) are displayed in Fig. 1b. Their average values, obtained near the center of the image, are illustrated in the complex plane (Fig. 1c). These values highlight the homogeneous left-circular polarizer properties of the simulated object, where \(C_{0}\) and \(C_{3}\) are equal, positive, and real, while \(C_{1}=C_{2}=0\).
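For reference, the coefficients of Eq. 8 can be extracted from any Jones matrix using \(C_{n}=\operatorname{Tr}(\boldsymbol{\sigma}_{n}\mathbf{J})/2\), which follows from \(\operatorname{Tr}(\boldsymbol{\sigma}_{m}\boldsymbol{\sigma}_{n})=2\delta_{mn}\). The short numerical sketch below (with an arbitrary transmittance value) recovers the \(C_{0}=C_{3}\) signature quoted above.

```python
# Extract the complex Pauli coefficients C_n of Eq. (8) via C_n = Tr(sigma_n J) / 2.
import numpy as np

sigma = [np.eye(2, dtype=complex),                      # sigma_0 (identity)
         np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_1
         np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_2
         np.array([[0, -1j], [1j, 0]], dtype=complex)]  # sigma_3

def pauli_coefficients(J):
    return np.array([np.trace(s @ J) / 2 for s in sigma])

T = 0.8                                                  # arbitrary transmittance
J_left = 0.5 * T * np.array([[1, -1j], [1j, 1]])         # Eq. (4)
print(np.round(pauli_coefficients(J_left), 3))
# expected: [0.4, 0, 0, 0.4], i.e. C_0 = C_3 = T/2 and C_1 = C_2 = 0
```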
First, we conducted a simulation to replicate a measurement using the linear-polarization scheme. The specific angles employed were \(\alpha_{1}=0^{\circ}\), \(\alpha_{2}=60^{\circ}\), \(\alpha_{3}=120^{\circ}\), along with \(\theta_{1}=0^{\circ}\), \(\theta_{2}=60^{\circ}\), \(\theta_{3}=120^{\circ}\) [7] in Eqs. 3, as visually depicted in Fig. 2a,b. Multiple independent algorithm runs were executed, yielding diverse solutions. Figs. 2c-g provide an overview by displaying maps of the Pauli coefficients for each solution. Alongside the correct solution (Fig. 2c), which indicated a homogeneous left-circular polarizer, several alternative solutions emerged. These alternative solutions consisted of either homogeneous right-circular polarizers (Fig. 2d) or inhomogeneous ones (Fig. 2e), thereby confirming the theoretical ambiguity highlighted above. Remarkably, the discovered solutions extended beyond circular-polarizing characteristics, exhibiting different properties. For
Figure 1: Description of the simulated numerical object. (a) Jones maps \(\mathbf{J}(\mathbf{r})\). The inset shows the colour encoding for complex values in the complex plane. (b) Pauli coefficient maps. (c) Values of the complex Pauli coefficients represented in the complex plane. Values have been averaged in the square dotted area shown on the maps of panel b. Scale bars are \(20\)\(\upmu\)m.
Figure 2: Simulations of measurements with the linear polarization scheme. (a) Combinations of polarizations for illuminations (\(\mathbf{p}_{k}^{\text{lin}}\), solid circle) and analyses (\(\mathbf{h}_{l}^{\text{lin}}\), empty circle) shown on the Poincaré sphere. \(S_{1}\), \(S_{2}\) and \(S_{3}\) are the standard Stokes coefficients [8]. RCP and LCP stand for right and left circular polarization, respectively. (b) Illustration of these polarizations. The colour indicates the ellipticity, given by the latitude on the Poincaré sphere. (c-g) Pauli coefficient maps calculated from retrieved Jones maps, for several reconstructions, run independently, represented with the same convention as in Figs. 1b and c. Scale bars are \(20\)\(\upmu\)m.
example, Fig. 2f corresponded to a horizontal quarter waveplate, while the last case, Fig. 2g, represented more intricate combinations.
Subsequently, the simulation was executed using the improved polarization scheme. The chosen angles were \(\alpha_{1}=15^{\circ}\), \(\alpha_{2}=105^{\circ}\), \(\alpha_{3}=45^{\circ}\), in conjunction with \(\theta_{1}=0^{\circ}\), \(\theta_{2}=-45^{\circ}\), \(\theta_{3}=45^{\circ}\) within Eqs. 6, visually represented in Fig. 3a,b. These angles, chosen empirically, represented a combination of linear, elliptical, and circular polarization states, allowing a large variety of optical properties to be probed efficiently. Once again, multiple runs of the algorithm were performed, this time resulting in consistent convergence towards the correct solution. The obtained solution is illustrated in Fig. 3c, further validating the effectiveness of this scheme.
Finally, Bragg and off-Bragg CLC films were investigated using vectorial ptychography with the improved polarization scheme. Fig. 4 shows the Pauli coefficient maps computed from the Jones maps obtained with the reconstruction algorithm. For the off-Bragg film (Fig. 4a), the Pauli coefficient maps show that only the \(C_{0}(\mathbf{r})\) map has non-zero values. This confirms that the film has an optical response that is insensitive to polarization. On the contrary, the Bragg film (Fig. 4b) displays different maps of Pauli coefficients, with \(C_{0}(\mathbf{r})=C_{3}(\mathbf{r})\neq 0\), while \(C_{1}(\mathbf{r})=C_{2}(\mathbf{r})=0\). This confirms that the film functions as a uniform left-circular polarizer. Furthermore, variations in the Pauli coefficients reveal a texturing of the films (polygonal texture) that has been discussed previously [11]. Addressing this aspect is beyond the scope of the present study and will be the subject of a future article. It is worth emphasizing the clear advantage of spatial resolution and the intrinsic phase-imaging capabilities offered by vectorial ptychography [4]. In contrast, other interferometric methods typically provide spatially averaged measurements [13].
## 5 Conclusion
In conclusion, this letter presents a novel approach to vectorial ptychography, expanding its capabilities to circular-polarizing materials such as CLCs. By adopting a measurement scheme based on elliptical polarizations, the technique overcomes the limitations of linear polarizations and enables the accurate characterization of circular polarizers at micrometer-scale resolution. This progress in quantitative imaging has implications for various fields, including the study of chiral molecular assemblies and other materials or advanced components with strong circular-polarization properties.
Funding.We acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant agreement no. 724881).
Acknowledgments.We thank Dr. E. Hanelt from Wacker-Chemie GmbH (Munich, Germany) for providing us with cholesteric oligomers.
Disclosures.The authors declare no conflicts of interest.
Data Availability Statement.The datasets are available upon reasonable request from the corresponding author.
Supplemental document.See Supplement 1 for supporting content.
## References
* [1] P. Ferrand, M. Allain, and V. Chamard, Opt. Lett. **40**, 5144 (2015).
* [2] P. Ferrand, A. Baroni, M. Allain, and V. Chamard, Opt. Lett. **43**, 763 (2018).
* [3] T. Wang, S. Jiang, P. Song, R. Wang, L. Yang, T. Zhang, and G. Zheng, Biomed. Opt. Express **14**, 489 (2023).
* [4] A. Baroni, V. Chamard, and P. Ferrand, Phys. Rev. Appl. **13**, 054028 (2020).
* [5] J. Duboiset, P. Ferrand, A. Baroni, T. A. Grunewald, H. Dicko, O. Grauty, J. Vidal-Dupiol, D. Sauhier, L. M. Gilles, M. Rosenthal, M. Burghammer, J. Nouet, C. Chevallard, A. Baronnet, and V. Chamard, Acta Biomater. **142**, 194 (2022).
* [6] X. Dai, S. Xu, X. Yang, K. C. Zhou, C. Glass, P. C. Konda, and R. Horstmeyer, Biomed. Opt. Express **13**, 1457 (2022).
* [7] Q. Song, A. Baroni, R. Sawant, P. Ni, V. Brandil, S. Chenot, S. Vozian, B. Damilano, P. de Mierry, S. Khadir, P. Ferrand, and P. Genevet, Nat. Commun. **11**, 2651 (2020).
* [8] R. A. Chipman, W. S. T. Lam, G. Young, W. S. T. Lam, and G. Young, _Polarized Light and Optical Systems_ (CRC Press, 2018).
* [9] M. Mitov, Adv. Mater. **24**, 6260 (2012).
* [10] A. Scarangella, V. Soldan, and M. Mitov, Nat. Commun. **11**, 4108 (2020).
* [11] G. Agez, R. Bitar, and M. Mitov, Soft Matter **7**, 2841 (2011).
* [12] A. Baroni, M. Allain, P. Li, V. Chamard, and P. Ferrand, Opt. Exp. **27**, 8143 (2019).
* [13] A. Sanchez-Castillo, S. Eslami, F. Giesselmann, and P. Fischer, Opt. Exp. **22**, 31227 (2014).
Figure 4: Investigation of two CLC films. Pauli coefficient maps. (a) Off-Bragg film. (b) Bragg film. Scale bars are 40 μm.
Figure 3: Simulations of measurements with the improved polarization scheme. (a) Combinations of polarizations for illuminations (\(\mathbf{p}_{k}^{\mathrm{eff}}\), solid circle) and analyses (\(\mathbf{h}_{l}^{\mathrm{eff}}\), empty circle) shown on the Poincaré sphere. (b) Corresponding polarizations. (c) Pauli coefficient maps calculated from retrieved Jones maps, as obtained systematically for any reconstruction run independently, represented with the same convention as in Figs. 1b and c. Scale bars are 20 μm. |
2303.02160 | Navigates Like Me: Understanding How People Evaluate Human-Like AI in
Video Games | We aim to understand how people assess human likeness in navigation produced
by people and artificially intelligent (AI) agents in a video game. To this
end, we propose a novel AI agent with the goal of generating more human-like
behavior. We collect hundreds of crowd-sourced assessments comparing the
human-likeness of navigation behavior generated by our agent and baseline AI
agents with human-generated behavior. Our proposed agent passes a Turing Test,
while the baseline agents do not. By passing a Turing Test, we mean that human
judges could not quantitatively distinguish between videos of a person and an
AI agent navigating. To understand what people believe constitutes human-like
navigation, we extensively analyze the justifications of these assessments.
This work provides insights into the characteristics that people consider
human-like in the context of goal-directed video game navigation, which is a
key step for further improving human interactions with AI agents. | Stephanie Milani, Arthur Juliani, Ida Momennejad, Raluca Georgescu, Jaroslaw Rzpecki, Alison Shaw, Gavin Costello, Fei Fang, Sam Devlin, Katja Hofmann | 2023-03-02T18:59:04Z | http://arxiv.org/abs/2303.02160v1 | # Navigates Like Me: Understanding How People Evaluate Human-Like AI in Video Games
###### Abstract.
We aim to understand how people assess human likeness in navigation produced by people and artificially intelligent (AI) agents in a video game. To this end, we propose a novel AI agent with the goal of generating more human-like behavior. We collect hundreds of crowd-sourced assessments comparing the human-likeness of navigation behavior generated by our agent and baseline AI agents with human-generated behavior. Our proposed agent passes a Turing Test, while the baseline agents do not. By passing a Turing Test, we mean that human judges could not quantitatively distinguish between videos of a person and an AI agent navigating. To understand what people believe constitutes human-like navigation, we extensively analyze the justifications of these assessments. This work provides insights into the characteristics that people consider human-like in the context of goal-directed video game navigation, which is a key step for further improving human interactions with AI agents.
human subject study, believable AI, games, navigation +
Footnote †: ccs: Computing methodologies – _Reinforcement learning_; **Human-centered computing** \(\rightarrow\) Empirical studies in HCI.
shared human-AI environments (Beng et al., 2017; Li et al., 2018; Wang et al., 2018) and various robotics applications (Wang et al., 2018). This goal is not satisfied by agents demonstrating a high proficiency level at the assigned task. For example, AI-powered vehicles must behave sufficiently human-like for human drivers to interpret, anticipate, and act in their presence (Li et al., 2018). As a result, understanding the behaviors that contribute to people's perceptions of human likeness is a foundational first step towards achieving general human-like behavior of artificial agents.
In this work, we contribute to the objective of developing human-like agents by identifying and understanding what constitutes human-like behaviors in a video game. To scope our study, we focus on a 3D video game where agents must navigate from one point to another. This form of navigation is pervasive in many video games, making it a key area of interest for game developers (Beng et al., 2017; Li et al., 2018): in embodied games, players must move from place to place to accomplish their goals or explore the world. More generally, it is considered fundamental to embodied biological intelligence (Li et al., 2018; Wang et al., 2018), making it of interest to cognitive scientists (Li et al., 2018; Wang et al., 2018) and researchers interested in intelligent behavior (Wang et al., 2018). It has also been a key area of interest in HCI (Wang et al., 2018) due to how people (or robots) navigate in real, augmented, or entirely virtual spaces.
To study navigation in video games, we leverage the recently-proposed Human Navigation Turing Test (HNTT) (Li et al., 2018), in which human judges indicate which of two videos demonstrates more human-like behavior. The judges then justify their decision and indicate their certainty about their choice. In that work, the authors compared the accuracy of the human-likeness assessments to random chance but did not instantiate a statistical test to definitively conclude whether an agent passed the HNTT. According to their assessment, neither of the studied AI agents passed the HNTT. As a result, producing an agent that passes the HNTT is still an open challenge. To this end, we design a novel agent to pass the HNTT. To assist with our design of a human-like agent, we inspect the resulting behavior of the two baseline agents from prior work (Li et al., 2018). With these insights, we design our novel agent -- the _reward-shaping_ agent -- using simple and intuitive techniques.
We then conduct a behavioral study of the HNTT on Amazon Mechanical Turk (MTurk) to investigate the behavior of our agent and the baselines. To determine whether agents pass the HNTT, we propose a firm criterion: a statistical test that determines whether human judges distinguish between human and agent behavior at a level that is _significantly different_ from chance. We then validate the conclusion of previous work: the two baseline agents are not sufficiently human-like because they do not pass the HNTT. In contrast, human judges cannot reliably distinguish the behavior of our _reward-shaping_ agent from that of an agent controlled by a person. To our knowledge, this agent is the first to pass the HNTT.
To understand these assessments, we analyze the free-form responses to determine which characteristics people believe are representative of human and AI navigation behavior. We annotate the responses with codes that summarize the provided rationale. Using these annotations, we find that there are key differences between how people characterize human-like and non-human-like behavior. Specifically, we find that people utilize the same high-level characteristics when describing human-like and non-human-like behavior, but the presence or absence of these characteristics strongly informs their judgments.
Based on the findings of our analysis, we summarize considerations when developing and evaluating the human likeness of AI agents. For example, considering the end use of the agent is critical for defining what is meant by human-like and designing a study accordingly. In summary, we make the following contributions.
1. We contribute a novel _reward-shaping_ agent that exhibits more human-like navigation behavior.
2. We conduct a behavioral study to assess: a) whether people reliably distinguish the behavior produced by the AI agents from that generated by people and b) what characteristics people believe are indicative of human-like behavior.
3. We conduct an extensive analysis of the resulting data. We propose a firm criterion to determine whether an agent passes the HNTT and find that only our _reward-shaping_ agent passes the HNTT according to this metric. We analyze the free-form responses to determine the characteristics that people believe are representative of human-like behavior.
4. Based on our findings, we propose concrete suggestions for developing and evaluating human-like AI.
## 2. Related Work
Researchers have taken various approaches to address the challenge of developing believable AI agents in games, including learning from demonstrations (Shen et al., 2016; Wang et al., 2018; Li et al., 2018), reinforcement learning (Li et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), and more (Wang et al., 2018; Wang et al., 2018). We focus on _reinforcement learning_(Shen et al., 2016) because it provides a generally-applicable set of algorithms for learning to control agents in settings including (but not limited to) modern game environments (Beng et al., 2017; Li et al., 2018; Wang et al., 2018; Wang et al., 2018). It also offers significant benefits as an approach for generating navigation behavior (Beng et al., 2017). In particular, the use of reinforcement learning may enable more complex navigation abilities (such as grappling or teleportation) and alleviate game designers from the labor-intensive procedure of the most popular alternative method to produce this behavior (Wang et al., 2018).
In reinforcement learning, an agent learns to accomplish a task by maximizing a reward, or score, that tells the agent how well it is performing. Although agents learn effective navigation by maximizing this reward, they make no consideration for the _style_ with which they act (Beng et al., 2017). If these approaches are to be adopted in commercial game development, practitioners have firmly asserted that controlling style is essential (Wang et al., 2018). As an extreme example, reinforcement learning approaches that have recently defeated world champion human players at modern games demonstrated unusual behaviors (Wang et al., 2018) that made collaborative play between human and AI in mixed teams far less successful (Li et al., 2018). Simply maximizing the task-specific reward signal is unlikely to produce human-like agents.
Reward shaping (Wang et al., 2018; Wang et al., 2018) is a simple yet powerful technique that allows practitioners to clearly specify the desired agent behavior. This approach involves crafting a reward signal that provides dense feedback to the agent. It is an intuitive way for those without a machine learning background to control the agent's behavior by specifying objectives instead of dedicating time to optimizing unintuitive hyperparameters. Additionally, reward shaping can be used with any reinforcement learning algorithm, making it possible to swap in and out the underlying algorithm as needed. We utilize reward shaping to generate more human-like behavior.
There is no standard set of metrics for evaluating human-like AI. One paradigm involves measuring human similarity with proxy metrics for human judgments. Some work measures the task performance of the AI (Sundararajan et al., 2017; Sundararajan et al., 2018), but this metric is an insufficient proxy for human similarity. Other work assesses how well the AI agent can predict the following human action (Sundararajan et al., 2018) or align its behavior with people (Sundararajan et al., 2018; Sundararajan et al., 2018), but these metrics do not include actual human evaluations. They do not assess whether people can accurately distinguish the AI player from the human one, which is vital for assessing human likeness in games (Sundararajan et al., 2018) and beyond (Sundararajan et al., 2018).
Studies with human evaluations tend to be small-scale surveys to understand the opinions regarding human-likeness (Sundararajan et al., 2018; Sundararajan et al., 2018). They often offer only a preliminary investigation into the specific characteristics that inform these beliefs and typically do not include a form of Turing test (Sundararajan et al., 2018), a well-established framework for addressing these problems (Sundararajan et al., 2018). Work that uses a Turing test often does not investigate the behaviors or provide concrete metrics (Sundararajan et al., 2018), or it focuses on assessing the full spectrum of game behaviors (Sundararajan et al., 2018; Sundararajan et al., 2018; Sundararajan et al., 2018). Due to the complexity of these games and the resulting behaviors, providing concrete recommendations to game designers is challenging. In contrast, we focus on a specific but widely-used behavior: point-to-point navigation. To perform our assessment, we utilize the setup of the recently-proposed Human Navigation Turing Test (Sundararajan et al., 2018); however, we propose and perform a deeper evaluation of human assessments of AI and human behavior.
## 3. Background and Preliminaries
We utilize the navigation task from previous work (Sundararajan et al., 2018) and instantiate it in the same modern AAA video game for our experiments. We first describe the game in more detail, then provide an overview of the navigation task.
### The Video Game
To enable the reuse of agent and human-generated videos in our study, we choose the same game as previous work. This game is a multiplayer online combat game that features 13 customizable characters, each with special abilities. The game is commonly compared with other popular team-based action games, such as Overwatch and DotA. Players compete against one another in two teams of four. The game has two game modes. One mode requires capturing and defending specific locations (called objectives) on the map, while the other involves collecting items called cells and depositing them at active platforms on the map. The game's team-based mechanics, objective balancing, and character customization offer a distinct multiplayer experience, making it an excellent choice for studying both AI behavior and human-AI interactions.
Underlying the game is the crucial mechanic of goal-directed navigation: players must move from one location to another to collect powerups or cells, go to drop-off platforms when they are active, and engage in combat with other players. As a result, navigation between points represents an abstraction of the most common task in the game. To allow us to concentrate on characteristics specific to navigation, we utilized a simplified version of the game that excludes other complex mechanics and objectives.
### The Navigation Task
We instantiate the navigation task in the same way as prior work: a single avatar must navigate to a target location. The left screenshot of Figure 1 shows this location, indicated by the three blue containers. Navigating to a goal is a subtask of the main game, in which players must balance navigating to target locations to collect cells or boost health while warding off other players.
Figure 1. Navigation task as observed by study participants (screenshot, left), and detail of the mini map of the game level (right). Agents spawn on the island outside of the main map, which is shown in the bottom portion of the mini map on the right. They must jump to the main area and navigate to the goal location. The light blue containers in the left screenshot represent the goal location.
Before the player moves, the navigation target spawns uniformly at random in one of 16 possible locations, denoted by the green crosses in the right-hand image of Figure 1. Then, the player spawns on an island outside the main map (shown in the bottom portion of the mini-map) and must jump to the map's main area using the available jump areas. Once the player is in the central region, they can move to the target location.
The HNTT asks human judges to identify which of two navigation behaviors more closely resembles how people navigate in _reality_. This phrasing aims to capture how _convincing_ an agent is (Kumar et al., 2017), in contrast to another interpretation of the Turing test: whether a human or AI agent _controls_ an entity. We chose this phrasing because we want to create _convincing_ NPCs that contribute to an immersive game experience. In contrast, we do not wish to deceive the player into thinking that an agent is controlled by a person when it is not.
### The Baseline Agents
Previous work (Kumar et al., 2017) conducted their study with two agent types: a _symbolic_ and a _hybrid_ agent. When presented with the two agents, participants accurately detected human players above chance, meaning that people did not perceive their behavior as sufficiently human-like. We utilize these agents as baselines in our experiments, so we describe their essential details.
To progress toward the goal location, the agents take actions from a prespecified set (called an action space). This action space consists of 8 possible actions: do nothing, move forward, and move left and right (30, 45, and 90 degrees on each side). To facilitate training, the agents receive a dense reward signal to encourage successful navigation to the goal. It consists of the following terms: a -0.01 per-step penalty to encourage the agent to efficiently reach the goal, a -1 one-time penalty for dying because the agent may fall off the map, an incremental reward for approaching the goal, and a +1 reward for reaching the goal. We observed that this reward signal only includes terms to encourage successfully reaching the goal as quickly as possible.
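A rough sketch of this dense reward signal is shown below; the exact functional form and scale of the incremental approach term are not specified above, so the distance-difference version here is an assumption made for illustration.

```python
# Illustrative sketch of the baselines' dense navigation reward; the incremental
# "approach" term is written as a scaled distance difference, which is an assumption.
def baseline_reward(prev_dist_to_goal, dist_to_goal, reached_goal, died,
                    approach_scale=0.001):
    r = -0.01                                                  # per-step penalty (efficiency)
    r += approach_scale * (prev_dist_to_goal - dist_to_goal)   # incremental approach reward
    if died:
        r -= 1.0                                               # one-time penalty for falling off
    if reached_goal:
        r += 1.0                                               # success bonus
    return r

print(baseline_reward(1000.0, 800.0, reached_goal=False, died=False))
```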
The main difference between these two agents is the observations that they take as input. The _symbolic_ agent receives only a semantic, low-dimensional representation as input; the _hybrid_ agent also receives an image input. For more details about the baseline agents, we refer an interested reader to Appendix A.1 and Devlin et al. (2016).
## 4. Building a More Human-Like AI
To help design our _reward-shaping_ agent, we analyze the _hybrid_ and _symbolic_ agents to find characteristics that may have influenced the previous judgments of human likeness. Based on this analysis, we introduce a novel agent for the HNTT: the _reward-shaping_ agent.
### Designing our _Reward-Shaping_ Agent
This agent extends the _hybrid_ agent with two critical changes to promote learning of human-like behavior. Specifically, we introduce additional terms to the reward signal and expand the action space available to the agent. To test whether our contributions result in differences in perceptions of human likeness, we fix all other components of our _reward-shaping_ agent to be the same as the _hybrid_ agent.
Because the _symbolic_ and _hybrid_ agents previously exhibited non-human-like behavior, we inspected examples of their generated navigation and isolated three classes of problematic behavior. Agents would:
* **P1**: Wildly swing camera angles or make sudden turns,
* **P2**: Frequently collide with walls, and
* **P3**: Sometimes move more slowly than expected.
To correct these behaviors, we utilize reward shaping (Zhu et al., 2017) by including terms corresponding to desired or undesired behavior. We introduce the following terms. First, to combat **P1**, we include a camera-angle difference penalty for swift camera angle changes that exceed a set threshold of 0.15. Second, we introduce a penalty of -0.05 for any wall collisions to address **P2**. Third, to address **P3**, we provide a penalty of -0.01 if the distance traveled between steps is lower than an environment-specific threshold value of 220 map units. We choose these values in line with previous training rewards and expert assessments of the relative importance of each of the components.
To encourage smoother control and avoid abrupt turns, we utilize an approach similar to action-space shaping (Zhu et al., 2017) by introducing additional available actions to the agent. Intuitively, we anticipate that the introduction of finer-grain controls will yield more fluid navigation. We extend the action space to 14 actions from the previous 8. In addition to the 'do nothing' and'move forward' actions, we include 6 degrees of turning left and right, rather than the 3 used by the baselines. The updated list of turning degrees for this agent is: 18, 36, 45, 54, 72, and 90 on each side.
Taken together, these two components comprise the novel aspects of the _reward-shaping_ agent. We design this agent in a relatively _agnostic_ way to make it more accessible to those without expertise in deep reinforcement learning. Consequently, these two components can be applied to any state-of-the-art deep reinforcement learning algorithm. Depending on the underlying algorithm, the specific values, particularly those used for each term of the reward signal, may need to be set differently. However, we believe that adjusting these values is more intuitive than specifying complex parameters that are specific to a particular algorithm.
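A hedged sketch of how the shaping terms and the expanded action set described above might be combined per step is given below; the function and constant names, the magnitude of the camera-angle penalty, and the interface to the base task reward are illustrative assumptions rather than the actual implementation.

```python
# Sketch of the reward-shaping agent's per-step shaping; values follow the text,
# but the camera penalty magnitude (proportional here) is an assumption.
ACTIONS = ['noop', 'forward'] + [f'turn_{side}_{deg}' for side in ('left', 'right')
                                 for deg in (18, 36, 45, 54, 72, 90)]  # 14 actions

CAMERA_DELTA_THRESHOLD = 0.15    # threshold on camera-angle change per step (P1)
WALL_COLLISION_PENALTY = 0.05    # penalty for any wall collision (P2)
MIN_STEP_DISTANCE = 220.0        # map units; moving less than this is "too slow" (P3)

def shaped_reward(base_reward, camera_delta, hit_wall, distance_moved):
    """Add the three shaping penalties (P1-P3) to the base navigation reward."""
    r = base_reward
    if camera_delta > CAMERA_DELTA_THRESHOLD:     # P1: discourage wild camera swings
        r -= camera_delta                         # proportional penalty (assumed form)
    if hit_wall:                                  # P2: discourage wall collisions
        r -= WALL_COLLISION_PENALTY
    if distance_moved < MIN_STEP_DISTANCE:        # P3: discourage sluggish movement
        r -= 0.01
    return r

print(len(ACTIONS), shaped_reward(0.0, camera_delta=0.3, hit_wall=True, distance_moved=100.0))
```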
### Producing High-Quality Navigation
We train all agents to achieve a similar level of performance on the navigation task (see Appendix A.2 for the details of our training setup) to ensure that task skill is not responsible for the perceived differences in human likeness. We measure task proficiency using the number of steps needed to reach the goal. Each step corresponds to around 0.2 seconds of real-time play. Figure 2 confirms that the agent models are indeed representative of state-of-the-art techniques for learning navigation in complex, 3D games.
The _reward-shaping_ and _hybrid_ agents exhibit higher variance during training than the _symbolic_ agent. Because these agents must also learn from pixels, their learning task is more challenging than that of the _symbolic_ agent (which only takes symbolic input). As a result, we expect higher variance during training as the agent learns this more complex task. Importantly, all agents learn to reliably reach the goal, indicated by the performance near the end of training. A skilled agent now takes approximately 60 steps to complete the task (about 12 seconds of real-time play). This result ensures that differences in the human-likeness assessments are not due to differences in the ability of the agents to solve the task.
## 5. Experimental Design
To understand what characteristics people believe are indicative of human likeness, we conducted a behavioral study with human participants. Our setup closely follows prior work (Krishnan et al., 2017); however, we introduce important extensions, including collecting assessments from a greater number of participants using a crowd-sourcing platform (MTurk) and additional data for a more thorough analysis. For completeness, we detail the full study design here.
### Experimental Task
We asked each human participant to act as a judge by completing a survey consisting of 6 HNTT trials. In each HNTT trial, the judge was presented with two side-by-side video stimuli of people or agents completing the navigation task. After watching these videos, the judge answered three questions to indicate which video they believed navigated more like a human would in the real world, a justification of their response, and an indication of their certainty. More specifically, participants answered the following questions:
1. **Which video navigates more like a human would in the real world?** The judge clicked the button underneath the video that they believed navigated more like a human would. This decision was a forced binary choice.
2. **Why do you think this is the case? Please provide details specific to the video.** The judge answered this question as a free-form response in the box below the question.
3. **How certain are you of your choice?** The judge answered this question on a 5-point Likert scale, with choices ranging from extremely certain to extremely uncertain.
To mitigate subject learning effects from sequentially viewing multiple videos, we did not reveal to the judges which of the videos was AI-generated. In other words, participants completed each task and, in the end, did not know which videos were human-generated.
### Experimental Procedure
We completed 3 studies; each study pitted a human-controlled agent against a different AI agent. Within each study, all judges viewed the same 6 trials. The trials were presented in a randomized order per judge. Within each trial, the ordering of the two videos was randomized, such that the human-generated video could not be inferred by presentation order. Table 1 outlines the conditions tested in each study.
Each participant first read through an introduction page with the required task instructions (see Appendix B for the full text). They then completed a consent form and read through a background page with brief details about the video game. They answered a series of questions to assess their comprehension of the task and familiarity with video games. Finally, participants engaged in the 6 HNTT trials. Figure 3 shows screenshots of the comprehension and familiarity questions (a) and an example HNTT trial (b).
### Navigation Video Generation and Sampling
A key part of the study is the videos that were shown to the human judges. For the human-generated navigation data and videos, we use the publicly-available sample published by previous work (Krishnan et al., 2017).1 We sampled human videos from the 40 published under their "study 1" protocol. To generate the AI navigation data, we select each agent's most recently saved version. Then, we instantiate a new session and deploy the agent in the game 100 times, producing 100 total
\begin{table}
\begin{tabular}{c c c} \hline \hline Study & Number of participants & Number of trials \\ \hline Human vs. _hybrid_ & 50 & 6 \\ Human vs. _symbolic_ & 50 & 6 \\ Human vs. _reward-shaping_ & 92 & 6 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Conditions tested in each study and the number of trials per condition. Importantly, note that the human vs. _hybrid_ and human vs. _symbolic_ studies are replications of prior work (Krishnan et al., 2017) to validate the switch to a crowd-sourcing platform.
Figure 2. Hybrid, symbolic, and reward-shaping agents successfully learn to navigate. This plot shows the average amount of time needed to solve the task (y-axis) as a function of the amount of time taken to train the agent. The shaded area shows the standard deviation (for reward-shaping, N=3; for hybrid and symbolic, N=4). All curves are smoothed with a rolling window of 200. Importantly, on average, all agents converge to solve the task in around 60 steps (around 12 seconds of in-game time). In contrast, agents start out needing around 140 steps (around 28 seconds of in-game time) on average to solve the task. The starting performance on this task is similar to how long an agent taking random actions would take to solve it. The main takeaways are that performance differences are not responsible for perceived differences in human likeness, and standard metrics of task performance are insufficient to assess human likeness.
navigation videos per agent. To produce the video stimuli used in the study, we sample the recordings uniformly at random.
We implemented several measures to standardize the videos and ensure that any measurement noise applied to all conditions. First, we checked that any changes in light applied similarly across conditions. Second, we designed the timing of the stimuli to ensure that participants had sufficient time to engage in and provide meaningful responses in all trials. As a result, we did not use videos that were too long and excluded videos shorter than 10 seconds (before post-processing) because they were deemed too short to assess navigation quality in pilot studies. Third, because the goal locations may differ depending on the game-controlled initialization, we matched the goal locations of the human videos with the AI agent videos. Consequently, we used different human videos for different studies. Fourth, we applied the post-processing steps from prior work (Han et al., 2017), including masking identifying information, adding a "For Research Purposes Only" watermark, and cutting out the last few seconds of the human videos. We implemented the last change to correct an effect of the data collection process, where the human players manually ended their recording, adding a few seconds at the end of the videos.
### Other Experimental Control
The MTurk crowd-sourcing platform (Krishna et al., 2017) is widely used for data collection and research due to its scalability, as long as researchers implement appropriate steps for quality control (Krishna et al., 2017). Here, we detail the study inclusion criteria that we implemented for quality control.
We set the following MTurk requirements for survey participation: location is United States, age is 18 or older, and language is English. We did not collect demographic information or any other personally identifiable information. To target more experienced MTurk Workers, we set the following Human Intelligence Task (HIT) qualifications: HIT Approval Rate greater than 98%, Number of HITs Approved greater than 500, and a qualification to prevent repeat responses. To incentivize quality, we included a bonus payment for each high-quality response. We reviewed the free-form answers to find low-quality or suspected bot responses; for example, we excluded from analysis responses with high instances of typos, copy/pasted answers, or nonsensical wording. We paid all participants who completed the task for the HIT, even if their response was identified as low-quality. The low-quality responses did not receive the bonus payment. We paid on average 15 USD per hour. We obtained approval for our studies from our Institutional Review Board (IRB) and informed consent from each participant.
Figure 3. Screenshots of HNTT survey questions. The screenshot in (a) shows the comprehension and familiarity questions (asked once per participant). We gauge the participant’s familiarity with the time the task will take, understanding of task completion, familiarity with third-person action video games, and familiarity with the video game used in the survey. The screenshot in (b) depicts one HNTT trial. We ask participants to choose their response to the human likeness question, justify it, and indicate their level of certainty.
We included details of the study and a description of any potential participant risks in the consent form.
## 6. Analysis
Our primary objective is to evaluate the human-likeness of the agents using both quantitative and qualitative measures. To quantify the ability of the human judges to distinguish between the human-like and non-human-like agents, we analyze their accuracy scores and self-reported uncertainty. To identify the factors that influence their perceptions of human likeness, we adopt a qualitative approach. We construct and use codes to summarize the reasons cited in the open-ended responses and compare the frequency of these codes across different settings.
### Assessing Human-Likeness
We first aim to identify which agents pass the HNTT according to our proposed criterion. Because existing work demonstrates differences in assessment ability depending on expertise, we seek to identify whether this phenomenon holds in our setting. We finally seek to investigate the relationship between self-reported uncertainty and accuracy when assessing the agents. We instantiate the following research questions:
1. **RQ 1.** Which agents are judged as being human-like?
2. **RQ 2.** Do the judges exhibit greater accuracy in assessing human likeness as a function of their experience with games?
3. **RQ 3.** What is the relationship between the accuracy of human judges and their self-reported uncertainty?
To answer **RQ 1**, we propose a firm criterion for deciding whether an agent is sufficiently human-like, formalizing the question: _are human assessors unable to distinguish between agent and human behavior?_ We implement this criterion as a statistical test that determines whether human judges distinguish between human and agent behavior at a level significantly different from chance. We instantiate this test by computing the 95% confidence interval for the median of the human-agent comparisons using bootstrap sampling (a non-parametric approach). If the 95% confidence interval includes 0.5 (chance-level agreement), then the agent passes the HNTT.
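A minimal sketch of this criterion is given below; the input format (a vector of per-trial accuracies) and the bootstrap resampling unit are assumptions made for illustration rather than a description of the exact analysis pipeline.

```python
# Bootstrap 95% CI for the median accuracy; "passing" means 0.5 (chance) lies inside it.
import numpy as np

rng = np.random.default_rng(0)

def passes_hntt(per_trial_accuracy, n_boot=10_000, alpha=0.05):
    acc = np.asarray(per_trial_accuracy, dtype=float)
    medians = np.array([np.median(rng.choice(acc, size=acc.size, replace=True))
                        for _ in range(n_boot)])
    lo, hi = np.quantile(medians, [alpha / 2, 1 - alpha / 2])
    return lo <= 0.5 <= hi, (lo, hi)

# Illustrative (made-up) per-trial accuracies for a six-trial study
print(passes_hntt([0.45, 0.52, 0.48, 0.55, 0.50, 0.47]))
```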
For both **RQ 2** and **RQ 3**, we compare our variables of interest with _accuracy_. We define accuracy to mean that the participant identified that the human-generated behavior was more human-like than the AI-generated behavior. To answer **RQ 2**, we compare accuracy to the self-reported familiarity of the participants with action games in general and the specific game in the study. To answer **RQ 3**, we examine the self-reported uncertainty of the judges and its relationship to accuracy.
### Assessing Human-Like Characteristics
To analyze the _characteristics_ that correspond to assessments of human likeness, we instantiate the following research questions:
1. **RQ 4.** Are there key differences between how people characterize human-like and non-human-like behavior? Does this differ when the agent does or does not pass the HNTT?
2. **RQ 5.** What is the relationship between the characteristics that people use to assess human likeness and their ability to accurately assess it?
We selected a sub-sample of the responses from the _hybrid_ agent and the _reward-shaping_ agent studies for analysis. We chose these studies to enable comparison between an agent that does not pass the HNTT and one that does (see Section 7.1). We first randomly sampled a set of 55 responses to compute the initial agreement, called the _agreement sample_. We filtered this sample to 53 after removing responses that were ambiguous or could not be categorized by any of our codes. We then constructed the sample for analysis by randomly sub-sampling three free-form responses per judge for each study. To minimize bias, we shuffled responses before sampling. We removed responses that were ambiguous or could not be categorized by any of our codes, resulting in a dataset of 395 responses for our analysis of human-like characteristics.
We followed a pair-coding approach to annotate the data. The annotator with more familiarity with the data proposed an initial list of codes derived from previous work (Shi et al., 2017). Following established notation (Beng et al., 2017), we have a set of \(I\) items (or responses), labeled as at least one of the \(K\) categories by \(C=2\) coders. We decompose each label into whether the judge considered the behavior human-like, \(H=(\texttt{human-like},\texttt{non-human-like})\), and, when applicable, a direction \(D=(+,-)\) indicating whether the characteristic was noted as more or less present. For example, if we label item \(i\) as _smoothness of movement_, we note whether the judge considered the behavior human-like and whether they noted it as being more + or less - smooth.
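A minimal sketch of how such a decomposed label could be represented is given below; the field names and the example values are our own illustrative assumptions rather than the study's actual data format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedLabel:
    item_id: int                      # which of the I free-form responses
    code: str                         # one of the K categories, e.g. "smoothness of movement"
    human_like: bool                  # H: was the described behavior judged human-like?
    direction: Optional[str] = None   # D: "+" (more) or "-" (less), when applicable

# "Runs in circles, into objects" -> not human-like, collision avoidance, less (-)
example = CodedLabel(item_id=12, code="collision avoidance", human_like=False, direction="-")
```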
The two annotators then convened to discuss the meaning of the codes and jointly code a set of 5 responses. Table 2 illustrates an example of a coded response. After that, the two annotators separately coded the agreement sample with the initial set of codes. Optionally, the annotators could label responses as _other_ and provide specific examples to enable revisions of the codes if other themes emerged. The two annotators iteratively reconvened to discuss disagreements and refine the codes. After multiple rounds of discussion, independent coding, and disagreement resolution, the annotators fixed the set of codes (Table 3) and their inclusion criteria to label the full sample.
Because we aim to design human-like AI agents, we want to identify codes that could be utilized by AI designers. For that reason, when deciding on codes, we prioritize codes that refer to specific behaviors over more general ones. For example, a collision avoidance behavior could be coded as goal-directed; however, we code it only as collision avoidance. This protocol promotes the independence of categories while prioritizing specific, lower-level behaviors to use in designing agents. When coding, the annotators first consider whether the response could be categorized as a lower-level code, then move to more general codes if needed. Appendix C contains more details about this process.
The annotators achieved an overall average inter-annotator agreement of 0.84 on the agreement sample. We calculate inter-annotator agreement with binary Cohen's kappa \(\kappa\)(Koh et al., 2017) over \(K\), \(D\), and \(H\), as previously defined. See Table 4 for more details. After fixing the list of codes, the annotators divided the data sub-sample such that there was overlap on 25% of the data (99 items). We report Cohen's kappa in Table 4 for the overlapping sample to ensure that
our understanding of the codes did not overfit the specific examples in the agreement sample.
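As a sketch, per-code agreement can be computed from two binary vectors (one per annotator) indicating whether each response received that code, using scikit-learn's `cohen_kappa_score`; the vectors below are made up for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# One entry per response in the agreement sample: did the annotator assign this code?
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)  # binary Cohen's kappa for one code
```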
We provide a more detailed discussion of the annotation codes and inclusion criteria. Table 3 includes these definitions and phrases that helped us identify the presence of each code. For each code, we provide a supporting example to give the reader a sense of what common responses may look like. _Smoothness of movement_ refers to the quality of the agent’s navigation or camera movement. This code considers both immediate jerky actions and temporally-extended zig-zagging behavior. _Goal directed_ refers to how intentional the agent’s behavior appears. We include descriptions of behavior that pertain to a perceived goal, even if that goal is not the primary one. We include the code _collision avoidance_ because it is a long-standing area of research in the robotics community (Krishnam et al., 2017). This code refers to intentional behavior to redirect from a potential crash. _Environment receptivity_ aims to capture the agent’s relationship with the game environment, its contextual understanding, and adherence to norms. In a real-world setting, this might look like a person walking on a path instead of the grass or crossing the street when permitted by a pedestrian signal. Any response that refers to a non-specific feeling that a behavior was more human-like is categorized as _intuition_. We include this code to capture instances where participants can identify what they believe is more human-like behavior but struggle to express it. Finally, we include _self-reference_ as a code to capture when judges relate the agent’s behavior to their own play.
During the iterative coding process, the annotators assessed the likely causes of disagreements. After resolving mistakes and other easy-to-resolve issues, the annotators determined that the remaining disagreements arose from individual differences in interpreting ambiguous natural-language responses. This cause means that neither annotator can be treated as more correct for disagreement resolution. The annotators, therefore, decided on the following disagreement resolution scheme. When a disagreement arises in at least one label for an item annotated by both annotators, we randomly choose an annotator to treat as correct and use their labels.
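A minimal sketch of this resolution scheme could look as follows; the fixed seed and the helper name are assumptions added only so the random resolution is reproducible.

```python
import random

rng = random.Random(7)  # fixed seed so the random resolution is reproducible

def resolve_item(labels_a, labels_b):
    """If the annotators disagree on any label for an item, randomly adopt one annotator's labels."""
    if set(labels_a) != set(labels_b):
        return rng.choice([labels_a, labels_b])
    return labels_a

final = resolve_item(["goal directed +"], ["goal directed +", "smoothness of movement +"])
```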
## 7. Results
We first present the results from our analysis described in Section 6.1; in particular, we demonstrate that our reward-shaping agent passes the HNTT while other agents do not. We then present the results from our analysis described in Section 6.2 by highlighting characteristic behaviors and key differences in how human judges perceive AI vs human players. We find that people tend to utilize
| Response | Free-Form Response | More Human-Like | Less Human-Like |
| --- | --- | --- | --- |
| B | The character in Video B runs in straight lines and goes to where he needs to be going. The character in Video A is running in circles, into objects, etc. | Smoothness of movement +; Goal directed + | Collision avoidance –; Goal directed – |

Table 2. Example coded response to the question, “Which video navigates more like a human would in the real world?”. The leftmost column indicates that this judge believed Video B to exhibit the more human-like behavior. The judge identifies that the more human-like character runs in straight lines (more human-like code: smoothness of movement +) and navigates to the goal (more human-like code: goal directed +), while the character that they believe is less human-like runs in circles (less human-like code: goal directed –) and into objects (less human-like code: collision avoidance –).
| Annotation Code | Shorthand | Definition | Key Words and Phrases | Example Snippet |
| --- | --- | --- | --- | --- |
| Smoothness of movement | smooth | The quality of the agent’s navigation or camera movement | Smooth, jerky, straight, swerve, steady, fluid | Movements are way more smooth |
| Goal directed | goal | How goal-directed the agent’s behavior seems | Intention, focus, knew where to go | Deliberate camera movements |
| Collision avoidance | avoidance | Whether the agent avoids collisions | Collide, avoid, crash, runs into obstacle | Runs into a box |
| Environment receptivity | receptivity | Whether the agent understands and/or properly interacts with the environment | Explore, stay on path, collect power-ups | Ignores all the health/mana/etc |
| Intuition | intuition | The judge cannot pinpoint behaviors; the behavior just seems to be more or less human-like | Natural, feeling | Just a feeling |
| Self-reference | self-reference | Relationship to the judge’s own movement or play | Like I play | [Like] how I navigate with that... view |

Table 3. Annotation code definitions. The codes used to label the free-form responses are presented in the leftmost column. The middle-left column shows the corresponding shorthand for the codes, used later in the paper. In the middle column, a brief definition of each code is presented. The middle-right column lists the keywords and phrases that the annotators used to determine if a response could be labeled as containing a particular code. An example snippet of a response that would be labeled with that code is provided in the rightmost column. Although the included examples are fairly clear, the free-form responses often contain more ambiguous content.
similar high-level characteristics when characterizing human-like behavior. However, their beliefs about AI capabilities may inform whether they think AI agents more or less strongly exhibit these characteristics.
### Analysis of Human Likeness
**Only the _reward-shaping_ agent passes the HNTT.** Table 5 shows the full summary statistics, which are computed over the full dataset from our survey. Each bootstrap calculation is run over 10000 iterations. The _symbolic_ and _hybrid_ baseline agents do not pass the HNTT according to our criterion. The judges had median accuracies of 0.83 (_symbolic_ agent, 95% CI=[0.67, 1.0]) and 0.83 (_hybrid_ agent, 95% CI=[0.83, 1.0]), indicating that they distinguish the agents from humans significantly higher than chance level. In contrast, our _reward shaping_ agent passes this test of human-likeness: the median accuracy has a 95% confidence interval that includes 0.5 (chance-level agreement). This result suggests that the judges cannot consistently differentiate between the _reward shaping_ agent and the human player (_reward shaping_ agent, median accuracy=0.50, 95% CI=[0.50, 0.50]).
Because the sample sizes of the trials differ (50 samples for the human vs. _hybrid_ and human vs. _symbolic_ conditions; 92 samples for the human vs. _reward-shaping_ condition), we validate our results by subsampling the data for the _reward-shaping_ agent to 50 samples, then run the bootstrap sampling procedure 100 times. We find that the computed CI always contains 0.5, or chance-level agreement, in each run of the bootstrap. The average median accuracy is 0.50, with a variance of 0.00; the averaged CI is [0.44, 0.63], with a variance of 0.01 for the lower bound and 0.00 for the upper bound. We, therefore, answer our **RQ 1**: the _reward-shaping_ agent is the only agent that is judged as human-like according to this proposed metric.
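A compact sketch of this validation procedure is given below; the per-judge accuracies are placeholders rather than the study data, and the seed is an assumption added for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder: 92 per-judge accuracies for the human vs. reward-shaping condition.
accuracies = rng.choice([1/3, 1/2, 2/3], size=92)

def median_ci(x, n_boot=10_000):
    meds = [np.median(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)]
    return np.quantile(meds, [0.025, 0.975])

contains_chance = []
for _ in range(100):                                       # 100 repetitions of the check
    sub = rng.choice(accuracies, size=50, replace=False)   # match the 50-sample conditions
    lo, hi = median_ci(sub)
    contains_chance.append(lo <= 0.5 <= hi)

print(f"CI contained 0.5 in {np.mean(contains_chance):.0%} of runs")
```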
**There is no relationship between game familiarity and ability to accurately assess the human likeness of the AI agents.** For each study, we perform a multiple linear regression analysis to test whether specific game familiarity and general game familiarity significantly predicted accuracy in assessing human-likeness. There is no relationship between either of the self-reported familiarities and accuracy for all agents. For the _symbolic_ agent, the fitted regression model was:
\[\begin{split}\text{accuracy}&=0.68-0.01(\text{ specific game familiarity})\\ &+0.03(\text{general game familiarity}).\end{split}\]
The overall regression was not statistically significant (\(R^{2}=0.01\), \(F(2,47)=0.21\), \(p=0.814\)). Decomposing the results further, neither specific game familiarity (\(\beta=-0.01\), \(p=0.878\)) nor general game familiarity (\(\beta=0.03\), \(p=0.525\)) predicted accuracy.
For the _hybrid_ agent, the fitted regression model was:
\[\begin{split}\text{accuracy}&=0.67-0.03(\text{ specific game familiarity})\\ &+0.06(\text{general game familiarity}).\end{split}\]
The overall regression was not statistically significant (\(R^{2}=0.06\), \(F(2,47)=0.26\), \(p=0.261\)). We found that specific game familiarity did not significantly predict accuracy (\(\beta=-0.03\), \(p=0.377\)). General game familiarity also did not significantly predict accuracy (\(\beta=0.06\), \(p=0.109\)).
Turning our attention to the _reward-shaping_ agent, the fitted regression model was:
\[\begin{split}\text{accuracy}&=0.40-0.02(\text{ specific game familiarity})\\ &+0.04(\text{general game familiarity}).\end{split}\]
The overall regression was again not statistically significant (\(R^{2}=0.02\), \(F(2,89)=0.94\), \(p=0.393\)). This result holds for both specific game familiarity (\(\beta=-0.02\), \(p=0.522\)) and general game familiarity (\(\beta=0.04\), \(p=0.191\)).
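The regression analysis itself can be reproduced with a few lines of statsmodels; the data frame below is a made-up stand-in for the per-judge accuracy and familiarity ratings, not the study data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-judge records: accuracy plus two self-reported familiarity ratings.
df = pd.DataFrame({
    "accuracy":             [0.83, 0.67, 1.00, 0.50, 0.67, 0.83],
    "specific_familiarity": [3, 1, 4, 2, 5, 3],
    "general_familiarity":  [4, 2, 5, 3, 5, 4],
})

model = smf.ols("accuracy ~ specific_familiarity + general_familiarity", data=df).fit()
print(model.rsquared, model.fvalue, model.f_pvalue)   # overall fit: R^2, F, p
print(model.params)                                   # fitted coefficients (betas)
print(model.pvalues)                                  # per-coefficient p-values
```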
| Annotation Code | Direction | Cohen’s \(\kappa\) (Agreement Sample) | Cohen’s \(\kappa\) (Overlapping Sample) |
| --- | --- | --- | --- |
| Smoothness of movement | More + | 0.90 | 0.79 |
| Smoothness of movement | Less - | 0.64 | 0.64 |
| Goal directed | More + | 0.82 | 0.78 |
| Goal directed | Less - | 0.82 | 0.63 |
| Collision avoidance | More + | 1.00 | 1.00 |
| Collision avoidance | Less - | 0.64 | 0.73 |
| Environment receptivity | More + | 0.82 | 0.93 |
| Environment receptivity | Less - | 0.73 | 0.67 |
| Intuition | | 1.00 | 0.87 |
| Self-reference | | 1.00 | 1.00 |
| **Average** | | **0.84** | **0.78** |

Table 4. Per-code Cohen’s \(\kappa\) score. The two annotators achieved an average Cohen’s \(\kappa\) score of \(0.84\) over all of the codes for the _agreement sample_. According to Cohen’s suggested interpretation, we achieve at least moderate agreement on each category and achieve almost-perfect agreement on 7 of the 10 categories when annotating the agreement sample. When annotating the _overlapping sample_, the two annotators achieved an average Cohen’s \(\kappa\) score of \(0.78\) over all of the codes. According to Cohen’s suggested interpretation, we achieve at least substantial agreement on each category. There was only a small overall decrease in agreement between these two settings, indicating that our coding process is fairly general.
| Agent | Median Accuracy (IQR) [95% CI] |
| --- | --- |
| _symbolic_ | 0.83 (0.67-1.00) [0.67, 1.00] |
| _hybrid_ | 0.83 (0.67-1.00) [0.83, 1.00] |
| _reward-shaping_ | 0.50 (0.33-0.67) [0.50, 0.50] |

| Agent | Median Uncertainty (IQR) |
| --- | --- |
| _symbolic_ | 2.17 (1.67-2.42) |
| _hybrid_ | 1.92 (1.33-2.25) |
| _reward-shaping_ | 2.17 (1.75-2.67) |

Table 5. Full summary statistics of accuracy and uncertainty. We show the median accuracy (IQR=Q1-Q3) for each agent, reported as non-parametric measures of central tendency and spread; we report the 95% confidence interval and median uncertainty (IQR=Q1-Q3) of the human-agent comparisons for each agent. Only the _reward-shaping_ agent passes the HNTT according to our proposed metric.
These findings suggest that game familiarity is generally _not_ predictive of accuracy for this specific task, answering **RQ 2**. In contrast, previous findings have demonstrated a relationship between the ability to assess human likeness and familiarity with the domain of study. We suspect that this result differs because we are studying a relatively simple setting, in which most people have strong priors about what constitutes human likeness. Navigating by walking or running is an activity that most people either perform or observe daily, meaning most people likely have a strong internal sense of human-like movement -- even if they are not familiar with games that require navigation. In contrast, we hypothesize that game familiarity would be predictive of accuracy in the full game setting, making the assessment of human likeness in more complex settings an important next step.
**Human judges exhibit less false confidence in their assessments of the _reward-shaping_ agent.** We assess the median uncertainties of participants; lower values correspond to more certainty and higher values correspond to less certainty. Table 5 depicts the results of this analysis. Participants reported similar levels of uncertainty when assessing our _reward-shaping_ agent (median=2.17, IQR=(1.75-2.67)) and the symbolic agent (median=2.17, IQR=(1.67-2.42)). In comparison, participants reported higher certainty when assessing the _hybrid_ agent (median=1.92, IQR=(1.33-2.25)).
People felt less confident about their assessments of the _symbolic_ and _reward-shaping_ agents compared to the _hybrid_ agent; however, participants more accurately detected human-generated behavior in the presence of the _symbolic_ and _hybrid_ agents. This result is surprising because it suggests that self-reported uncertainty and accurate assessments are not necessarily correlated. In other words, participants may exhibit false confidence in their ability to assess the human likeness of agents. We believe that participants may have been less certain about their assessments of the _symbolic_ agent due to differences in the lengths of the videos: on average, the videos of the symbolic agents were 8.3 seconds long, whereas the hybrid agent videos were 15.3 seconds long. Participants may have not had enough time with the agent to accurately assess it. Taken together, the accuracy and uncertainty results indicate that, when presented with behavior from the _reward-shaping_ agent, participants exhibited less false confidence in their assessment ability compared to when they were presented with behavior generated by the _hybrid_ agent. This result answers **RQ 3**.
### Analysis of Human-Like Characteristics
In all plots, we use the shorthand version of the codes, noted in Table 3, along with the + and \(-\) notation. The + and \(-\) notation indicates the degree, or direction, of the code. For example, smooth + indicates that the participant referenced more smooth movement, and smooth - indicates that the participant referenced less smooth movement.
**Human judges rely on similar high-level characteristics when assessing human-like behavior.** Figure 4 shows the codes that participants use to describe human-like and non-human-like behavior. We investigate the relative number of times a code was used compared to all codes used to describe either human-like or non-human-like behavior (human-like and non-human-like code proportions should sum to 1). Judges tend to rely on similar high-level characteristics when characterizing human-like behavior. Overall, they most often reference the following high-level codes: smoothness of movement, environment receptivity, and goal-directedness. When we decompose the responses based on whether the behavior was assessed as human-like or not, we find that people more frequently characterize human-like behavior as more smooth, receptive and responsive to the environment, and goal-directed. In contrast, participants more frequently describe non-human-like behavior as being less smooth, receptive and responsive to the environment, and goal-directed. They rely on intuition and self-reference to a similar degree when describing human-like and non-human-like behavior.
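The proportions plotted in Figure 4 can be computed along the lines of the sketch below; the label list is purely illustrative.

```python
from collections import Counter

# (code, was the described behavior judged human-like?); illustrative labels only.
labels = [("smooth +", True), ("goal +", True), ("receptivity +", True),
          ("smooth -", False), ("avoidance -", False), ("intuition", True)]

human = Counter(code for code, is_human in labels if is_human)
non_human = Counter(code for code, is_human in labels if not is_human)

human_prop = {c: n / sum(human.values()) for c, n in human.items()}              # sums to 1
non_human_prop = {c: n / sum(non_human.values()) for c, n in non_human.items()}  # sums to 1
```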
We investigated these responses based on agent type but did not find a difference between the resulting proportions of codes. This result supports the assertion that people may have relatively stable beliefs about what constitutes human-like behavior. Therefore, the rationale is only sometimes useful: in other words, looking for the jerkier agent only makes sense if the AI has not been designed to be less jerky than the person. We, therefore, conclude that, although people rely on different specific characteristics to determine human likeness, the general characteristics are relatively stable across different AI agents, which answers **RQ 4**.
**Human judges that more accurately assess human likeness exhibit different beliefs about characteristics than human judges that less accurately assess human likeness, despite relying on the high-level characteristics to similar degrees.** We divide the participants into two groups: high-accuracy
Figure 4. Codes used to describe human-like and non-human-like behavior. We compare the proportion of codes used to describe human-like and non-human-like behavior by human judges in their assessment of human likeness. People more frequently characterize human-like behavior as being more smooth, receptive and responsive to the environment, and goal-directed. In contrast, participants more frequently describe non-human-like behavior as being less smooth, receptive and responsive to the environment, and goal-directed.
(judges whose responses identified the human-generated video as more human-like more than 80% of the time) and low-accuracy (80% of the time or less). We examine which codes are more frequently used to describe human-likeness by the participants in each group. Figure 5 shows this decomposition. Although high- and low-accuracy judges generally rely on similar characteristics, they do so to different degrees. For example, both types of judges refer to the high-level code of smoothness of movement in 40% of their codes when describing human-like behavior. Similarly, they both refer to the high-level code of smoothness of movement in around 49% of their codes when describing behavior that they do not perceive as human-like. These results indicate that there is no difference in their tendency to rely on this characteristic to explain behavior. However, high-accuracy judges more commonly describe smooth motion when describing human-like behavior. In contrast, low-accuracy judges more often mention smoothness as a characteristic of behavior that is not human-like. This result further supports the idea that people's beliefs about AI capabilities may inform their assessments. In our case, the low-accuracy participants seem to share a similar belief that an AI agent is more capable than a person (by producing more smooth or "perfect" navigation).
Low-accuracy judges more often describe human-like agents as exhibiting more receptivity and responsiveness to their surroundings. A similar pattern emerges with less receptivity to justify non-human-likeness. This result indicates that low-accuracy participants may incorrectly attribute behaviors to interacting with the environment. As an example, a human judge that incorrectly identified Video A as being more human-like claims,
Video A takes the more obvious route to the finish while B takes the longest possible one. A human generally would take the easiest route.
Interestingly, both high- and low-accuracy judges utilize intuition and self-reference to a similar level of frequency when assessing human-like behavior. In combination with the previous results showing that assessments of human likeness are influenced by stereotyped beliefs about AI capabilities, this finding suggests that some participants have better intuition because it aligns with the actual capabilities of AI agents.
## 8. Discussion and Future Directions
Although conducted in a limited scope, our findings should assist with future work on designing and evaluating human-like agents.
### Limitations
Our study specifically evaluates the human-likeness of third-person perspective point-to-point navigation behavior in agents. Although this type of navigation is present in many settings, like pedestrian navigation in a driving simulator (Wang et al., 2018), there are many other forms of navigation that exist in both real-world and virtual environments. Each of these types presents unique challenges and requires different strategies for designing human-like behavior. Although our study does not address all types of navigation, it provides a valuable starting point for evaluating the human-likeness of agents in one specific type of navigation. The codes that we identify are general enough to provide a starting point for researchers to analyze different forms of navigation. For instance, collision avoidance is a general characteristic that is persistent in many domains featuring
Figure 5. Codes used to describe human-like and non-human-like behavior, further decomposed by high- and low-accuracy judges. We compare the proportions of codes that are used to describe human-like behavior (left) and non-human-like behavior (right). We further decompose these codes by high- and low-accuracy judges to determine whether individuals who are more accurate rely on different features to rationalize their decisions. Interestingly, we see that the judges rely on similar characteristics to different degrees.
diverse types of navigation, like driving and running. Future work should consider expanding these evaluations to provide a more comprehensive understanding of how to design agents that behave in a more human-like manner.
Additionally, the analysis of the free-form responses revealed that there were different interpretations of the human likeness question. Some judges related the movement directly to human navigation in the real world. One judge said,
In real life a human would almost certainly not jump down as far as the character in video A did without severely hurting themselves.
However, others related the movement to how human players would _control_ an agent in a video game. Another judge mentioned,
... in Video A, the player bumps into a wall briefly before readjusting. This is something humans do when they get distracted and look away for a moment.
To investigate this disagreement, we annotated the agreement sample with which interpretation of the question the subject answered: real-world human navigation, video game navigation, and unclear. The two annotators had a high agreement for this annotation (Cohen's kappa: \(\kappa=0.94\)). It was largely unclear which question the subjects were answering (40 out of 53 responses). However, 11 responses referred to _video-game_ navigation, while only 2 responses were clearly about real-world human navigation. We suspect that including the video game familiarity questions primed subjects to believe that the question was about video-game-specific navigation, rather than general human-like navigation. In future studies, we recommend that study designers clarify which question is being asked of participants, either by including an additional question that asks about the other interpretation of the question (providing an obvious contrast) or by describing the situation in which they would like participants to envision themselves.
### Designing and Evaluating Human-Like AI
Our study revealed that only the agent designed to display more human-like behavior passed our test of human likeness, highlighting the importance of explicitly incorporating these objectives when designing agents. However, determining what exactly constitutes human likeness requires careful consideration from designers. This assertion is further supported by the different interpretations of the human likeness question by the human judges. One interpretation of human likeness is acting as if the agent is controlled by a person, while the other refers to exhibiting more realistic behaviors. Both perspectives can be useful in different contexts.
When designers seek to automate parts of the development process, such as playtesting, it is more important to create agents that appear to be human-controlled. In automated playtesting of games [23, 68], AI agents that act like real users would enable video game designers to expedite the iterative development process while also alleviating the burden of game players to extensively evaluate new content. Users could provide feedback only after obvious bugs, like those related to movement, have been corrected, which may enhance their enjoyment of the feedback process. In _shared autonomy_[2], developing agents that behave like that user would enable a more seamless integration of semi-autonomous control with user inputs. For example, we observed that the judges called out strafing as an example of what a human would do in a video game. Strafing is a tactical, sideways maneuver that would not be performed by a person navigating in the real world. Incorporating these game-specific movements would likely increase the perception of the agent being human-controlled, especially by expert players. The creation of such agents would enable players who experience disruptions, like network issues, to still play cloud games [57]. When the system detects a disruption, it can take control and begin emulating human-like behavior. When the user can take back control, they can do so seamlessly. This can also be included as an option for players who desire in-game assistance for other reasons, such as mobility issues. Conversely, when the objective is immersion, producing more realistic navigation is essential.
In our study, we focused on producing more realistic navigation. To that end, we identified a set of high-level characteristics, such as smoothness of movement, that the judges relied on to assess human likeness. As a result, game AI designers can first focus on adjusting these characteristics. As we demonstrate with our _reward-shaping_ agent, these characteristics may be targeted using simple techniques and assessed with an _automated_ Turing test [16]. After handling the most frequently mentioned characteristics, designers can then focus on more fine-grained details, such as agents not walking in puddles, to reflect more real-world navigation.
Furthermore, we employed a _third-person_ Turing test where participants watched videos of the agents navigating. Although the ability to pause, rewind, and replay the videos provided a means of interrogation, it was based solely on observation, and lacked the intervention-based approach of a typical Turing test. Intervention-based approaches could include changing the camera perspective, adversarially interrupting the AI agent's intended path, and more. These forms of interaction may yield different insights.
There are some downsides, however, to deploying a more interactive test, particularly at scale. Recruiting a human evaluator and a human player to interact requires their simultaneous availability for real-time feedback. One solution is in-person studies, which can be challenging to scale and deploy. For instance, at the time of this study, we could not run in-person studies due to the ongoing global pandemic or distribute our proprietary game build to remote participants. Future work could take advantage of advances in game streaming, which may enable interactive remote studies with proprietary game builds. This solution can also incorporate previous work on simultaneous recruitment of participants [7, 82]. However, constructing the architecture to incorporate these different technologies may require significant engineering effort.
Importantly, previous work has demonstrated that the inclusion of more direct ways to interrogate the agent by embodying the player and agent in the same virtual space can lead to limited insight [78]. Indeed, work that included an in-game assessment of the human or bot introduced the side effect of an additional game mechanic, causing some players to prioritize either gameplay or the believability assessment [28]. This division of attention yields unreliable results, leading other researchers to adopt third-person variants of these assessments [5, 16, 70]. As a result, we believe that the following pipeline could be useful for evaluating human-like agents. Designers can initially deploy a third-person Turing test to evaluate the human likeness of specific behaviors. The resulting characteristics can then be used to design a set of
agents that exhibit different behavior that depends on the most common beliefs of the participants. For example, agents could move more smoothly if the participant believes smooth movement to be a feature of human likeness. Players could then choose the characters that they want to interact with in the game, which would enable them to tailor the game to their own subjective experience and enjoyment. This approach may offer more reliable insights into the effectiveness of the agent's design without sacrificing the integrity of the assessment process. It could also empower game players by enabling them to exert control over their experience.
### Toward More General Human-Like Agents
Although the specific agent created for our study may not generalize to different games, this is a common and open challenge in the field of AI (Han et al., 2017; Goyal et al., 2018). Instead, we offer suggestions for using feedback from designers and players (e.g., through user research) to train human-like agents more efficiently and effectively.
The differences in how more or less skilled human judges characterize human likeness suggest that different people have different interpretations of what constitutes human-like behavior. This supports the idea that the believability of NPCs in games is highly subject to the prior beliefs and expectations of the players. This finding aligns with the fundamental principle of _familiarity_ (Krishnan, 2017) that centers the real-world personal experience and knowledge of the user and implicates the importance of _player-centered_ design and customization (Sundundhi et al., 2017). Rather than producing monolithic human-like agents, we should strive to understand the beliefs of the player and tailor their experiences accordingly.
When moving to more complex settings, an additional difficulty is introduced. The evaluations of human likeness become even more subjective, varying based on individual differences and cultural factors (Sundhi et al., 2017). This result underscores the importance of involving diverse groups of people in the evaluation of AI agents to obtain a more comprehensive understanding of how people perceive these agents. In the context of games, this could look like utilizing participatory design methods (Sundhi et al., 2017) to involve game players in the design of the AI agents themselves. With the consent of the players, we could use techniques in the area of learning from human feedback (Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018), which provide additional channels for people to communicate what they want from AI agents. With these techniques, players can provide training data to the agents in the form of preferences over paired demonstrations generated by the agent, demonstrations of the desired behavior, and more. This can help to ensure that the AI agents are designed with the needs and preferences of diverse groups in mind.
This approach can also be used to help reduce the burden on video game designers: in complex domains, it is often challenging to specify reward signals by hand (Goyal et al., 2018; Sundhi et al., 2017; Sundhi et al., 2017). In part, this difficulty stems from the complexity of the desired behavior: as we have shown, human-like behavior is multi-faceted and necessitates optimizing over multiple objectives. Furthermore, it is sometimes challenging to write down exactly what we mean when specifying a task. For instance, how do we construct a reward signal that captures the task of _build a house in a video game in the same style as surrounding houses_? (Krishnan, 2017). When designing a reward signal for this task, we would need to encode what counts as a house, what components are most important to emulate in the style, and which structures count as houses. A person can quickly understand the intention of this instruction, but it is challenging to make explicit this implicit understanding.
As a result, an exciting avenue for future work involves developing more effective techniques for learning from people, evaluating user experiences of these techniques, and incorporating them into a flexible, user-friendly tool. This tool can also help extend this work to more general game settings. To more easily enable this line of work, assessments of human likeness could be incorporated into commonly-used game engines, like Unity (Han et al., 2017). This tool would enable game developers to easily evaluate the human likeness of their AI agents using metrics and benchmarks that have been validated in previous research. Additionally, this tool could contain libraries of pretrained _human-like_ AI agents, which developers could use as a starting point for their own work. For example, developers could utilize a pretrained human-like navigation agent to perform navigation but develop their own algorithm to use for different tasks. Using this tool could save developers time and effort by enabling them to quickly and easily create more believable and engaging agents to enhance the player experience.
## 9. Conclusion
In this work, we aimed to understand how people assess human likeness in human- and AI-generated behavior in the context of navigation in a 3D video game. Toward this goal, we designed and implemented a novel AI agent to produce human-like navigation behavior. We deployed a large-scale study of human-generated navigation behavior with three AI agents, including our novel _reward-shaping_ agent. We find that our proposed agent passes a Turing test, while the other two agents do not. We further investigated the justifications people provided when assessing these agents and found that people rely on similar higher-level characteristics when determining human similarity. In this context, we suspect that differences in the accuracy of assessing these agents are based more on fixed beliefs about the capabilities of AI systems rather than familiarity with the assessment domain of games. We conclude by discussing the limitations of the work, suggesting concrete design considerations for video game designers, and identifying a few critical areas for future research.
By highlighting design considerations and challenges, we hope that this paper will serve as a call for work that integrates perspectives and techniques from the HCI and AI communities. Building more general human-like agents requires careful design of both the agents and the evaluation protocol. Developing tools that can be incorporated into games and other settings enables quick iterations of these designs and the incorporation of these different techniques. At the highest level, we hope researchers can develop and evaluate agents that exhibit human-like behavior that improves human interaction with AI agents.
###### Acknowledgements.
We would like to thank Evelyn Zuniga, Guy Leroy, Mikhail Jacob, Mingfei Sun, and Dave Bignell for their contributions and feedback to an earlier study in this project. We would also like to thank Cecily Morrison, Youngseog Chung, and Max Meijer for their helpful
comments and feedback. We additionally thank the anonymous CHI reviewers for their detailed comments; the paper is significantly improved thanks to their suggestions. Co-author Fang is supported in part by NSF grant IIS-2046640 (CAREER).
|
2305.06088 | Building Interoperable Electronic Health Records as Purpose-Driven
Knowledge Graphs | When building a new application we are increasingly confronted with the need
of reusing and integrating pre-existing knowledge. Nevertheless, it is a fact
that this prior knowledge is virtually impossible to reuse as-is. This is true
also in domains, e.g., eHealth, where a lot of effort has been put into
developing high-quality standards and reference ontologies, e.g. FHIR1. In this
paper, we propose an integrated methodology, called iTelos, which enables data
and knowledge reuse towards the construction of Interoperable Electronic Health
Records (iEHR). The key intuition is that the data level and the schema level
of an application should be developed independently, thus allowing for maximum
flexibility in the reuse of the prior knowledge, but under the overall guidance
of the needs to be satisfied, formalized as competence queries. This intuition
is implemented by codifying all the requirements, including those concerning
reuse, as part of a purpose defined a priori, which is then used to drive a
middle-out development process where the application schema and data are
continuously aligned. The proposed methodology is validated through its
application to a large-scale case study. | Simone Bocca, Alessio Zamboni, Gabor Bella, Yamini Chandrashekar, Mayukh Bagchi, Gabriel Kuper, Paolo Bouquet, Fausto Giunchiglia | 2023-05-10T12:11:42Z | http://arxiv.org/abs/2305.06088v1 | # Building Interoperable Electronic Health Records as Purpose Driven Knowledge Graphs+
###### Abstract
When building a new application we are increasingly confronted with the need of reusing and integrating pre-existing knowledge. Nevertheless, it is a fact that this prior knowledge is virtually impossible to reuse _as-is_. This is true also in domains, e.g., _eHealth_, where a lot of effort has been put into developing high-quality standards and reference ontologies, e.g. FHIR1. In this paper, we propose an integrated methodology, called _iTelos_, which enables data and knowledge reuse towards the construction of _Interoperable Electronic Health Records_ (iEHR). The key intuition is that the _data level_ and the _schema level_ of an application should be developed independently, thus allowing for maximum flexibility in the reuse of the prior knowledge, but under the overall guidance of the needs to be satisfied, formalized as _competence queries_. This intuition is implemented by codifying all the requirements, including those concerning _reuse_, as part of a _purpose_ defined _a priori_, which is then used to drive a _middle-out_ development process where the application schema and data are continuously aligned. The proposed methodology is validated through its application to a large-scale case study.
Footnote 1: [https://hl7.org/fhir/](https://hl7.org/fhir/).
Keywords: Interoperable Electronic Health Record, Knowledge and Data Reuse, Knowledge Graphs.
## 1 Introduction
Once upon a time, one would design an application _top-down_ starting from the requirements down to implementation, without thinking of the data: they would be generated by the system in production with, at most, the need of initializing them with data from the legacy systems being substituted. Nowadays, more and more, we are designing systems which, at the beginning but also when in production, must be integrated with data coming from other systems, possibly from third parties. Some examples are health systems that integrate personal
data coming from multiple institutions and B2C (Business-to-Consumer) applications exploiting the big data available on the Web, e.g., open data or streaming data.
The key aspect of this reuse problem is how to handle the _semantic heterogeneity_ which arises any time there is the need to perform data integration across multiple sources [12]. This problem has been extensively studied in the past and two main approaches have been proposed. The first is using _ontologies_ to agree on a fixed language or schema to be shared across applications [6]. The second is the use of _Knowledge Graphs (KGs)_ and the exploitation of the intrinsic flexibility and extensibility they provide [15], as the means for facilitating the adaptation and integration of pre-existing heterogeneous data. However, the problem is still largely unsolved. When developing an application, no matter whether one exploits ontologies or KGs, it is impossible to reuse the pre-existing knowledge _as-is_. There is always some specificity which makes the current problem in need of dedicated development, with the further drawback that the resulting application is, again, hardly reusable.
We propose a general purpose methodology, called _iTelos_, whose main goal is to minimize as much as possible the negative effects of the above phenomenon. _iTelos_ exploits all the previous results, in particular, it is crucially based on the use of ontologies and KGs. At the same time, _iTelos_ takes a step ahead by providing a precise specification of the process by which an application should be developed, focusing on how to effectively reuse data from multiple sources. _iTelos_ is based on three key assumptions:
* the _data level_ and the _schema level_ of an application should be developed independently, thus allowing for maximum flexibility in the reuse of the prior data and schemas, e.g., ontologies, but under the overall guidance of the needs to be satisfied, formalized as _competence queries_;
* Data and schemas to be reused, as well as competence queries, should be decided before starting the development, as precisely as possible, and defined _a priori_ as part of an application _purpose_. Additionally, the purpose is assumed to specify a set of constraints specifying how much the satisfaction of each of its three elements is allowed to influence the satisfaction of the other two;
* the purpose should be used to drive a _middle-out_ development process where the successive evolutions of the application schema and data, both modeled as KGs, are continuously aligned _upwards_ (bottom-up) with the reference schemas to be reused and _downwards_ (top-down) with the data to be reused.
The three assumptions listed above, which are at the core of the _iTelos_ methodology, fit well with applications which arise in the _eHealth_ domain and in particular in the generation, integration and adaptation of _Interoperable Electronic Health Records_ (iEHRs for short). First of all, the clear separation between data and schemas is intrinsic to all health applications for at least two reasons. The first is the richness of standards which exist both at the schema level and at the data level (see, e.g., [2, 3]). Keeping them separated allows for the incremental
alignment of the KG modeling an iEHR. The second is that it allows for the incremental integration, first of the schemas and then of the data, the latter being much more critical since, unlike the schemas, the data are highly sensitive and subject to very strict privacy rules. Moving to the second assumption listed above, the need for correctness and precision in the _eHealth_ domain is supported by _iTelos_ through the definition of the application _purpose_. The definition of the purpose makes it possible to define precisely which data and schema resources have to be considered for the production of a KG able to represent, and exploit, an iEHR. Finally, the third assumption simply provides the guidelines for implementing the incremental generation of iEHRs, mentioned above, enabled by the separation between data and schemas.
The main goal of this paper is to report the lessons learned in the application of the _iTelos_ methodology in the Health domain and in particular in the generation of iEHRs, as part of the European project InteropEHRate2. This paper is organized as follows. In Section 2 we provide a description of the purpose and of how it is organized. In Section 3 we provide a highlight of the _iTelos_ middle-out process. In Section 4 we provide a detailed description of how _iTelos_ enables the reusability of the available data. In Section 5 we describe how _iTelos_ enables the sharability and future reusability of the application KG. In Section 6 we provide the main alignment metrics used to ensure that the _iTelos_ middle-out process stays within the constraints specified by the purpose. Section 7 describes how _iTelos_ has been adopted in the InteropEHRate EU project. Finally, Section 8 closes the paper with the conclusions.
Footnote 2: The details of the project can be found at the URL [https://www.interophehrate.eu/](https://www.interophehrate.eu/).
## 2 The purpose
The _iTelos_ process is depicted at a very high level in Figure 1. The logical sequence, represented by the dashed lines, shows the _User_ providing in input an informal specification of the problem she wants to solve, the _Purpose_, while receiving in output a KG, named, in Figure 1, the _Entity Graph_ (EG). The concrete
Figure 1: The _iTelos_ approach.
process is represented by the four solid lines, indicating how the purpose leads to the reuse of prior knowledge, represented at the data level as _Datasets_ and at the schema level as _Ontologies_, with the aim of building the Entity Graph.
The purpose, as the main input of the process, is composed of three elements:
* the functional requirements of the final application, in other words, the information that the final EG must be able to provide to satisfy the user's need. In the concrete example of the IEHR project, these requirements list the medical data to be included in an iEHR. We assume that such needs are formalized as a list of _competency queries (CQ)_[14].
* a set of datasets to be reused as already existing knowledge, thus, in turn, to be integrated into the final EG. A key assumption, as part of the overall _iTelos_ strategy, is to handle such datasets as KGs, thus facilitating their future reuse. In the IEHR project, four different hospital partners provided the datasets to be considered for the purpose.
* a set of existing well-known _reference schemas_, i.e., ontologies, but not only, to be reused in order to develop the EG's purpose-specific schema, which can then be shared for future applications. It is important to notice that well-known schemas are already available; for instance, LOV, LOV4IoT, and DATAHUB,3 are three among the most relevant repositories. However, for the IEHR project, the reference schemas of the datasets provided by hospitals have been mainly considered with a strong adoption of the FHIR reference ontology. In line with the reuse approach, to support the _iTelos_ process, a new repository, called _LiveSchema_,4 is under construction, where reference schemas are annotated by a very rich set of metadata, see, e.g., [5, 11], with the goal of automating as much as possible the _iTelos_ process. Footnote 3: See, respectively: [https://lov.linkeddata.es/](https://lov.linkeddata.es/), [http://lov4iot.appspot.com/](http://lov4iot.appspot.com/), [https://old.datahub.io/](https://old.datahub.io/).
Footnote 4: See: [http://liveschema.eu/](http://liveschema.eu/).
A crucial design decision in the structure of the purpose, which reflects the _iTelos_ process structure depicted in Figure 1, is the separation of the data and schema level, by keeping them distinct and independent, as well as modelled as two different types of KG. This major aspect allows splitting the problem of reusing the existing datasets during the integration process from the problem of developing a unique schema for the final EG which can be easily reused in future applications. The KGs considered for the data level, that we call _Entity Graphs (EGs)_, are graphs having _entities_ as nodes (e.g., my dog _Pluto_). The entities are composed of data property values used to describe them. The links in the data layer KGs are object properties describing the relations between any two entities. The schema level KGs, instead, are called _Entity Type (etype) Graphs (ETGs)_, namely graphs defining the schema of an EG. Therefore, for each EG there is a corresponding ETG which defines its schema. The nodes of an ETG are _etypes_, namely classes of entities (e.g., the class _dog_). Each etype is described (and actually, represented) by a set of data properties and by a set of object properties, defining the schema of each single etype and the possible relations among them, respectively.
_Datasets_ and _Ontologies_ depicted in Figure 1 are examples of EGs and ETGs, respectively. _iTelos_ uses EGs to represent: (i) the entity graph produced as the outcome of the whole process (see Figure 1) and, if they are available in this specific form, (ii) the input datasets. The ETGs, instead, are exploited to represent: (i) the schema of the final entity graph, (ii) the input reference schemas to be reused, as well as, after a formalization process, (iii) the functional requirements extracted from the initial competency queries. _iTelos_ maintains this uniformity of representation in order to exploit, for both data and schema layer, the high capacity of KGs to be composed of each other, as well as to be extended and adapted to different purposes. In line with the underlying approach, these KG features allow _iTelos_ to reuse not only existing knowledge but also produce reusable and interoperable data which, in turn, reduces the effort in building future applications.
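As a minimal illustration of the two levels, the sketch below encodes a toy ETG and a corresponding EG with rdflib; the example.org namespace and the dog/person example are assumptions made only for exposition and are not part of the iTelos tooling.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/")

# ETG (schema level): etypes, their data properties and object properties.
etg = Graph()
etg.add((EX.Dog, RDF.type, RDFS.Class))        # etype "Dog"
etg.add((EX.Person, RDF.type, RDFS.Class))     # etype "Person"
etg.add((EX.name, RDFS.domain, EX.Dog))        # data property describing Dog
etg.add((EX.ownedBy, RDFS.domain, EX.Dog))     # object property relating two etypes
etg.add((EX.ownedBy, RDFS.range, EX.Person))

# EG (data level): concrete entities with data-property values and links between them.
eg = Graph()
eg.add((EX.pluto, RDF.type, EX.Dog))
eg.add((EX.pluto, EX.name, Literal("Pluto")))
eg.add((EX.pluto, EX.ownedBy, EX.mickey))
```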
## 3 _iTelos_ as a process
Figure 2, freely adapted from [9], describes the key logical phases of the _iTelos_ process (dashed line in Figure 1). The process depicted in Figure 2 is articulated in four phases generating an Entity Graph starting from the initial Purpose. The objective of each phase can be synthetically described as follows:
* _Inception_: the purpose, provided in input by the user, is formalized into a set of CQs, as well as used to identify and collect a set of existing, reusable, datasets and ontologies;
* _Modeling_: a purpose-specific model of the ETG is built by taking into account CQs and datasets;
* _Knowledge Alignment_: a reusable ETG is built, based on the model designed previously, by reusing the selected reference ontologies;
* _Data Integration_: the final EG is built by integrating the input datasets into the ETG.
Figure 2: The _iTelos_ Process.
Let us analyze these phases in detail. The _Inception_ phase takes the purpose from the user, initially specified as a natural language description of the desired objective. The functional requirements are extracted from the purpose and formalized into a list of CQs. In the _Ontology Collection_ and _Dataset Collection_ activities, shown in Figure 2, the reusable datasets and ontologies, being part of the purpose, are matched with the CQs in order to select the most suitable resources to build up the final EG. The final collection of data and schema resources may be extended by resources initially not considered in the list provided by the initial purpose. The key observation is that matching CQs and (the schemas of the) datasets is crucial for the success of the project. Low coverage would mean that there are not enough data for the implementation of the EG, therefore a revision of the CQs, or a more refined data collection, is required. If the former is the case, the process we follow is inspired by the work in [13].
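A very rough sketch of this coverage check, using nothing more than label overlap, is shown below; the etype names, the dataset column names, and the 0.5 threshold are illustrative assumptions rather than part of the methodology.

```python
# Etypes/properties extracted from the CQs and the schema of a candidate dataset.
cq_etypes = {"patient", "encounter", "medication", "allergy"}
dataset_schema = {"patient", "patient_id", "encounter", "lab_result"}

matched = cq_etypes & dataset_schema
coverage = len(matched) / len(cq_etypes)   # share of CQ etypes the dataset can serve

if coverage < 0.5:                         # low coverage: revise CQs or collect more data
    print(f"insufficient coverage ({coverage:.0%}); refine the CQs or the data collection")
```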
The _Modeling_ phase receives in input the ontologies and datasets previously collected, as well as the CQ list. The main objective of this phase is to extract from the CQs a set of etypes, described by their relative properties, which are then used to build up the most suitable model for the ETG to be used as the schema of the final EG, which in Figure 2 is called the _ETG model_. The ETG model designed using the etypes and properties extracted from CQs can be then extended by extra etypes and properties suggested by the datasets. This extension is optional but suggested to allow for future expansions, as well as to increase the reuse of the available datasets. Notice that the availability of data would make this step low cost, in particular, if taken into account since the early stages. The _ETG Modeling_ activity aims to design the ETG model by shaping it as an EER diagram, in other words transforming the CQs into the ETG model. In parallel, the _Dataset Selection_ activity finalizes the selection of datasets, previously collected, by filtering out those that don't match the ETG model produced.
The _Knowledge Alignment_ phase takes as input the ETG model previously generated, plus the selected set of datasets and reference ontologies. The main objective of this phase is to create the ETG for the final EG. The key observation is that the ETG is built to be as much as possible reusable, thus in turn enhancing the shareability of the whole EG. To this end, the input ETG model is itself a possible solution; nevertheless, it fits the CQs too closely, by including definitions of etypes and properties less reusable for different purposes. To achieve the desired level of shareability, the more reused etypes and properties taken from the reference ontologies in input are used to build the ETG (called ETG in Figure 2), without losing the purpose-specific semantic structure designed in the ETG model. This step is performed by the _ETG Alignment_ activity, implemented via the Machine Learning algorithm described in [11]. The algorithm takes as input the set of reference ontologies and the ETG model while producing in output the final ETG. The resulting ETG is verified for compliance with the input datasets before the final approval. In parallel, the _Dataset Cleaning_ activity applies cleaning and formatting operations over the datasets in order to align data types and data formats with the ETG produced.
In the last phase, called _Data Integration_, the inputs considered are the ETG and the datasets previously cleaned and formatted. The objective of this phase is to build the EG by integrating together the schema and data resources handled along the previous phases, according to the initial purpose. A single activity is present in this phase, called _EG Generation_, which aims to merge the ETG and the datasets, by using a data mapping tool called _KarmaLinker_, which consists of the _Karma_ data integration tool [17] extended to perform Natural Language Processing on short sentences (i.e., what we usually call _the language of data_) [1]. Using the tool the data values, in the datasets, are mapped over the types and properties of the ETG. In this step, the final EG's entities are generated and, whenever they are discovered to be different representations of the same real-world entity, merged into a single valid representation. These activities are fully supported by Karmalinker. The process described in this phase is iteratively executed for the list of datasets selected in the previous phase, processed sequentially. The process concludes with the export of the EG into an RDF file.
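The project performs this mapping with KarmaLinker; purely as an illustration of the step, the sketch below maps one cleaned (and entirely hypothetical) dataset row onto ETG etypes and properties with rdflib and exports the result as RDF. The namespace, field names, and property names are assumptions made for exposition.

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/iehr/")

# One hypothetical, already cleaned record from a hospital dataset.
row = {"patient_id": "p-001", "given_name": "Anna", "systolic_bp": "120"}

eg = Graph()
patient = EX["Patient/" + row["patient_id"]]
eg.add((patient, RDF.type, EX.Patient))                      # map the row to an etype
eg.add((patient, EX.givenName, Literal(row["given_name"])))  # data property value

obs = EX["Observation/" + row["patient_id"] + "-bp"]
eg.add((obs, RDF.type, EX.Observation))
eg.add((obs, EX.value, Literal(row["systolic_bp"])))
eg.add((obs, EX.refersTo, patient))                          # object property linking entities

eg.serialize(destination="eg.rdf", format="xml")             # export the final EG as RDF
```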
The key observation is that the desired middle-out convergence, as described in the introduction, has been implemented in two separate sub-processes, executed in parallel within each phase, one operating at the schema level, the other at the data level (blue and green boxes in Figure 2). During this process, the initial purpose keeps evolving, building the bridge between CQs, datasets and reference ontologies. To enforce the convergence of this process, and also to avoid making costly mistakes, each phase ends with an evaluation activity (_Eval_ boxes in Figure 2). The specifics of this activity are described in Section 6. Here it is worth making two observations. The first is that this evaluation is driven by the non-functional requirements provided by the purpose. The second is that, within each phase, the evaluation aims to verify whether the target of that phase is met, namely: aligning CQs with datasets and ontologies in phase 1, thus maximizing the reuse of existing resources; aligning the ETG model with the datasets in phase 2, thus guaranteeing the success (purpose specificity) of the project; and aligning the ETG and ontologies in phase 3, thus maximizing EG shareability. The evaluation in phase 4 has the goal of checking that the final EG satisfies the requirements specified by the purpose. As shown in Figure 2, if the evaluation results don't satisfy acceptable thresholds in any of the steps in which an evaluation is expected, the process goes back to the evaluation step of the previous phase. In the extreme case of a major early design mistake, it is possible to go back from phase 4 to phase 1.
## 4 Data reuse in _iTelos_
During the Inception and Modeling phases, iTelos aims at enhancing the reuse of existing resources in order to reduce the effort in building the EG. The main goal of the first two phases, in this perspective, is to transform the specifications provided by the input purpose into the ETG model. The process achieves this objective by following five different steps, along the two phases:
1. initial formalization of the purpose into a list of natural language sentences, each informally defining a CQ;
2. extraction, from each CQ, of a list of relevant types, and corresponding properties, thus formalizing slightly more the initial purpose;
3. selection of the datasets whose schema matches the etypes extracted from the CQs in the previous step;
4. generation of the list of etypes and relative properties extracted from the dataset selected previously, by matching them with those extracted from the CQs;
5. construction of the ETG model by using the etypes, and properties, from the previous step.
Most of the work is done during the inception phase, which covers steps 1-4, while step 5 happens during the modeling phase, where the choices made during the previous phase are selected and exploited to build the ETG model. Nevertheless, if the modeling phase's evaluation doesn't produce the desired results, there is the opportunity of backtracking in order to fix wrong choices made during the inception phase. With the aim of enhancing the reusability, the key idea is to classify the three types of resources (CQs, ontologies, datasets) handled during the first two phases, into three categories defining how reusable such resources are. Moreover, along the execution of the two phases, the resources are handled through a series of three iterative executions each corresponding to a specific category, following a decreasing level of reusability. The categories are defined as follows (see [9] for a first description of this specific three-level categorization):
_Common_: the resources classified in this category are those used to express aspects that are common to all domains, even outside the purpose-specific domain of interest. Usually, these knowledge resources correspond to abstract etypes specified in _upper-level ontologies_[7], e.g., _person_, _organization_, _event_, _location_, or even etypes from very common domains, usually needed in most applications, e.g., _Space_ and _Time_. Etypes and properties classified as common by _iTelos_ correspond to what in knowledge organization are called _Common Isolates_[18]. Moreover, _iTelos_ classifies as common data resources those that are provided by Open Data sites.
_Core_: the resources classified in the core category express the core aspects of the purpose-specific domain of interest. The information carried by these resources is fundamental: given its relevance to the purpose, it would not be possible to develop the EG without it. Consider for instance the following purpose:
"_There is a need to support the health of European citizens by opening up new ways to access their health data where needed (independently of the specific country's healthcare system). To this end, interoperable health data needs to be produced, by integrating local data from different countries, thus represented through different medical standards and languages._"5
In this example, core resources could be those data values reporting a patient's health information, like medication details, drugs, and medical codes (e.g., health interoperability standards of various types). Examples of common resources, instead, are general information about the patient, like name, surname and date of birth, as well as upper-level ontologies that can be found in the repositories mentioned above. It is important to notice that, in general, data are harder to find than ontologies, in particular when they concern sensitive sectors like healthcare, where personal data is strongly protected.
_Contextual_: the resources classified in this category carry, possibly unique, information of the purpose-specific domain of interest. While the core resources contribute towards having a meaningful application, the contextual ones create added value in the EG developed, by making explicit the difference with respect to the competitors. In the example above, the resources classified as contextual can be the translations of health data which need to be included in the output interoperable health data in order to be exploited in different countries. At the schema level, contextual etypes and properties are those which differentiate the ontologies which, while covering the same domain, actually present major differences. [10] presents a detailed quantitative analysis of how to compare these ontologies. Data-level contextual resources are usually not trivial to find, given their specificity and intrinsic nature. In the various applications that we have developed in the past, this type of data has turned out to be a new set of resources that had to be generated on purpose for the application under development, in some cases while in production. In the above example, some contextual data resources are the mappings between different medical standard codes. They are not always available, with the direct consequence that the mappings have to be produced on purpose by hand.
The overall conclusive observation is that the availability, and thus the reusability, of resources, and of data in particular, decreases from common to contextual category, or in other words from more infrastructural data to more application-specific data. Moreover, as described in [9], the decrease in reusability goes in parallel with the increase of pre-processing required to create and handle more contextual data.
Let us see an example of how the five elaboration steps listed at the beginning of the section intermix with the three categories above. Table 1 shows some CQs, extracted from the case study introduced above (step 1). Notice how, already in
\begin{table}
\begin{tabular}{|c|l|l|l|} \hline Number & Question & Action & Category \\ \hline
1 & Which is the patient’s general information? & Return the interoperable Patient Summary & Common \\ \hline
2 & Which are the medical information for a patient’s medication? & Return the medication information & Core \\ \hline
3 & Which is the international version of an Italian medication? & Return multilingual intermediate medication information & Contextual \\ \hline \end{tabular}
\end{table}
Table 1: CQs categorized in the three reusability categories.
this step, CQs are categorized as being common, core or contextual (last column) and how this is done after transforming the text from the purpose (left column) into something much closer to a requirement for the EG (central column). Table 2 then reports the etypes and properties extracted from the CQs (step 2). While this is not reported in the table for lack of space, these etypes and properties inherit the category from the CQs. Notice that it is possible to have an etype that is core or common or contextual with properties in all three categories; this being a consequence of the fact that an etype can be mentioned in multiple (types of) CQs.
The next step is to select the datasets (step 3). Let us assume that the dataset used to (partially) answer the queries in Table 1 generates the properties matching the CQs as in Table 3 (step 4). As an example of matching, compare the attribute _CD-ATC_ in Table 3 with the property _Code_value_ of CQ n. 2 in Table 2. Notice how the ordering of the analysis (from common to core to contextual) creates dependencies that may drive the choice of one dataset over another. As an example, _Medication_dosage_instruction_ might be needed as a contextual property, but to properly define it, we also need the core etype _Medication_. Analysing the required resources following such an order over the three reusability categories implies a larger usage of the more reusable resources. Concluding the process of the first two _iTelos_ phases, step 5 builds the ETG model, reported in Figure 3, containing etypes and properties from the tables above.
\begin{table}
\begin{tabular}{|c|l|l|} \hline CQ Number & Etypes & Properties \\ \hline
1 & Patient, Vital\_signs, Care\_plan & Patient\_identifier, Name, Surname, Date\_of\_birth, Blood\_pressure, Care\_plan\_category \\ \hline
2 & Medication, Drug & Medication\_subject, Medication\_date, Drug\_identifier, Coding\_system, Code\_value \\ \hline
3 & Medication, Translation & Target\_language, Source\_language, Medication\_dosage\_instruction, Medication\_text\_note \\ \hline \end{tabular}
\end{table}
Table 2: Etypes and properties from the CQs.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Attributes & Description & Type & Category \\ \hline ID-PATIENT & identifier of a patient into the dataset & string & Common \\ \hline firstname & name of a patient into the dataset & string & Common \\ \hline familyname & surname of a patient into the dataset & string & Common \\ \hline CD-ATC & drug’s medical code specified in the dataset & string & Core \\ \hline beginmoment/date & date of the medication specified in the dataset & date-time & Core \\ \hline content/text & textual information about a medication & string & Contextual \\ \hline \end{tabular}
\end{table}
Table 3: Dataset’s attributes classified according to the reusability categories.
## 5 Data sharing in _iTelos_
In the knowledge alignment phase, the main objective is to enhance the shareability of the EG to be produced, by building an ETG that can be shared for different future purposes. The approach followed by _iTelos_ to achieve such shareability is to build the ETG by reusing as much as possible the etypes and properties coming from well-formed standard reference ontologies, but under the overall guidance of the ETG model previously built. While the ETG model keeps the generation of the ETG focused on the initial purpose, the exploitation of reference ontologies improves the possibility of sharing it across the different domains in which such ontologies are already involved. The key observation is that the alignment mainly concerns the common and, possibly, the core etypes, with much smaller expectations on contextual etypes. Notice that, in retrospect, the alignment with the most suitable ontology can enable the reuse of the data produced. As an example, the selection of FHIR [6] as the reference ontology for medical data ensures the compliance of the ETG produced with a huge amount of healthcare information which is already structured using FHIR. This type of decision should be made during the inception phase; if discovered here, it might generate backtracking.
_iTelos_ implements the construction of a shareable ETG by adapting the _Entity Type Recognition (ETR)_ process proposed in [11]. This process happens in three steps. The first step is the _selection of ontologies_. This step aims at selecting the set of reference ontologies that best fit the ETG model. As from [11], this selection step occurs by measuring each reference ontology according to two metrics, which allow:
* to identify how many etypes of the reference ontologies are in common with those defined in the ETG model, and
* to measure a property shareability value for each ontology etype, indicating how many properties are shared with the ETG model etypes.
The output of this first step is a set of selected ontologies that best cover the ETG model, and that have been verified to fit the datasets' schemas, at both the etype and etype property levels. The second step is the _Entity Type Recognition_ (ETR). The main goal of this step is to predict, for each etype of the ETG model, which etype of the previously selected ontologies, analyzed one at a time, best fits the ETG. In practice, the ETG model's etypes are used as labels of a classification task. Such execution, as mentioned in [11], exploits techniques that are very similar to those used in ontology matching (see, e.g., [8]). The final result of this step is a vector of prediction values, returning a similarity score between the ETG model's etypes and the selected ontology etypes. The third step is the _ETG generation_. This step identifies, by using the prediction vector produced in the previous step, those etypes and properties from the reference ontologies which will compose the final version of the ETG. It is important, in this activity, to preserve the mapping with the datasets' schemas; whenever this is problematic, this becomes a possible source of backtracking.
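As an illustration of the two selection metrics mentioned above, the following is a minimal toy sketch (not the ETR algorithm of [11]) that scores a reference ontology against the ETG model, under the simplifying assumption that etypes and their properties are represented as plain string sets.

```python
# Toy scoring of a reference ontology against the ETG model (not the ETR code of [11]).
# Both inputs map an etype name to the set of its property names.
def etype_overlap(etg_model, ontology):
    """Number of ETG-model etypes also defined in the reference ontology."""
    return len(etg_model.keys() & ontology.keys())

def property_shareability(etg_model, ontology):
    """For each shared etype, the fraction of its ETG-model properties found in the ontology."""
    shared = etg_model.keys() & ontology.keys()
    return {e: len(etg_model[e] & ontology[e]) / len(etg_model[e])
            for e in shared if etg_model[e]}

etg = {"Patient": {"name", "surname", "date_of_birth"}, "Medication": {"code_value", "date"}}
candidate = {"Patient": {"name", "birth_date"}, "MedicationStatement": {"code_value"}}
print(etype_overlap(etg, candidate))           # 1
print(property_shareability(etg, candidate))   # {'Patient': 0.333...}
```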
The distinction, among common, core and contextual types and properties, as well as their processing order, plays an important role in this phase and can be articulated as follows:
* The common types should be adopted from the reference ontology, in percentage as close as possible to 100%. This usually results in an enrichment of the top level of the ETG model by reusing existing representations of general cross-domain types (e.g., _thing_, _product_, _event_, _location_) that are usually less considered by developers due to their abstract nature, but which are fundamental for building an ETG, having all properties positioned in the right place, that can be shared among different domains and for different purposes. Moreover, this allows for a cross-domain alignment of those _common isolates_ (see Section 4) for which usually a lot of (open) data are publicly available (e.g., _street_);
* The core types are tentatively treated in the same way as common etypes, but the results highly depend on the reference ontologies available. Think for instance of the FHIR example above;
* Contextual etypes and, in particular, contextual properties are those most difficult to be found in existing reference ontologies. For this reason, such etypes and properties are mainly used for the high-level selection among the available ontologies, more than for the selection of single etypes inside those ontologies [11].
Figure 4: Portion of the _FHIR_ reference ontology.
Figure 3: Portion of ETG model.
As an example of the process described above, compare the portion of the ETG model from Figure 3 with the portion of the _FHIR_ reference ontology in Figure 4.7 As can be noticed, the _FHIR_ etypes _MedicationStatement_, _Medication_ and _Patient_ can be matched to the ETG model etypes _Medication_, _Drug_ and _Patient_, respectively. As a consequence of such a matching, _iTelos_ adopts the FHIR etypes and properties to compose the final version of the ETG. It is important to notice how the contextual etype, called _Translation_ in the ETG model, does not have any reusable counterpart in the FHIR reference ontology. For this reason, such a purpose-specific etype is taken from the ETG model to be part of the final ETG.
Footnote 7: In the InteropEHRate project, reported as a use case in this paper, the matching between _FHIR_ and the ETG model has been done manually.
## 6 Alignment Evaluation
The evaluation activities implemented in the _iTelos_ process (see Section 3) are based on a set of metrics applied to the intermediate outputs of the different phases, such as CQs, ETG and datasets, as well as to the final EG. For lack of space, below we describe only the application of three metrics to the specific phases in which they are exploited. In order to describe the evaluation metrics, let \(\alpha\) and \(\beta\) be two generic _element_ sets, where an element can be an etype or a property. We have the following:
_Coverage_. This metric is used to measure the overlap between two element sets. In detail, it analyzes how many etypes and properties can be found in both sets. Such an evaluation is computed as the ratio between the size of the intersection of \(\alpha\) and \(\beta\) and the size of the whole \(\alpha\).
\[Cov=|\alpha\cap\beta|\,/\,|\alpha| \tag{1}\]
For instance, during the inception phase (_Eval(a)_ in Figure 2), \(Cov\) plays a central role in evaluating the _reusability_ of potential datasets (via dataset schemas) with respect to the CQs. For each dataset, a high value of \(Cov\), both applied to etypes and properties, implies that the dataset is highly appropriate for the purpose. A low value of \(Cov\) implies minimal overlap between the purpose and the dataset, the consequence being non-consideration of the dataset for reuse, and possible modification of the (underspecified) CQs.
_Extensiveness_. This metric quantifies the proportional amount of knowledge provided by any element set (such as \(\beta\)), in terms of sets of etypes or sets of properties, with respect to the entire knowledge considered (here \(\alpha\) and \(\beta\))
\[Ext=(|\beta|-|\alpha\cap\beta|)\,/\,(|\alpha|+|\beta|-|\alpha\cap\beta|) \tag{2}\]
During the Modeling phase (_Eval(b)_ in Figure 2), the evaluation utilizes \(Ext\) to measure how much the ETG model extends the set of CQs, with the objective of building the most suitable model for the purpose. To that end, a high value of \(Ext\) indicates that the ETG model extends the scope of the CQs, i.e., that the CQs have contributed little to generating the ETG model. On the other hand, low values of \(Ext\) indicate that the CQs have contributed significantly towards the construction of the ETG model.
_Sparsity_. This metric quantifies the element-level difference between any number of similar element sets and is defined as the sum of the percentage of \(\alpha\) not in \(\beta\), and vice versa.
\[Spr=(|\alpha|+|\beta|-2\,|\alpha\cap\beta|)\,/\,(|\alpha|+|\beta|-|\alpha\cap\beta|) \tag{3}\]
In the knowledge alignment phase (_Eval(c)_ in Figure 2), our principal focus is to utilize \(Spr\) for ensuring the _shareability_ of the ETG. We incrementally enforce shareability by ensuring a required threshold of \(Spr\) between the ETG and each of the reference ontologies. Such a threshold indicates that the ETG contains axioms reflective of _contextual knowledge_. Nevertheless, this evaluation aims at maximizing the adoption of the reference ontologies' knowledge for common and core elements.
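The three metrics translate directly into code once \(\alpha\) and \(\beta\) are represented as sets of elements; the following is a minimal sketch in which the expressions of Eqs. (1)-(3) are read as cardinalities.

```python
# Minimal sketch of the three schema-level metrics, with alpha and beta as sets of
# elements (etypes or properties); |.| in Eqs. (1)-(3) becomes len(.).
def coverage(alpha: set, beta: set) -> float:
    return len(alpha & beta) / len(alpha)

def extensiveness(alpha: set, beta: set) -> float:
    return (len(beta) - len(alpha & beta)) / len(alpha | beta)

def sparsity(alpha: set, beta: set) -> float:
    return (len(alpha) + len(beta) - 2 * len(alpha & beta)) / len(alpha | beta)

cq_elements = {"Patient", "Medication", "Drug", "Code_value"}
dataset_elements = {"Patient", "Medication", "CD_ATC", "beginmoment"}
print(coverage(cq_elements, dataset_elements))       # 0.5
print(extensiveness(cq_elements, dataset_elements))  # 2/6 ≈ 0.33
print(sparsity(cq_elements, dataset_elements))       # 4/6 ≈ 0.67
```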
All the above metrics operate at the schema level, but they do not say anything about the results of integrating the datasets, which are affected by the semantic heterogeneity existing among them. In order to describe the data-level evaluation criteria adopted by _iTelos_, let us assume the following situation. Using _KarmaLinker_, we are going to integrate a new dataset \(D_{1}\) into a partially built EG. Moreover, \(D_{1}\) has an etype \(E_{1}\), with its property set \(A_{1}\), and the EG already contains an etype \(E_{2}\), with its property set \(A_{2}\). Then we have the following possible situations (a schematic sketch of the corresponding checks is given after the list):
* (\(E_{1}=E_{2}\)) The etype \(E_{1}\) in \(D_{1}\) is already integrated in the EG. As a consequence of integrating \(D_{1}\) into the EG, there is an increase in the number of entities represented through the etype \(E_{1}\), and the EG, after the integration, will be enriched by the connections of the new entities. Nevertheless, this situation includes two different sub-cases: (i) When \(A_{1}=A_{2}\), i.e., the two etypes share the same set of properties, conflicts (different values for the same property) are possible between the value set of \(A_{1}\) and \(A_{2}\). In this case, _iTelos_ aims at identifying how many of such conflicts appear during the integration, as well as the number of properties remaining without a value (null or insignificant value) as a result of the integration. (ii) In the second sub case, i.e., \(A_{1}\neq A_{2}\), the two etypes have different sets of properties. As a consequence, there are no conflicts between the value set of \(A_{1}\) and \(A_{2}\), and we obtain a greater integration over the entities of \(E_{1}\).
* (\(E_{1}\neq E_{2}\)) The etype \(E_{1}\) in \(D_{1}\) is not yet present in EG. The consequence is that by integrating \(D_{1}\) into the EG, we are increasing the number of etypes (and of the entities of such an etype) in the final EG. Once again, in this situation, we can differentiate two sub-cases: (i) When \(E_{1}\) and \(E_{2}\) are linked by at least one object property, the resulting EG, after the integration of \(D_{1}\), will be a connected graph. In this sub-case, _iTelos_ aims at evaluating the level of connectivity of the graph by identifying how many entities of \(E_{1}\) have not null values for the object properties linking \(E_{1}\) with the rest of the EG's entities. In the second sub-case, there are no object properties linking
\(E_{1}\) and \(E_{2}\) (or \(E_{1}\) with any other type in EG). As a consequence, the EG after the integration of \(D_{1}\), will not be connected and the information carried by \(D_{1}\) cannot be reached navigating the EG. Therefore, the integration of \(D_{1}\) doesn't increase the EG's connectivity.
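Referring to sub-case (i) of the first situation above (\(E_{1}=E_{2}\) with \(A_{1}=A_{2}\)), a schematic check of conflicts and missing values during the merge could look as follows; records are represented as plain property-value dictionaries, and the precedence rule (keep the EG value) is an illustrative assumption.

```python
# Schematic data-level check for E1 = E2 with A1 = A2: two records describing the same
# real-world entity are merged property by property (illustrative precedence: keep the EG value).
def merge_report(rec_eg, rec_new):
    conflicts, nulls = 0, 0
    merged = {}
    for prop in rec_eg.keys() | rec_new.keys():
        v_eg, v_new = rec_eg.get(prop), rec_new.get(prop)
        if v_eg not in (None, "") and v_new not in (None, "") and v_eg != v_new:
            conflicts += 1                      # different values for the same property
        merged[prop] = v_eg if v_eg not in (None, "") else v_new
        if merged[prop] in (None, ""):
            nulls += 1                          # property still without a significant value
    return {"merged": merged, "conflicts": conflicts, "nulls": nulls}

eg_entity  = {"name": "Maria", "blood_pressure": "120/80", "care_plan": None}
new_entity = {"name": "Maria", "blood_pressure": "130/85", "care_plan": None}
print(merge_report(eg_entity, new_entity))      # 1 conflict, 1 null property
```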
The data-driven criteria briefly introduced above are crucial for the evaluation of the quality of the final EG. At the moment, these characteristics of the EG have been evaluated by considering the above criteria during the process, as well as over the final EG. However, a more precise set of metrics for the evaluation of the quality of the final EG is under definition.
## 7 The InteropEHRate case study
The _iTelos_ methodology has been applied to the InteropEHRate EU project. The objective of the project was to keep European citizens fully in control of their own medical data. To achieve this goal, the partners put their focus on two key aspects:
1. To support the citizens and healthcare practitioners through a cross-country digital health infrastructure, composed of mobile applications as well as hospital and third-party digital services [16]. Nevertheless, such an infrastructure is not sufficient to completely achieve the objective of the project.
2. The second key aspect considered in the project was the production and exploitation of interoperable Electronic Health Records (iEHRs). An iEHR is a medical data resource that can be exploited by citizens and healthcare practitioners (as well as researchers), regardless of the European country they are in, even if that country is different from the one they usually live in. In detail, the iEHR aims at aligning the heterogeneity currently present in the medical data of different countries (or even within a single country) into a single multilingual data format (adopting the FHIR medical standard), easily exploitable in Europe thanks to the infrastructure provided by the project.
In the context described above, _iTelos_ has been exploited for the integration of local medical data coming from different European countries. In detail, some test sets of local data provided by four different hospitals located in Italy, Belgium, Romania, and Greece, respectively, have been used as input for the methodology, together with their reference schemas. The main problem to be solved, dealing with such local data, was the high level of heterogeneity between the data provided. A concrete example was the differences between the data provided by the Italian hospital, expressed using the HL7-CDA standard 8, and the Belgian hospital's data expressed through a Belgian medical standard called _SumEHR9_. The CDA data are structured in XML format, while the SumEHR are instead JSON files using different attributes to express the same medical information.
Another major difference between the two kinds of data concerns the medical coding system adopted by the different standards. While the CDA supports the LOINC 10 codes, the SumEHR uses a local coding system not recognized outside Belgium. Moreover, as described in [3], another issue to be considered in such a context is the natural language heterogeneity, leading to the need to produce multilingual iEHRs. As the main input regarding the schema layer, the FHIR medical ontology has been adopted as the reference ontology for the construction of an ETG structuring a KG able to maintain iEHRs. _iTelos_ has been applied in the project as a semi-automatic process for building a KG able to maintain an interoperable and multilingual version of the local information provided by the involved hospitals. The data extracted from such a KG form iEHRs satisfying the project's requirements.
Footnote 10: [https://loinc.org/](https://loinc.org/)
During the final phase of IEHR, the project's pilots have been executed to test, in real-case scenarios, whether the project's results were able to meet the initial requirements. From the partners involved in the pilots, we collected feedback, which helped us to understand some strengths and weaknesses of the methodology applied in the IEHR context. Such feedback is reported below:
* _(Strength)_ The methodology, thanks to its focus on the reuse of existing data, helped in discovering data heterogeneity, over the medical data, between different countries, as well as in finding solutions to make such data interoperable within Europe. During the project, some suggestions, about how to improve the FHIR standard for a better level of interoperability, have been submitted to the HL7 association.
* _(Strength)_ The mappings between local and international knowledge and data resources are onerous to bootstrap but lightweight to maintain. In other words, from a methodological point of view, a good modeling phase is hard to execute, but it leads to KGs that are easy to maintain and evolve. This reinforces the reuse-and-share approach adopted by _iTelos_ in building KGs.
* (_Weakness_) The input standard knowledge and the tool support provided by _iTelos_ are necessary but not sufficient. In the healthcare domain, the need for precision and correctness in producing and converting data from one context to another (local to international) always requires human supervision.
* (_Weakness_) The required human intervention limits the scalability of systematic approaches for KG building. To this end, the _iTelos_ methodology needs to be improved in order to provide the highest level of (semi-)automation with the least human intervention required.
## 8 Conclusion
In this paper we have introduced _iTelos_, a novel methodology for the creation of purpose-specific KGs, adopting a _circular_ development process. By this, we mean that the implicit goal of _iTelos_ is to enable the development of KGs via the _reuse_ of already existing resources, thereby reducing the effort in building new (KG-based) data which can, in turn, be highly _reused_ by other applications in the future. Further, we have also described how _iTelos_ has been used in the context of the InteropEHRate EU project, with the objective of producing multilingual iEHRs.
## Acknowledgements
The research described in this paper was supported by the InteropEHRate project, a project of the EC Horizon 2020 programme, grant number 826106. We thank all the people from the University of Trento who supported us in the execution of this project, in particular: Danish Asghar Cheema, Ronald Chenu Abente. The acronym IEHR from the InteropEHRate project has been freely adapted in this paper as iEHR, which stands for _interoperable Electronic Health Records_.
|
2307.10093 | Revisiting invariances and introducing priors in Gromov-Wasserstein
distances | Gromov-Wasserstein distance has found many applications in machine learning
due to its ability to compare measures across metric spaces and its invariance
to isometric transformations. However, in certain applications, this invariance
property can be too flexible, thus undesirable. Moreover, the
Gromov-Wasserstein distance solely considers pairwise sample similarities in
input datasets, disregarding the raw feature representations. We propose a new
optimal transport-based distance, called Augmented Gromov-Wasserstein, that
allows for some control over the level of rigidity to transformations. It also
incorporates feature alignments, enabling us to better leverage prior knowledge
on the input data for improved performance. We present theoretical insights
into the proposed metric. We then demonstrate its usefulness for single-cell
multi-omic alignment tasks and a transfer learning scenario in machine
learning. | Pinar Demetci, Quang Huy Tran, Ievgen Redko, Ritambhara Singh | 2023-07-19T16:00:29Z | http://arxiv.org/abs/2307.10093v1 | # Revisiting invariances and introducing priors in Gromov-Wasserstein distances
###### Abstract
Gromov-Wasserstein distance has found many applications in machine learning due to its ability to compare measures across metric spaces and its invariance to isometric transformations. However, in certain applications, this invariance property can be too flexible, thus undesirable. Moreover, the Gromov-Wasserstein distance solely considers pairwise sample similarities in input datasets, disregarding the raw feature representations. We propose a new optimal transport-based distance, called Augmented Gromov-Wasserstein, that allows for some control over the level of rigidity to transformations. It also incorporates feature alignments, enabling us to better leverage prior knowledge on the input data for improved performance. We present theoretical insights into the proposed metric. We then demonstrate its usefulness for single-cell multi-omic alignment tasks and a transfer learning scenario in machine learning.
## 1 Introduction
Optimal transport (OT) theory provides a fundamental tool for comparing and aligning probability measures omnipresent in machine learning (ML) tasks. Following the least effort principle, OT and its associated metrics offer many attractive properties that other divergences, such as the popular Kullback-Leibler or Jensen-Shannon divergences, lack. For instance, OT borrows key geometric properties of the underlying "ground" space on which the distributions are defined [1] and enjoys non-vanishing gradients in case of measures having disjoint support [2]. OT theory has also been extended to a much more challenging case of probability measures supported on different metric-measure spaces. In this scenario, Gromov-Wasserstein (GW) distance [3] seeks an optimal matching between points in the supports of the considered distributions by using the information about the distortion of intra-domain distances after such matching. Since its proposal by Memoli [3] and further extensions by Peyre _et al_[4], GW has been successfully used in a wide range of applications, including computational biology [5; 6; 7; 8; 9; 10; 11], generative modeling [12], and reinforcement learning [13; 14].
**Limitations of prior work.** Successful applications of GW distance are often attributed to its invariance to distance-preserving transformations (also called isometries) of the input domains. Since GW considers only intra-domain distances, it is naturally invariant to any transformation that does not change them. While these invariances can be a blessing in many applications, for example, comparing graphs with the unknown ordering of nodes, they may also become a curse when one must choose the "right" isometry from those for which GW attains the same value. How would one break such ties while keeping the attractive properties of the GW distance? To the best of our knowledge, there are no prior works addressing this question.
Additionally, GW distances are often used in tasks where one may have some _a priori_ knowledge about the mapping between the two considered spaces. For example, in single-cell applications, mapping a group of cells in similar tissues across species helps understand evolutionarily conserved and diverged cell types and functions [15]. This cross-species cell mapping, when performed using OT, may benefit from the knowledge about an overlapping set of orthologous genes 2. GW formulation does not offer any straightforward way of incorporating this knowledge, which may lead to its suboptimal performance in the above-mentioned tasks.
Footnote 2: Genes that diverge after a speciation event from a single gene in a common ancestor, but their main functions, as well as the genetic sequences, are largely conserved across the different species.
**Our contributions.** In this paper, we aim to address the drawbacks of the GW distance mentioned above. We propose to augment GW distance with an additional loss term that allows tightening its invariances and incorporating prior knowledge on how the two input spaces should be compared. Overall, our contributions can be summarized as follows:
1. We present a new metric on the space of probability measures that allows for better control over the isometric transformations of the GW distance;
2. We provide some theoretical analysis of the properties of the proposed distance, as well as an experimental example that illustrates its unique features vividly;
3. We empirically demonstrate that such a new metric is more efficient than previously proposed cross-domain OT distances in several single-cell data integration tasks and its generalizability to the ML domain.
The paper is organized as follows. Section 2 presents key notions from the OT theory utilized in the rest of the paper. Section 3 presents our proposed distance and analyzes its theoretical properties. In Section 4, we present several empirical studies for the single-cell alignment task and demonstrate the applicability of our metric in another ML domain. We conclude our paper in Section 5 by discussing limitations and potential future work.
**Notations.** In what follows, we denote by \(\Delta_{n}=\{w\in(\mathbb{R}_{+})^{n}:\;\sum_{i=1}^{n}w_{i}=1\}\) the simplex histogram with \(n\) bins. We use \(\otimes\) for tensor-matrix multiplication, _i.e._, for a tensor \(L=\left(L_{i,j,k,l}\right)_{i,j,k,l}\) and a matrix \(B=(B_{i,j})_{i,j}\), the tensor-matrix multiplication \(L\otimes B\) is the matrix \((\sum_{k,l}L_{i,j,k,l}B_{k,l})_{i,j}\). We use \(\langle\cdot,\cdot\rangle\) for the matrix scalar product associated with the Frobenius norm \(\|\cdot\|_{F}\). Finally, we write \(\mathbf{1}_{d}\in\mathbb{R}^{d}\) for a \(d\)-dimensional vector of ones. We use the terms "coupling matrix", "transport plan" and "correspondence matrix" interchangeably. A point in the space can also be called "an example" or "a sample". Given an integer \(n\geq 1\), denote \([n]:=\{1,...,n\}\).
## 2 Preliminary knowledge
In this section, we briefly present the necessary background knowledge required to understand the rest of this paper. This includes introducing the Kantorovich formulation of the OT problem and two OT-based distances proposed to match samples across incomparable spaces.
**Kantorovich OT and Wasserstein distance.** Let \(\mathbf{X}\in\mathbb{R}^{n\times d}\) and \(\mathbf{Y}\in\mathbb{R}^{m\times d}\) be two input matrices, and \(\mathbf{C}_{ij}=c(\mathbf{x}_{i},\mathbf{y}_{j})\) be a cost (or ground) matrix defined using some lower semi-continuous cost function \(c:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}_{\geq 0}\). Given two discrete probability measures \(\mu\in\Delta_{n}\) and \(\nu\in\Delta_{m}\), the Kantorovich formulation of OT seeks a coupling \(\gamma\) minimizing the following quantity:
\[W_{\mathbf{C}}(\mu,\nu)=\min_{\boldsymbol{\gamma}\in\Pi(\mu,\nu)}\langle \mathbf{C},\boldsymbol{\gamma}\rangle, \tag{1}\]
where \(\Pi(\mu,\nu)\) is the space of probability distributions over \(\mathbb{R}^{2}\) with marginals \(\mu\) and \(\nu\). Such an optimization problem defines a proper metric on the space of probability distributions called the Wasserstein distance.
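As a concrete, minimal illustration (a sketch only, using the POT library [18] that the paper relies on later, with synthetic data), the discrete problem in Eq. (1) can be instantiated and solved as follows.

```python
import numpy as np
import ot  # POT: Python Optimal Transport [18]

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))                  # n = 30 samples in R^2
Y = rng.normal(loc=1.0, size=(40, 2))         # m = 40 samples in R^2
mu = np.full(30, 1 / 30)                      # uniform histograms in the simplex
nu = np.full(40, 1 / 40)

C = ot.dist(X, Y, metric="sqeuclidean")       # ground cost C_ij = c(x_i, y_j)
gamma = ot.emd(mu, nu, C)                     # optimal coupling solving Eq. (1)
wasserstein_cost = float(np.sum(gamma * C))   # <C, gamma>
```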
**Gromov-Wasserstein distance.** When samples of input matrices live in different spaces, _i.e._, \(\mathbf{X}\in\mathbb{R}^{n\times d}\) and \(\mathbf{Y}\in\mathbb{R}^{m\times d^{\prime}}\) with \(d\neq d^{\prime}\), they become incomparable, _i.e._ it is no longer possible to define a cost function \(c\) as the distance between two points across the available samples. In this case, one cannot use the Wasserstein distance defined above. To circumvent this limitation, one can use the Gromov-Wasserstein (GW) distance [3], defined as follows:
\[\text{GW}(\mathbf{X},\mathbf{Y},\mu,\nu,d_{X},d_{Y}):=\min_{\boldsymbol{\gamma }\in\Pi(\mu,\nu)}\mathcal{L}_{GW}(\boldsymbol{\gamma}) \tag{2}\]
where
\[\mathcal{L}_{\text{GW}}(\boldsymbol{\gamma}):=\sum_{i,j,k,l}\bigl{(}d_{X}(x_{ i},x_{k})-d_{Y}(y_{j},y_{l})\bigr{)}^{2}\gamma_{i,j}\gamma_{k,l}=\langle L( \mathbf{D}_{X},\mathbf{D}_{Y})\otimes\boldsymbol{\gamma},\boldsymbol{\gamma}\rangle. \tag{3}\]
Here, the tensor \(L(\mathbf{D}_{X},\mathbf{D}_{Y})\) is defined by \(\bigl{(}L(\mathbf{D}_{X},\mathbf{D}_{Y})\bigr{)}_{i,j,k,l}=\bigl{(}d_{X}(x_{ i},x_{k})-d_{Y}(y_{j},y_{l})\bigr{)}^{2}\), where \((x_{i},x_{k})\in\mathbb{R}^{2}\) and \((y_{j},y_{k})\in\mathbb{R}^{2}\) are tuples of 1D coordinates of samples in \(\mathbf{X}\) and \(\mathbf{Y}\) and \(d_{X}\) and \(d_{Y}\) are proper metrics so that \((\mathbf{D}_{X})_{ik}=d_{X}(x_{i},x_{k})\) and \((\mathbf{D}_{Y})_{jl}=d_{Y}(y_{j},y_{l})\).
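Continuing the synthetic sketch above (and assuming a second dataset living in a space of different dimension), the GW problem of Eq. (2) can be solved with the same library; this is an illustration only.

```python
Z = rng.normal(size=(40, 5))                        # m = 40 samples in a different space R^5
D_X = ot.dist(X, X, metric="euclidean")             # intra-domain distance matrices D_X, D_Z
D_Z = ot.dist(Z, Z, metric="euclidean")

gamma_gw, log = ot.gromov.gromov_wasserstein(
    D_X, D_Z, mu, nu, loss_fun="square_loss", log=True
)
gw_value = log["gw_dist"]                           # value of Eq. (2)
```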
**CO-Optimal transport.** Redko _et al._ [16] introduced an alternative to GW distance, termed CO-Optimal transport (COOT). Rather than relying on the intra-domain distance matrices \(\mathbf{D}_{X}\) and \(\mathbf{D}_{Y}\) like GW does, COOT instead takes into account the feature information (_i.e._ the coordinates of the samples) and jointly learns two couplings, corresponding to the sample and feature alignments. More precisely, the COOT distance between two input matrices \(\mathbf{X}\) and \(\mathbf{Y}\) is defined by
\[\text{COOT}(\mathbf{X},\mathbf{Y},\mu,\nu,\mu^{\prime},\nu^{\prime}):=\min_{ \boldsymbol{\gamma}^{*}\in\Pi(\mu,\nu),\boldsymbol{\gamma}^{*}\in\Pi(\mu^{ \prime},\nu^{\prime})}\mathcal{L}_{\text{COOT}}(\boldsymbol{\gamma}^{s}, \boldsymbol{\gamma}^{v}) \tag{4}\]
where
\[\mathcal{L}_{\text{COOT}}(\boldsymbol{\gamma}^{s},\boldsymbol{\gamma}^{v}):= \sum_{i,j,k,l}L(x_{ik},y_{jl})\boldsymbol{\gamma}^{s}_{i,j}\boldsymbol{\gamma }^{v}_{k,l}=\langle L(\mathbf{X},\mathbf{Y})\otimes\boldsymbol{\gamma}^{v}, \boldsymbol{\gamma}^{s}\rangle\]
with \(\mu^{\prime}\in\Delta_{d}\) and \(\nu^{\prime}\in\Delta_{d^{\prime}}\) being empirical distributions associated with the features (columns) of \(\mathbf{X}\) and \(\mathbf{Y}\). In what follows, we consider \(L(x_{ik},y_{jl})=(x_{ik}-y_{jl})^{2}\) and write simply \(\text{GW}(\mathbf{X},\mathbf{Y})\) and \(\text{COOT}(\mathbf{X},\mathbf{Y})\) when \(\mu,\nu,\mu^{\prime},\nu^{\prime}\) are uniform and when the choice of \(d_{X}\) and \(d_{Y}\) is of no importance.
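To make the double-coupling structure explicit, the sketch below evaluates the COOT objective of Eq. (4) for a given pair of couplings, using the decomposition of the squared loss to avoid forming the 4-way tensor. It is an evaluation only (POT also ships solvers for these problems, as noted later in the paper) and continues the synthetic example above.

```python
def coot_objective(X, Y, gamma_s, gamma_v):
    """<L(X, Y) ⊗ gamma_v, gamma_s> with L(x, y) = (x - y)^2, without the 4-way tensor."""
    term1 = (X**2) @ gamma_v.sum(axis=1)        # shape (n,)
    term2 = (Y**2) @ gamma_v.sum(axis=0)        # shape (m,)
    cross = X @ gamma_v @ Y.T                   # shape (n, m)
    L_s = term1[:, None] + term2[None, :] - 2 * cross
    return float(np.sum(L_s * gamma_s))

# product couplings (independent) as a naive feasible choice
mu_feat = np.full(X.shape[1], 1 / X.shape[1])   # uniform weights on the d features of X
nu_feat = np.full(Z.shape[1], 1 / Z.shape[1])   # uniform weights on the d' features of Z
gamma_s0 = np.outer(mu, nu)
gamma_v0 = np.outer(mu_feat, nu_feat)
print(coot_objective(X, Z, gamma_s0, gamma_v0))
```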
## 3 Our contributions
Here, we first start by outlining the motivation for our proposed divergence, highlighting the different properties of GW distance and COOT. Then, we detail our new metric that interpolates between the two, followed by a theoretical study of its properties.
### Motivation
**Invariances of GW distance.** The invariances encoded by GW distance are characterized by the condition where \(\text{GW}(\mathbf{X},\mathbf{Y})=0\). In the discrete setting, this is equivalent to the existence of a measure-preserving isometry \(f:\mathbb{R}^{d}\to\mathbb{R}^{d^{\prime}}\), that is \(f_{\#}\mu=\nu\) and \(d_{X}(\cdot,\cdot)=d_{f(X)}(\cdot,\cdot)\). In particular, this also implies that \(\mathbf{X}\) and \(\mathbf{Y}\) have the same cardinality.
In other words, GW distance remains unchanged under any isometric transformation of input data. This favorable property has contributed much to the success and popularity of GW distance, where in many applications, the isometries naturally appear. However, since there are infinitely many isometries, not all are equally desirable. For instance, a rotation of the digit \(6\) seen as a discrete measure can either lead to its slight variation for small angles or to a digit \(9\) when the angle is close to \(180\) degrees. In both cases, however, the GW distance remains unchanged, although it is clearly detrimental for telling the two distinct objects apart.
**Invariances of COOT.** Unlike GW, COOT has fewer degrees of freedom in terms of invariance to global isometric transformations as it is limited to permutations of rows and columns of the two matrices, and not all isometric transformations can be achieved via such permutations. As an example, Appendix Figure 1 shows the empirical effect of the sign change and image rotation in a handwritten digit matching task, where GW is invariant to such transformations, while COOT is not. Additionally, as shown in [16], COOT distance vanishes precisely when there exist two bijections \(\sigma_{s}:[n]\rightarrow[m]\) and \(\sigma_{f}:[d]\rightarrow[d^{\prime}]\) such that
* \((\sigma_{s})_{\#}\mu=\nu\) and \((\sigma_{f})_{\#}\mu^{\prime}=\nu^{\prime}\) (which also imply that \(n=m,d=d^{\prime}\)).
* \(x_{ij}=y_{\sigma_{s}(i)\sigma_{f}(j)}\), for every \((i,j)\in[n]\times[d]\).
Therefore, COOT is strictly positive for any two datasets of different sizes both in terms of features and samples, making it much more restrictive than GW. It thus provides a fine-grained control when comparing complex objects, yet it lacks the robustness of GW to frequently encountered transformations between the two datasets.
### Augmented Gromov-Wasserstein distance
Given the above discussion on the invariances of COOT and GW distance, it appears natural to propose a novel distance, that we term **augmented GW** (AGW), interpolating between them as follows:
\[\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y}) :=\min_{\begin{subarray}{c}\boldsymbol{\gamma}^{s}\in\Pi(\mu,\, \nu),\\ \boldsymbol{\gamma}^{v}\in\Pi(\mu^{\prime},\,\nu^{\prime})\end{subarray}} \alpha\mathcal{L}_{\text{GW}}(\boldsymbol{\gamma}^{s})+(1-\alpha)\mathcal{L} _{\text{COOT}}(\boldsymbol{\gamma}^{s},\boldsymbol{\gamma}^{v}) \tag{5}\] \[=\min_{\begin{subarray}{c}\boldsymbol{\gamma}^{s}\in\Pi(\mu,\, \nu),\\ \boldsymbol{\gamma}^{v}\in\Pi(\mu^{\prime},\,\nu^{\prime})\end{subarray}} \langle\alpha L(\mathbf{D}_{X},\mathbf{D}_{Y})\otimes\boldsymbol{\gamma}^{s} +(1-\alpha)L(\mathbf{X},\mathbf{Y})\otimes\boldsymbol{\gamma}^{v},\boldsymbol {\gamma}^{s}\rangle. \tag{6}\]
One may see that the AGW problem always admits a solution. Indeed, as the objective function is continuous and the sets of admissible couplings are compact, the existence of minimum and minimizer is then guaranteed.
Our proposed interpolation between COOT and GW distance offers several important benefits. First, the loss term in COOT ensures that AGW will take different values for any two isometries whenever \(d\neq d^{\prime}\). Intuitively, in such a case, AGW's value will depend on how "far" a given isometry is from a permutation of rows and columns of the input matrices. Thus, we restrict a very broad class of
Figure 1: Aligning digits from MNIST and USPS datasets. **(A)** Confusion matrices of GW, AGW with \(\alpha=0.5\) and COOT; **(B)** Feature coupling \(\boldsymbol{\gamma}^{v}\) of AGW compared to COOT; **(C)** Difference between the sample couplings obtained with AGW and GW or COOT; **(D)** Illustration of a case from (C) where GW’s and COOT’s invariances are detrimental for obtaining a meaningful comparison, while AGW remains informative.
(infinitely many) transformations that GW cannot distinguish and tell them apart by assessing whether or not they can be approximately obtained by simply swapping 1D elements in input matrices.
Second, combining COOT and GW distance allows us to effectively influence the optimization of \(\mathbf{\gamma}^{s}\) by introducing priors on feature matchings through \(\mathbf{\gamma}^{v}\) and vice versa. This can be achieved by penalizing the costs of matching certain features in the COOT loss term to force the optimization of \(\mathbf{\gamma}^{s}\) and take it into account. These two key properties explain our choice of calling it "augmented": on the one hand, we equip GW distance with an ability to provide more fine-grained comparisons between objects, while on the other hand, we incorporate into it a possibility of guiding the matching using available prior knowledge on the feature level.
**Illustration.** We illustrate AGW on a problem of aligning handwritten digits from MNIST dataset (28\(\times\)28 pixels) with those from USPS dataset (16\(\times\)16 pixels) in Figure 1, where AGW with \(\alpha=0.5\) outperforms both GW and COOT in alignment accuracy (Panel A). When we investigate the digit pairs that benefit the most from the interpolation in AGW, we notice that misalignment between 6 and 2 in Gromov-Wasserstein OT, and misalignment between 3 and 5 in COOT improves the most (Panel C, highlighted by white asterisks). Panel D visualizes examples of digit pairs misaligned 3 by GW distance or COOT but correctly aligned with their own digits by AGW. We observe from these examples that 6-2 misalignment by GW distance is likely due to the fact that one is close to the reflection of the other across the y-axis. Similarly, COOT confuses 3 and 5 as one can easily obtain 3 from 5 by a local pixel permutation of the upper half of the images. Panel B visualizes the feature couplings obtained by AGW (on left) and COOT (on right). The feature coupling by COOT confirms that COOT allows for a reflection across the y-axis on the upper half of the image, but not on the lower half. With the interpolation in AGW, both of these misalignment cases improve, likely because (1) the correct feature alignments in the lower half of the images prevent 6 and 2 from being matched to each other and (2) GW distance is non-zero for 5-3 matches since GW will only be invariant to global isometries. In the Appendix, we also show that providing supervision on feature alignments to restrict local reflections further improves AGW's performance.
Footnote 3: Here, we define “aligned pairs” as pairs of digits with the highest coupling probabilities
**Optimization.** For simplicity, let us suppose \(n=m\) and \(d=d^{\prime}\). Thanks to the squared loss in the GW and COOT loss terms, the computational trick in [4] can be applied, which helps reduce the overall complexity of AGW from \(O(n^{4}+n^{2}d^{2})\) to \(O(n^{3}+dn^{2}+nd^{2})\). To solve the AGW problem, we use the block coordinate descent (BCD) algorithm, where one alternately fixes one coupling and minimizes with respect to the other. Thus, each iteration consists in solving two optimal transport problems. To further accelerate the optimization, entropic regularization can be used [17] on either \(\mathbf{\gamma}^{s}\), \(\mathbf{\gamma}^{v}\), or both. Details of the algorithm can be found in Algorithm 1. In practice, we use the POT package [18], which contains built-in functions to solve the OT, GW and COOT problems.
```
Initialize \(\mathbf{\gamma}^{s}\) and \(\mathbf{\gamma}^{v}\) repeat Calculate \(L_{v}=L(\mathbf{X},\mathbf{Y})\otimes\mathbf{\gamma}^{s}\). For fixed \(\mathbf{\gamma}^{s}\), solve the OT problem: \(\mathbf{\gamma}^{v}\in\arg\min_{\mathbf{\gamma}\in\Pi(\mathbf{\mu}^{\prime},\mathbf{\nu}^{ \prime})}\langle L_{v},\mathbf{\gamma}\rangle\). Calculate \(L_{s}=L(\mathbf{X},\mathbf{Y})\otimes\mathbf{\gamma}^{v}\). For fixed \(\mathbf{\gamma}^{v}\), solve the AGW problem: \(\mathbf{\gamma}^{s}\in\arg\min_{\mathbf{\gamma}\in\Pi(\mathbf{\mu},\mathbf{\nu})}\alpha\mathcal{ L}_{\text{GW}}(\mathbf{\gamma}^{s})+(1-\alpha)\langle L_{s},\mathbf{\gamma}^{s}\rangle\). until convergence
```
**Algorithm 1** BCD algorithm to solve AGW
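As an illustration only (a sketch continuing the synthetic data used in the Section 2 examples, not the authors' reference implementation), one iteration of Algorithm 1 can be written with POT: the \(\mathbf{\gamma}^{v}\)-update is a plain OT problem, and, for fixed \(\mathbf{\gamma}^{v}\), the \(\mathbf{\gamma}^{s}\)-update has the same structure as a fused Gromov-Wasserstein problem with \(M=L(\mathbf{X},\mathbf{Y})\otimes\mathbf{\gamma}^{v}\), so POT's `fused_gromov_wasserstein` solver can be reused for it.

```python
def agw_bcd_step(X, Y, D_X, D_Y, mu, nu, mu_f, nu_f, gamma_s, alpha=0.5):
    """One block-coordinate step of Algorithm 1 (illustrative sketch, not the reference code)."""
    # Feature update: L_v = L(X, Y) ⊗ gamma_s, then a linear OT problem for gamma_v.
    t1 = (X**2).T @ gamma_s.sum(axis=1)                     # shape (d,)
    t2 = (Y**2).T @ gamma_s.sum(axis=0)                     # shape (d',)
    L_v = t1[:, None] + t2[None, :] - 2 * X.T @ gamma_s @ Y
    gamma_v = ot.emd(mu_f, nu_f, L_v)

    # Sample update: alpha * GW term + (1 - alpha) * <L_s, gamma_s>, with L_s = L(X, Y) ⊗ gamma_v.
    # This has exactly the structure of a fused GW problem, so POT's solver is reused here.
    s1 = (X**2) @ gamma_v.sum(axis=1)                       # shape (n,)
    s2 = (Y**2) @ gamma_v.sum(axis=0)                       # shape (m,)
    L_s = s1[:, None] + s2[None, :] - 2 * X @ gamma_v @ Y.T
    gamma_s = ot.gromov.fused_gromov_wasserstein(
        L_s, D_X, D_Y, mu, nu, loss_fun="square_loss", alpha=alpha
    )
    return gamma_s, gamma_v

gamma_s = np.outer(mu, nu)                                  # product coupling as initialization
for _ in range(10):                                         # fixed number of iterations, no stopping test
    gamma_s, gamma_v = agw_bcd_step(X, Z, D_X, D_Z, mu, nu,
                                    mu_feat, nu_feat, gamma_s, alpha=0.5)
```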
### Theoretical analysis
Intuitively, given the structure of the objective function, we expect that AGW should share similar properties with GW and COOT, namely the existence of a minimizer, the interpolation between GW distance and COOT when the interpolation parameter varies, and the relaxed triangle inequality (since COOT and GW distance are both metrics). The following result summarizes these basic properties, and their proofs are in Appendix Section 1.
**Proposition 1**.: _For every \(\alpha\in[0,1]\)._
1. _Given two input matrices_ \(\mathbf{X}\) _and_ \(\mathbf{Y}\)_, when_ \(\alpha\to 0\) _(or_ \(1\)_), one has_ \(\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y})\to\text{COOT}(\mathbf{X},\mathbf{Y})\) _(or_ \(\text{GW}(\mathbf{X},\mathbf{Y})\)_)._
2. _AGW satisfies the relaxed triangle inequality: for any input matrices_ \(\mathbf{X},\mathbf{Y},\mathbf{Z}\)_, one has_ \(\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y})\leq 2\big{(}\text{AGW}_{\alpha}( \mathbf{X},\mathbf{Z})+\text{AGW}_{\alpha}(\mathbf{Z},\mathbf{Y})\big{)}\)_._
These basic properties ensure that our new divergence is well-posed. However, the most intriguing question is what invariances a convex combination of GW and COOT exhibit? Intuitively, we expect that AGW inherits the common invariants of both. Formally, we first introduce a necessary notion of weak invariance below.
**Definition 1**.: _We call \(D=\inf_{\pi\in\Pi}F(\pi,\mathbf{X},\mathbf{Y})\), where \(\mathbf{X},\mathbf{Y}\) are input data and \(\Pi\) is a set of feasible couplings, an OT-based divergence. Then \(D\) is weakly invariant to translation if for every \(a,b\in\mathbb{R}\), we have \(\inf_{\pi\in\Pi}F(\pi,\mathbf{X},\mathbf{Y})=C+\inf_{\pi\in\Pi}F(\pi,\mathbf{ X}+a,\mathbf{Y}+b)\), for some constant \(C\) depending on \(a,b,\mathbf{X},\mathbf{Y}\) and \(\Pi\)._
Here, we denote the translation of \(\mathbf{X}\) as \(\mathbf{X}+a\), whose elements are of the form \(\mathbf{X}_{i,j}+a\). In other words, an OT-based divergence is weakly invariant to translation if only the optimal transport plan is preserved under translation, but not necessarily the divergence itself. We now state our result regarding the weak invariance of AGW below.
**Theorem 1** (Invariant property).:
1. _AGW is weakly invariant to translation._
2. _Given two input matrices_ \(\mathbf{X}\) _and_ \(\mathbf{Y}\)_, if_ \(\mu=\nu\) _and_ \(\mathbf{Y}\) _is obtained by permuting columns (features) of_ \(\mathbf{X}\) _via the permutation_ \(\sigma_{c}\) _(so_ \(\nu^{\prime}=(\sigma_{c})_{\#}\mu^{\prime}\)_), then_ \(\text{AGW}(\mathbf{X},\mathbf{Y})=0\)_._
It is well known that, in Euclidean space, there are only three types of isometry: translation, rotation, and reflection (see [19], for example). AGW inherits the weak invariant to translation from COOT. In practice, we would argue that the ability to preserve the optimal plan under translation is much more important than preserving the distance itself. In other words, the translation only shifts the minimum but has no impact on the optimization procedure, meaning that the minimizer remains unchanged. Similar to GW distance, AGW also covers basic isometries, for example feature swap. Logically, AGW covers much fewer isometries than GW distance, since AGW only has at most as finitely many isometries as COOT, whereas GW distance has infinitely many isometries. Given the superior performance of AGW over GW and COOT in the experiments, we conjecture that there may be other relevant isometries. We leave a more detailed understanding of the isometries induced by AGW to future work.
### Related work
In optimal transport, a common approach to incorporate prior knowledge is integrating it into the cost function. For example, when all classification labels are available, [20] proposed the Optimal Transport Dataset Distance to compare two datasets by adding the Wasserstein distance between the histograms associated with the labels, to the transport cost. However, this approach is only applicable when the data lives in the same ground spaces. Given some known matching between samples across domains, [21] used the Keypoint-Guided Optimal Transport to capture more efficiently the discrepancy between two datasets, by attaching a mask matrix containing alignment information to the transport plan via the element-wise multiplication. Another line of work is the Fused Gromov-Wasserstein (FGW) distance [22] used for comparing structured objects at both structural and feature levels. The objective function of FGW is a convex combination between the GW term defined based on the intra-domain structural information, and the Wasserstein term that takes into account the information about the features associated to the corresponding structural elements.
Despite the resemblance to FGW, AGW serves a very different purpose and covers different use cases. First and foremost, AGW is a divergence that tackles cross-domain applications that are inaccessible to FGW. Second, FGW is mostly used for structured objects endowed with additional feature information, while AGW can be used on empirical measures defined for any set of objects. Finally, the feature space in the case of FGW is associated with the sample space, whereas in AGW the two spaces are independent. We would also like to stress that the notion of feature space in FGW _does not_ have the same meaning as the one in AGW (and COOT). Each element of the former is
associated with a point in the sample space; for example, each node of the graph may be colored by a specific color (feature). By contrast, the feature information in AGW is precisely the coordinates of a point, in addition to its representation in the original and dissimilarity-induced spaces.
## 4 Experimental evaluations
In this section, we present the empirical evaluations of the proposed divergence for the single-cell multi-omics alignment task and a heterogeneous domain adaptation task in ML. Overall, our experiments answer the following questions:
1. Does tightening the invariances of GW improve the performance in downstream tasks where it was previously used?
2. Does prior knowledge introduced in GW help in obtaining better cross-domain matchings?
We particularly focus on the emerging OT-driven single-cell alignment task for two reasons. First, GW imposed itself as a state-of-the-art method for this task [6; 10; 7], and thus it is important to see whether we can improve upon it using AGW. Second, several single-cell benchmark datasets provide ground-truth matchings on the feature level in addition to the common sample alignments. This information allows us to assess the importance of guiding cross-domain matching with partial or full knowledge of the relationships between the features in the two domains.
In the following experiments, we also consider the entropic regularization on both sample and feature couplings optimized by AGW. For all experiments, we detail our experimental setup, including the hyperparameter tuning procedure, as well as the runtime of the algorithms, in the Appendix.
### Integrating single-cell multi-omics datasets
Integration of data from different single-cell sequencing experiments is an important task in biology for which OT has proven to be useful[6; 7; 9]. Single-cell experiments measure various genomic features and events at the individual cell resolution. Jointly studying these can give scientists insight into the mechanisms regulating cells [6; 9; 23]. However, experimentally combining multiple types of measurements for the same cell is challenging for most combinations. To study the relationships and interactions between different aspects of the genome, scientists rely on computational integration of multi-modal data taken on different but related cells (e.g., by cell type or tissue).
**Single-cell alignment.** Below, we follow [9] and align samples (i.e., cells) of simulated and real-world single-cell datasets from different measurement modalities in order to perform the integration. For all datasets, we have ground-truth information on cell-cell alignments, which we only use for benchmarking. We demonstrate in Table 1 that our proposed framework yields higher quality cell alignments (with lower alignment error) compared to both GW and COOT.
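For reference, the sketch below shows how the FOSCTTM error reported in Table 1 is typically computed (a generic reading of the metric, assuming a known 1-1 cell correspondence; the exact definition used in the paper is given in the Appendix).

```python
import numpy as np

def foscttm(d_xy):
    """Average fraction of samples closer than the true match (lower is better).

    d_xy[i, j]: distance between cell i of domain X and cell j of domain Y in the
    aligned space, with the ground-truth matching assumed to be i <-> i.
    """
    n = d_xy.shape[0]
    true_d = np.diag(d_xy)
    frac_x = (d_xy < true_d[:, None]).sum(axis=1) / (n - 1)   # X -> Y direction
    frac_y = (d_xy < true_d[None, :]).sum(axis=0) / (n - 1)   # Y -> X direction
    return float((frac_x.mean() + frac_y.mean()) / 2)
```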
**Alignment of genomic features.** AGW augments the GW formulation with a feature coupling matrix. Therefore, we also jointly align features and investigate AGW's use for revealing potential biological
| Method | Sim 1 | Sim 2 | Sim 3 | Splatter Simulation (Synthetic RNA-seq) | scGEM | SNARE-seq | CITE-seq |
|---|---|---|---|---|---|---|---|
| **AGW** | **0.073** | **0.0041** | **0.0082** | **0.0** | **0.183** | **0.136** | **0.091** |
| **GW (SCOT)** | 0.0866 | 0.0216 | 0.0084 | 7.1e-5 | 0.198 | 0.150 | 0.131 |
| **COOT** | 0.0752 | **0.0041** | 0.0088 | **0.0** | 0.206 | 0.153 | 0.132 |
| **bindSC** | N/A | N/A | N/A | 3.8e-4 | 0.204 | 0.242 | 0.144 |

Table 1: **Single-cell alignment error**, as quantified by the average ‘fraction of samples closer than true match’ (FOSCTTM) metric (lower values are better, metric defined in the Appendix). Cell alignment performance of AGW is compared against the alignments by GW (SCOT), COOT, and bindSC, which performs bi-order canonical correlation analysis for alignment (detailed in the next section). It requires prior information on feature relationships, which we do not have for the first three simulations (thus the N/A).
relationships. All current single-cell alignment methods can only align samples (i.e., cells). A nuanced exception is bindSC [23], which performs bi-order canonical correlation analysis to integrate cells. As a result, it internally generates a feature correlation matrix that users can extract. Among all the real-world datasets in Table 1, CITE-seq [24] is the only one with ground-truth information on feature correspondences. This dataset has paired single-cell measurements on the abundance levels of 25 antibodies, as well as activity (i.e., "expression") levels of genes, including the genes that encode these 25 antibodies. So, we first present unsupervised feature alignment results on the CITE-seq dataset. For completeness, we also report the biological relevance of our feature alignments on SNARE-seq [25] and scGEM [26] datasets in Appendix Section 4. However, note that these datasets (unlike CITE-seq) do not have clear ground-truth feature correspondences. We compare our feature alignments with bindSC and COOT in Figure 2. The entries in the feature alignment matrices are arranged such that the "ground-truth" correspondences lie on the diagonal, marked by green squares. While AGW correctly assigns 19 out of 25 antibodies to their encoding genes with the highest alignment probability, this number is 15 for COOT and 13 for bindSC (which yields correlation coefficients instead of alignment probabilities). Additionally, the OT methods yield sparser alignments thanks to the "least effort" requirement in their formulation.
**Importance of prior knowledge.** Finally, we demonstrate the advantage of providing prior information by aligning a multi-species gene expression dataset, which contains measurements from the adult mouse prefrontal cortex [27] and pallium of bearded lizard [28]. Since measurements come from two different species, the feature space (i.e. genes) differs, and there is also no 1-1 correspondence between the samples (i.e. cells). However, there is a shared subset within the features, i.e., paralogous genes, which are genes that descend from a common ancestor of the two species and have similar biological functions. We also have some domain knowledge about the cells that belong to similar cell types across the two species. Thus, we expect AGW to recover these relationships in both the sample and the feature alignment matrices.
Figure 3 visualizes the cell-type alignment probabilities yielded by AGW when full supervision is provided on the \(10,816\) paralogous genes. The green boxes indicate alignment between similar types of cells. This matrix is obtained by averaging the sample alignment matrix (i.e., cell-cell alignments) into cell-type groups. Figure 3 demonstrates that AGW yielded biologically plausible alignments, as all the six cell types that have a natural match across the two species are correctly matched. We additionally show in Tables 2 and 3 that providing supervision on one level of alignment (e.g., features) improves the alignment quality on the other level (e.g., samples). Supervision scheme is detailed in the "Experimental Set-up" section of the Appendix.
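The cell-type-level matrix described above can be obtained from the cell-cell coupling by aggregating the transported mass per pair of cell types. The following is a minimal sketch of that aggregation; the function name, the sum-then-row-normalize choice, and the random toy data are our own assumptions, not the authors' exact implementation.

```python
import numpy as np

def group_coupling(gamma, labels_a, labels_b, types_a, types_b):
    """Aggregate a cell-cell coupling into a cell-type alignment matrix by
    summing the transported mass between every pair of cell types and
    normalizing each row to sum to one."""
    M = np.zeros((len(types_a), len(types_b)))
    for i, ta in enumerate(types_a):
        for j, tb in enumerate(types_b):
            M[i, j] = gamma[np.ix_(labels_a == ta, labels_b == tb)].sum()
    return M / np.maximum(M.sum(axis=1, keepdims=True), 1e-12)

# Illustrative usage with random data; labels_a/b are per-cell type annotations.
rng = np.random.default_rng(0)
gamma = rng.random((100, 80)); gamma /= gamma.sum()
types = ["excitatory", "inhibitory", "microglia"]
labels_a = rng.choice(types, size=100)
labels_b = rng.choice(types, size=80)
print(group_coupling(gamma, labels_a, labels_b, types, types))
```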
Figure 3: **Cell-type alignment results on cross-species dataset.**
Figure 2: Feature alignment matrices in the CITE-seq dataset. AGW provides more qualitative results than previously proposed methods.
### Heterogeneous domain adaptation
We now demonstrate the generalizability of our approach and turn our attention to an ML task, heterogeneous domain adaptation, where COOT and GW were previously successfully used. Domain adaptation (DA) refers to the problem in which a classifier learned on one domain (called _source_) can generalise well to the other one (called _target_). Here, we illustrate an application of AGW in unsupervised and semi-supervised heterogeneous DA (HDA), where the samples in source and target domains live in different spaces, and we only have access to as few as zero labeled target samples.
**Setup.** We follow the evaluation setup from [16]: AGW, GW, and COOT are evaluated on source-target pairs from the Caltech-Office dataset [29]. We consider all pairs between the three domains: Amazon (A), Caltech-\(256\) (C), and Webcam (W), whose images are embeddings extracted from the second-to-last layer of the GoogleNet [30] (vectors in \(\mathbb{R}^{4096}\)) and CaffeNet [31] (vectors in \(\mathbb{R}^{1024}\)) neural network architectures. In the semi-supervised setting, we incorporate the prior knowledge of the target labels by adding an additional cost matrix to the training of the sample coupling, so that a source sample is penalized if it transfers mass to target samples of a different class. Once the sample coupling \(\mathbf{\gamma}^{s}\) is learned, we obtain the final prediction using label propagation: \(\widehat{y}_{t}=\operatorname*{arg\,max}_{k}L_{k}\), where \(L=D_{s}\mathbf{\gamma}^{s}\) and \(D_{s}\) denotes the one-hot encoding of the source labels \(y_{s}\). The interpolation and entropic regularization hyperparameters are tuned using accuracy as the evaluation metric.
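A minimal sketch of this label-propagation step is given below. We assume the sample coupling has shape (n_source, n_target) and store the one-hot matrix with samples as rows, so the class scores are computed as \(\gamma^{s\top}D_{s}\) (the same quantity as \(L=D_{s}\gamma^{s}\) when \(D_{s}\) is stored with classes as rows); all names are illustrative.

```python
import numpy as np

def propagate_labels(gamma_s, y_source, n_classes):
    """Predict target labels from an OT sample coupling.

    gamma_s : (n_source, n_target) coupling matrix
    y_source: (n_source,) integer source labels
    Returns hard class predictions for the n_target samples.
    """
    D_s = np.eye(n_classes)[y_source]   # (n_source, n_classes) one-hot encoding
    L = gamma_s.T @ D_s                 # (n_target, n_classes) class scores
    return L.argmax(axis=1)

# Toy usage: 4 source samples, 3 target samples, 2 classes.
gamma_s = np.array([[0.20, 0.00, 0.05],
                    [0.05, 0.20, 0.00],
                    [0.00, 0.05, 0.20],
                    [0.15, 0.05, 0.05]])
print(propagate_labels(gamma_s, np.array([0, 1, 1, 0]), n_classes=2))
```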
**Results.** Table 4 reports each method's performance averaged across ten runs, after hyperparameter tuning with the same grid of values for equivalent hyperparameters (e.g., entropic regularization; details in the Appendix). We consider an unsupervised case and a semi-supervised case with \(t=3\) samples used for supervision. Table 4 shows that AGW tends to outperform both GW and COOT, which supports our claim about its capacity to properly adjust the invariance to the datasets at hand.
| Pair | COOT (unsup.) | GW (unsup.) | AGW α=0.5 (unsup.) | AGW best α (unsup.) | COOT (semi-sup., t=3) | GW (semi-sup., t=3) | AGW α=0.5 (semi-sup.) | AGW best α (semi-sup.) |
|---|---|---|---|---|---|---|---|---|
| A→A | 50.3 ± 15.9 | 86.2 ± 2.3 | 90.5 ± 2.4 | **93.1 ± 1.6** | 91.1 ± 2.0 | 93.2 ± 0.9 | 93.8 ± 1.3 | **96.0 ± 0.8** |
| A→C | 35.0 ± 6.4 | 64.1 ± 6.2 | 68.2 ± 7.4 | **68.3 ± 14.1** | 59.7 ± 3.6 | 92.8 ± 2.1 | 90.7 ± 1.9 | **93.5 ± 1.8** |
| A→W | 39.8 ± 14.5 | 79.6 ± 11.1 | 75.5 ± 3.1 | **79.8 ± 3.5** | 72.6 ± 4.4 | 91.6 ± 1.8 | 91.4 ± 1.1 | **93.8 ± 0.7** |
| C→A | 40.8 ± 15.8 | 53.0 ± 13.2 | 48.5 ± 6.9 | **55.4 ± 7.1** | 83.1 ± 5.1 | 81.2 ± 1.2 | 84.3 ± 1.6 | **85.6 ± 1.2** |
| C→C | 33.4 ± 10.7 | **81.9 ± 30.5** | 68.5 ± 5.5 | 76.4 ± 5.6 | 59.3 ± 8.4 | 85.3 ± 2.8 | 83.4 ± 2.3 | **86.5 ± 2.1** |
| C→W | 37.5 ± 10.4 | 53.5 ± 15.9 | 56.6 ± 7.6 | **57.7 ± 14.3** | 64.6 ± 6.2 | 79.7 ± 2.5 | 81.3 ± 4.3 | **83.2 ± 2.4** |
| W→A | 44.3 ± 14.0 | 50.4 ± 22.1 | 52.1 ± 3.8 | **60.1 ± 9.1** | 94.3 ± 2.2 | 93.4 ± 5.2 | 92.3 ± 1.5 | **97.1 ± 0.8** |
| W→C | 27.4 ± 10.2 | 54.3 ± 14.7 | 53.6 ± 17.3 | **60.9 ± 13.3** | 55.0 ± 7.1 | 90.9 ± 3.5 | 90.9 ± 2.0 | **94.7 ± 1.1** |
| W→W | 57.9 ± 13.4 | 92.5 ± 2.6 | 90.3 ± 5.4 | **97.2 ± 0.9** | 87.4 ± 4.4 | 97.4 ± 2.6 | 98.5 ± 0.7 | **98.7 ± 0.5** |

Table 4: **Heterogeneous domain adaptation results**. Best results are bolded. In the “AGW (best \(\alpha\))” columns, the \(\alpha\) values used are \(0.6,0.9,0.7,0.9,0.3,0.8,0.7,0.2,0.6\) (top to bottom) for the unsupervised setting, and \(0.2,0.1,0.2,0.7,0.2,0.9,0.8,0.9,0.4\) for the semi-supervised setting.
| Supervision on orthologous genes | 0% | 20% | 40% | 60% | 80% | 100% | **bindSC** |
|---|---|---|---|---|---|---|---|
| % accuracy of cell-type alignment | 66.67 | 83.34 | 83.34 | 100 | 100 | 100 | 66.67 |

Table 2: AGW’s sample (i.e. cell) alignment performance with increasing supervision on feature (i.e. gene) alignments
## 5 Discussion and conclusion
We present a new OT-based distance for incomparable spaces called augmented Gromov-Wasserstein (AGW), which relies on the GW distance and CO-Optimal transport. This novel metric allows us to narrow down the choice of isometries induced by the GW distance, while better exploiting the prior knowledge about the input data. We study its basic properties and empirically show that such a restriction results in better performance for single-cell multi-omic alignment tasks and transfer learning. Future work will focus on refining the theoretical analysis of the isometries induced by the AGW distance, which may shed light on why they are useful and relevant in various learning tasks. It would also be interesting to extend this framework to the unbalanced and/or continuous setting, and to other tasks where feature supervision proposed by domain experts may be incorporated into the OT framework.
**Limitations.** One limitation of AGW is the inherent computational burden of the GW component. Possible solutions include considering low-rank coupling and cost matrices [32], or using a divide-and-conquer strategy [33], which allows the GW distance to scale up to a million points.
## 6 Appendix
### Proofs
Proof of proposition 1.: The proof of this proposition can be adapted directly from [22]. To keep the exposition self-contained, we give it here. Denote
* \((\mathbf{\gamma}_{\alpha}^{s},\mathbf{\gamma}_{\alpha}^{v})\) the optimal sample and feature couplings for \(\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y})\).
* \((\mathbf{\gamma}_{0}^{s},\mathbf{\gamma}_{0}^{v})\) the optimal sample and feature couplings for \(\text{COOT}(\mathbf{X},\mathbf{Y})\).
* \(\mathbf{\gamma}_{1}^{s}\) the optimal sample coupling for \(\text{GW}(\mathbf{X},\mathbf{Y})\).
Due to the suboptimality of \(\mathbf{\gamma}_{\alpha}^{s}\) for GW and \((\mathbf{\gamma}_{1}^{s},\mathbf{\gamma}_{0}^{v})\) for AGW, we have
\[\begin{split}\alpha\langle L(\mathbf{D}_{X},\mathbf{D}_{Y})\otimes\mathbf{\gamma}_{1}^{s},\mathbf{\gamma}_{1}^{s}\rangle&\leq\alpha\langle L(\mathbf{D}_{X},\mathbf{D}_{Y})\otimes\mathbf{\gamma}_{\alpha}^{s},\mathbf{\gamma}_{\alpha}^{s}\rangle+(1-\alpha)\langle L(\mathbf{X},\mathbf{Y})\otimes\mathbf{\gamma}_{\alpha}^{v},\mathbf{\gamma}_{\alpha}^{s}\rangle\\ &\leq\alpha\langle L(\mathbf{D}_{X},\mathbf{D}_{Y})\otimes\mathbf{\gamma}_{1}^{s},\mathbf{\gamma}_{1}^{s}\rangle+(1-\alpha)\langle L(\mathbf{X},\mathbf{Y})\otimes\mathbf{\gamma}_{0}^{v},\mathbf{\gamma}_{1}^{s}\rangle,\end{split}\]
or equivalently
\[\alpha\text{GW}(\mathbf{X},\mathbf{Y})\leq\text{AGW}_{\alpha}(\mathbf{X}, \mathbf{Y})\leq\alpha\text{GW}(\mathbf{X},\mathbf{Y})+(1-\alpha)\langle L( \mathbf{X},\mathbf{Y})\otimes\mathbf{\gamma}_{0}^{v},\mathbf{\gamma}_{1}^{s}\rangle. \tag{7}\]
Similarly, we have
\[(1-\alpha)\text{COOT}(\mathbf{X},\mathbf{Y})\leq\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y})\leq(1-\alpha)\text{COOT}(\mathbf{X},\mathbf{Y})+\alpha\langle L (\mathbf{D}_{X},\mathbf{D}_{Y})\otimes\mathbf{\gamma}_{0}^{s},\mathbf{\gamma}_{0}^{s}\rangle. \tag{8}\]
The interpolation property then follows by the sandwich theorem.
Regarding the relaxed triangle inequality, given three triples \((\mathbf{X},\mu_{sx},\mu_{fx}),(\mathbf{Y},\mu_{sy},\mu_{fy})\) and \((\mathbf{Z},\mu_{sz},\mu_{fz})\), let \((\pi^{XY},\gamma^{XY}),(\pi^{YZ},\gamma^{YZ})\) and \((\pi^{XZ},\gamma^{XZ})\) be solutions of the problems \(\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y}),\text{AGW}_{\alpha}(\mathbf{Y}, \mathbf{Z})\) and \(\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Z})\), respectively. Denote \(P=\pi^{XY}\text{diag}\left(\frac{1}{\mu_{sy}}\right)\pi^{YZ}\) and \(Q=\gamma^{XY}\text{diag}\left(\frac{1}{\mu_{fy}}\right)\gamma^{YZ}\). Then, it is not difficult to see that \(P\in\Pi(\mu_{sx},\mu_{sz})\) and \(Q\in\Pi(\mu_{fx},\mu_{fz})\). The suboptimality of \((P,Q)\) implies that
\[\frac{\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Z})}{2}\] \[\leq\alpha\sum_{i,j,k,l}\frac{|\mathbf{D}_{X}(i,j)-\mathbf{D}_{Z }(k,l)|^{2}}{2}P_{i,k}P_{j,l}+(1-\alpha)\sum_{i,j,k,l}\frac{|\mathbf{X}_{i,j} -\mathbf{Z}_{k,l}|^{2}}{2}P_{i,k}Q_{j,l}\] \[=\alpha\sum_{i,j,k,l}\frac{|\mathbf{D}_{X}(i,j)-\mathbf{D}_{Z}(k,l)|^{2}}{2}\left(\sum_{e}\frac{\pi_{i,e}^{XY}\pi_{e,k}^{YZ}}{(\mu_{sy})_{e}} \right)\left(\sum_{o}\frac{\pi_{j,o}^{XY}\pi_{o,l}^{YZ}}{(\mu_{sy})_{o}}\right)\] \[+(1-\alpha)\sum_{i,j,k,l}\frac{|\mathbf{X}_{i,j}-\mathbf{Z}_{k,l} |^{2}}{2}\left(\sum_{e}\frac{\pi_{i,e}^{XY}\pi_{e,k}^{YZ}}{(\mu_{sy})_{e}} \right)\left(\sum_{o}\frac{\gamma_{j,o}^{XY}\gamma_{o,l}^{YZ}}{(\mu_{fy})_{o}}\right)\] \[\leq\alpha\sum_{i,j,k,l,e,o}|\mathbf{D}_{X}(i,j)-\mathbf{D}_{Y}(e,o)|^{2}\frac{\pi_{i,e}^{XY}\pi_{e,k}^{YZ}}{(\mu_{sy})_{e}}\frac{\pi_{j,o}^{ XY}\pi_{o,l}^{YZ}}{(\mu_{sy})_{o}}+(1-\alpha)\sum_{i,j,k,l,e,o}|\mathbf{X}_{i,j}- \mathbf{Y}_{e,o}|^{2}\frac{\pi_{i,e}^{XY}\pi_{e,k}^{YZ}}{(\mu_{sy})_{e}}\frac{ \gamma_{j,o}^{XY}\gamma_{o,l}^{YZ}}{(\mu_{fy})_{o}}\] \[+\alpha\sum_{i,j,k,l,e,o}|\mathbf{D}_{Y}(e,o)-\mathbf{D}_{Z}(k,l)|^{2}\frac{\pi_{i,e}^{XY}\pi_{e,k}^{YZ}}{(\mu_{sy})_{e}}\frac{\pi_{j,o}^{ XY}\pi_{o,l}^{YZ}}{(\mu_{sy})_{o}}+(1-\alpha)\sum_{i,j,k,l,e,o}|\mathbf{Y}_{e,o}- \mathbf{Z}_{k,l}|^{2}\frac{\pi_{i,e}^{XY}\pi_{e,k}^{YZ}}{(\mu_{sy})_{e}}\frac{ \gamma_{j,o}^{XY}\gamma_{o,l}^{YZ}}{(\mu_{fy})_{o}}\] \[=\alpha\sum_{i,j,e,o}|\mathbf{D}_{X}(i,j)-\mathbf{D}_{Y}(e,o)|^{2 }\pi_{i,e}^{XY}\pi_{j,o}^{XY}+(1-\alpha)\sum_{i,j,e,o}|\mathbf{X}_{i,j}- \mathbf{Y}_{e,o}|^{2}\pi_{i,e}^{XY}\gamma_{j,o}^{XY}\] \[+\alpha\sum_{k,l,e,o}|\mathbf{D}_{Y}(e,o)-\mathbf{D}_{Z}(k,l)|^{2 }\pi_{e,k}^{YZ}\pi_{o,l}^{YZ}+(1-\alpha)\sum_{k,l,e,o}|\mathbf{Y}_{e,o}- \mathbf{Z}_{k,l}|^{2}\pi_{e,k}^{YZ}\gamma_{o,l}^{YZ}\] \[=\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y})+\text{AGW}_{\alpha}( \mathbf{Y},\mathbf{Z}).\]
where the second inequality follows from the inequality: \((x+y)^{2}\leq 2(x^{2}+y^{2})\).
**Corollary 1**.: _COOT is weakly invariant to translation._
Proof of corollary 1.: It is enough to show that, for any \(c\in\mathbb{R}\), we have \(\text{COOT}(\mathbf{X},\mathbf{Y}+c)=\text{COOT}(\mathbf{X},\mathbf{Y})+C\), for some constant \(C\). Indeed, given \(\boldsymbol{\gamma}^{s}\in\Pi(\mu,\nu),\boldsymbol{\gamma}^{v}\in\Pi(\mu^{ \prime},\nu^{\prime})\), for any \(c\in\mathbb{R}\),
\[\sum_{ijkl}(\mathbf{X}_{ik}-\mathbf{Y}_{jl}-c)^{2}\boldsymbol{\gamma}^{s}_{ij} \boldsymbol{\gamma}^{v}_{kl}=\sum_{ijkl}(\mathbf{X}_{ik}-\mathbf{Y}_{jl})^{2} \boldsymbol{\gamma}^{s}_{ij}\boldsymbol{\gamma}^{v}_{kl}-2c\sum_{ijkl}( \mathbf{X}_{ik}-\mathbf{Y}_{jl})\boldsymbol{\gamma}^{s}_{ij}\boldsymbol{ \gamma}^{v}_{kl}+c^{2} \tag{9}\]
Now,
\[\sum_{ijkl}(\mathbf{X}_{ik}-\mathbf{Y}_{jl})\boldsymbol{\gamma}^{ s}_{ij}\boldsymbol{\gamma}^{v}_{kl} =\sum_{ijkl}\mathbf{X}_{ik}\boldsymbol{\gamma}^{s}_{ij}\boldsymbol {\gamma}^{v}_{kl}-\sum_{ijkl}\mathbf{Y}_{jl}\boldsymbol{\gamma}^{s}_{ij} \boldsymbol{\gamma}^{v}_{kl} \tag{10}\] \[=\sum_{ik}\mathbf{X}_{ik}\left(\sum_{j}\boldsymbol{\gamma}^{s}_{ ij}\right)\left(\sum_{l}\boldsymbol{\gamma}^{v}_{kl}\right)-\sum_{jl}\mathbf{Y}_{jl} \left(\sum_{i}\boldsymbol{\gamma}^{s}_{ij}\right)\left(\sum_{k}\boldsymbol{ \gamma}^{v}_{kl}\right)\] (11) \[=\sum_{ik}\mathbf{X}_{ik}\mu_{i}\mu^{\prime}_{k}-\sum_{jl} \mathbf{Y}_{jl}\nu_{j}\nu^{\prime}_{l}\] (12) \[=\mu^{T}\mathbf{X}\mu^{\prime}-\nu^{T}\mathbf{Y}\nu^{\prime}. \tag{13}\]
So,
\[\text{COOT}(\mathbf{X},\mathbf{Y}+c)=\text{COOT}(\mathbf{X},\mathbf{Y})-2c \left(\mu^{T}\mathbf{X}\mu^{\prime}-\nu^{T}\mathbf{Y}\nu^{\prime}\right)+c^{2}.\]
This implies that COOT is weakly invariant to translation.
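To make the identity above concrete, the following sketch numerically checks that the COOT objective evaluated at a fixed admissible pair of couplings shifts by \(-2c(\mu^{T}\mathbf{X}\mu^{\prime}-\nu^{T}\mathbf{Y}\nu^{\prime})+c^{2}\) under \(\mathbf{Y}\mapsto\mathbf{Y}+c\). We evaluate the objective at the independent couplings \(\mu\nu^{T}\) and \(\mu^{\prime}\nu^{\prime T}\) rather than at an optimizer, which is enough to illustrate Eq. (9)-(13); all variable names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, e = 4, 3, 5, 2                                # X is n x d, Y is m x e
X, Y, c = rng.normal(size=(n, d)), rng.normal(size=(m, e)), 0.7
mu, nu = np.full(n, 1 / n), np.full(m, 1 / m)          # sample marginals
mup, nup = np.full(d, 1 / d), np.full(e, 1 / e)        # feature marginals
gs, gv = np.outer(mu, nu), np.outer(mup, nup)          # admissible couplings

def coot_objective(X, Y, gs, gv):
    # sum_{i,j,k,l} (X_ik - Y_jl)^2 * gs_ij * gv_kl
    diff = X[:, None, :, None] - Y[None, :, None, :]
    return np.einsum("ijkl,ij,kl->", diff ** 2, gs, gv)

lhs = coot_objective(X, Y + c, gs, gv)
rhs = coot_objective(X, Y, gs, gv) - 2 * c * (mu @ X @ mup - nu @ Y @ nup) + c ** 2
print(np.isclose(lhs, rhs))                            # True
```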
Proof of theorem 1.: Note that the GW term in AGW remains unchanged by translation. By adapting the proof of corollary 1, we obtain
\[\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y}+c)=\text{AGW}_{\alpha}(\mathbf{X}, \mathbf{Y})-2c\left(\mu^{T}\mathbf{X}\mu^{\prime}-\nu^{T}\mathbf{Y}\nu^{ \prime}\right)+c^{2}.\]
This means AGW is weakly invariant to translation.
Note that \(\mathbf{Y}=\mathbf{X}Q\), where \(Q\) is a permutation matrix corresponding to the permutation \(\sigma_{c}\). Since \(\mathbf{Y}\) is obtained by swapping columns of \(\mathbf{X}\), we must have that \(\text{GW}(\mathbf{X},\mathbf{Y})=0\) and the optimal plan between \(\mathbf{X}\) and \(\mathbf{Y}\) is \(\boldsymbol{\gamma}^{s}=\frac{1}{n}\text{Id}_{n}\). Similarly, \(\text{COOT}(\mathbf{X},\mathbf{Y})=0\), with optimal sample coupling \(\boldsymbol{\gamma}^{s}=\frac{1}{n}\text{Id}_{n}\) and optimal feature coupling \(\boldsymbol{\gamma}^{v}\) given by the normalized permutation coupling induced by \(Q\). In other words, \(\langle L(\mathbf{D}_{X},\mathbf{D}_{Y})\otimes\boldsymbol{\gamma}^{s}, \boldsymbol{\gamma}^{s}\rangle=0\) and \(\langle L(\mathbf{X},\mathbf{Y})\otimes\boldsymbol{\gamma}^{v},\boldsymbol{ \gamma}^{s}\rangle=0\). We deduce that \(\text{AGW}_{\alpha}(\mathbf{X},\mathbf{Y})=0\).
### Additional illustrations on the MNIST and USPS handwritten digits
## 7 Experimental Set-Up Details
### Code availability and access to scripts for experiments
Code and datasets used in this paper can be found at: [https://github.com/pinardemetci/AGW](https://github.com/pinardemetci/AGW)
### MNIST Illustrations
We align \(1000\) images of hand-written digits from the MNIST dataset with \(1000\) images from the USPS dataset. Each dataset is subsampled to contain \(100\) instances of each of the \(10\) possible digits (\(0\) through \(9\)), using random seed \(1976\). We set all marginal distributions to uniform, and use cosine distances for GW and AGW. For all methods, we consider both the entropically regularized and the non-regularized versions. For entropic regularization, we sweep a grid of \(\epsilon_{1},\epsilon_{2}\) (if applicable) \(\in\{5e{-}4,1e{-}3,5e{-}3,1e{-}2,5e{-}2,1e{-}1,5e{-}1\}\). For AGW, we additionally consider interpolation coefficients \(\alpha\in\{0.1,0.2,0.3,\ldots,0.9\}\), and present results with the best-performing hyperparameter combination of each method, as measured by the percent accuracy of matching images of the same digit across the two datasets.
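As a hedged illustration of one grid point of this sweep, the sketch below runs the POT library's entropic GW solver with cosine intra-domain distances on random stand-ins for the flattened MNIST/USPS images; loading the real images, the full \(\epsilon\) grid, and the AGW solver itself are omitted, and the variable names are ours.

```python
import numpy as np
import ot
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1976)
X_mnist = rng.random((100, 28 * 28))   # placeholder for flattened MNIST digits
X_usps = rng.random((100, 16 * 16))    # placeholder for flattened USPS digits

C1 = cdist(X_mnist, X_mnist, metric="cosine")   # intra-domain cosine distances
C2 = cdist(X_usps, X_usps, metric="cosine")
p = ot.unif(len(X_mnist))                       # uniform marginals
q = ot.unif(len(X_usps))

# One point of the grid; in the experiments epsilon is swept over
# {5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1, 5e-1}.
coupling = ot.gromov.entropic_gromov_wasserstein(
    C1, C2, p, q, loss_fun="square_loss", epsilon=5e-3)
matches = coupling.argmax(axis=1)               # hard matches for accuracy
print(coupling.shape, matches[:10])
```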
### Single-cell multi-omic alignment experiments
As a real-world application of AGW, we align single-cell data from different measurement domains. Optimal transport has recently been applied to this problem in computational biology by multiple groups [9; 6; 10; 7]. To briefly introduce the problem: Biologists are interested in jointly studying multiple genomic (i.e. "multi-omic") aspects of cells to determine biologically-relevant patterns in their co-variation. Such studies could reveal how the different molecular aspects of a cell's genome (e.g. its 3D structure, reversible chemical modifications it undergoes, activity levels of its genes
Figure 4: **A. Examples of isometric transformations that COOT is not invariant to (e.g. rotation, sign change), B. Further improving MNIST-USPS alignment accuracy of AGW through supervision on features to restrict reflections along the y-axis.**
etc) interact to regulate the cell's response to its environment. These studies are of interest for both fundamental biology research, as well as drug discovery applications. However, as Liu _et al_ describes [34], it is experimentally difficult to combine multiple measurements on the same cells. Consequently, computational approaches are developed to integrate data obtained from different measurement modalities using biologically relevant cell populations. In this paper, we apply AGW to jointly align both cells and genomic features of single-cell datasets. This is a novel direction in the application of optimal transport (OT) to single-cell multi-omic alignment task, as the existing OT-based algorithms only align cells.
DatasetsWe largely follow the first paper that applied OT to single-cell multi-omic alignment task [9] in our experimental set-up and use four simulated datasets and three real-world single-cell multi-omic datasets to benchmark our cell alignment performance. Three of the simulated datasets have been generated by Liu _et al._[34] by non-linearly projecting 600 samples from a common 2-dimensional space onto different 1000- and 2000- dimensional spaces with 300 samples in each. In the first simulation, the data points in each domain form a bifurcating tree structure that is commonly seen in cell populations undergoing differentiation. The second simulation forms a three dimensional Swiss roll. Lastly, the third simulation forms a circular frustum that resembles what is commonly observed when investigating cell cycle. These datasets have been previously used for benchmarking by other cell-cell alignment methods [34, 35, 36, 6, 9]. We refer to these datasets as "Sim 1", "Sim 2", and "Sim 3", respectively. We include a fourth simulated dataset that has been generated by [9] using a single-cell RNA-seq data simulation package in R, called Splatter [37]. We refer to this dataset as "Synthetic RNA-seq". This dataset includes a simulated gene expression domain with 50 genes and 5000 cells divided across three cell-types, and another domain created by non-linearly projecting these cells onto a 500-dimensional space. As a result of their generation schemes, all simulated datasets have ground-truth 1-1 cell correspondence information. We use this information solely for benchmarking. We do not have access to ground-truth feature relationships in these datasets, so, we exclude them from feature alignment experiments.
Additionally to the simulated datasets, we include three real-world sequencing datasets in our experiments. To have ground-truth information on cell correspondences for evaluation, we choose three co-assay datasets which have paired measurements on the same individual cells: an scGEM dataset [38], a SNARE-seq dataset [25], and a CITE-seq dataset [24] (these are exceptions to the experimental challenge described above). These first two datasets have been used by existing OT-based single-cell alignment methods [36, 35, 9, 6, 10], while the last one was included in the evaluations of a non-OT-based alignment method, bindSC [23] (described in the "Evaluations" section below). The scGEM dataset contains measurements on gene expression and DNA methylation states of 177 individual cells from human somatic cell population undergoing conversion to induced pluripotent stem cells (iPSCs) [38]. We accessed the pre-processed count matrices for this dataset through the following GitHub repository: [https://github.com/caokai1073/UnionCom](https://github.com/caokai1073/UnionCom). The SNARE-seq dataset contains gene expression and chromatin accessibility profiles of 1047 individual cells from a mixed population of four cell lines: H1(human embryonic stem cells), BJ (a fibroblast cell line), K562 (a lymphoblast cell line), and GM12878 (lymphoblastoid cells derived from blood) [25]. We access their count matrices on Gene Expression Omnibus platform online, with the accession code GSE126074. Finally, the CITE-seq dataset has gene expression profiles and epitope abundance measurements on 25 antibodies from 30,672 cells from human bone marrow tissue [24]. The count matrices for this dataset were downloaded from the Seurat website 4. We use these three real-world single-cell datasets for both cell-cell (i.e. sample-sample) alignment benchmarking, as well as feature-feature alignment benchmarking. In addition to these three datasets, we include a fourth single-cell datasets, which contains data from the same measurement modality (i.e. gene expression), but from two different species: mouse [27] and bearded lizard [28]. Our motivation behind including this dataset is to demonstrate the effects of both sample-level (i.e. cell-level) and feature-level supervision on alignment qualities. We refer to this dataset as the "cross-species dataset", which contains 4,187 cells from lizard pallium (a brain region) and 6,296 cells from the mouse prefrontal cortex. The two species share a subset of their features: 10,816 paralogous genes. Each also has species-specific genes: 10,184 in the mouse dataset and 1,563 in the lizard dataset. As the data comes from different species there is no 1-1 correspondence between cells. However, the two species contain cells from similar cell types. Unlike the other single-cell dataset, there is a subset of the features (the paralogous
genes) that have 1-1 correspondences across the two domains (domains are defined by species in this dataset).
**Baselines and hyperparameter tuning.** We benchmark AGW's performance on single-cell alignment tasks against three algorithms: (1) COOT [16], (2) SCOT [9], a Gromov-Wasserstein OT-based algorithm that uses k-nearest-neighbor (kNN) graph distances as intra-domain distance matrices (this choice has been shown to perform better than Euclidean or cosine distances by [9]), and (3) bindSC [23]. Among these, bindSC is not an OT-based algorithm: it employs bi-order canonical correlation analysis to perform alignment. We include it as a benchmark because it is the only existing single-cell alignment algorithm that can perform feature alignments (in addition to cell alignments) in most cases.
When methods share similar hyperparameters in their formulation (e.g. entropic regularization constant, \(\epsilon\) for methods that employ OT), we use the same hyperparameter grid to perform their tuning. Otherwise, we refer to the publication and the code repository for each method to choose a hyperparameter range. For SCOT, we tune four hyperparameters: \(k\in\{20,30,\ldots,150\}\), the number of neighbors in the cell neighborhood graphs, \(\epsilon\in\{5e-4,3e-4,1e-4,7e-3,5e-3,\ldots,1e-2\}\), the entropic regularization coefficient for the optimal transport formulation. Similarly, for both COOT and AGW, we sweep \(\epsilon_{1},\epsilon_{2}\in\{5e-4,3e-4,1e-4,7e-3,5e-3,\ldots,1e-2\}\) for the coefficients of entropic regularization over the sample and feature alignments. We use the same intra-domain distance matrices in AGW as in SCOT (based on kNN graphs). For all OT-based methods, we perform barycentric projection to complete the alignment.
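The two ingredients mentioned above, kNN-graph shortest-path distances as intra-domain matrices and barycentric projection of one domain onto the other given a sample coupling, can be sketched as follows. The function names, the symmetrization, and the handling of disconnected nodes are our own choices, not SCOT's code.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def knn_graph_distances(X, k=20):
    """Shortest-path distances over a symmetrized k-nearest-neighbor graph."""
    graph = kneighbors_graph(X, n_neighbors=k, mode="distance")
    graph = 0.5 * (graph + graph.T)                 # symmetrize the kNN graph
    D = shortest_path(graph, method="D", directed=False)
    D[np.isinf(D)] = D[~np.isinf(D)].max()          # cap disconnected pairs
    return D

def barycentric_projection(gamma, Y):
    """Project domain-1 samples into domain 2: row-normalize the coupling and
    take the induced weighted averages of the domain-2 samples."""
    weights = gamma / gamma.sum(axis=1, keepdims=True)
    return weights @ Y

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(60, 30)), rng.normal(size=(80, 10))
D1, D2 = knn_graph_distances(X1), knn_graph_distances(X2)
gamma = np.outer(np.full(60, 1 / 60), np.full(80, 1 / 80))   # placeholder coupling
print(D1.shape, D2.shape, barycentric_projection(gamma, X2).shape)
```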
For bindSC, we choose the couple coefficient that assigns weight to the initial gene activity matrix \(\alpha\in\{0,0.1,0.2,\ldots 0.9\}\) and the couple coefficient that assigns weight factor to multi-objective function \(\lambda\in\{0.1,0.2,\ldots,0.9\}\). Additionally, we choose the number of canonical vectors for the embedding space \(K\in\{3,4,5,10,30,32\}\). For all methods, we report results with the best performing hyperparameter combinations.
**Evaluation Metrics.** When evaluating cell alignments, we use a metric previously used by other single-cell multi-omic integration tools [34, 35, 36, 9, 6, 10, 23], called "fraction of samples closer than the true match" (FOSCTTM). For this metric, we compute the Euclidean distances between a fixed sample point and all the data points in the other domain. Then, we use these distances to compute the fraction of samples that are closer to the fixed sample than its true match, and average these values over all the samples in both domains. This metric measures alignment error, so lower values correspond to higher-quality alignments.
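A direct implementation of the metric as described is sketched below: for each sample, the fraction of points in the other domain that are closer (in Euclidean distance) than its true match, averaged over all samples in both domains. It assumes both domains have already been projected into a shared space with row \(i\) of one matrix corresponding to row \(i\) of the other; the exact normalization (\(n\) vs. \(n-1\)) may differ from the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def foscttm(X_aligned, Y_aligned):
    """Fraction Of Samples Closer Than the True Match (lower is better).
    X_aligned, Y_aligned: (n, d) arrays in a shared space, row i <-> row i."""
    n = len(X_aligned)
    D = cdist(X_aligned, Y_aligned)                  # Euclidean distances
    true_match = np.diag(D)
    # For each x_i: fraction of y_j closer than y_i; symmetrically for each y_j.
    frac_x = (D < true_match[:, None]).sum(axis=1) / (n - 1)
    frac_y = (D < true_match[None, :]).sum(axis=0) / (n - 1)
    return 0.5 * (frac_x.mean() + frac_y.mean())

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 5))
print(foscttm(Z, Z + 0.1 * rng.normal(size=Z.shape)))   # small value -> good alignment
```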
To assess feature alignment performance, we investigate the accuracy of feature correspondences recovered. We mainly use two real-world datasets for this task - CITE-seq, and the cross-species scRNA-seq datasets (results on SNARE-seq and scGEM datasets are qualitatively evaluated due to the lack of ground-truth information). For the CITE-seq dataset, we expect the feature correspondences to recover the relationship between the 25 antibodies and the genes that encode them. To investigate this, we simultaneously align the cells and features of the two modalities using the 25 antibodies and 25 genes in an unsupervised manner. We compute the percentage of 25 antibodies whose strongest correspondence is their encoding gene.
For the cross-species RNA-seq dataset, we expect alignments between (1) the cell-type annotations common to the mouse and lizard datasets, namely: excitatory neurons, inhibitory neurons, microglia, OPC (Oligodendrocyte precursor cells), oligodendrocytes, and endothelial cells and (2) between the paralogous genes. For this dataset, we generate cell-label matches by averaging the rows and columns of the cell-cell alignment matrix yielded by AGW based on these cell annotation labels. We compute the percentage of these six cell-type groups that match as their strongest correspondence. For feature alignments, we compute the percentage of the 10,816 shared genes that are assigned to their corresponding paralogous gene with their highest alignment probability. For this dataset, we consider providing supervision at increasing levels on both sample and feature alignments. For feature-level supervision, \(20\%\) supervision means setting the alignment cost of \(\sim 20\%\) of the genes with their paralogous pairs to \(0\). For sample-level supervision, \(20\%\) supervision corresponds to downscaling the alignment cost of \(\sim 20\%\) of the mouse cells from the aforementioned seven cell-types with the \(\sim 20\%\) of lizard cells from their corresponding cell-type by \(\frac{1}{\#\text{ lizard cells in the same cell-type}}\).
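One way to encode the feature-level supervision described above is as an additional alignment-cost matrix in which supervised paralogous gene pairs get zero cost while all other entries keep a unit cost. The sketch below illustrates this construction only; how such a matrix enters the AGW objective (e.g., as an additive term on the feature coupling) and the unit baseline cost are assumptions on our part.

```python
import numpy as np

def feature_supervision_cost(n_mouse_genes, n_lizard_genes, pairs, fraction, seed=0):
    """Unit cost everywhere, zero cost for a random `fraction` of the known
    paralogous pairs (list of (mouse_idx, lizard_idx) tuples)."""
    rng = np.random.default_rng(seed)
    M = np.ones((n_mouse_genes, n_lizard_genes))
    k = int(fraction * len(pairs))
    chosen = rng.choice(len(pairs), size=k, replace=False)
    for i, j in (pairs[t] for t in chosen):
        M[i, j] = 0.0
    return M

# Toy usage: 100 vs. 90 genes, 50 known pairs, 20% supervision.
pairs = [(i, i) for i in range(50)]
M = feature_supervision_cost(100, 90, pairs, fraction=0.2)
print(M.shape, int((M == 0).sum()))   # 10 zeroed entries
```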
### Heterogeneous domain adaptation experiments
We evaluate AGW against GW and COOT on source-target pairs from the Caltech-Office dataset [29] by considering all pairs between the three domains: Amazon (A), Caltech-\(256\) (C), and Webcam (W), similarly to Redko _et al_. We randomly choose 20 samples per class, perform adaptation from CaffeNet to GoogleNet, and repeat this 10 times. We report the average performance of each method along with the standard deviation. Differently from Redko _et al_, we (1) unit-normalize the datasets prior to alignment, as we empirically found this to boost all methods' average performance compared to using unnormalized datasets, (2) use cosine distances when defining intra-domain distance matrices for GW and AGW, as we found them to perform better than Euclidean distances, and (3) report results after tuning the hyperparameters of each method for each pair of datasets. Specifically, for each pair (A)-(C), (A)-(W), etc., we sweep a hyperparameter grid over 5 runs of random sampling, choose the best-performing combination, and run 10 further runs of random sampling to report results. For all methods, we consider the version with no entropic regularization (on the sample-level alignments, the feature-level alignments, or both), along with various levels of regularization. For entropic regularization over sample alignments, we consider \(\epsilon_{1}\in\{5e{-}4,1e{-}3,5e{-}3,1e{-}2,5e{-}2,0.1\}\) for all methods. For entropic regularization over feature alignments in COOT and AGW, we consider \(\epsilon_{2}\in\{5e{-}4,1e{-}3,5e{-}3,1e{-}2,5e{-}2,0.1\}\). As the interpolation coefficient of AGW, we consider \(\alpha\in\{0.1,0.2,\ldots,0.9\}\).
## 8 SNARE-seq and scGEM feature alignments
Although we do not have ground-truth information on the feature correspondences in the SNARE-seq and scGEM datasets, to ensure comprehensive results, we present the feature alignments obtained from AGW on these datasets in Figure 5. Since we align 1000 genes and 3000 chromatin regions from the SNARE-seq datasets, it is not possible to present all the feature correspondences obtained. Instead, we show the four cell-type marker genes and their top correspondences from the accessible chromatin regions in Panel A. The scientists who originally generated this dataset used the expression status of these four genes when labeling cell types in this dataset. Panel B shows the feature coupling matrix yielded by AGW, which can be interpreted in the light of the information presented in Panel C and Panel D. We detail the significance of the inferred correspondences below.
**SNARE-seq feature correspondences:** Majority of the feature correspondences in Panel A are in agreement with either biologically validated or computationally predicted regulatory relationships. To validate these correspondences, we consult the biological annotations on the UCSC Genome Browser [39], as well as gene regulatory information databases, such as GRNdb [40] and RepMap Atlas of Regulatory Regions [41].
Firstly, three of the alignments are between marker genes and their chromatin regions. These are (1) _PRAME_ and Chr22: 22.520-22.521 Mb region, which is a region upstream of the _PRAME_ gene body that is rich with predicted transcriptional factor (TF) binding sites according to the "RepMap Atlas of Regulatory Regions" [41] annotations on UCSC Genome Browser (Human hg38 annotations) [39]. Among the predicted TF bindings, many of them are K562-specific predictions, and some of these are known regulators of _PRAME_, such as but not limited to _E2F6_, _HDAC2_, _CTCF_ (based on GRNdb database [40] of TF-gene relationships). Additionally, (2) _COL1A2_ and (3) _HLA-DRB1_ also have recovered correspondences with their own chromosomal region, "Chr7:94.395-94.396 Mb" and "Chr6:32,578-32,579 Mb", respectively. We observe that _COL1A2_ and _PRAME_ are also additionally aligned with "Chr1: 58,780 - 58,781 Mb" region, which correspond to the gene body of _JUN_ transcriptional factor. Indeed, _JUN_ has been identified as one of the transcriptional factors differentially expressed in the K562 and BJ cells, but more strongly in the latter, according to the original publication that released this dataset [25]. GRNdb also identified _JUN_ to be one of the regulators of the _COL1A2_ gene. In addition to the chromosomal region of _JUN_, _PRAME_ has another region abundant in predicted TF binding sites among its top correspondences: "Chr6: 7.974-7.975 Mb". This region is annotated with an H3K27Ac mark on the UCSC Genome Browser, which is a histone protein acetylation mark that is often found near gene regulatory elements on the genome [39]. Furthermore, this region contains multiple predicted binding sites of TFs GRNdb identifies as regulators of _PRAME_, such as _IRF1_, _HDAC2_, _HOXC6_ and POU2AF1. The _HLA-DRB1_ gene is also aligned with a chromosomal region rich in GM12878-specific predictions of TF bindings, such as _IRF4_, _IRF8_, _ETV6_, and _CREM_, which GRNdb lists as potential regulators of _HLA-DRB1_. Lastly,
even though we couldn't find a biological relationship reported between the _CLYBL_ gene and _EPCAM_ gene (marker gene for the H1 cell-line), the chromosomal region in _CLYBL_ body where AGW finds a correspondence with _EPCAM_ indeed appears to be differentially accessible in H1 cells in our dataset.
### scGEM feature correspondences:
To interpret the feature coupling matrix we recover on the scGEM dataset, we consult the original publication that introduced this dataset [38]. Figure 5 Panel C presents a plot from this paper, which shows how the expression of genes that drive pluripotency during cell differentiation correlates or anti-correlates with the methylation of the genes in the "DNA methylation" domain. Based on the same pattern, Welch _et al_ generated the heatmap visualized in Panel D, which shows the underlying correlations between the expression and methylation levels of the genes in the two domains (i.e. gene expression and DNA methylation measurement domains). We observe that the feature coupling we obtain from AGW (Panel B) resembles the structure in this ground-truth correlation matrix. Note that the features in the rows and columns of these matrices are ordered identically to aid with the visual comparison. Moreover, we see that the positive relationship between the expression profiles of
Figure 5: **AGW’s feature alignments for A. SNARE-seq, and B. scGEM datasets**. The Sankey plot in Panel A presents the four cell-type marker genes and their top correspondences in the open chromatin regions. Panel B visualizes the feature coupling matrix for the scGEM dataset. Panel C is borrowed from the original publication introducing scGEM dataset, Cheow _et al_[38], which shows how the genomic features in the two measurement domains (gene expression and DNA methylation) vary during cellular differentiation. Panel D is borrowed from Welch _et al_ that shows a heatmap of empirical correlations between the features of the two measurement domains, which we use for comparison with Panel B.
pluripotency-driving genes and the methylation levels of associated genes is also recovered in this feature coupling matrix.
### Heterogeneous domain adaptation experiments
We present the unsupervised and semi-supervised case with \(t=3\) samples used for supervision in the main paper. Here, we additionally present the semi-supervised cases with \(t=1\) and \(t=5\) samples used for supervision in the table below:
## 9 Empirical runtime comparison with COOT and GW
For timing, we run all algorithms on an Intel Xeon e5-2670 CPU with 16GB memory. For GW, we use the implementation in Python's POT library with its NumPy backend. We use the COOT implementation on [https://github.com/ievred/COOT](https://github.com/ievred/COOT). Note that the strength of entropic regularization picked influences the runtime. We report hyperparameters that were picked for each case in the experiment replication scripts on [https://github.com/pinardemetci/AGW](https://github.com/pinardemetci/AGW). |
2306.16913 | AutoML in Heavily Constrained Applications | Optimizing a machine learning pipeline for a task at hand requires careful
configuration of various hyperparameters, typically supported by an AutoML
system that optimizes the hyperparameters for the given training dataset. Yet,
depending on the AutoML system's own second-order meta-configuration, the
performance of the AutoML process can vary significantly. Current AutoML
systems cannot automatically adapt their own configuration to a specific use
case. Further, they cannot compile user-defined application constraints on the
effectiveness and efficiency of the pipeline and its generation. In this paper,
we propose CAML, which uses meta-learning to automatically adapt its own AutoML
parameters, such as the search strategy, the validation strategy, and the
search space, for a task at hand. The dynamic AutoML strategy of CAML takes
user-defined constraints into account and obtains constraint-satisfying
pipelines with high predictive performance. | Felix Neutatz, Marius Lindauer, Ziawasch Abedjan | 2023-06-29T13:05:12Z | http://arxiv.org/abs/2306.16913v2 | # AutoML in Heavily Constrained Applications
###### Abstract
Optimizing a machine learning pipeline for a task at hand requires careful configuration of various hyperparameters, typically supported by an AutoML system that optimizes the hyperparameters for the given training dataset. Yet, depending on the AutoML system's own second-order meta-configuration, the performance of the AutoML process can vary significantly. Current AutoML systems cannot automatically adapt their own configuration to a specific use case. Further, they cannot compile user-defined application constraints on the effectiveness and efficiency of the pipeline and its generation. In this paper, we propose Caml, which uses meta-learning to automatically adapt its own AutoML parameters, such as the search strategy, the validation strategy, and the search space, for a task at hand. The dynamic AutoML strategy of Caml takes user-defined constraints into account and obtains constraint-satisfying pipelines with high predictive performance.
## 1 Introduction
Recently, there has been intensive research on automated machine learning (AutoML) to facilitate the design of machine learning (ML) pipelines [16; 23; 49; 25; 53; 48; 43; 22; 55]. Existing work entails hyperparameter optimization, neural architecture search, and the generation of end-to-end ML pipelines, consisting of data preprocessing, feature engineering, model selection, and postprocessing.
### AutoML with Constraints
In practice, AutoML can be subject to two kinds of constraints: _ML application_ and _Search_ constraints. _ML application_ constraints impose restrictions, such as limits on training/inference time and ML pipeline size, or additional quality criteria, such as adversarial robustness or differential privacy, on the final ML pipeline. The ML application constraints on resource consumption are particularly relevant in systems that work with dynamic data and rely on fast response time [47; 32]. _Search_ constraints impose restrictions on the AutoML search process itself, such as limiting the search time, main memory usage, or parallelism.
Depending on the real-world setting and its commanding constraints, users have to configure the AutoML system differently to achieve the optimal result within a limited search time budget. With emerging applications in the realm of edge computing and real-time analysis, further constraints need to be considered. Autonomous driving relies on real-time video analysis [11] and to keep up with a sufficiently high frame rate, the model has to follow tight inference time constraints. As ML models have become successful, they have also gained traction on smaller devices, such as smartphones, requiring them to reduce their memory footprints and to predict fast. For streaming use cases, it might be important to continuously retrain to adapt to concept drift over time [9]. For fast-changing environments, such as fraud detection for high-frequency transactions, the models are subject to demanding training time constraints. Further, streaming ML requires con
[MISSING_PAGE_POST]
meta-training instances from this huge space to enable the meta-training. To prune the ML hyperparameter space, we have to consider the trade-off between search runtime and predictive performance. If we prune too much of the ML hyperparameter space, the optimization might not find ML pipelines with high predictive performance. If we prune too little, the search might be inefficient. To estimate which AutoML configurations will be successful, it is critical to consider the dataset and user-specified constraints.
2. **Meta-training labels.** To predict an AutoML configuration for a given task, a meta-model has to be trained on similar tasks. Choosing the right meta-training examples and an appropriate prediction target is a problem we intend to solve.
3. **Nondeterministic AutoML.** AutoML is a nondeterministic and stochastic process. Across multiple runs, the same AutoML configuration might lead to significantly different outcomes because both the AutoML optimizer (e.g. Bayesian optimization) and ML model training are stochastic. So, if we naively train a meta-model on such a noisy signal, the meta-model might be inaccurate.
### Contributions
To address these challenges, we propose a new constraint-driven AutoML system, Caml, which dynamically configures its AutoML parameters by taking into account the user-specified ML task (i.e., dataset and constraints). Learning from previous _AutoML runs_ (i.e., dataset, constraints, AutoML configuration), Caml generates AutoML configurations and estimates which of them are promising for a new ML task. To this end, we make the following contributions:
1. We propose alternating sampling as a training data generation strategy - a combination of active learning, Bayesian optimization, and meta-learning. It is parallelized and efficiently explores the huge search space of datasets, AutoML configurations, and constraints to learn a meta-model that estimates the success of AutoML configurations and accelerates the search process.
2. To instantaneously extract the most promising AutoML configurations from the meta-model at runtime, we propose offline AutoML configuration mining that provides Caml with a large pool of promising AutoML configurations. As the meta-model can rank 100k configurations in less than a second, this pool allows for fast AutoML configuration retrieval.
3. To ensure high adaptability for a wide set of constraint settings, we implemented Caml in a way to allow the user to configure whether or not it optimizes any ML hyperparameter. It also supports ML application constraints based on metrics, such as training/inference time, ML pipeline size, and equal opportunity [20] - a fairness metric.
4. We report extensive experiments with Caml and compare it to state-of-the-art AutoML systems. We provide our implementation, datasets, and evaluation framework in our repository [35].
**Main Findings.** Our study lets us draw the following conclusions:
1. Caml does not only outperform the default AutoML configuration but also state-of-the-art systems, such as TPOT [39], AutoGluon [12], and Auto-Sklearn2 [15], in constrained settings.
2. Caml outperforms hand-tailored constraint-specific AutoML solutions, such as Auto-Sklearn 2 [15]. Manually adapting AutoML system configurations to diverse constraints or even combinations of multiple constraints is nearly impossible due to unforeseeable side effects. Therefore, solutions, such as Caml, are required.
3. Caml is the first step towards our vision of constraint-driven AutoML. This way, we can cover multiple diverse constraints and add/remove additional ones without AutoML systems expertise.
## 2 Three-Step Problem
The three-step problem represents the search for the optimal setting of three parameters as described in Figure 1: the AutoML parameters, ML hyperparameters, and model parameters.
Before we formalize the problem of constraint-driven AutoML, we formally define the problem of finding optimal model parameters for a given supervised machine learning model and the AutoML problem of finding the optimal algorithm and ML hyperparameters, e.g., selecting a data encoding, feature preprocessor, and classification model, and all their corresponding hyperparameters.
### Supervised ML Problem
The supervised ML problem is to find the parameters \(\mathbf{\theta}\) for a predictive model \(f\) by minimizing the loss \(\mathcal{L}\) of mapping \(f:\mathbf{x}_{i}\mapsto\hat{y}_{i}\) for a given training dataset \(D_{train}=\{(x_{0},y_{0}),...,(x_{n},y_{n})\}\).
\[\mathbf{\theta}^{*}\in\operatorname*{arg\,min}_{\mathbf{\theta}\in\mathbf{\Theta}}\sum_{( x_{i},y_{i})\in D_{train}}\mathcal{L}_{train}\left(y_{i},f(x_{i};\mathbf{\theta}) \right). \tag{1}\]
[Figure residue: example AutoML configuration. Data; validation strategy: holdout 66/33; ensembling: yes; incremental training; validation split reshuffle: no; ML hyperparameter space: SVM, Extra Trees, KNN, Multilayer Perceptron, any feature preprocessor (302 hyperparameters).]
In practice, the problem is often more complex since the loss might be regularized to achieve better generalization performance, and stochastic optimizers might lead to different model parameters returned by the learning process.
### The AutoML Problem
The combined algorithm selection problem and hyperparameter optimization problem of AutoML [50] is to determine the predictive pipeline \(a\in A\) and its corresponding hyperparameters \(\boldsymbol{\lambda}\in\Lambda\), inducing a model \(f^{(\boldsymbol{a}_{\lambda};D_{train})}(\cdot;\hat{\boldsymbol{\theta}})\) with some approximated model parameters \(\hat{\boldsymbol{\theta}}\), that achieve the lowest loss on the validation set \(D_{valid}\). Formally:
\[\operatorname*{arg\,min}_{a\in A,\boldsymbol{\lambda}\in\Lambda}\sum_{(x_{i},y_{i})\in D_{valid}}\mathcal{L}_{val}(y_{i},f^{(a_{\boldsymbol{\lambda}};D_{train})}(x_{i};\hat{\boldsymbol{\theta}})). \tag{2}\]
We note that the training loss \(\mathcal{L}_{train}\) (e.g., cross-entropy) does not have to be the same as the validation loss \(\mathcal{L}_{val}\) (e.g., balanced accuracy). Since the ML model training can already take some time (e.g., training a DNN), AutoML has to be very efficient in evaluating different configurations from \(A\times\Lambda\).
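To make Eq. (2) concrete, a toy search over \(A\times\Lambda\) can be written as a loop that trains each candidate on \(D_{train}\) and scores it on \(D_{valid}\). The random-search sketch below, with two scikit-learn classifiers and a budget of 20 evaluations, is purely illustrative and is not Caml's optimizer or search space.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.33, random_state=0)

rng = np.random.default_rng(0)
def sample_config():
    """Sample an algorithm a in A and hyperparameters lambda in Lambda."""
    if rng.random() < 0.5:
        return ("rf", {"n_estimators": int(rng.integers(10, 200))})
    return ("svm", {"C": float(10 ** rng.uniform(-2, 2))})

best = (None, -np.inf)
for _ in range(20):                                   # evaluation budget
    a, lam = sample_config()
    model = RandomForestClassifier(**lam, random_state=0) if a == "rf" else SVC(**lam)
    model.fit(X_tr, y_tr)                             # minimizes L_train
    score = balanced_accuracy_score(y_val, model.predict(X_val))  # L_val
    if score > best[1]:
        best = ((a, lam), score)
print(best)
```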
### Constraint-Driven AutoML Problem
The problem that we address in this paper is to find the parameters \(\omega\) of a given AutoML system to efficiently find an ML pipeline that adheres to all user-specified constraints and achieves the highest predictive performance for a specified ML task. Formally,
\[\max_{\omega\in\Omega}m(\omega)\quad\text{s.t.}\quad c_{i}\leq t_{i}\;\;\forall i\in\{0,\ldots,n\} \tag{3}\]
where \(\omega\) is a vector representing an AutoML system's own configuration; \(m(\omega)\) is the average validation performance of the final ML model \(\hat{f}\) returned by the AutoML system; \(c_{i}\) are the constraints, and \(t_{i}\) are the user-specified constraint thresholds, e.g., search time \(\leq 5\,\)min or ML pipeline size \(\leq 1\,\)MB. For constraints, we distinguish between search constraints and ML application constraints. Search constraints concern the AutoML search process, such as search time, search main memory, and evaluation time, whereas ML application constraints concern the final ML pipeline, such as training/inference time and fairness.
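A hedged sketch of how ML application constraints such as "pipeline size \(\leq 1\) MB" or an inference-time limit can be checked for a fitted candidate pipeline is given below. The pickle-based size measurement, the probe set, and the thresholds are illustrative choices, not necessarily Caml's implementation.

```python
import pickle
import time
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

def satisfies_constraints(pipeline, X_probe, max_size_bytes, max_infer_seconds):
    """Check ML application constraints on a fitted pipeline."""
    size_ok = len(pickle.dumps(pipeline)) <= max_size_bytes
    start = time.perf_counter()
    pipeline.predict(X_probe)
    infer_ok = (time.perf_counter() - start) <= max_infer_seconds
    return size_ok and infer_ok

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
pipe = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=50))
pipe.fit(X, y)
print(satisfies_constraints(pipe, X[:100],
                            max_size_bytes=1_000_000, max_infer_seconds=0.1))
```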
Although optimizers with implicit learning of these unknown constraints can be used, we hypothesize that zero-shot adjusting of the AutoML system's own parameters (including the configuration space \(A\times\Lambda\)) will address this problem efficiently.
Choosing the AutoML configuration based on a specified dataset and constraints is challenging because both the solution space (possible AutoML configurations) as well as the task space (possible datasets and constraint thresholds) are infinite. Any change in any of these components might affect the final predictive performance. The nondeterminism of both ML and AutoML further aggravates these challenges.
Figure 1 illustrates how constraint-driven AutoML impacts the configurations. Instead of using the default AutoML configuration, our system automatically
[MISSING_PAGE_POST]
the default AutoML configuration will result in better performance for a given task.
In the final step of the offline phase, Caml leverages the meta-model to search for the estimated optimal AutoML configuration for a random dataset and random constraints. Caml leverages BO to address this search problem. The result of this step is a large pool of promising AutoML configurations for a diverse set of use cases.
In the **online phase**, the user specifies the dataset and the constraints. To prepare them for the meta-model training, we encode both the dataset and the constraints in the meta-feature representation (see Section 3.1.4) and combine them with the mined AutoML configurations. Then, Caml leverages the meta-model to predict which of the mined AutoML configurations fits the user-specified dataset and constraints best. Then, Caml equips the AutoML system with the resulting AutoML configuration and executes it with the specified search constraints. Finally, the AutoML system returns an ML pipeline that satisfies all ML application constraints.
### 3.1 Training Data for Meta-learning
We propose active meta-learning - an approach to efficiently apply meta-learning in a scenario where the corresponding training data, both instances and labels, do not exist and need to be generated; A meta-training instance comprises a combination of a dataset, an AutoML configuration, and constraints. The label of such a training instance should specify how fitting or successful generated AutoML parameters are. The meta-model should learn from a set of such training instances whether a generated configuration leads to better performance than the default AutoML configuration.
To train such a meta-model, we have to answer the following questions: How do we generate the labels? How can we effectively gather training data? How do we encode an AutoML run as meta-features?
#### 3.1.1 Meta-Target Label
To learn which AutoML configurations are promising, we need a meta-training dataset with prediction labels for previous AutoML runs. We need to define what _success_ means for a given AutoML run. We cannot simply choose the predictive performance as a label for an AutoML run, because the performance lives on different scales depending on the ML task at hand. Some ML tasks are harder to solve because some constraints are very restrictive. For instance, the constraint "ML pipeline size \(\leq 5\)KB" is more restrictive than "ML pipeline size \(\leq 500\)MB", leading to different optimally achievable prediction performance values. Therefore, we have to find a metric that considers the entire context of an ML task as an anchor point. To provide such an anchor point, we run the AutoML system with default configuration as a baseline during meta-learning. The default AutoML configuration uses the full ML hyperparameter search space and the default AutoML parameters, such as hold-out validation with 33% validation data. Now, our learning task is to predict whether a generated AutoML configuration yields higher predictive performance than the default AutoML configuration for the same task. This proxy metric is independent of the performance scales and the constraint hardness. To account for the nondeterministic behavior of AutoML, we run the AutoML system several times (ten times in our experiments) for both the generated configuration and the default configuration. Then, we obtain the fraction of cases where the default AutoML configuration was outperformed. We note that this might not lead to the optimum as defined in Eq. 3, but ensures a robust choice of an AutoML configuration, avoiding performance degradation caused by non-determinism. To avoid unnecessary computation for unsatisfiable settings in the meta-training, we first evaluate the given AutoML configuration. If all ten runs yield no ML pipeline that satisfies the specified constraints, we do not need to evaluate the default AutoML configuration anymore.
The meta-model for active learning is a random forest regression model that predicts the fraction of runs in which the given AutoML configuration outperformed the default configuration. As shown before [50], random forests are well suited for handling large, complex, and structured hyperparameter spaces (see Subsection 3.1.4).
#### 3.1.2 Alternating Sampling
To efficiently explore the space of AutoML configurations, datasets, and constraints, we leverage active learning, specifically uncertainty sampling [46]. Similar to the approach presented by Yu et al. [56] that reduces labeling effort for standard ML classification tasks, our system chooses and generates those meta-training instances that the meta-model is most uncertain about. However, if we only sample ML tasks around the decision boundary of whether a given AutoML configuration outperforms the default configuration, we might miss configurations that outperform the default configuration by large margins. While we _exploit_ the space with uncertainty sampling, we additionally _explore_ it with random sampling in an alternating fashion.
Algorithm 1 describes the training data generation process. Sampling requires a repository of datasets, an AutoML system, a constraint space, and a space of AutoML parameters. To start active learning, we need initial training instances that yield the first meta-model. Caml chooses these first instances randomly (Lines 4-7). In particular, Caml randomly chooses the dataset \(d\), the constraints \(c\), and the AutoML configuration \(\omega\) (Line 5). Then, those components are encoded as meta-features and added to the meta-training set (Line 6). The corresponding AutoML run is executed and compared with the default configuration to obtain the corresponding label (Line 7). Then, the alternating sampling process starts (Line 8). The system chooses uniformly at random whether to apply random or uncertainty sampling. Uncertainty sampling picks the most uncertain instance among all given instances. To find uncertain instances in this infinite search space (combinations of datasets, AutoML configurations, and constraints), we leverage BO, which learns a surrogate model to predict which AutoML parameters yield high predictive performance and samples only promising instances by trading off exploration and exploitation. In Line 11, BO identifies the combination of (\(d\),\(c\),\(\omega\)) that leads to the highest standard deviation across all trees of the random forest meta-model. We repeat this two-step loop until the time limit has been reached.
```
Input: AutoML system \(A\), datasets \(D\), constraint space \(C\), AutoML parameter space \(\Omega\), random iterations \(K\), sampling time \(t\)
Output: \(X\), \(Y\), \(\text{groups}\)
 1: \(X \leftarrow \emptyset\)
 2: \(Y \leftarrow \emptyset\)
 3: \(\text{groups} \leftarrow \emptyset\)
 4: for \(i = 0\) to \(K\) do                      ▷ cold start
 5:     \(d, c, \omega \leftarrow \text{random\_sample}(D, C, \Omega)\)
 6:     \(X \leftarrow X \cup \{\text{encode}(d, c, \omega)\}\)
 7:     \(Y \leftarrow Y \cup \{A(d, c, \omega)\}\)            ▷ running Caml
 8: while \(t\) not elapsed do                    ▷ alternating sampling
 9:     if \(\text{rand}() \geq 0.5\) then
10:         \(\text{meta\_model.fit}(X, Y)\)
11:         \(d, c, \omega \leftarrow \operatorname{arg\,max}_{d \in D, c \in C, \omega \in \Omega} \sigma(\text{meta\_model.predict}(\text{encode}(d, c, \omega)))\)
12:     else
13:         \(d, c, \omega \leftarrow \text{random\_sample}(D, C, \Omega)\)
14:     \(X \leftarrow X \cup \{\text{encode}(d, c, \omega)\}\)
15:     \(Y \leftarrow Y \cup \{A(d, c, \omega)\}\)            ▷ running Caml
16:     \(\text{groups} \leftarrow \text{groups} \cup \{d\}\)
17: return \(X\), \(Y\), \(\text{groups}\)
```
**Algorithm 1** Training data generation
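The following compact sketch illustrates the uncertainty-sampling branch (Lines 9-11), assuming a scikit-learn `RandomForestRegressor` as the meta-model; the uncertainty of a candidate is taken as the standard deviation of the per-tree predictions, and the BO search over \((d, c, \omega)\) is replaced here by scoring a finite candidate list for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tree_std(meta_model, encoded):
    """Standard deviation of the per-tree predictions = meta-model uncertainty."""
    per_tree = np.stack([tree.predict(encoded) for tree in meta_model.estimators_])
    return per_tree.std(axis=0)

def most_uncertain(meta_model, X, Y, candidates):
    """Pick the candidate meta-instance the meta-model is most unsure about.

    `candidates` holds already-encoded (dataset, constraints, configuration)
    combinations; in Caml these would be proposed by Bayesian optimization
    rather than enumerated up front."""
    meta_model.fit(X, Y)
    return candidates[int(np.argmax(tree_std(meta_model, candidates)))]

# toy usage with random meta-features
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(50, 8)), rng.uniform(size=50)
cands = rng.normal(size=(200, 8))
pick = most_uncertain(RandomForestRegressor(n_estimators=100, random_state=0),
                      X, Y, cands)
```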
#### 3.1.3 Parallelization and Optimizations
To speed up the presented sequential algorithm, we parallelize it asynchronously. Each worker always accesses the latest training instances. Once a new meta-training instance and a corresponding label are available, the meta-training data is locked briefly to add the new instance. We found that the more common approach [58] of predicting the label for a newly sampled instance with the current meta-model and adding both to the meta-training data does not work well for our scenario. Such a label is only predicted and is thus only an approximation of the ground truth. If the label is incorrect, the search could be steered in the wrong direction. Therefore, our approach only adds a new instance once the label is confirmed. To avoid the same instances being evaluated in parallel, we start each nondeterministic BO run with different seeds. As the search space is huge, it is highly unlikely that similar instances will be sampled during the same period.
#### 3.1.4 Meta-Feature Representation
To estimate whether an AutoML configuration yields higher predictive performance than the default AutoML configuration, the meta-model has to know the dataset, the AutoML parameters, and the constraints. We encode each of these components in a meta-feature vector.
#### Dataset Features
For encoding datasets into meta-feature vectors, multiple approaches have been proposed [52; 6; 16]. We leverage the 32 meta-features proposed by Feurer et al. [16], such as the class entropy, the number of features, classes, and instances.
#### Constraint Features
All constraints, such as inference time \(\leq 0.001s\), can be represented by the corresponding threshold. If the user does not specify a constraint, we set it to the maximum possible default value. Extending the set of constraints is always possible. The safest strategy is to train the meta-model from scratch. However, one can also leverage the assumption that the missing constraint was simply set to its default. Thus, all previous training instances can be appended with the default value for the new constraint, and new instances with novel thresholds for the constraint can be generated. This way, we can continue meta-training asynchronously without the need of starting from scratch. The same reasoning applies to extending the search space of the AutoML parameters. However, this only works if one does not change the underlying AutoML system that we compare against - e.g., if one uses the same state-of-the-art AutoML system as a comparison, one can leverage the assumption that the missing component was simply not chosen.
#### AutoML Configuration Features
To encode an AutoML configuration, we distinguish numeric parameters and categorical ones. Numeric AutoML parameters, such as the choice of the validation fraction, are simply added to the meta-feature vector. We encode the ML hyperparameters as binary values. The AutoML system either optimizes each ML hyperparameter (_True_) or uses its corresponding default value (_False_). For instance, the AutoML system can optimize the number of neighbors for \(K\) nearest neighbors or use its default \(K=5\).
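Putting the three parts together, a simplified sketch of how they could be concatenated into one meta-feature vector is given below; the specific feature names, default thresholds, and hyperparameter flags are illustrative placeholders, not the exact 32 dataset meta-features or Caml's real encoding.

```python
import numpy as np

def encode(dataset_feats, constraints, automl_config,
           constraint_defaults, numeric_params, binary_flags):
    """Concatenate dataset meta-features, constraint thresholds, and the
    AutoML configuration into a single numeric meta-feature vector."""
    d = [dataset_feats[k] for k in sorted(dataset_feats)]
    # missing constraints fall back to their maximal (non-restrictive) default
    c = [constraints.get(k, constraint_defaults[k]) for k in sorted(constraint_defaults)]
    w = [automl_config[k] for k in numeric_params]                    # e.g. hold-out fraction
    w += [float(automl_config.get(k, False)) for k in binary_flags]   # optimize flag per ML hyperparameter
    return np.array(d + c + w, dtype=float)

vec = encode(
    dataset_feats={"n_instances": 5000, "n_features": 20, "class_entropy": 0.9},
    constraints={"inference_time_s": 0.001},
    automl_config={"holdout_fraction": 0.33, "use_ensembling": True,
                   "knn__n_neighbors": False},
    constraint_defaults={"inference_time_s": 1e9, "pipeline_size_bytes": 1e12,
                         "training_time_s": 1e9},
    numeric_params=["holdout_fraction"],
    binary_flags=["use_ensembling", "knn__n_neighbors"],
)
```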
We follow the well-known assumption that the ML hyperparameter space has a tree structure where each node represents an ML hyperparameter [50; 3] and each edge represents the dependency on its parent ML hyperparameter. Figure 4 shows a branch of this tree. We describe the details of how we structure this tree in Section 3.4. If we do not optimize an ML hyperparameter higher up in the tree, we will not optimize any of its descendant ML hyperparameters either. For instance, if we remove the \(K\)-nearest-neighbor classifier from the choice of possible classifiers, we also do not need to optimize the number of neighbors \(k\). We refer the reader to our repository [36] for the complete tree space that we leverage.
The aforementioned set of meta-features assumes uniform hardware specifications at training and deployment time which cannot always be guaranteed. If the hardware of meta-learning training is different from the hardware where Caml is deployed, one can apply calibration strategies that were proposed for database query optimization cost models [18]. For instance, one could run a lightweight benchmark to understand the hardware performance difference and obtain corresponding scaling functions. We leave these adaptations for future work. We believe that our approach fits well with the common application of using cloud instances with equal or similar specifications.
### Meta-Model Training
Once the meta-data sampling is finished, Caml trains the final meta-model. The straightforward approach would be to use the same model that was trained for uncertainty sampling. However, this model is suboptimal because it might be overfitted to certain datasets that are sampled more frequently than others due to their uncertainty estimation. Further, we do not optimize the model hyperparameters during uncertainty sampling as it would significantly slow down the training data generation. For these reasons, after sampling has finished, we apply hyperparameter optimization to the meta-model using 10-fold grouped cross-validation, which ensures that training instances derived from the same dataset do not appear in both training and test folds.
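A sketch of this grouped cross-validation step with scikit-learn follows, assuming `X`, `Y` are the encoded meta-instances and labels and `groups` holds the originating dataset of each instance (as returned by Algorithm 1); the tuned hyperparameter grid is illustrative.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, GroupKFold

def train_final_meta_model(X, Y, groups):
    """Tune the meta-model so that meta-instances from the same dataset never
    appear in both the training and the validation fold."""
    cv = GroupKFold(n_splits=10)
    search = GridSearchCV(
        RandomForestRegressor(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
        cv=cv.split(X, Y, groups=groups),   # grouped splits keep datasets separate
        scoring="neg_mean_squared_error",
    )
    search.fit(X, Y)
    return search.best_estimator_
```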
To achieve optimal performance, we train two meta-models, one for AutoML configuration mining and one to rank the large pool of mined AutoML configurations.
For AutoML configuration mining, we use the same objective as for the surrogate model for uncertainty sampling (see Section 3.1.1): we predict the fraction of runs that the given AutoML configuration outperformed the default one (regression). For ranking the mined AutoML configurations, we predict whether the given AutoML configuration outperforms the default one at least once (classification). The regression meta-model contains more information than the classification meta-model because it estimates how much better the given AutoML configuration is compared to the default one whereas the classification model estimates only whether the AutoML configuration is better than the default one. However, as the regression task is much harder than the classification task, the regression meta-model is more likely to make mistakes and therefore more unstable. Yet, as we describe in Section 3.3, we query the regression meta-model many times, avoid local optima/mistakes, and converge over time to a well-performing AutoML configuration. In turn, we only query the ranking meta-model once. Therefore, we need to make sure that it makes no mistakes and is as conservative as possible. This way, we ensure that the highest ranked AutoML configuration is robust - meaning it outperforms at least the default configuration.
### AutoML Configuration Mining
Given an ML task and a generated configuration, the trained regression meta-model can predict whether the generated configuration will be more effective than the default configuration or not. The question is how we can leverage this regression meta-model to find the best AutoML configuration for a new dataset and user-specified constraints. To use the trained regression meta-model, we need a set of generated candidate configurations for each of which we can carry out the inference. Here, we are looking for the AutoML configuration that yields the best predictive performance for a given dataset and given constraints.
The simplest approach would be to generate a large number of random configuration candidates and let the regression meta-model predict which of these configurations has the highest likelihood of success. The disadvantage of this approach is that many of the randomly generated configurations will perform poorly and we cannot generate all possible configurations because
there are infinitely many. The advantage of this approach is that the generation of these random configurations can be performed in the offline phase. During the online phase, we would only apply inference. The cost of inference is minimal - e.g. predicting one million configurations takes around 1s.
Instead of random sampling, we could also apply BO. We could maximize the estimated likelihood that a generated configuration outperforms the default configuration, and freeze all meta-features for the user-specified dataset and constraints:
\[\hat{\omega}\leftarrow\operatorname*{arg\,max}_{\omega\in\Omega}\text{meta\_model.predict}(\text{encode}(d,c,\omega)) \tag{5}\]
The advantage of BO is that it would adjust the configuration to the specified dataset and constraints. The disadvantage of BO is that it is slow. For instance, performing 1000 iterations would take more than 700s. Waiting for more than 10min before we even start the AutoML system is not viable - especially if the user is interested in fast development cycles.
We propose a hybrid approach that combines the strengths of both probing strategies. In the offline phase, we randomly sample a dataset and constraints - similar to Line 5. But instead of randomly sampling a configuration \(\omega\), we leverage BO to find the most promising configuration for this randomly generated ML task with the help of the regression meta-model. This way, we generate a large number of promising random configurations offline. In the online phase, we let the classification meta-model choose which of these promising random configurations fits the specified dataset and constraints best. Then, Caml sets up the actual AutoML system with this configuration and executes it.
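A schematic of this hybrid strategy is sketched below, assuming a generic `bo_maximize` optimizer and the two trained meta-models; all helper names are placeholders for whichever BO library and encoders are actually used.

```python
def mine_configurations(regression_meta_model, sample_random_task, bo_maximize,
                        encode, n_tasks=1000):
    """Offline: for many random (dataset, constraints) tasks, let BO search the
    AutoML-parameter space for the configuration the regression meta-model
    rates highest, and keep the winners as the mined pool."""
    mined = []
    for _ in range(n_tasks):
        d_feats, c_feats = sample_random_task()
        score = lambda omega: regression_meta_model.predict(
            [encode(d_feats, c_feats, omega)])[0]
        mined.append(bo_maximize(score))
    return mined

def select_configuration(classification_meta_model, mined, d_feats, c_feats, encode):
    """Online: rank the mined pool for the user's dataset and constraints and
    return the configuration most likely to beat the default one."""
    probs = classification_meta_model.predict_proba(
        [encode(d_feats, c_feats, omega) for omega in mined])[:, 1]
    return mined[int(probs.argmax())]
```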
### AutoML Parameters
Adapting AutoML parameters is only meaningful if there is a wide range of parameters that are in fact adaptable. In contrast to Auto-Sklearn and AutoGluon, we implemented Caml to not only provide access to the common user-adjustable AutoML parameters, such as whether to use ensembling or incremental training and which validation strategy to apply, but also to allow external adjustment of every single ML hyperparameter in the search space. This way, it can be decided dynamically whether those parameters should be optimized or not, as shown in Figure 1.
We extend the ML hyperparameter space of Auto-Sklearn [16] by additionally supporting the oversampling strategies random oversampling, SMOTE [8], and ADASYN [21] to address class imbalance. Further, we added support for one-vs-rest classification to improve multi-class classification. We refer the reader to our repository [36] for the complete tree space that we leverage. We structure the ML hyperparameter space in a tree [36], as proposed in Auto-Weka [50]. Figure 4 represents a slice of the leveraged tree space. The first level of the tree contains all main components of the ML pipeline: categorical encoding, imputation, scaling, classifier, feature preprocessor, augmentation, sampling, and class weighting. Below this level, each component can be implemented by various strategies and each strategy has its own hyperparameters. This way, the ML hyperparameter space naturally builds up a tree. The hierarchical organization of the ML hyperparameter space is essential to allow the meta-model to prune a large part of the ML hyperparameter space as early as possible. This way, the AutoML system will not optimize the child ML hyperparameters if their parent ML hyperparameter is not optimized. Instead, the system will use their default value. For instance, by providing a hierarchical structure, we allow the meta-model to realize that no preprocessing transformation will be beneficial for a specific setting, instead of deciding for every single preprocessor and all its corresponding hyperparameters whether to optimize it or not.
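A toy illustration of this parent-child pruning follows, using a nested dict instead of the actual tree definition in the repository [36]; the hyperparameter names are made up for the example.

```python
# Each node: (optimize-this-node?, children). If a parent is not optimized,
# all of its descendants fall back to their defaults as well.
SPACE = {
    "classifier": (True, {
        "knn": (False, {"n_neighbors": (True, {})}),
        "extra_trees": (True, {"n_estimators": (True, {})}),
    }),
    "feature_preprocessor": (False, {"pca": (True, {"n_components": (True, {})})}),
}

def active_hyperparameters(space, parent_active=True, prefix=""):
    """Collect the hyperparameters that the AutoML system will actually tune."""
    active = []
    for name, (optimize, children) in space.items():
        is_active = parent_active and optimize
        if is_active:
            active.append(prefix + name)
        active += active_hyperparameters(children, is_active, prefix + name + ".")
    return active

print(active_hyperparameters(SPACE))
# ['classifier', 'classifier.extra_trees', 'classifier.extra_trees.n_estimators']
```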
Figure 3: AutoML constraints.
### Constraints
In constraint-driven AutoML, the user can define constraints, which might concern the AutoML process or the ML application, as shown in Figure 3.
**Search constraints** limit time-related, hardware-related, or system-specific aspects of the AutoML process. Time-related search constraints limit the search time or the evaluation time. Hardware-related search constraints are limits on the memory or parallelism. System-specific search constraints are limits on the size of ensembles or the search space.
The most important search constraint limits the _search time_. This search constraint is mandatory for each AutoML run and therefore it represents the class of search constraints well. For fast development cycles, data scientists will limit the search time to less than an hour to quickly experiment with the pipeline.
**ML application constraints** restrict the ML pipelines with regard to different quality dimensions. Zhang et al. [57] described 7 quality dimensions that can serve as constraints: correctness, robustness, security, privacy, efficiency, fairness, and interpretability. These constraints can be categorized along two dimensions.
Gelbart et al. [17] differentiate between unknown and known constraints as also illustrated in our constraint taxonomy. _Known constraints_ are those constraints that can be checked before training and evaluating a model. For instance, knowing that an \(\varepsilon\)-differentially private implementation of classifiers [7] is used apriori ensures that privacy constraints are satisfied. Another example of known constraints is a restriction with respect to the ML pipeline components or the number of features to improve the interpretability of the resulting ML pipeline. In contrast, _unknown constraints_ refer to those that can only be checked once the model is trained and evaluated. For instance, most efficiency constraints have this property.
Generally, our approach can integrate any known constraint easily by adding an if statement at the beginning of the objective function. For our experiments, we focus on unknown constraints.
The second dimension along which one can differentiate constraints refers to their dependence on the ML pipeline and/or the data. For our experiments, we focus on constraints that significantly depend on the pipeline and not so much on the dataset. To incorporate more dataset-dependent constraints, such as fairness, one would need to use more dataset-specific meta-features in the meta-model.
All in all, among the seven quality dimensions proposed by Zhang et al. [57], we focus on correctness, efficiency, and fairness. In particular, we always maximize correctness i.e. the predictive performance. Further, we choose three well-established efficiency constraints _training time_, _inference time_, and _ML pipeline size_1, and equal opportunity [20] which is a fairness measure. All four are _unknown_ constraints and depend on the ML pipeline.
Footnote 1: For some ML models, such as random forest and KNN, the model size is data dependent.
The relevance of the three efficiency constraints is particularly high in edge computing and streaming scenarios. In streaming scenarios, reducing inference time is vital to ensure continuous real-time predictions. As the data is evolving, the model requires constant retraining. In continuous training scenarios, enforcing training time limits plays a significant role. The same constraint type is relevant for federated learning [28], where users continue training on their own devices. Finally, to apply ML on IoT devices or smartphones, it is important to limit memory consumption.
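A sketch of how these four unknown constraints could be measured for a fitted scikit-learn pipeline is shown below, assuming numpy arrays and a binary sensitive attribute; the equal-opportunity formula used here (one minus the absolute true-positive-rate gap between the two groups) is one common formulation and may differ from the exact definition in [20].

```python
import pickle
import time
import numpy as np

def measure_constraints(pipeline, X_train, y_train, X_val, y_val, group_val):
    t0 = time.time()
    pipeline.fit(X_train, y_train)
    training_time = time.time() - t0

    t0 = time.time()
    y_pred = pipeline.predict(X_val)
    inference_time = (time.time() - t0) / len(X_val)   # per-instance latency

    pipeline_size = len(pickle.dumps(pipeline))        # serialized size in bytes

    # equal opportunity: true-positive-rate gap between the two sensitive groups
    tpr = []
    for g in (0, 1):
        mask = (group_val == g) & (y_val == 1)
        tpr.append(float(np.mean(y_pred[mask] == 1)) if mask.any() else 0.0)
    equal_opportunity = 1.0 - abs(tpr[0] - tpr[1])

    return {"training_time": training_time, "inference_time": inference_time,
            "pipeline_size": pipeline_size, "equal_opportunity": equal_opportunity}
```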
### Constrained Optimization
So far we know how to train the meta-learning approach and how to retrieve an adapted AutoML configuration dynamically. Now, we explain how Caml optimizes the ML hyperparameters under constraints. Previous systems by default consider the predictive performance as the objective function, which is not sufficient and requires adjustment. Furthermore, aspects such as ensembling have to be adjusted as we need to make sure that only constraint-satisfying models are ensembled and that the final ensemble also satisfies the constraints.
To support ML application constraints we formulate the objective function as follows for Caml:
\[\max\left(-1\cdot\left(\sum_{i=1}^{n}\Delta c_{i}\right)+\left(\left[\sum_{i=1}^{n}\Delta c_{i}==0\right]\cdot BA\right)\right),\]
where \(\Delta c_{i}\) is the distance to satisfying the \(i\)th constraint and \(BA\) is balanced accuracy. This objective ensures that the constraints are satisfied first and only then optimizes the balanced accuracy. This way, the user can set thresholds for any of the supported constraints through an API. As the BO framework to maximize this objective, we choose Optuna [1], which leverages the tree-structured Parzen estimator (TPE) as the surrogate model. TPE is well-suited for our tree-structured ML search space.
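A condensed Optuna sketch of this objective follows; the constraint thresholds, the `build_pipeline` helper, and the `evaluate` call are placeholders, and the distance \(\Delta c_{i}\) is taken here as the normalized amount by which a measurement exceeds its threshold, which is one possible concretization rather than the exact implementation.

```python
import optuna
from sklearn.metrics import balanced_accuracy_score

def objective(trial, thresholds, build_pipeline, evaluate):
    """Constraint-first objective: negative total constraint violation, plus the
    balanced accuracy only once every constraint is satisfied."""
    pipeline = build_pipeline(trial)        # samples ML hyperparameters via trial.suggest_*
    metrics, y_val, y_pred = evaluate(pipeline)
    # distance to satisfying each (upper-bound) constraint, normalized by its threshold;
    # a lower-bound constraint such as equal opportunity would flip the sign
    violation = sum(max(0.0, (metrics[name] - limit) / limit)
                    for name, limit in thresholds.items())
    score = balanced_accuracy_score(y_val, y_pred) if violation == 0.0 else 0.0
    return -violation + score

# study = optuna.create_study(direction="maximize",
#                             sampler=optuna.samplers.TPESampler())
# study.optimize(lambda t: objective(t, thresholds, build_pipeline, evaluate),
#                timeout=search_time_in_seconds)
```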
To enable model ensembling in Caml, we integrate the greedy ensembling strategy proposed by Caruana et al. [5]. The strategy iteratively adds the model that
maximizes ensemble validation predictive performance as long as all constraints are satisfied.
To enable hyperparameter optimization on large data, we implement incremental training similar to successive halving [27]. First, we train a model on a small sample containing 10 instances per class. Then, we double the training set size and train the model again. We continue this approach until either the evaluation time is over or the ML hyperparameter configuration is pruned because it performed worse than the median configuration of the history. Further, for constraint metrics that monotonically increase with the training set size, such as the training time or ML pipeline size, we stop the configuration evaluation as early as possible to avoid unnecessary computation. As incremental training might result in a large number of ML hyperparameter evaluations, the danger of overfitting increases. Levesque proposes to reshuffle the validation split after each evaluation to avoid overfitting [26]. Therefore, we implemented this option in Caml as well and expose it as an AutoML parameter.
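The incremental-training loop with early pruning can be summarized by the following schematic; `fit_and_score`, the median-pruning rule, and the monotone-constraint check are simplified stand-ins for the actual implementation, and the sampling is plain random instead of class-stratified.

```python
import numpy as np

def incremental_evaluate(fit_and_score, X, y, history, time_left,
                         monotone_constraints_ok):
    """Evaluate one ML hyperparameter configuration on doubling training samples.
    `fit_and_score(X_sub, y_sub)` returns (validation score, measured constraint values)."""
    n = 10 * len(np.unique(y))          # roughly 10 instances per class to start
    score = None
    while n <= len(X):
        idx = np.random.choice(len(X), size=n, replace=False)  # stratified in the real system
        score, measured = fit_and_score(X[idx], y[idx])
        if time_left() <= 0 or not monotone_constraints_ok(measured):
            break   # training time / pipeline size only grow with more data: stop early
        if history and score < np.median(history):
            break   # pruned: worse than the median configuration seen so far
        n *= 2
    history.append(score if score is not None else -np.inf)
    return score
```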
## 4 Experiments
Our experiments aim to answer the following questions:
1. How does our dynamically configured AutoML system compare to state-of-the-art AutoML systems?
2. How does dynamic AutoML system configuration perform when ML application constraints have been defined?
3. Is alternating sampling more efficient than random sampling for generating the meta-learning training data?
4. How does the number of mined AutoML configurations affect the predictive performance of Caml?
### Setup
We evaluate our approach on the same dataset split as used by Feurer et al. [15]: 39 meta-test datasets and 207 meta-train datasets. To extend our framework for fairness constraints, we add 17 fairness-related datasets provided by Neutatz et al. [37] to the meta-train datasets because common datasets do not annotate sensitive attributes that are required to measure fairness. As test datasets for fairness, we use the five fairness datasets that Ding et al. proposed to benchmark fair ML systems [10]. As a prediction accuracy metric, we leverage balanced accuracy, which can handle binary, multi-class, and unbalanced classification problems. To compare the performance across datasets, we report the average and the standard deviation across datasets by repeatedly sampling, with replacement, one result out of ten runs with different seeds. This approach ensures that we report the uncertainty induced by our system and not the different hardness scales of the datasets. Similarly, we test significance using the Mann-Whitney U rank test with \(\alpha=0.05\) between the repeatedly sampled averages. We mark a number with a star (*) if it passes this test.
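A sketch of this reporting procedure is given below, assuming `scores` has shape (n_datasets, 10) with one balanced-accuracy value per seed; the number of bootstrap draws is arbitrary and the one-sided alternative is an illustrative choice.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def bootstrap_averages(scores, n_boot=1000, seed=0):
    """Repeatedly pick one of the ten seeded runs per dataset (with replacement)
    and average across datasets, isolating the uncertainty induced by the system."""
    rng = np.random.default_rng(seed)
    n_datasets, n_runs = scores.shape
    picks = rng.integers(0, n_runs, size=(n_boot, n_datasets))
    return scores[np.arange(n_datasets), picks].mean(axis=1)

def compare(scores_a, scores_b, alpha=0.05):
    avg_a, avg_b = bootstrap_averages(scores_a), bootstrap_averages(scores_b)
    p = mannwhitneyu(avg_a, avg_b, alternative="greater").pvalue
    return avg_a.mean(), avg_a.std(), p < alpha   # mean, std, significance star
```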
Due to our limited resources, we sample the meta-training dataset for two weeks, which amounts to \(6,915\) meta-training instances in total. Further, we mine AutoML configurations for two weeks using BO for \(2,000\) iterations, which amounts to \(11,911\) AutoML configurations. As AutoML parameter space, Caml chooses (i) the hold-out fraction, which affects both the size of the training and the validation set, (ii) whether to use model ensembling, (iii) whether to use incremental training, (iv) whether to reshuffle the validation split, and (v) the whole adjustable ML hyperparameter space with 302 ML hyperparameters. Note that we count the time required for ensembling toward the search time, as it can be run in parallel to the model search, as performed for Auto-Sklearn2 [15]. We ran the experiments on Ubuntu 16.04 machines with 28 \(\times\) 2.60 GHz cores and 264 GB memory.
Figure 4: Slice of the tree space that we use in our implementation.
**Baselines.** We compare our system with the state-of-the-art AutoML systems:
1. _TPOT_ (0.11.5) is a genetic programming-based AutoML system that optimizes feature preprocessors and ML models [39].
2. _AutoGluon_ (0.3.2) is an AutoML system that focuses on model ensembling and stacking [12].
3. _Auto-Sklearn2_ (0.14.0) [15] is the latest version of the well-known AutoML system Auto-Sklearn1 [16] that leverages BO, meta-learning, and model ensembling to find the Sklearn [41] ML pipelines that achieve high predictive performance. Further, we extended the system to support the constraints for pipeline size, inference/training time, and fairness. We follow the same approach as described in Section 3.6 and only add a model to the ensemble if all constraints are satisfied. This allows a fair comparison of Caml and Auto-Sklearn2.
4. _Spearmint_ [17] leverages BO for constrained optimization with Gaussian processes. We use the implementation by Paleyes et al. [40] and search the same ML hyperparameter space as in our static system.
Furthermore, we evaluate our system with and without dynamic AutoML configuration: Caml _Dynamic_ and Caml _Static_:
1. Caml _Static_. The static version covers the full ML hyperparameter space that is inspired by Auto-Sklearn1 [16]. It does not leverage meta-learning to optimize the search space. The details of the ML hyperparameter space are described in Section 3.4. We use the same ML hyperparameter ranges as Auto-Sklearn1. Further, the static version always leverages hold-out validation with 33% validation data, which is again the default validation strategy by Auto-Sklearn1. Additionally, it always uses model ensembling and incremental training.
2. Caml _Dynamic_ implements our proposed approach. It automatically selects a subset of the full ML hyperparameter space and identifies the hold-out validation fraction, whether to use ensembling, incremental training, and validation split reshuffling.
In the following, we focus on a comparison and insights compared to Auto-Sklearn2 since it is the most similar system compared to ours and considered as one of the strongest systems to date.
### Effectiveness on Search Time Constraints
The most important constraint for AutoML limits the search time, which is a mandatory constraint that AutoML systems require because it is not obvious when to terminate an AutoML system. Therefore, it is crucial that our approach works well for this constraint as it also has to be fulfilled in combination with other constraints. We compare our dynamically configured AutoML system Caml_Dynamic_ with the same AutoML system with the default AutoML configuration Caml_Static_. Additionally, we compare our approach to state-of-the-art AutoML systems to show the potential of our idea of constraint-driven AutoML. We note that this is the only type of constraint easily applicable to all other AutoML systems considered in this study.
#### 4.2.1 Performance Comparison
Table 1 reports the average balanced accuracy for the meta-test datasets over time and across systems. We focus on search times of up to 5min to simulate heavily constrained time settings. Further, shorter search times allow the user to iterate more quickly through various research ideas and investigate how efficient the state-of-the-art systems are with respect to resource consumption in the context of the new paradigm of Green AutoML [51].
First, it is noticeable that Caml with the default AutoML configuration outperforms AutoGluon [12] and TPOT [39]. The reason is that Caml leverages incremental training, which is a multi-fidelity strategy. Therefore, it can yield ML pipelines in a short time, even for large datasets. However, Caml with the default AutoML configuration does not outperform Auto-Sklearn2 [15] for five minutes search time. It is noteworthy that Auto-Sklearn2 is a carefully optimized version of the Auto-Sklearn system [16] with a smaller hand-designed configuration space with six model classes. We also report the performance of Auto-Sklearn2 using the full ML hyperparameter space like Auto-Sklearn1. This version achieves significantly worse predictive performance, which shows that the right choice of the ML hyperparameter space is crucial.
Our approach Caml (Dynamic) with meta-learned AutoML configuration always outperforms all other systems significantly according to the Mann-Whitney U rank test (\(\alpha\) = 0.05). This shows that our objective of dynamically choosing good AutoML configurations was achieved.
In fact, Caml_Dynamic_ selects on average only 55 out of 302 ML hyperparameters for the search space and a 5-minute time frame and still achieves a higher
average balanced accuracy across all experiments. Interestingly, the selected search space shrinks only slightly for shorter search times: with a 10-second search time constraint, 51 ML hyperparameters are considered on average, only four fewer than the 55 selected for 5 minutes.
Yet, the space can also differ significantly between 1 and 5 minutes. Figure 5 shows AutoML configurations that were selected for the datasets "Christine" and "Robert" from the OpenML repository. The visualization follows the hierarchical view that we presented in Section 3.4 and displays the obtained configuration space for 1min and 5min search time, respectively. Comparing the ML hyperparameter spaces, we see that in this case the ML hyperparameter space for 1min search time is smaller than for 5min search time, because a longer search time allows for the optimization of more ML pipeline parameters.
Additionally, for the dataset "Christine", our system chooses the validation fraction 0.13, ensembling, and incremental training. The small validation fraction reduces the time for evaluation. Ensembling makes the predictions more robust against noise. Incremental training ensures that the system finds a suitable ML pipeline early. In addition to incremental training, our system also chose to optimize the size of training set to further reduce the iteration overhead.
For the dataset "Robert", our system chooses the validation fraction 0.54, incremental training, and validation split reshuffling. Validation split reshuffling avoids overfitting. Additionally, our system chose to optimize each class weight individually because the dataset has 10 classes.
#### 4.2.2 Analyzing the Meta-Models
To analyze the meta-models, we computed the meta-feature importance based on impurity scores for the trained random forest meta-model. We list the top-15 meta-features in Table 2 for the classification meta-model. The most important meta-features are the constraint thresholds, in particular, for the pipeline size, and inference/training time. These meta-features are important because different constraints also require different AutoML configurations. This finding supports the aim of this work to consider dynamic AutoML configuration, especially for constrained settings. Another important feature is the hold-out fraction. Especially for large datasets, it is crucial to identify the right sample size to allow the AutoML system to yield any ML pipeline. For instance, for the dataset "KDDCup09 competency" (50k instances), our method chooses a validation fraction of 7% of the data.
The remaining 8th-15th meta-features all cover dataset-specific meta-features, e.g. about the class distributions and the shape of the data. The meta-features representing the ML hyperparameter search space are less important, e.g. the meta-feature of whether to use a specific categorical encoding is the 37th most important feature.
For the regression meta-model, we list the top-15 meta-features in Table 3. The most important meta-features are similar to the ones for the classification meta-model. However, for the regression meta-model,
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
Strategy & 10s & 30s & 1 min & 5 min \\ \hline \hline
\multirow{2}{*}{Caml} & \multirow{2}{*}{Static} & \(0.43\pm 0.02\) & \(0.53\pm 0.02\) & \(0.58\pm 0.01\) & \(0.67\pm 0.01\) \\
& Dynamic & \(\mathbf{0.57\pm 0.01^{*}}\) & \(\mathbf{0.67\pm 0.01^{*}}\) & \(\mathbf{0.70\pm 0.01^{*}}\) & \(\mathbf{0.74\pm 0.00^{*}}\) \\
Auto-Sklearn2 opt. & \(0.00\pm 0.00\) & \(0.11\pm 0.02\) & \(0.48\pm 0.02\) & \(0.74\pm 0.02\) \\
Auto-Sklearn2 full space & \(0.00\pm 0.00\) & \(0.06\pm 0.02\) & \(0.14\pm 0.02\) & \(0.70\pm 0.03\) \\
TPOT & \(0.00\pm 0.00\) & \(0.00\pm 0.00\) & \(0.31\pm 0.03\) & \(0.47\pm 0.04\) \\
AutoGluon & \(0.33\pm 0.02\) & \(0.41\pm 0.01\) & \(0.49\pm 0.01\) & \(0.62\pm 0.01\) \\
Spearmint & \(0.24\pm 0.03\) & \(0.36\pm 0.03\) & \(0.43\pm 0.01\) & \(0.60\pm 0.02\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Search time constraint: Balanced accuracy averaged across 10 repetitions and 39 datasets comparing Caml to state-of-the-art AutoML systems.
\begin{table}
\begin{tabular}{c l r} \hline \hline Rank & Meta-Feature & Importance \\ \hline
1 & pipeline size constraint & 0.072 \\
2 & inference time constraint & 0.053 \\
3 & training time constraint & 0.044 \\
4 & hold-out fraction & 0.036 \\
5 & search time constraint & 0.023 \\
6 & number of evaluations & 0.022 \\
7 & fairness constraint & 0.020 \\
8 & hold-out test instances & 0.017 \\
9 & evaluation time & 0.017 \\
10 & \(|\)instances\(|\) & 0.016 \\
11 & ClassProbabilitySTD & 0.016 \\
12 & DatasetRatio & 0.015 \\
13 & ClassProbabilityMax & 0.015 \\
14 & ClassProbabilityMin & 0.015 \\
15 & ClassEntropy & 0.015 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Meta-feature importances of the classification meta-model.
the meta-features that describe whether to optimize the feature _preprocessor_ and whether to use _incremental training_ also rank among the most important. Both decisions have a significant impact on how much the given AutoML configuration outperforms the default one.
Table 4 contains statistics about how often our system chooses a specific classifier across the 39 datasets and how many classifiers it chooses on average. The first observation is that the meta-model learned that it is beneficial to choose around ten classifiers to achieve high balanced accuracy fast. The Auto-Sklearn2 developers choose only 5 classifiers. However, since our system can decide for every single ML hyperparameter whether to optimize it, the search space stays small in comparison but adjusts itself to the specified dataset. In contrast to building Auto-Sklearn2, this approach is fully automatic and does not require any AutoML system expertise. Auto-Sklearn2 dynamically chooses the validation strategy. Additionally, its ML hyperparameter space has been manually tuned for accuracy and search time. Thus, users who want to apply Auto-Sklearn2 for a new constrained setting would need to adjust the ML hyperparameter search space manually again. Further, we see that ExtraTrees are chosen frequently. The reason is that the computation cost is low and the prediction is robust due to ensembling.
To understand the interaction among these important AutoML parameters, we report in Table 5 the fraction of datasets that a certain AutoML parameter was applied. First, we see that the choice for in
\begin{table}
\begin{tabular}{c l c} \hline \hline Rank & Meta-Feature & Importance \\ \hline
1 & pipeline size constraint & 0.072 \\
2 & inference time constraint & 0.056 \\
3 & training time constraint & 0.052 \\
4 & hold-out fraction & 0.043 \\
5 & search time constraint & 0.024 \\
6 & preprocessor & 0.023 \\
7 & fairness constraint & 0.022 \\
8 & number of evaluations & 0.022 \\
9 & incremental training & 0.021 \\
10 & ClassProbabilitySTD & 0.020 \\
11 & ClassProbabilityMin & 0.016 \\
12 & ClassEntropy & 0.016 \\
13 & ClassProbabilityMax & 0.015 \\
14 & evaluation time & 0.015 \\
15 & RatioNominalToNumerical & 0.015 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Meta-feature importances of the regression meta-model.
Figure 5: Examples of ML hyperparameter spaces chosen by Caml _Dynamic_.
baselines significantly. Only for equal opportunity, Auto-Sklearn2 achieves the best accuracy for very restrictive fairness constraints. The reason is that Auto-Sklearn2 uses Dummy classifiers if it does not find any other model. Dummy classifiers predict only one class. This way it is likely that both the majority and the minority group have very similar true positive rates and therefore very high equal opportunity. However, we decided against including dummy classifiers because users expect an AutoML system to fit actual ML models.
For the inference time and training time constraints, our dynamic approach always outperforms our static approach. For pipeline size constraints, the static approach is better for restrictive thresholds. The reason is that the pipeline size is tightly bound to the training set size and our default approach always uses incremental training. That means that it starts with a very small training dataset. So, if the pipeline size constraint is not satisfied for such a small set, it will move to the next ML hyperparameter configuration immediately. Our meta-model might be too optimistic and try to avoid incremental training if possible because this yields a higher chance of high accuracy but might miss satisfying the constraints.
For fairness constraints, the dynamic and static approach perform similarly. The reason is that fairness is highly data dependent. Without explicit information about the sensitive attributes, it is harder for the meta-model to decide on the AutoML system configuration. Furthermore, the meta-training for fairness had access to much fewer datasets compared to the other constraints. Additional datasets might help the meta-model to generalize better. However, in case of missing values and fairness constraints, Caml independently learned to choose only median value imputation, which supports the finding by Schelter et al. (2015) that mean value imputation negatively affects fairness.
#### 4.3.2 Analysis
To better understand how our system adapts the ML hyperparameter search space depending on the ML application constraints, we average the chosen classifiers for each ML application constraint and compare them to the case with no ML application constraint in Table 7.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Percentile & 2\% & 4\% & 8\% & 16\% & 32\% \\ \hline Pipeline & 4026B & 6651B & 8359B & 16797B & 32266B \\ \hline Auto-Sklearn2 & \(0.01\pm 0.00\) & \(0.01\pm 0.00\) & \(0.01\pm 0.00\) & \(0.01\pm 0.00\) & \(0.01\pm 0.00\) \\ Spearmint & \(0.04\pm 0.02\) & \(0.08\pm 0.02\) & \(0.09\pm 0.03\) & \(0.19\pm 0.03\) & \(0.22\pm 0.03\) \\ Caml & \multirow{2}{*}{Dynamic Static} & \(0.25\pm 0.01\) & \(\mathbf{0.39\pm 0.01^{*}}\) & \(\mathbf{0.43\pm 0.00^{*}}\) & \(\mathbf{0.54\pm 0.01^{*}}\) & \(\mathbf{0.63\pm 0.01^{*}}\) \\ & & \(0.39\pm 0.01\) & \(0.42\pm 0.01\) & \(0.49\pm 0.01\) & \(0.59\pm 0.01\) \\ \hline Training & \multirow{2}{*}{0.009s} & \multirow{2}{*}{0.010s} & \multirow{2}{*}{0.012s} & \multirow{2}{*}{0.019s} & \multirow{2}{*}{0.078s} \\ time & & & & & \\ \hline Auto-Sklearn2 & \(0.00\pm 0.00\) & \(0.00\pm 0.00\) & \(0.00\pm 0.00\) & \(0.00\pm 0.00\) & \(0.01\pm 0.01\) \\ Spearmint & \(0.00\pm 0.01\) & \(0.01\pm 0.01\) & \(0.00\pm 0.01\) & \(0.00\pm 0.01\) & \(0.05\pm 0.02\) \\ Caml & \multirow{2}{*}{Dynamic Static} & \(\mathbf{0.61\pm 0.01^{*}}\) & \(\mathbf{0.62\pm 0.01^{*}}\) & \(\mathbf{0.63\pm 0.01^{*}}\) & \(\mathbf{0.68\pm 0.01^{*}}\) & \(\mathbf{0.71\pm 0.00^{*}}\) \\ & \(0.46\pm 0.02\) & \(0.46\pm 0.02\) & \(0.50\pm 0.02\) & \(0.57\pm 0.02\) & \(0.65\pm 0.01\) \\ \hline Inference & \multirow{2}{*}{0.00079s} & \multirow{2}{*}{0.00082s} & \multirow{2}{*}{0.00102s} & \multirow{2}{*}{0.00146s} & \multirow{2}{*}{0.00302s} \\ time & & & & & \\ \hline Auto-Sklearn2 & \(0.29\pm 0.02\) & \(0.27\pm 0.02\) & \(0.27\pm 0.03\) & \(0.40\pm 0.02\) & \(0.42\pm 0.02\) \\ Spearmint & \(0.02\pm 0.01\) & \(0.02\pm 0.02\) & \(0.02\pm 0.01\) & \(0.02\pm 0.01\) & \(0.06\pm 0.01\) \\ Caml & \multirow{2}{*}{Dynamic Static} & \(\mathbf{0.42\pm 0.02^{*}}\) & \(\mathbf{0.45\pm 0.02^{*}}\) & \(\mathbf{0.57\pm 0.02^{*}}\) & \(\mathbf{0.66\pm 0.01^{*}}\) & \(\mathbf{0.74\pm 0.00^{*}}\) \\ & \(0.25\pm 0.03\) & \(0.26\pm 0.03\) & \(0.38\pm 0.03\) & \(0.52\pm 0.02\) & \(0.64\pm 0.02\) \\ \hline Equal & \multirow{2}{*}{1.000} & \multirow{2}{*}{0.999} & \multirow{2}{*}{0.994} & \multirow{2}{*}{0.981} & \multirow{2}{*}{0.949} \\ Opportunity & & & & & \\ \hline Auto-Sklearn2 & \(\mathbf{0.50\pm 0.00^{*}}\) & \(0.56\pm 0.00\) & \(0.59\pm 0.01\) & \(0.63\pm 0.00\) & \(0.67\pm 0.02\) \\ Spearmint & \(0.17\pm 0.10\) & \(0.19\pm 0.12\) & \(0.35\pm 0.11\) & \(0.57\pm 0.07\) & \(0.58\pm 0.07\) \\ Caml & \multirow{2}{*}{Dynamic Static} & \(0.10\pm 0.00\) & \(\mathbf{0.61\pm 0.05^{*}}\) & \(\mathbf{0.64\pm 0.01^{*}}\) & \(\mathbf{0.67\pm 0.01^{*}}\) & \(\mathbf{0.70\pm 0.00^{*}}\) \\ & \(0.10\pm 0.00\) & \(0.46\pm 0.09\) & \(0.62\pm 0.05\) & \(0.66\pm 0.01\) & \(0.68\pm 0.01\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: We report the balanced accuracy for 5 minutes search time averaged across 10 repetitions and test datasets for four constraints.
### AutoML Configuration Mining
Another important question for our approach is how many promising AutoML configurations we need to mine to achieve high predictive performance. Therefore, we experiment, for the search time constraint of 5min, with various fractions of the AutoML configurations that we mined within two weeks. We report the results in Table 9. With an increasing number of mined AutoML configurations, the predictive performance increases as well. The accuracy gain in percent might seem small but is significant according to the Mann-Whitney U rank test. Further, the more constraints we add, the more diverse the pool of mined AutoML configurations needs to be to achieve high predictive performance across all constraints.
## 5 Related Work
Our work on constraint-driven AutoML combines research from various areas of optimization, AutoML, and meta-learning.
**Constrained Optimization.** One direction of work addresses constrained optimization by learning a surrogate model that estimates whether sampled configurations violate the corresponding constraints [17; 42; 24; 2; 31]. However, this approach has two downsides. First, it requires the surrogate models to learn the constraints each time from scratch. Second, it cannot adjust the parameters of the AutoML systems, such as the validation approach or the search strategy, to the corresponding ML task.
**Meta-Learning.** A more effective approach is to learn upfront whether a given ML pipeline satisfies a well-known constraint, such as training time [33]. This approach does not require learning the constraint each time from scratch. Still, it does not adjust the AutoML parameters. Another direction is to meta-optimize the AutoML parameters. For instance, Lindauer et al. [30] optimize the parameters of hyperparameter optimization. However, they do not consider constraints. Further, Auto-Sklearn 2 [15] only supports predicting discrete strategy decisions using pair-wise modeling. Therefore, their approach does not support continuous AutoML hyperparameters and does not scale to hundreds of settings. This scalability issue also hinders joint strategy prediction as the combinatorial space is too huge. Van Rijn et al. leverage meta-learning to identify the most important hyperparameter for various ML models individually [44]. However, they do not consider constraints.
**Accelerating AutoML.** Further, there is a large effort in the data management community to speed up AutoML systems. For instance, Li et al. propose to leverage search space decomposition [29]. Yakovlev et al. propose to leverage proxy models, iteration-free optimization, and adaptive data reduction to accelerate hyperparameter optimization [54]. Another well-known approach to speed up hyperparameter optimization is to leverage successive halving [27; 14]. It starts by evaluating many configurations on a small budget and incrementally chooses the best half of the configurations to evaluate them on a bigger budget. Xin et al. leverage caching to accelerate hyperparameter optimization [53]. However, their strategies cannot be applied in case of validation split reshuffling. Nakandala et al. propose a new parallel SGD execution strategy to speed up hyperparameter optimization for SGD-based models [34]. All these strategies accelerate the process of hyperparameter optimization and can be used to extend our system, yet are orthogonal to our contribution.
## 6 Conclusion
We proposed integrating constraints as a first-class citizen into AutoML - a paradigm that we call constraint-driven AutoML. As the constraints set limitations on the hyperparameter search, we proposed an approach to dynamically change the AutoML search space for the constraints at hand. To achieve this goal, we leverage active meta-learning. To explore the huge space of datasets, AutoML configurations, and constraints, we sample those combinations that benefit the meta-model. To show the full benefit of this approach, we develop a simple adjustable AutoML system, Caml, that exposes its whole ML hyperparameter space as binary AutoML parameters to have a task-specific search space. This way, Caml _Dynamic_ can decide for every single ML hyperparameter whether it should be optimized or not. It automatically chooses an ML hyperparameter space for search time constraints that is similar to the space covered by the hand-tuned Auto-Sklearn2 [15] system. Overall, our new approach allows for configurable generic AutoML systems that dynamically adjust to the task and constraints at hand, and thus further increase the applicability of AutoML systems in practical application.
|
2304.00908 | Freeze-in of WIMP dark matter | The nature of dark matter (DM) remains one of the most important unanswered
questions in particle physics. Here, we propose a novel scenario for DM in
which weakly interacting massive particles (WIMPs) can freeze-in due to a
first-order phase transition (FOPT) in the early Universe. The FOPT dilutes the
pre-existing DM density to zero and leads to a sudden change in DM mass,
preventing WIMPs from re-equilibrating due to their large mass-to-temperature
ratio. Following the FOPT, WIMPs are produced via a freeze-in process, even
though their interactions are NOT feeble. We demonstrate this concept using a
simplified model and then apply it to a realistic model with a delayed
electroweak phase transition. Our work presents a promising new direction for
the freeze-in mechanism, and also extends the category of WIMP DM. | Xiao-Rui Wang, Ke-Pan Xie | 2023-04-03T11:55:39Z | http://arxiv.org/abs/2304.00908v3 | # Freeze-in of WIMP dark matter
###### Abstract
The nature of dark matter (DM) remains one of the most important unanswered questions in particle physics. Here, we propose a novel scenario for DM in which weakly interacting massive particles (WIMPs) can freeze-in due to a first-order phase transition (FOPT) in the early Universe. The FOPT dilutes the pre-existing DM density to zero and leads to a sudden change in DM mass, preventing WIMPs from re-equilibrating due to their large mass-to-temperature ratio. Following the FOPT, WIMPs are produced via a freeze-in process, even though their interactions are NOT feeble. We demonstrate this concept using a simplified model and then apply it to a realistic model with a delayed electroweak phase transition. Our work presents a promising new direction for the freeze-in mechanism, and also extends the category of WIMP DM.
## I Introduction
Despite its large abundance (\(\sim 27\%\)) in the Universe, the particle origin of dark matter (DM) remains a mystery [1]. One of the most promising theoretical paradigms for DM involves assuming that the DM particle \(X\) can annihilate into Standard Model (SM) particles via the \(2\to 2\) scattering
\[X\,X\to\text{SM SM}. \tag{1}\]
Depending on the strength of the portal interaction between the SM and dark sectors, there are two extensively studied scenarios. In the first scenario, Eq. (1) is in thermal equilibrium in the early Universe, causing DM particles to follow the equilibrium distribution until the temperature drops to \(\sim 1/25\) of the DM mass, at which point the annihilation process decouples and a fixed DM relic abundance remains. This process is known as the freeze-out mechanism of weakly interacting massive particles (WIMPs) [2; 3; 4], which has been the most popular explanation for particle DM. In the second scenario, the initial density of DM is negligibly small, and the interactions are so feeble that DM particles can never reach thermal equilibrium. As a result, DM accumulates via the inverse process of Eq. (1), leading to the freeze-in mechanism of feebly interacting massive particles (FIMPs) [5; 6; 7].
WIMP freeze-out and FIMP freeze-in are two opposite scenarios based on Eq. (1). In this work, we propose a novel scenario based on the same reaction, which is the _freeze-in_ of the WIMPs. By "WIMPs," we mean that the portal interactions are not feeble. Therefore, in the conventional thermal history of the Universe, DM particles inevitably thermalize and freeze-out. However, we suggest that freeze-in of WIMPs can happen if the Universe experiences a supercooled first-order phase transition (FOPT). A FOPT is the transition of the Universe from a metastable false vacuum to a stable true vacuum via bubble nucleation and expansion [8], and its usage is two-fold:
1. A supercooled FOPT releases a huge amount of entropy, which dilutes the preexisting DM density to a negligible level.
2. The WIMP could gain mass from the FOPT, such that after the transition the DM particles have a huge mass-to-temperature ratio and hence an exponentially suppressed Boltzmann factor, which prevents them from thermalizing.
Therefore, after the FOPT, the DM will be accumulatively produced via the inverse process of Eq. (1), which is a typical freeze-in scenario, but it applies to weak or moderate couplings, rather than feeble ones as seen in traditional FIMP freeze-in.
Our work introduces a novel scenario for DM, which is based on the simple \(2\to 2\) annihilation and represents a third possible scenario in addition to the traditional WIMP freeze-out and FIMP freeze-in. This scenario can be applied to many new physics models. As will be demonstrated, this scenario shares the common features from the conventional freeze-in [5], such as IR dominated, i.e. the DM behavior is determined by physics at scales not far from the DM mass, but independent of the UV physics (e.g. the inflation model); and the relic abundance is positively correlated with the coupling strength, independent of the DM mass, allowing DM exceeding the Griest-Kamionkowski (GK) bound (\(\sim 100\) TeV) [9].
## II Freeze-in of WIMPs
To illustrate the idea, we consider a simplified model with a scalar DM candidate \(X\) that interacts with a massless thermal bath scalar \(B\) via the quartic coupling \(\lambda X^{\dagger}XB^{\dagger}B\). In the radiation era, the Boltzmann
equation governing the evolution of \(X\) is
\[\frac{\mathrm{d}Y_{X}}{\mathrm{d}z}=-\sqrt{\frac{\pi g_{s,s}^{2}}{45g_{*}}}\frac{M _{\mathrm{Pl}}m_{X}}{z^{2}}\left\langle\sigma v_{\mathrm{rel}}\right\rangle \left(Y_{X}^{2}-Y_{\mathrm{eq}}^{2}\right), \tag{2}\]
where \(Y_{X}=n_{X}/s\) is the yield of \(X\) with \(s\) being the entropy density, \(z=m_{X}/T\), \(M_{\mathrm{Pl}}=1.22\times 10^{19}\) GeV is the Planck scale, \(g_{*,s}\) and \(g_{*}\) are the numbers the relativistic degrees of freedom for entropy and energy, respectively, \(Y_{\mathrm{eq}}=45z^{2}K_{2}(z)/(4\pi^{4}g_{*,s})\) is the equilibrium yield of \(X\),
\[\left\langle\sigma v_{\mathrm{rel}}\right\rangle=\frac{\lambda^{2}}{32\pi m_{X }^{2}}\left(\frac{K_{1}(z)}{K_{2}(z)}\right)^{2} \tag{3}\]
is the thermal average of the annihilation cross section of \(XX^{\dagger}\to BB^{\dagger}\) multiplying the relative velocity \(v_{\mathrm{rel}}\), and \(K_{i}(z)\) is the \(i\)-th modified Bessel function.
In the conventional thermal history, \(z\) starts from \(\sim 0\) at the end of the inflationary reheating epoch and evolves to \(\gg 1\) to the current Universe. If \(\lambda\) is sufficient to keep \(X\) in equilibrium for \(z\ll 1\), then Eq. (2) realizes the WIMP freeze-out scenario that \(\Omega_{X}h^{2}\sim 0.1\,(0.5/\lambda)^{2}(m_{X}/\mathrm{TeV})^{2}\), which implies an upper limit of \(\sim 100\) TeV for the DM mass due to the unitarity bound of \(\lambda\), known as the GK bound [9]. On the other hand, for feeble \(\lambda\), Eq. (2) explains DM with a FIMP freeze-in scenario that has \(\Omega_{X}h^{2}\sim 0.1\,[\lambda/(2.5\times 10^{-11})]^{2}\), independent of the DM mass.
In our WIMP freeze-in model, there exists a discontinuity in the evolution of \(z\) during the thermal history. Prior to the FOPT, the \(X\) particle is massless, leading to \(z\equiv 0\). Following the transition, however, the DM mass undergoes a sudden change to \(m_{X}\gg T_{2}\), where \(T_{2}\) denotes the temperature after the FOPT. We assume a supercooled FOPT such that \(T_{2}\gg T_{1}\), the temperature at which the FOPT begins. This leads to an enormous increase in entropy density by a factor of \((T_{2}/T_{1})^{3}\), resulting in the preexisting \(X\) density being diluted by \((T_{1}/T_{2})^{3}\). Consequently, the evolution of Eq. (2) begins at \(z_{2}=m_{X}/T_{2}\gg 1\), with an initial condition \(Y_{X}(z_{2})\approx 0\). When \(z_{2}\) is large enough, the \(X\) particles fail to thermalize due to the Boltzmann suppression factor \(e^{-z_{2}}\), even though \(\lambda\) is NOT feeble. Freeze-in then occurs via \(BB^{\dagger}\to XX^{\dagger}\) after the FOPT, and the yield can be approximately solved from Eq. (2) as
\[Y_{\infty}\approx\frac{135\sqrt{5}\lambda^{2}M_{\mathrm{Pl}}}{4096\pi^{15/2}g _{*,s}\sqrt{g_{*}}m_{X}}(1+2z_{2})e^{-2z_{2}}. \tag{4}\]
We immediately see that the relic abundance \(\Omega_{X}h^{2}\propto m_{X}Y_{\infty}\) is proportional to \(\lambda^{2}\) but irrelevant to \(m_{X}\), which are typical features of freeze-in. Importantly, the relic abundance is suppressed by the exponent \(e^{-2z_{2}}\), or more explicitly, \(\Omega_{X}h^{2}\sim 0.1\times\left[\lambda e^{-z_{2}}/(3.5\times 10^{-11}) \right]^{2}(1+2z_{2})\), where we use \(g_{*}\approx g_{*,s}\approx 106.75\). Therefore, even if \(\lambda\) is not feeble, this scenario can still produce a correct DM relic abundance via a large enough \(z_{2}\), and that is the crucial point for the WIMP freeze-in. A formula similar to Eq. (4) can be found in the non-thermal DM production after inflationary reheating [10; 11; 12; 13], which requires an inflationary reheat temperature at TeV scale and hence sets constraints for inflation models. In contrast, our mechanism is not sensitive to UV physics such as inflationary reheat temperature.
Fig. 1 illustrates the three DM scenarios for \(m_{X}=1\) TeV and different \(\lambda\) values, all give the correct DM abundance \(\Omega_{X}h^{2}=0.12\)[1]. The gray dashed line is the equilibrium \(X\) distribution for reference. For \(\lambda\approx 0.65\), the WIMP freeze-out is realized in the blue line; while for \(\lambda\approx 2.6\times 10^{-11}\), the FIMP freeze-in is given in the orange line. Our WIMP freeze-in scenario is described by the red line, corresponding to \(\lambda=0.1\) and \(z_{2}\approx 23.6\). We can see that the DM density starts from zero and increases rapidly to a fixed value at around \(z\sim 25\).
## III Remarks on model-building and phenomenology
The WIMP freeze-in scenario needs a supercooled FOPT. Therefore, it could be realized in a classically conformal (CC) model, whose scalar potential has no quadratic mass term at tree level but a Coleman-Weinberg potential [14] is generated at one-loop level [15; 16; 17]. It is well-known that such models can exhibit supercooled phase transitions [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28], which drastically alter the thermal history of the Universe. As a minimal setup, let us assume \(B\) is also the field that experiences the FOPT, and based on the CC principle its potential can be parametrized as
\[V_{1}(\phi)=V_{\Lambda}+\frac{\lambda_{B}^{2}}{64\pi^{2}}\phi^{4}\left(\log \frac{\phi}{w}-\frac{1}{4}\right), \tag{5}\]
Figure 1: The three DM scenarios realized by the Boltzmann equation (2) for \(m_{X}=1\) TeV with different \(\lambda\). The blue, orange and red lines are WIMP freeze-out (\(\lambda\approx 0.65\)), FIMP freeze-in (\(\lambda\approx 2.6\times 10^{-11}\)) and WIMP freeze-in (\(\lambda=0.1\) and \(z_{2}\approx 23.6\)), respectively. The equilibrium distribution is plotted in gray dashed line.
where \(\phi=\sqrt{2}\,\text{Re}[B]\), \(V_{\Lambda}=\lambda_{B}^{2}w^{4}/(256\pi^{2})\) is the vacuum energy, and \(\lambda_{B}\) receives the contributions from all particles coupling to \(B\). The potential yields a nonzero vacuum expectation value (VEV) \(\langle\phi\rangle\neq 0\), which breaks the CC symmetry spontaneously, and also provides a mass \(m_{X}^{2}=\lambda w^{2}/2\) to the DM.
Across a vast range of parameter space, the potential (5) can trigger a supercooled FOPT from \(\phi=0\) to \(\phi\approx w\) at a temperature of \(T_{1}\ll w\), releasing a significant amount of vacuum energy \(V_{\Lambda}\) and reheating the Universe to \(T_{2}\approx T_{\Lambda}\), where \(\pi^{2}g_{*}T_{\Lambda}^{4}/30=V_{\Lambda}\). By substituting these equations into Eq. (4) and requiring the correct DM abundance, we obtain
\[\lambda_{B}\approx 0.189\,\lambda^{0.881}, \tag{6}\]
which provides a relation between the DM portal coupling \(\lambda\) and the effective potential coupling \(\lambda_{B}\), and serves as a guide for model building. For example, if we would like to build a model with minimal particle content, then \(\lambda_{B}\sim\lambda\) and Eq. (6) yields \(\lambda\sim 10^{-6}\), which is the expected coupling strength for WIMP freeze-in. If one instead favors a coupling at the scale of the electroweak (EW) gauge coupling, e.g. \(\lambda=0.1\), then Eq. (6) estimates \(\lambda_{B}\approx 0.025<\lambda\), implying additional fermionic degrees of freedom coupled to \(B\), which provide negative contributions to \(\lambda_{B}\).
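Eq. (6) can be used directly as such a guide; a small numeric check (a sketch, not from the paper) reproduces the two cases discussed above, namely \(\lambda_{B}\approx 0.025\) for \(\lambda=0.1\) and \(\lambda\sim 10^{-6}\) for the minimal-content case \(\lambda_{B}\sim\lambda\). The fixed-point iteration is just one convenient solver choice.

```python
def lam_B(lam):
    # Eq. (6): relation between the DM portal coupling and the effective
    # potential coupling required for the observed relic abundance.
    return 0.189 * lam ** 0.881

# EW-scale portal coupling quoted in the text:
print(f"lambda = 0.1  ->  lambda_B = {lam_B(0.1):.3f}")   # ~0.025

# "Minimal particle content" case lambda_B ~ lambda: solve lambda = 0.189 * lambda^0.881
# by fixed-point iteration (the map is a contraction near the solution).
lam = 0.1
for _ in range(200):
    lam = lam_B(lam)
print(f"fixed point of lambda_B(lambda) = lambda:  lambda ~ {lam:.1e}")   # ~1e-6
```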
Our scenario can be probed through direct [29], indirect [30] and collider [31] searches, as in the traditional WIMP scenario. In addition, the stochastic Gravitational Wave (GW) background provides another way to probe this scenario, since supercooled FOPTs generate strong GW signals [24; 25]. Combining GW observations with traditional WIMP search strategies could efficiently probe the scenario.
## IV A realistic model
We extend the SM with three gauge singlets: a real scalar \(\phi\), a complex scalar \(X\), and a Dirac fermion \(\psi\). The tree-level potential is [32; 33; 34; 35]
\[V=\lambda_{h}|H|^{4}+\lambda_{x}|X|^{4}+\frac{\lambda_{\phi}}{4 }\phi^{4}\\ +\lambda_{hx}|H|^{2}|X|^{2}+\frac{\lambda_{h\phi}}{2}\phi^{2}|H|^ {2}+\frac{\lambda_{\phi x}}{2}\phi^{2}|X|^{2}, \tag{7}\]
where \(H\) is the SM Higgs doublet, \(X\) is the DM candidate, and \(\phi\) plays the role of \(B\). The fermion singlet couples to other particles via the Yukawa interactions \(\mathcal{L}_{1}\supset-(y_{\psi}/\sqrt{2})\phi\bar{\psi}\psi-y_{\psi}\bar{\ell}_{L}\tilde{H}\psi\), with \(\ell_{L}\) the SM lepton doublet. The quadratic terms are absent in Eq. (7), leading to CC invariance. However, a Coleman-Weinberg potential is generated at one-loop level, which amounts to replacing \(\lambda_{B}^{2}\) with \((\lambda_{\phi x}^{2}-2y_{\psi}^{4})\) in Eq. (5). This leads to \(\langle\phi\rangle=w\) and the breaking of the CC symmetry.
The nonzero \(\langle\phi\rangle\) triggers the EW symmetry breaking via the negative mixing term \(\lambda_{h\phi}\approx-m_{h}^{2}/w^{2}\), as the Higgs direction potential then becomes \(-(m_{h}^{2}/2)|H|^{2}+\lambda_{h}|H|^{4}\), leading to \(\langle h\rangle=v_{\text{EW}}\approx 246\) GeV, where \(h\) is the real part of the neutral component of \(H\). For \(w\gg v_{\text{EW}}\), \(|\lambda_{h\phi}|\) is tiny, so the back-reaction of the Higgs on \(\phi\) in Eq. (5) is neglected. The vacuum is then \((\phi,h)=(w,v_{\text{EW}})\), and around the vacuum there is one Dirac fermion \(\psi\) with \(m_{\psi}=y_{\psi}w/\sqrt{2}\) and three massive scalar bosons, namely the \(\phi\) boson with a mass of \(m_{\phi}\approx\sqrt{\lambda_{\phi x}^{2}-2y_{\psi}^{4}}\,w/(4\pi)\), the DM \(X\) with a mass of \(m_{X}=\sqrt{(\lambda_{\phi x}w^{2}+\lambda_{hx}v_{\text{EW}}^{2})/2}\), and the Higgs boson with a mass of \(m_{h}\approx 125\) GeV. \(X\) enjoys an unbroken \(\mathbb{Z}_{2}\) symmetry, which ensures its stability.
In the early Universe, the scalar potential receives thermal corrections and becomes [35]
\[V_{T}(\phi)=V_{1}(\phi)+\frac{2T^{4}}{2\pi^{2}}J_{B}\left(\frac{ \lambda_{\phi x}\phi^{2}}{2T^{2}}\right)+\frac{4T^{4}}{2\pi^{2}}J_{F}\left( \frac{y_{\psi}^{2}\phi^{2}}{2T^{2}}\right)\\ -\frac{2T}{12\pi}\left(\frac{\lambda_{\phi x}}{2}\right)^{3/2} \left[\left(\phi^{2}+\frac{T^{2}}{12}\right)^{3/2}-\phi^{3}\right], \tag{8}\]
where the bosonic and fermionic thermal integrals are
\[J_{B/F}(y)=\pm\int_{0}^{\infty}x^{2}dx\ln\left(1\mp e^{-\sqrt{x^{2}+y}}\right). \tag{9}\]
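These thermal integrals are straightforward to evaluate numerically. The sketch below (not from the paper) uses SciPy quadrature to check the field-independent constants and the small-\(y\) slopes \(\pi^{2}/12\) and \(\pi^{2}/24\) used in the next paragraph; the finite upper limit of 60 and the finite-difference step are numerical conveniences.

```python
import numpy as np
from scipy.integrate import quad

def J_B(y):
    # Bosonic thermal integral, with the sign convention of Eq. (9).
    f = lambda x: x**2 * np.log(1.0 - np.exp(-np.sqrt(x**2 + y)))
    return quad(f, 0.0, 60.0)[0]          # integrand ~ -x^2 e^{-x}: tail beyond 60 is negligible

def J_F(y):
    # Fermionic thermal integral, with the sign convention of Eq. (9).
    f = lambda x: x**2 * np.log(1.0 + np.exp(-np.sqrt(x**2 + y)))
    return -quad(f, 0.0, 60.0)[0]

# Field-independent pieces (standard results): J_B(0) = -pi^4/45, J_F(0) = -7 pi^4/360.
print(J_B(1e-8), -np.pi**4 / 45)
print(J_F(1e-8), -7 * np.pi**4 / 360)

# Small-y slopes used in the text: dJ_B/dy -> pi^2/12 and dJ_F/dy -> pi^2/24 as y -> 0.
y0, h = 1e-4, 5e-5
print((J_B(y0 + h) - J_B(y0 - h)) / (2 * h), np.pi**2 / 12)   # agrees to ~1% (next term is O(sqrt(y)))
print((J_F(y0 + h) - J_F(y0 - h)) / (2 * h), np.pi**2 / 24)
```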
As \(J_{B}(y)\sim\pi^{2}y/12\) and \(J_{F}(y)\sim\pi^{2}y/24\) for \(y\ll 1\), the thermal potential has a positive quadratic term \(V_{T}\approx(\lambda_{\phi x}+y_{\psi}^{2})T^{2}\phi^{2}/24\) at \(\phi\sim 0\), which is a local minimum. When \(T\gg w\), the Universe stays in this \((\phi,h)=(0,0)\) vacuum, and the EW symmetry is restored. As \(T\) drops, the potential develops another local minimum at \(\phi\sim w\), which eventually becomes a true vacuum, i.e. the global minimum. The two vacua are separated by a potential barrier so that a smooth transition between them is not allowed. Therefore, the Universe will decay from \(\phi=0\)
Figure 2: The thermal history of the realistic model in the field space. The Universe is trapped in the origin down to \(T_{\text{QCD}}\approx 85\) MeV, when the QCD-EW FOPT occurs. Then at \(\sim T_{1}\), the \(\phi\)-FOPT happens and the Universe rolls down to the true vacuum, reheating the Universe to \(T_{2}\).
to \(\phi\sim w\) via quantum tunneling, resulting in a FOPT along the \(\phi\) direction in the field space. However, the tunneling probability is suppressed by \(e^{-S_{3}/T}\) while \(S_{3}/T\) is greatly enhanced when \(\lambda_{\phi x}\) and \(y_{\psi}\) are small, and hence the direct \(\phi\)-FOPT cannot occur. We have checked this is the case for the parameter space of interest, and the Universe is trapped in the false vacuum \(\langle\phi\rangle=0\) down to a very low temperature.
If the Universe stays at the origin of the \(\phi\)-\(h\) space until \(T_{\rm QCD}\approx 85\) MeV, then the QCD confinement phase transition happens [36]. This is a FOPT, as there are \(N_{f}=6\) massless quarks in the plasma [37]. The QCD FOPT then triggers the EW phase transition of the Higgs field via the top Yukawa coupling \(-(y_{t}/\sqrt{2})\bar{t}th\), which generates a VEV \(\langle h\rangle=v_{\rm QCD}=(y_{t}\left\langle\bar{t}t\right\rangle/\sqrt{2}\lambda_{h})^{1/3}\)[20; 21]. Hence this is actually a QCD-EW FOPT, which changes the vacuum from \((\phi,h)=(0,0)\) to \((0,v_{\rm QCD})\). The potential near this vacuum then becomes \(V_{T}\approx[(\lambda_{\phi x}+y_{\psi}^{2})T^{2}+6\lambda_{h\phi}v_{\rm QCD}^{2}]\phi^{2}/24\), which still traps the \(\phi\) field at its origin until the Universe cools to
\[T_{1}=v_{\rm QCD}\sqrt{\frac{-6\lambda_{h\phi}}{\lambda_{\phi x}+y_{\psi}^{2} }}\approx v_{\rm QCD}\frac{m_{h}}{w}\sqrt{\frac{6}{\lambda_{\phi x}+y_{\psi}^ {2}}}, \tag{10}\]
and the quadratic term vanishes. At \(\sim T_{1}\), the Universe tunnels along the \(\phi\) direction and then rolls down to the true vacuum \((\phi,h)\approx(w,v_{\rm EW})\), leading to a \(\phi\)-FOPT that reheats the Universe to \(T_{2}\). The thermal history in the \(\phi\)-\(h\) field space is sketched in Fig. 2. This sequence of a QCD-EW FOPT followed by a \(\phi\)-FOPT was first proposed in Ref. [20], and it provides an excellent environment for the realization of the WIMP freeze-in scenario.
Before the QCD-EW FOPT, the mass of DM candidate \(X\) is zero, and its number density \(\propto T^{3}\). After the transition, \(X\) gains a mass \(m_{X}^{\prime}=v_{\rm QCD}\sqrt{\lambda_{hx}/2}\), and the density is suppressed by the Boltzmann factor \(e^{-m_{X}^{\prime}/T}\). Then the \(\phi\)-FOPT increases the \(X\) mass to \(m_{X}\approx\sqrt{(\lambda_{\phi x}w^{2}+\lambda_{hx}v_{\rm EW}^{2})/2}\) and reheats the Universe to \(T_{2}\). The yield of \(X\) is diluted to be
\[Y_{X}(z_{2})=Y_{\rm eq}(z_{1})\frac{T_{2}}{T_{\Lambda}}\left(\frac{T_{1}}{T_{ \Lambda}}\right)^{3}, \tag{11}\]
where \(z_{1}=m_{X}^{\prime}/T_{1}\), \(z_{2}=m_{X}/T_{2}\), and \(T_{\Lambda}\) is defined as the temperature at which the radiation energy equals the vacuum energy, i.e. \(\pi^{2}g_{*}T_{\Lambda}^{4}/30=V_{\Lambda}\). When \(T<T_{\Lambda}\), the Universe enters a vacuum-dominated epoch and expands exponentially, known as thermal inflation [38; 39; 40]. The FOPT reheat temperature is \(T_{2}=T_{\Lambda}\min\{1,\Gamma/H\}^{1/2}\), with \(\Gamma=\Gamma_{h}\sin^{2}\theta+\Gamma_{\phi}\cos^{2}\theta\), where \(\Gamma_{h,\phi}\) are the decay widths of \(h\) and \(\phi\), respectively, and \(\theta\approx-v_{\rm EW}/w\) is the mixing angle [41]. In the parameter space of interest, the reheating is prompt and hence \(T_{2}=T_{\Lambda}\). We also check that \(T_{2}/w\lesssim 10^{-2}\), and hence \(\langle\phi\rangle\) is not affected by the FOPT reheating, i.e. the vacuum will not decay back to \(\phi=0\).
The \(\phi\)-FOPT dilutes the \(X\) density and generates the \(z_{2}=m_{X}/T_{2}\) gap. To realize the WIMP freeze-in DM scenario, the diluted \(X\) density after the FOPT, at \(T_{2}\), should be negligibly small; \(X\) is then produced via \(\phi\phi\to XX^{\dagger}\) and \(hh\to XX^{\dagger}\) from the \(\phi\) and \(h\) particles in the thermal bath, with production cross sections determined by the masses of the bosons and the couplings \(\lambda_{\phi x}\) and \(\lambda_{hx}\). Taking \(v_{\rm QCD}=100\) MeV and \(w=10\) TeV as a benchmark, given a set of \((m_{X},\lambda_{hx})\), we derive the \(\lambda_{\phi x}\) and \(y_{\psi}\) required for the correct DM relic abundance. The parameter space that can realize our scenario is plotted in Fig. 3.
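To see how the quantities of this section chain together for the benchmark \(v_{\rm QCD}=100\) MeV and \(w=10\) TeV, the following sketch (not from the paper) evaluates \(m_{X}\), \(V_{\Lambda}\), \(T_{\Lambda}\), \(z_{2}\), \(T_{1}\), and the dilution factor using the formulas of this section. The values \(\lambda_{\phi x}=\lambda_{hx}=y_{\psi}=0.1\) are assumptions for illustration only and are not tuned to reproduce \(\Omega_{X}h^{2}=0.12\).

```python
import math

# Benchmark inputs from the text
v_qcd = 0.1       # GeV  (v_QCD = 100 MeV)
w     = 1.0e4     # GeV  (w = 10 TeV)
v_ew  = 246.0     # GeV
m_h   = 125.0     # GeV
g_star = 106.75

# Illustrative couplings (assumed here, not a fit to Omega h^2 = 0.12)
lam_phix, lam_hx, y_psi = 0.1, 0.1, 0.1

# Zero-temperature quantities
m_X   = math.sqrt((lam_phix * w**2 + lam_hx * v_ew**2) / 2.0)
lam_B = math.sqrt(lam_phix**2 - 2.0 * y_psi**4)            # effective CW coupling
V_lam = lam_B**2 * w**4 / (256.0 * math.pi**2)             # vacuum energy
m_phi = lam_B * w / (4.0 * math.pi)

# Thermal history: T_Lambda from pi^2 g_* T^4 / 30 = V_Lambda; prompt reheating -> T_2 = T_Lambda
T_lam = (30.0 * V_lam / (math.pi**2 * g_star)) ** 0.25
T_2   = T_lam
z_2   = m_X / T_2

# Temperature at which the phi-FOPT can proceed, Eq. (10)
T_1 = v_qcd * (m_h / w) * math.sqrt(6.0 / (lam_phix + y_psi**2))

dilution = (T_1 / T_lam) ** 3    # entropy dilution of the preexisting X density

print(f"m_X      = {m_X:10.1f} GeV")
print(f"m_phi    = {m_phi:10.1f} GeV")
print(f"T_Lambda = {T_lam:10.2f} GeV,  z_2 = m_X/T_2 = {z_2:.1f}")
print(f"T_1      = {T_1*1e3:10.2f} MeV")
print(f"(T_1/T_Lambda)^3 = {dilution:.2e}")
```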
If \(\lambda_{hx}\) is too large, \(X\) would return to equilibrium after the \(\phi\)-FOPT reheating and then experience the normal freeze-out; the corresponding parameter space is covered in black. Another necessary condition is a negligible \(X\) density after the dilution, and the region where the dilution cannot achieve \(Y_{X}(z_{2})<Y_{\rm DM}\) is covered in gray in Fig. 3. The white region satisfies all conditions and realizes the WIMP freeze-in scenario, and the corresponding \(z_{2}\)'s are shown as red contours. One obtains \(\lambda_{hx}\), \(\lambda_{\phi x}\sim 0.1\), as expected from Eq. (6). For such \((m_{X},\lambda_{hx})\) values, the traditional Higgs-portal WIMP freeze-out scenario yields a DM abundance \(\sim\mathcal{O}(10-10^{2})\) times larger than the observed value [42].
The spin-independent \(X\)-nucleon elastic scattering cross section is \(\sigma_{\rm SI}\sim 10^{-48}\) cm\({}^{2}\), which is challenging for direct detection. Nonetheless, a considerable fraction of the parameter space has a \(\sigma_{\rm SI}\) larger than the neutrino floor (shown as the blue dashed line) and hence might be probed by future experiments [43]. Another feature of the model is a light \(\phi\) boson, leading to the Higgs exotic decay \(h\to\phi\phi\) with \(\phi\to\) SM SM. The projected reach is Br\((h\to\phi\phi)\approx 4\%\) at the HL-LHC [44], plotted as the dashed orange line in Fig. 3. Besides, since the
Figure 3: The parameter space that realizes the WIMP freeze-in scenario. The red contours are \(z_{2}\) that give the correct DM density. The gray-shaded region cannot dilute the preexisting DM to a negligible level, while the black-shaded region cannot prevent the DM from thermalization. The parameter space that can be probed by direct detection and Higgs exotic decay is plotted with the blue and orange lines, respectively.
ratio of the \(\phi\)-FOPT latent heat to the radiation energy is typically \(\alpha\gtrsim 10^{14}\), the GWs are expected to be very strong and to come mainly from bubble collisions [24; 25]. Taking the ratio of the inverse FOPT duration to the Hubble rate, \(\beta/H=100\), as a benchmark, we estimate the GW spectra [45; 46]. After the cosmological redshift, the GWs peak at \(f\sim 10^{-3}\) Hz, within the sensitivity range of a few near-future space-based interferometers, including LISA [47], TianQin [48; 49], Taiji [50; 51], BBO [52] and DECIGO [53]. The projected reach of one year of LISA operation covers the parameter space in Fig. 3, and TianQin and Taiji are expected to have similar reach.
## V Conclusion
In this work, we propose a novel DM scenario based on the simple \(2\to 2\) annihilation process, showing that WIMP freeze-in is viable with the assistance of a FOPT. We have demonstrated the main features of our idea through a simplified model and provided a simple guide for model building. We then realized the scenario in a realistic model with a delayed electroweak phase transition and discussed its phenomenology. Although we illustrate the idea with scalar DM, a similar discussion applies to fermion DM as well. Our work opens up a third possibility for realizing DM besides the traditional WIMP freeze-out and FIMP freeze-in mechanisms, allowing for WIMPs with mass beyond the GK bound.
It is known that DM evolution is affected by non-standard thermal history such as early matter era [54], second-order phase transitions [55; 56; 57], non-thermal production after inflationary reheating [11; 12; 13], and FOPTs can leave great impacts on WIMP freeze-out or FIMP freeze-in via the change of particle masses [58; 59; 60; 61]. Besides, FOPTs alter the decay of DM [62; 63; 64; 65; 66], produce DM non-thermally [67; 68; 69], filter the DM particles [70; 71; 72], dilute the DM density [41; 73], or form macroscopic DM candidate including primordial black holes [74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88]. The WIMP freeze-in proposed in this work provides a new connection between FOPTs and DM, and the mechanism is quite general that it applies to many new physics models. The scenario can be tested by combining the WIMP detection and GW experiments.
## Acknowledgement
We thank Iason Baldes, Huai-Ke Guo, Kiyoharu Kawana, Tong Li, Wei Liu, Kengo Shimada and Tao Xu for the very useful discussions.
|
2310.15524 | On the Inherent Privacy Properties of Discrete Denoising Diffusion
Models | Privacy concerns have led to a surge in the creation of synthetic datasets,
with diffusion models emerging as a promising avenue. Although prior studies
have performed empirical evaluations on these models, there has been a gap in
providing a mathematical characterization of their privacy-preserving
capabilities. To address this, we present the pioneering theoretical
exploration of the privacy preservation inherent in discrete diffusion models
(DDMs) for discrete dataset generation. Focusing on per-instance differential
privacy (pDP), our framework elucidates the potential privacy leakage for each
data point in a given training dataset, offering insights into how the privacy
loss of each point correlates with the dataset's distribution. Our bounds also
show that training with $s$-sized data points leads to a surge in privacy
leakage from $(\epsilon, O(\frac{1}{s^2\epsilon}))$-pDP to $(\epsilon,
O(\frac{1}{s\epsilon}))$-pDP of the DDM during the transition from the pure
noise to the synthetic clean data phase, and a faster decay in diffusion
coefficients amplifies the privacy guarantee. Finally, we empirically verify
our theoretical findings on both synthetic and real-world datasets. | Rongzhe Wei, Eleonora Kreačić, Haoyu Wang, Haoteng Yin, Eli Chien, Vamsi K. Potluru, Pan Li | 2023-10-24T05:07:31Z | http://arxiv.org/abs/2310.15524v3 | # On the Inherent Privacy Properties of Discrete Denoising Diffusion Models
###### Abstract
Privacy concerns have led to a surge in the creation of synthetic datasets, with diffusion models emerging as a promising avenue. Although prior studies have performed empirical evaluations on these models, there has been a gap in providing a mathematical characterization of their privacy-preserving capabilities. To address this, we present the pioneering theoretical exploration of the privacy preservation inherent in _discrete diffusion models_ (DDMs) for discrete dataset generation. Focusing on per-instance differential privacy (pDP), our framework elucidates the potential privacy leakage for each data point in a given training dataset, offering insights into data preprocessing to reduce privacy risks of the synthetic dataset generation via DDMs. Our bounds also show that training with \(s\)-sized data points leads to a surge in privacy leakage from \((\epsilon,\mathcal{O}(\frac{1}{s^{2}\epsilon}))\)-pDP to \((\epsilon,\mathcal{O}(\frac{1}{s\epsilon}))\)-pDP during the transition from the pure noise to the synthetic clean data phase, and a faster decay in diffusion coefficients amplifies the privacy guarantee. Finally, we empirically verify our theoretical findings on both synthetic and real-world datasets.
## 1 Introduction
Discrete tabular or graph datasets with categorical attributes are prevalent in many privacy-sensitive domains [1, 2, 3, 4, 5], including finance [6, 7], e-commerce [8, 9], and medicine [10, 11, 12]. For instance, medical researchers often collect patient data, such as race, gender, and medical conditions, in a discrete tabular form. However, using and sharing data in these domains carry the risk of revealing personal information [13]. Studies have shown that it is possible to re-identify individuals in supposedly de-identified healthcare data [14, 15]. To address these types of concerns, publishing synthetic datasets with privacy guarantees has been proposed as a way to protect sensitive information and to reduce the risk of privacy leakage [16, 17, 18, 19].
Previous research has explored discrete synthetic database releasing methods [20, 21, 22]. Many of these methods employ data anonymization techniques [23, 24, 25, 26] or focus on private statistics/statistical models [27, 28, 29, 30]. In the former category, k-anonymization [23] directly works on anonymizing categorical features but it can be vulnerable to the attackers with background knowledge [31]. Alternatively, methods using private statistics or models concentrate on sharing specific private statistics [30] or privatizing model parameters [32, 33]. However, these techniques can sometimes misrepresent the original distribution or reduce sample quality by adding noise directly to model parameters.
Neural network (NN)-based generative models have been leveraged in various domains on account of their ability in learning underlying distributions [34]. Recently, discrete diffusion models (DDMs) [35, 36, 37, 38], as a typical representative of diffusion models (DMs), have emerged as a powerful class of generative models for discrete data and demonstrate great potential to generate samples with striking performance [39, 40]. DDMs are latent variable generative models that employ both a forward and reverse Markov process (See Fig. 1). In the forward diffusion process, each discrete sample is gradually corrupted with dimension-wise independent noise. This is often implemented through the use of progressive transition kernels, which yields not only high fidelity-diversity trade-offs but also robust training objectives [41]. On the other hand, the reverse process learns denoising neural networks that aim to predict the noise and reconstruct the
original sample. Despite the impressive performance of DDMs, it is still unclear whether DDMs trained on sensitive datasets can be safely used to generate synthetic samples.
Efforts have empirically examined the privacy implications of DMs. While previous literature suggests that DMs generate synthetic training data to address privacy concerns [42, 43], recent studies have shown that DMs may not be suitable for releasing private synthetic data. Specifically, Wu et al. [44] and Hu et al. [45] conduct membership inference attacks on DMs for text-to-image tasks and demonstrate that membership inference poses a severe threat in diffusion-based generation. Besides, studies show that DMs can memorize training samples [46, 47]. Although there exist practical observations for privacy properties of DMs, there is limited research aimed at mathematically characterizing the privacy guarantees of data generated by DMs. Moreover, understanding privacy guarantees may guide practitioners to determine whether additional mechanisms, such as DP-SGD [48], PATE [49], should be incorporated to meet practical privacy requirements.
Differential privacy (DP) [50, 51], the most commonly used framework to characterize the privacy guarantee of an algorithm, is derived from the worst-case dataset. However, in the context of synthetic data sharing, the learned data distribution to generate synthetic data strongly depends on the empirical distribution of the data points used for training. Therefore, a privacy guarantee that may incorporate the allocation of data points in the given training dataset is needed, which is likely to offer a far more accurate privacy characterization than the worst-case analysis.
In this paper, we take the first step to analyze the privacy guarantees of DDMs for a fixed training dataset. Specifically, we leverage the data-dependent privacy framework termed per-instance differential privacy (pDP), which is defined upon an instance in a fixed training dataset as outlined by [52]. The analysis of pDP allows for a characterization of the potential privacy leakage of each data point in the training set. This offers practitioners insights into detecting and preprocessing the sensitive data points of the dataset to reduce the privacy risk of the synthetic data generated by DDMs.
Our analysis considers a DDM trained on \(s\) samples that generates \(m\) samples, and we keep track of the privacy leakage in each generation step. We prove that as the data generation step transitions from \(t=T\) (noisy regime) to \(t=0\) (noise-free regime), the privacy leakage increases from \((\epsilon,\mathcal{O}(\frac{m}{s^{2}\epsilon}))\)-pDP to \((\epsilon,\mathcal{O}(\frac{m}{s\epsilon}))\)-pDP, where the data-dependent term is hidden in the big-\(\mathcal{O}\) notation. Consequently, the final few generation steps (\(\alpha_{t}\to 1\) in Fig. 1) dominate the main privacy leakage in DDMs. Moreover, a faster decay in the diffusion coefficients yields better privacy preservation. Both synthetic and real dataset evaluations validate our theoretical findings.
For the data-dependent part, we develop a practical algorithm to estimate the privacy leakage of each data point in real-world datasets according to our pDP bounds. We evaluate the data-dependent part by removing the most sensitive data points (according to our data-dependent privacy parameters) from the dataset to train a DDM, and then evaluating the ML models trained based on the synthetic dataset generated by the DDM. Interestingly, we observe that the ML models obtained after a part of data removal can even outperform others without such data removal. We attribute this to the fact that the removed data points are likely outliers which may be actually not good for ML models to learn from. This illustrates another potentially valuable usage of our data-dependent analysis.
### More Related Work
To date, there have been studies on NN-based private models, but few analyze the inherent privacy of the model itself. In [53], it was shown that a vanilla GAN trained on \(s\) samples inherently satisfies a weak \((\epsilon,\mathcal{O}(\frac{m}{s\epsilon}))\)-DP guarantee when releasing \(m\) samples. In this work, our results demonstrate that DDMs provide weak privacy guarantees of the same order as GANs. Note, however, that [53] did not provide a data-dependent bound; their bounds are stated only in order form and
Figure 1: **An Illustration of Discrete Diffusion Models (DDMs).**
cannot be explicitly computed for a given training dataset. Because of such weak inherent privacy, there have been efforts to bring additional privacy techniques into the model, such as DP-SGD [48]. Xie et al. [54] proposed DPGAN, which integrates a modified DP-SGD into WGAN to ensure privacy for GAN-generated samples. Dockhorn et al. [55] applied DP-SGD to privatize model parameters in continuous DMs for image data without analyzing the inherent privacy of DMs. Recently, Ghalebikesabi et al. [56] showed that fine-tuning a pre-trained diffusion model with DP-SGD can generate verifiably private synthetic data for the dataset used for fine-tuning.
While many existing analyses and privacy techniques are based on the standard DP framework, they often overestimate actual privacy loss [52]. When it comes to publishing synthetic data whose distribution is closely related to the specific training dataset, data-dependent privacy becomes far more relevant. Traditional data-dependent DP methods, such as smooth sensitivity [57] and propose-test-release [58], add DP noise of a scale based on the specific dataset. These methods typically add excessive noise to protect private information potentially leaked from such added data-dependent noise. In contrast, our scenario does not add noise to the model. Instead, we analyze the privacy due to the inherent randomness of the data generation process.
## 2 Preliminaries
We start by introducing notations and concepts for analysis. Let \([n]=\{1,2,...,n\}\) and \(\mathcal{X}^{n}\) represent an \(n\)-dimensional discrete space with each dimension having \(k\) categories, i.e. \(\mathcal{X}^{n}:=\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{n}\) with \(\mathcal{X}_{i}=[k],i\in[n]\). We assume that training datasets \(\mathcal{V}\) reside in \(\mathcal{X}^{n}\), implying samples are vector-valued data of \(n\) entries, each from one of the \(k\) categories. Although we assume consistent categories across columns, our analysis can account for datasets with varied category counts using the maximum category count.
**Per-instance Differential Privacy.** DP [50, 51] is a de-facto standard to quantify privacy leakage. We adapt DP definition for specific adjacent datasets, introducing per-instance DP:
**Definition 1** (\((\epsilon,\delta)\)-Per-instance Differential Privacy (pDP) [52]).: Let \(\mathcal{V}_{0}\) be a training dataset, \(\mathbf{v}^{*}\in\mathcal{V}_{0}\) be a fixed point and \(\mathcal{M}\) be a randomized mechanism. Define adjacent dataset \(\mathcal{V}_{1}=\mathcal{V}_{0}\backslash\{\mathbf{v}^{*}\}\). We say \(\mathcal{M}\) satisfies (\(\epsilon,\delta\))-pDP with respect to \((\mathcal{V}_{0},\mathbf{v}^{*})\) if for all measurable set \(\mathcal{O}\subset range(\mathcal{M})\), \(\{i,j\}=\{0,1\}\):
\[\mathcal{P}(\mathcal{M}(\mathcal{V}_{i})\in\mathcal{O})\leq e^{\epsilon} \mathcal{P}(\mathcal{M}(\mathcal{V}_{j})\in\mathcal{O})+\delta. \tag{1}\]
It is important to highlight that pDP is uniquely defined for a specific dataset-data point pair. This capability is crucial for understanding the privacy leakage of the given dataset and guiding data preprocessing, as elaborated in Sec. 4. Additionally, by taking the supremum over all conceivable datasets \(\mathcal{V}_{0}\) and points \(\mathbf{v}^{*}\), we can obtain DP from pDP (Theorem F.1). A more comprehensive discussion of the DP guarantees associated with DDMs is provided in Appendix. F.
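For a finite output space, Definition 1 can be made fully concrete: the tightest \(\delta\) for which Eq. (1) holds at a given \(\epsilon\) is the hockey-stick divergence \(\sum_{o}\max\{\mathcal{P}(\mathcal{M}(\mathcal{V}_{i})=o)-e^{\epsilon}\mathcal{P}(\mathcal{M}(\mathcal{V}_{j})=o),0\}\), maximized over the two orderings \(\{i,j\}=\{0,1\}\). The sketch below (not code from the paper) illustrates this with toy output distributions.

```python
import numpy as np

def smallest_delta(p, q, eps):
    """Tightest delta such that P(O) <= e^eps * Q(O) + delta for all events O,
    for discrete distributions p, q over the same finite output space."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.maximum(p - np.exp(eps) * q, 0.0).sum())

def pdp_delta(p0, p1, eps):
    # Per-instance DP requires the inequality for both orderings (i, j) = (0, 1) and (1, 0).
    return max(smallest_delta(p0, p1, eps), smallest_delta(p1, p0, eps))

# Toy example: output distributions of some mechanism run on V_0 and on V_1 = V_0 \ {v*}.
p0 = np.array([0.50, 0.30, 0.20])
p1 = np.array([0.40, 0.35, 0.25])
for eps in (0.0, 0.1, 0.5):
    print(eps, pdp_delta(p0, p1, eps))   # at eps = 0 this reduces to the total variation distance
```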
**Discrete Diffusion Models.** DDMs [34, 35, 38, 39] are diffusion models that can generate categorical data. Let \(\mathbf{v}_{t}\) denote the data random variable at time t. The forward process involves gradually corrupting data with the noising Markov chain \(q\), according to \(q(\mathbf{v}_{1:T}|\mathbf{v}_{0})=\prod_{t=1}^{T}q(\mathbf{v}_{t}|\mathbf{v} _{t-1})\), where \(\mathbf{v}_{1:T}=\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{T}\). On the other hand, the reverse process, \(p_{\phi}(\mathbf{v}_{0:T})=p(\mathbf{v}_{T})\prod_{t=1}^{T}p_{\phi}(\mathbf{v} _{t-1}|\mathbf{v}_{t})\), gradually reconstructs the datasets starting from a prior \(p(\mathbf{v}_{T})\). The denoising neural network (NN) learns \(p_{\phi}(\mathbf{v}_{t-1}|\mathbf{v}_{t})\) by optimizing the ELBO, which comprises three loss terms: the reconstruction term (\(L_{r}\)), the prior term (\(L_{p}\)), and the denoising term (\(L_{t}\)), represented in the following equation [59]:
\[\underbrace{\mathbb{E}_{q(\mathbf{v}_{1}|\mathbf{v}_{0})}[\log p_{\phi}(\mathbf{v}_{0}|\mathbf{v}_{1})]}_{\text{Reconstruction Term }L_{r}}-\underbrace{D_{\text{KL}}(q(\mathbf{v}_{T}|\mathbf{v}_{0})\|p_{\phi}(\mathbf{v}_{T}))}_{\text{Prior Term }L_{p}}-\sum_{t=2}^{T}\underbrace{\mathbb{E}_{q(\mathbf{v}_{t}|\mathbf{v}_{0})}[D_{\text{KL}}(q(\mathbf{v}_{t-1}|\mathbf{v}_{t},\mathbf{v}_{0})\|p_{\phi}(\mathbf{v}_{t-1}|\mathbf{v}_{t}))]}_{\text{Denoising Term }L_{t}}.\]
Specifically, the **forward process** can be described by a series of transition kernels \(\{Q_{t}^{i}\}_{t\in[T],i\in[n]}\), where for any entry \(\mathbf{v}^{i}\), \([Q_{t}^{i}]_{lh}=q(\mathbf{v}_{t}^{i}=h|\mathbf{v}_{t-1}^{i}=l)\) represents the probability of a jump from category \(l\) to \(h\) on the \(i\)-th entry at time \(t\). Since for each entry \(i\) the number of categories is the same, we can rely on the same transition kernels for all dimensions and use \(Q_{t}\) instead of \(Q_{t}^{i}\). Let \(\overline{Q}_{t}=Q_{1}Q_{2}...Q_{t}\) denote the accumulative transition matrix from time 1 to time \(t\). We use a uniform prior distribution \(p(\mathbf{v}_{T})\). The corresponding doubly stochastic matrices are determined by a series of important parameters termed **diffusion coefficients** (\(\{\alpha_{t},t\in[T]|\alpha_{t}\in(0,1)\}\)), which control the transition rate from the original distribution to the uniform measure. Specifically, define \(Q_{t}=\alpha_{t}I+(1-\alpha_{t})\frac{11^{T}}{k}\) and then \(\bar{Q}_{t}=\bar{\alpha}_{t}I+(1-\bar{\alpha}_{t})\frac{11^{T}}{k}\), where \(\bar{\alpha}_{t}=\prod_{\tau=1}^{t}\alpha_{\tau}\). In the **reverse process**, denoising networks are leveraged to predict \(p_{\phi}(\mathbf{v}_{t-1}|\mathbf{v}_{t})\) in the hope of approximating \(q(\mathbf{v}_{t-1}|\mathbf{v}_{t},\mathbf{v}_{0})\). In practice, instead of directly predicting \(p_{\phi}(\mathbf{v}_{t-1}|\mathbf{v}_{t})\), denoising networks are trained to predict the clean data \(\mathbf{v}_{0}\) at time \(0\) from a noisy \(\mathbf{v}_{t}\) given as input, i.e. \(p_{\phi}(\mathbf{v}_{0}|\mathbf{v}_{t})\). To train the denoising network, one needs
to sample noisy points from \(q(\mathbf{v}_{t}|\mathbf{v}_{0})\), and feed them into the denoising network \(\phi_{t}\) and obtain \(p_{\phi}(\mathbf{v}_{0}|\mathbf{v}_{t})\). Specifically, we adopt
\[L_{\text{train}}=D_{\text{KL}}(q(\mathbf{v}_{0}|\mathbf{v}_{t})||p_{\phi}( \mathbf{v}_{0}|\mathbf{v}_{t}))=\frac{1}{|\mathcal{V}|}\sum_{\mathbf{v}_{0} \in\mathcal{V}}\mathbb{E}_{\mathbf{v}_{t}\sim q(\mathbf{v}_{t}|\mathbf{v}_{0} )}\left[\sum_{i=1}^{n}L_{\text{CE}}(\mathbf{v}_{0}^{i},p_{\phi}(\mathbf{v}_{0}^ {i}|\mathbf{v}_{t}))\right] \tag{2}\]
This loss serves as the basis for our later sufficient training Assumption 1. In the generation process, we need to bridge the connection of \(p_{\phi}(\mathbf{v}_{t-1}|\mathbf{v}_{t})\) and \(p_{\phi}(\mathbf{v}_{0}|\mathbf{v}_{t})\), which in practice depends on a dimension-wise conditional independence condition [38]:
\[p_{\phi}(\mathbf{v}_{t-1}|\mathbf{v}_{t})=\prod_{i\in[n]}p_{\phi}(\mathbf{v}_ {t-1}^{i}|\mathbf{v}_{t})=\prod_{i\in[n]}\sum_{l\in\mathcal{X}_{i}}q(\mathbf{ v}_{t-1}^{i}|\mathbf{v}_{t},\mathbf{v}_{0}^{i}=l)p_{\phi}(\mathbf{v}_{0}^{i}=l| \mathbf{v}_{t}). \tag{3}\]
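As a concrete illustration of the uniform-noise kernels and of Eq. (3), the sketch below (not code from the paper) builds \(Q_{t}\) and \(\bar{Q}_{t}\), checks the closed form for \(\bar{Q}_{t}\) against the explicit product of one-step kernels, and computes one reverse-step distribution for a single entry given a stand-in denoiser output \(p_{\phi}(\mathbf{v}_{0}^{i}|\mathbf{v}_{t})\). The linear schedule and the random denoiser output are assumptions made for illustration only.

```python
import numpy as np

def Q_one_step(alpha_t, k):
    """Uniform-noise kernel Q_t = alpha_t * I + (1 - alpha_t) * 11^T / k."""
    return alpha_t * np.eye(k) + (1.0 - alpha_t) * np.ones((k, k)) / k

def Q_cumulative(alphas, k):
    """Q_bar_t = Q_1 Q_2 ... Q_t; closed form via alpha_bar_t = prod_s alpha_s."""
    alpha_bar = float(np.prod(alphas))
    return alpha_bar * np.eye(k) + (1.0 - alpha_bar) * np.ones((k, k)) / k

def reverse_step_probs(p_v0_given_vt, vt, Qt, Qbar_prev):
    """One entry of Eq. (3): p(v_{t-1} = j | v_t) for a single dimension.
    p_v0_given_vt[l] is the denoiser's prediction p_phi(v_0 = l | v_t);
    vt is the observed category of this entry at time t."""
    k = len(p_v0_given_vt)
    probs = np.zeros(k)
    for l in range(k):
        # q(v_{t-1} = j | v_t, v_0 = l) is proportional to Q_t[j, vt] * Qbar_{t-1}[l, j]
        post = Qt[:, vt] * Qbar_prev[l, :]
        probs += (post / post.sum()) * p_v0_given_vt[l]
    return probs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, T = 5, 20
    alphas = 1.0 - np.arange(1, T + 1) / T          # linear schedule alpha_t = 1 - t/T (assumed)
    t = 10
    # Closed form for Q_bar_t matches the explicit product of one-step kernels:
    prod = np.eye(k)
    for s in range(t):
        prod = prod @ Q_one_step(alphas[s], k)
    assert np.allclose(prod, Q_cumulative(alphas[:t], k))
    # A reverse step for one entry, with a random stand-in for the denoiser output:
    p_v0 = rng.dirichlet(np.ones(k))
    probs = reverse_step_probs(p_v0, vt=2, Qt=Q_one_step(alphas[t - 1], k),
                               Qbar_prev=Q_cumulative(alphas[: t - 1], k))
    print(probs, probs.sum())   # a valid distribution over the k categories
```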
**Other Notations.** Given two samples \(\mathbf{v}\) and \(\tilde{\mathbf{v}}\), let \(\bar{\omega}(\mathbf{v},\tilde{\mathbf{v}})\) represent the count of differing entries, i.e., \(\bar{\omega}(\mathbf{v},\tilde{\mathbf{v}})=\#\{i|\mathbf{v}^{i}\neq\tilde{\mathbf{v}}^{i},i\in[n]\}\). For \(\eta\in[n]\) and \(\mathbf{v}\in\mathcal{V}_{1}\), define \(N_{\eta}(\mathbf{v})=|\{\mathbf{v}^{\prime}\in\mathcal{V}_{1}:\bar{\omega}(\mathbf{v},\mathbf{v}^{\prime})\leq\eta\}|\), and let \(\mathcal{V}_{1}^{i|l}=\{\mathbf{v}\in\mathcal{V}_{1}|\mathbf{v}^{i}=l\}\) denote the set of data points whose \(i\)-th entry is fixed to \(l\). We use \(\mathcal{D}_{\text{KL}}(\cdot\|\cdot)\) and \(\|\cdot\|_{TV}\) for KL-divergence and total variation. Let \(\mu_{t}^{+}=\frac{1+(k-1)\alpha_{t}}{k}\) and \(\mu_{t}^{-}=\frac{1-\alpha_{t}}{k}\) represent the one-step transition probabilities to the same and to a different state, respectively, at time \(t\), while \(\bar{\mu}_{t}^{+}=\frac{1+(k-1)\bar{\alpha}_{t}}{k}\) and \(\bar{\mu}_{t}^{-}=\frac{1-\bar{\alpha}_{t}}{k}\) are the accumulated transition probabilities. The transition probability ratios are defined as \(R_{t}=\frac{\mu_{t}^{+}}{\mu_{t}^{-}}\) and \(\bar{R}_{t}=\frac{\bar{\mu}_{t}^{+}}{\bar{\mu}_{t}^{-}}\). A larger ratio indicates a higher likelihood of maintaining the same feature category in the diffusion process. Moreover, define \((\cdot)_{+}=\max\{\cdot,0\}\).
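The notation above maps directly onto a few small helper functions; the following sketch (not code from the paper) computes \(\bar{\omega}\), \(N_{\eta}\), and the transition probabilities and ratios for a toy dataset, with a linear schedule assumed for illustration.

```python
import numpy as np

def omega_bar(v, v_tilde):
    """Number of entries in which two samples differ."""
    return int(np.sum(np.asarray(v) != np.asarray(v_tilde)))

def N_eta(v, V1, eta):
    """Number of points in V1 within Hamming distance eta of v."""
    return sum(1 for u in V1 if omega_bar(v, u) <= eta)

def mu_plus(alpha, k):      # probability of staying in the same category
    return (1.0 + (k - 1) * alpha) / k

def mu_minus(alpha, k):     # probability of jumping to one particular different category
    return (1.0 - alpha) / k

def ratios(alphas, t, k):
    """One-step and accumulated transition-probability ratios R_t and R_bar_t."""
    a_t = alphas[t - 1]
    a_bar_t = float(np.prod(alphas[:t]))
    R_t = mu_plus(a_t, k) / mu_minus(a_t, k)
    R_bar_t = mu_plus(a_bar_t, k) / mu_minus(a_bar_t, k)
    return R_t, R_bar_t

if __name__ == "__main__":
    k, T = 5, 20
    alphas = 1.0 - np.arange(1, T + 1) / T      # assumed linear schedule
    V1 = [np.array([0, 1, 2, 3, 4]), np.array([0, 1, 2, 3, 0]), np.array([4, 4, 4, 4, 4])]
    v_star = np.array([0, 1, 2, 3, 4])
    print(omega_bar(v_star, V1[1]), N_eta(v_star, V1, eta=1))
    print(ratios(alphas, t=5, k=k))    # with this schedule both ratios shrink toward 1 as t grows
```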
## 3 Main Results
### Inherent Privacy Guarantees of DDMs
First, we define the mechanism under analysis. Let \(\mathcal{M}_{t}(\mathcal{V};m)\) represent the mechanism where, for an input dataset \(\mathcal{V}\), it outputs \(m\) samples generated at time \(t\) using the DDM's generation process. Specifically, \(\mathcal{M}_{0}(\mathcal{V};m)\) signifies the final generated dataset by DDM. In the paper, we focus on the behavior of \(\mathcal{M}_{t}\) in the generation process. Below, we outline the assumptions:
**Assumption 1** (**Sufficient training of \(\phi\))**.: Given dataset \(\mathcal{V}\), let \(\mathbf{v}_{0}\) denote the predicted random variables at time \(0\). Let \(\phi\) denote the denoising NNs trained on dataset \(\mathcal{V}\). We say Assumption 1 is satisfied if there exist small constants \(\gamma_{t}>0\) such that \(\forall\mathbf{v}_{t}\in\mathcal{X}^{n}\):
\[\mathcal{D}_{\text{KL}}(q(\mathbf{v}_{0}^{i}|\mathbf{v}_{t})\|p_{\phi}(\mathbf{ v}_{0}^{i}|\mathbf{v}_{t}))\leq\gamma_{t},\forall i\in[n],\forall t\in[T]. \tag{4}\]
**Assumption 2** (**Gap between Forward and Backward Diffusion Paths)**.: Given dataset \(\mathcal{V}\), let \(\mathbf{v}_{t}\) denote the random variable sampled from intermediate distributions at time t in both the forward process (following \(q(\mathbf{v}_{t})\)) and backward process (following \(p_{\phi}(\mathbf{v}_{t})\)). We say the Assumption 2 is satisfied if there exists small positive constant \(\tilde{\gamma}_{t}\ll 2\) such that
\[\|q(\mathbf{v}_{t})-p_{\phi}(\mathbf{v}_{t})\|_{\text{TV}}\leq\tilde{\gamma}_{ t},\forall t\in[T]. \tag{5}\]
Assumption 1 states that denoising networks, when trained using the loss function in Eq. (2), can effectively infer clean data from intermediate noisy data distributions. Given a sufficiently expressive model, we expect \(\gamma_{t}\) to be small. Assumption 2 asserts that the diffusion and generation paths are close, which is a reasonable assumption in light of the recent analysis [36]. However, one cannot use Eq. (5) to derive a privacy bound directly, as closeness in total variation does not imply DP in general, though the reverse could be true [60].
With the above assumptions, we investigate the flow of privacy leakage along the generation process. Our analysis centers on the inherent privacy guarantees of DDM-generated samples at a specific release step, denoted as \(T_{\text{fd}}\). We also investigate how each data point \(\mathbf{v}^{*}\) in the training dataset \(\mathcal{V}_{0}\) is affected by privacy leakage.
**Theorem 1** (**Inherent pDP Guarantees for DDMs**).: _Given a dataset \(\mathcal{V}_{0}\) with size \(|\mathcal{V}_{0}|=s+1\) and a data point \(\mathbf{v}^{*}\in\mathcal{V}_{0}\) to be protected, denote \(\mathcal{V}_{1}\) such that \(\mathcal{V}_{1}=\mathcal{V}_{0}\backslash\{\mathbf{v}^{*}\}\). Assume the denoising networks trained on \(\mathcal{V}_{0}\) and \(\mathcal{V}_{1}\) satisfy Assumption 1 and Assumption 2. Given a specific time step \(T_{\text{fd}}\), the mechanism \(\mathcal{M}_{T_{\text{fd}}}(\cdot;m)\) satisfies \((\epsilon,\delta)\)-pDP with respect to \((\mathcal{V}_{0},\mathbf{v}^{*})\) such that given \(\epsilon\), \(\delta(\mathcal{V}_{0},\mathbf{v}^{*})\) is upper bounded by_
\[m\!\left[\underbrace{\sum_{t=T_{\text{fd}}}^{T}\min\biggl\{\frac{4N_{(1+c_{t}^{*})\eta_{t}}(\mathbf{v}^{*})}{s},1\biggr\}\frac{n}{s^{\psi_{t}}}+\frac{n(1-\frac{1}{R_{t-1}})}{s^{2}}}_{\text{Main Privacy Term}}+\underbrace{\mathcal{O}\biggl(\sqrt{\gamma_{t}}+\tilde{\gamma}_{t}\biggr)}_{\text{Error Term}}\right]\!/(\epsilon(1-e^{-\epsilon})). \tag{6}\]
_where \(\psi_{t},\eta_{t},c_{t}^{*}\) are **data-dependent quantities** determined by \(\mathbf{v}^{*}\) and \(\mathcal{V}_{1}\). Define a similarity measure \(\text{Sim}(\mathbf{v}^{*},\mathcal{V})=\sum_{\mathbf{v}\in\mathcal{V}}\bar{R}_{t}^{-\bar{\omega}(\mathbf{v},\mathbf{v}^{*})}\). Then, \(\psi_{t},\eta_{t},c_{t}^{*}\) follow_
\[\frac{n}{s^{\psi_{t}}}=\frac{(\overline{\alpha}_{t-1}-\overline{\alpha}_{t})/(k\bar{\mu}_{t}^{+}\bar{\mu}_{t}^{-})}{1+\text{Sim}(\mathbf{v}^{*},\mathcal{V}_{1})}\cdot\sum_{i=1}^{n}\log\bigg(1+\frac{\bar{R}_{t-1}^{2}-1}{R_{t-1}^{2}\,\text{Sim}(\mathbf{v}^{*},\mathcal{V}_{1}^{i|(\mathbf{v}^{*})^{i}})+\text{Sim}(\mathbf{v}^{*},\mathcal{V}_{1})+1}\bigg). \tag{7}\]
_And, \(\eta_{t},c_{t}^{*}\) are the smallest \(\eta_{t}\in\{1,2,...,n\}\), \(c_{t}^{*}\in\{0,\frac{1}{\eta_{t}},\frac{2}{\eta_{t}},...,\frac{n-\eta_{t}}{ \eta_{t}}\}\) which satisfy_
\[\eta_{t}\geq\frac{\log\vartheta(\eta_{t})}{\log\frac{1}{n(1-\bar{\mu}_{t}^{*} )}}+\left(\frac{\log\Big{(}\vartheta(\eta_{t})\frac{\overline{\alpha}_{t-1}- \overline{\alpha}_{t}}{k\bar{\mu}_{t}^{*}\bar{\mu}_{t}}\cdot s^{\psi_{t}} \Big{)}}{2\log\bar{R}_{t}}-2\right)_{+},c_{t}^{*}\geq\frac{\frac{1}{\eta_{t}} \log\vartheta((1+c_{t}^{*})\eta_{t})+\frac{3}{2}}{\log\frac{1}{\mu_{t}^{*}}-1}. \tag{8}\]
_where \(\vartheta(\eta)=(s-N_{\eta}(\mathbf{v}^{*}))/N_{\eta}(\mathbf{v}^{*})\) that represents the ratio between the numbers of points outside the \(\eta\)-ball and inside it._
Theorem 1 quantifies the privacy leakage of a specific point \(\mathbf{v}^{*}\) in training set \(\mathcal{V}_{0}\). The privacy bound comprises a main privacy term that represents the inherent pDP guarantees for DDMs, highlighting the data-dependent nature of our bound, and an error term stemming from denoising network training and path discrepancies. Those data-dependent quantities are complex to maintain a tight measurement for a dataset-data point pair. Next, we will further explain these quantities.
First, as the generation process forms a Markov chain where the transition probability \(p_{\phi}(\mathbf{v}^{(t-1)}|\mathbf{v}^{(t)})\) is learned from training, each generation step will leak some information from the training dataset. It can be shown that the majority of such leakage, represented in the pDP bound (in the appendix) follows
\[\mathbb{E}_{\mathbf{v}\sim p_{\phi}(\mathbf{v}_{t|0}=\mathbf{v})}d^{(t)}( \mathbf{v}) \tag{9}\]
where \(\mathbf{v}_{t|\lambda}\) represents the r.v. of the generated data at time \(t\) of the generation process when the diffusion model is trained over the dataset \(\mathcal{V}_{\lambda}\), \(\lambda\in\{0,1\}\), and \(d^{(t)}(\mathbf{v})=\sum_{\lambda\in\{0,1\}}\mathcal{D}_{\text{KL}}(p_{\phi}(\mathbf{v}_{t-1|\lambda}|\mathbf{v}_{t|\lambda}=\mathbf{v})\|p_{\phi}(\mathbf{v}_{t-1|\bar{\lambda}}|\mathbf{v}_{t|\bar{\lambda}}=\mathbf{v}))\), which is a symmetric distance between the two conditional distributions given by the models learned on the two adjacent datasets. Essentially, the three data-dependent quantities \(\psi_{t},\eta_{t},c_{t}^{*}\) are to bound Eq. (9).
**Quantity \(\psi_{t}\):** As shown in Fig. 2, \(\frac{n}{s^{\psi_{t}}}\) quantifies \(\max_{\mathbf{v}}d^{(t)}(\mathbf{v})\), where the maximum is achieved at the removed point \(\mathbf{v}=\mathbf{v}^{*}\) (green in Fig. 2). A closer inspection reveals that \(\psi_{t}\) depends on the terms \(\text{Sim}(\mathbf{v}^{*},\mathcal{V}_{1})\) and \(\text{Sim}(\mathbf{v}^{*},\mathcal{V}_{1}^{i|(\mathbf{v}^{*})^{i}})\). By the definition of \(\bar{\omega}\), these terms assess how \(\mathbf{v}^{*}\) aligns with the remaining points in \(\mathcal{V}_{1}\).
**Evolution of \(\psi_{t}\).** During the generation phase, as \(t\) progresses from \(T\) to \(1\), the values of \(\frac{1}{s^{\psi_{t}}}\) increase from \(\mathcal{O}_{s}(\frac{1}{s^{2}})\) to \(\mathcal{O}_{s}(1)\). This implies that the potential privacy risk escalates as the data generation process evolves from a noisy regime to a noise-free regime.
**Quantities \(\eta_{t}\) and \(c_{t}^{*}\):** It is evident that the intermediate generated measure \(p_{\phi}(\mathbf{v}_{t|0})\) (blue in Fig. 2) diverges from the delta measure on the most sensitive point \(\delta_{\mathbf{v}=\mathbf{v}^{*}}\) (green). Therefore, the actual privacy leakage characterized by \(d^{(t)}(\mathbf{v})\) (yellow) averaged over the measure \(p_{\phi}(\mathbf{v}_{t|0})\) is much less than its maximum. To provide a tight characterization of such, the two quantities \(\eta_{t}\) and \(c_{t}^{*}\) are introduced to define a local region \(\mathcal{S}=\{\mathbf{v}^{\prime}\in\mathcal{X}^{n}:\bar{\omega}(\mathbf{v}, \mathbf{v}^{\prime})\leq(1+c_{t}^{*})\eta_{t}\}\) centered on
vulnerable point \(\mathbf{v}^{*}\), within which the privacy leakage can be bounded by the sum of (a) \(p_{\phi}(\mathbf{v}_{t|0}\in\mathcal{S})\max_{\mathbf{v}\in\mathcal{S}}d^{(t)}(\mathbf{v})\) with a small \(p_{\phi}(\mathbf{v}_{t|0}\in\mathcal{S})\) and (b) \(p_{\phi}(\mathbf{v}_{t|0}\notin\mathcal{S})\max_{\mathbf{v}\notin\mathcal{S}}d^{(t)}(\mathbf{v})\) with a small \(\max_{\mathbf{v}\notin\mathcal{S}}d^{(t)}(\mathbf{v})\). \((\eta_{t},c_{t}^{*})\) shown in Eq. (8) are chosen to properly balance these two parts. \(\eta_{t}\) and \(c_{t}^{*}\) always exist: Note that when \(\eta_{t}=n\) or \(c_{t}^{*}=n/\eta_{t}-1\), the right-hand side of either inequality in Equation (8) approaches \(-\infty\) (\(\log 0\)). In fact, both of the RHS's of the two inequalities decrease w.r.t. \(\eta_{t}\) and \(c_{t}^{*}\). So, in practice, the smallest \(\eta_{t}\) and \(c_{t}^{*}\) can be found via binary search given the dataset \(\mathcal{V}_{0}\) and \(\mathbf{v}^{*}\).
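Since both right-hand sides of Eq. (8) decrease in \(\eta_{t}\) and \(c_{t}^{*}\), the smallest feasible values can be located by a standard lower-bound binary search. The sketch below (not code from the paper) shows only the search itself; the feasibility predicate, which would evaluate the inequalities of Eq. (8) for the given \(\mathcal{V}_{0}\), \(\mathbf{v}^{*}\), and \(t\), is left as a user-supplied callable, and the threshold used in the example is hypothetical.

```python
def smallest_feasible(is_feasible, lo, hi):
    """Smallest integer in [lo, hi] satisfying a monotone feasibility predicate
    (infeasible below some threshold, feasible at and above it)."""
    assert is_feasible(hi), "the condition is always satisfiable at the upper end (e.g. eta_t = n)"
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Example with a stand-in predicate (a real implementation would evaluate the two
# inequalities of Eq. (8) for the given dataset, v*, and time step t):
n = 50
threshold = 17                      # hypothetical: the condition starts to hold at eta_t = 17
eta_t = smallest_feasible(lambda eta: eta >= threshold, 1, n)
print(eta_t)                        # -> 17
```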
**Evolution of \(\boldsymbol{(1+c_{t}^{*})\eta_{t}}\).** For each time step \(t\), the smallest value of \((1+c_{t}^{*})\eta_{t}\) is chosen as the radius. As \(t\) progresses from \(T\) to \(1\), the value of \((1+c_{t}^{*})\eta_{t}\) monotonically decreases. When \(\bar{\alpha}_{t}\) approaches 1 for smaller \(t\) values, \((1+c_{t}^{*})\eta_{t}\) tends to zero, i.e., \(\mathcal{S}\) only includes \(\mathbf{v}^{*}\). The reason is that, as smaller \(t\) values, different data points are less mixed with others (because of less noise added in the forward process), the privacy leakage of \(\mathbf{v}^{*}\) becomes more concentrated around the changes of the likelihoods of the generated data points that look like \(\mathbf{v}^{*}\), thus calling for a decrease of the radius. To consider the impact on the bound in Eq. (6), the number of data points in this region \(N_{(1+c_{t}^{*})\eta_{t}}(\mathbf{v}^{*})\) will decrease from \(s\) to \(1\) as \(t\) changes from \(T\) to \(1\).
**Discussion on Theorem 1.** Based on the previous discussion, as \(t\) decreases from \(T\) to \(1\), \(N_{(1+c_{t}^{*})\eta_{t}}(\mathbf{v}^{*})/s\) changes from \(\mathcal{O}_{s}(1)\) to \(\mathcal{O}_{s}(1/s)\), and \(1/s^{\psi_{t}}\) changes from \(\mathcal{O}_{s}(\frac{1}{s^{2}})\) to \(\mathcal{O}_{s}(1)\). Consequently, the per-step privacy guarantee for DDM-generated samples gradually weakens from \(\mathcal{O}_{s}(\frac{m}{s^{2}})/[\epsilon(1-e^{-\epsilon})]\) to \(\mathcal{O}_{s}(\frac{m}{s})/[\epsilon(1-e^{-\epsilon})]\).
This implies a natural utility-privacy tradeoff for the data generated by DDMs. In practice, to guarantee the data quality, we often release the data in the noise-free side \((t=0)\), where only a weak privacy guarantee of approximately \((\epsilon,\mathcal{O}_{s}(m/s)/[\epsilon(1-e^{-\epsilon})])\) can be achieved. To enhance data privacy, we may expect to release the data generated with a larger step \(t\geq 1\).
This result also reveals that the inherent privacy guarantees of releasing data generated by DDMs are weak (\(\propto\mathcal{O}(m/s)\)), of the same order as the guarantees for GAN-generated samples [53]. This characterization also matches many recent empirical studies that have raised concerns about privacy leakage due to publishing data generated by DMs [45, 47, 55]. While the privacy budgets for all data points maintain the same order in relation to the sample size, the constants can differ markedly across data points. Intuitively, a data point \(\mathbf{v}^{*}\in\mathcal{V}_{0}\) with less similarity to the other data points tends to have higher privacy leakage. This is indicated by Eq. (7), where a smaller similarity \(\sum_{\mathbf{v}\in\mathcal{V}_{0}\setminus\{\mathbf{v}^{*}\}}\bar{R}_{t}^{-\bar{\omega}(\mathbf{v}^{*},\mathbf{v})}\) leads to a larger pDP leakage (as illustrated in Fig. 3).
**Tightness of Privacy Bound w.r.t Sample Size.** In Theorem 1, the privacy parameter \(\delta\) scales as \(\mathcal{O}_{s}(\frac{1}{s})\) with the sample size. Here we establish a lower bound for \(\delta\) with respect to the sample size by evaluating the worst-case scenario. For illustrative purposes, consider the case where \(n=2\) with two distinct categories. Define adjacent datasets \(\mathcal{V}_{0}=\{[0,0]^{T},...,[0,0]^{T},[1,1]^{T},[1,1]^{T}\}\) and \(\mathcal{V}_{1}=\mathcal{V}_{0}\setminus\{[1,1]^{T}\}\).
**Theorem 2** (Lower Bound on Inherent pDP Guarantees for DDMs).: _Assume the denoising networks are perfectly trained. Given a diffusion model architecture design (Sigmoid Schedule \(\alpha_{t}=\frac{\text{Sigmoid}(3)-\text{Sigmoid}(\frac{3t}{4})}{\text{Sigmoid}(3)-0.5},T=10\)), there exists a pair of adjacent datasets \(\mathcal{V}_{0},\mathcal{V}_{1}=\mathcal{V}_{0}\setminus\{\mathbf{v}^{*}\}\) with feature dimension \(n=2\) such that the mechanism \(\mathcal{M}_{0}(\cdot;1)\) does not satisfy \((0.04,\delta)\)-pDP with respect to \((\mathcal{V}_{0},\mathbf{v}^{*})\) for any \(\delta<\frac{1}{\delta s}\)._
For a general lower bound on \((\epsilon,\delta)\)-pDP under various DDM designs, please refer to Appendix E.
Figure 4: pDP Leakage in Eq. (6): **LEFT:** Characterization of \(\frac{n}{s^{\psi_{t}}}\). **MIDDLE:** Characterization of \((1+c_{t}^{*})\eta_{t}\). **RIGHT:** Characterization of Privacy Leakage (Main Privacy Term). **Experimental Setup:** Given a specific DDM design \(k=5,n=5,T=20,\epsilon=10\) trained on a dataset with \(s=1000\) following the distribution in Sec. 3.3 with parameter \(p\). Fix \(\mathbf{v}^{*}\) where each column has a non-majority category. Results are based on 5 independent tests.
### Impact of DDM Coefficients and Dataset Distributions on the Privacy Bound
**Influence of Diffusion Coefficients.** The privacy term is largely influenced by the proximity between \(\mathbf{v}^{*}\) and \(\mathcal{V}_{1}\). As time \(t\) progresses, this similarity is governed by the transition ratio \(\bar{R}_{t}\). A faster rate of diffusion coefficients going to zero boosts this ratio, enhancing the privacy guarantee. Experiments in Sec. 4 validate this observation.
**Impact of Dataset Distribution.** We find that \(\psi_{t}\) has a major effect on the privacy bound. \(\psi_{t}\) is influenced by the similarity between the additional point \(\mathbf{v}^{*}\) and \(\mathcal{V}_{0}\backslash\{\mathbf{v}^{*}\}\). If \(\mathbf{v}^{*}\) is far away from (close to) the remaining points in \(\mathcal{V}_{0}\), then \(\text{Sim}(\mathcal{V}_{0}\backslash\{\mathbf{v}^{*}\},\mathbf{v}^{*},t)\) becomes small (large) and the corresponding term \(s^{-\psi_{t}}\) becomes large (small), which indicates weaker (stronger) protection of \(\mathbf{v}^{*}\). _This indicates that points with notably low \(\text{Sim}(\mathcal{V}_{0}\backslash\{\mathbf{v}^{*}\},\mathbf{v}^{*},t)\) are probably sensitive points in the dataset._ Removing them may help reduce the privacy leakage of DDMs trained on the dataset.
### Characterizing Data-dependent Quantities under Simple Distributions
Here, we consider the training dataset sampled from some specific distributions to further illustrate the data-dependent quantities.
Consider a distribution such that each column independently takes value \(l\in[k]\) with probability \(p\)\((p\geq\frac{1}{k})\) and any other \(k-1\) categories with probability \(\frac{1-p}{k-1}\). Let \(\mathbf{v}^{*}\) take non-majority category \(((\mathbf{v}^{*})^{i}\neq l)\) along all \(n\) columns (termed **non-majority points**, which thus tends to have higher privacy leakage) and the rest points in \(\mathcal{V}_{0}\backslash\{\mathbf{v}^{*}\}\) are sampled from the distribution. We have the following characterization (For detailed explanations and proofs, please refer to Appendix G).
* \(\frac{1}{s^{\mathbf{v}_{t}}}\). For a sufficiently large \(s\) (detailed in appendix), with high probability, \(\frac{1}{s^{\mathbf{v}_{t}-2}}\rightarrow\frac{(\overline{\alpha}_{t-1}- \overline{\alpha}_{t})/(k\mu_{t}^{*}\mu_{t}^{*})}{\overline{R}_{t-1}^{2}-\tau_{ t}^{*}n-1}\cdot\frac{1}{k-1}+\frac{\tau_{t}^{*}}{\tau_{t}^{*}}\), where \(\tau_{t}:=\frac{1-p}{k-1}+\frac{\bar{\mu}_{t}^{*}}{\mu_{t}^{*}}(1-\frac{1-p}{ k-1})\). In the noisy regime (a large t, \(\frac{\bar{\mu}_{t}^{*}}{\mu_{t}^{*}}\to 1\)), \(\tau_{t}\to 1\), \(\frac{1}{s^{\psi_{t}}}=\mathcal{O}_{s}(\frac{s}{s^{2}})\). For distribution characterized by larger skewness, i.e., larger \(p\), we have smaller \(\tau_{t}\) result in larger \(\frac{1}{s^{\psi_{t}}}\). Fig. 4 (LEFT) precisely matches the above conclusions.
* \(\mathbf{\eta_{t}},\mathbf{c_{t}^{*}}\). For a sufficiently large \(s\) (detailed in appendix), a sufficient condition for \(\eta_{t}\) and \(c_{t}^{*}\) to satisfy Eq. (8) is \[\eta_{t}\geq n-\left(\frac{n-\log(s/\overline{\frac{\alpha_{t-1}- \overline{\alpha}_{t}}{k\mu_{t}^{*}\mu_{t}^{*}}})/\log\frac{\bar{\mu}_{t}^{*} }{\bar{\mu}_{t}^{*}}}{2\log\frac{k-1}{1-p}/\log(\max\{\frac{1}{n\bar{\mu}_{t}^ {*}},1\})+1}\right)_{+},\ c_{t}^{*}\geq\frac{\frac{n-\eta_{t}}{n}\log\frac{k-1}{1 -p}-\log\frac{1}{2\epsilon}}{\log\frac{k-1}{1-p}+\log\frac{1}{e\bar{\mu}_{t} ^{*}}}.\] (10) In the noise free regime (\(\alpha_{t}\to 1\)), \(\eta_{t}\to 0\), while in the noise full regime (\(\alpha_{t}\to 0\)), \(\eta_{t}\to n\). From noise free regime to noisy regime, \(\bar{\mu}_{t}\) increases, \(c_{t}^{*}\to\frac{n-\eta_{t}}{\eta_{t}}\). Furthermore, as we rise in the skewness (\(p\)) of the distribution, the R.H.S of Eq. 10 monotonically increases, and results in larger values for \(\eta_{t}\) and \(c_{t}^{*}\). Fig. 4 (MIDDLE) matches the above conclusions.
### The Algorithm for Evaluating Privacy Bound in Eq. (6) on a given Dataset
In practical situations, when data curators release synthetic data, it is crucial to assess the privacy safeguards of the mechanism trained on a specific dataset. This ensures the synthetic data upholds privacy and the confidentiality of the training data's sensitive information. To this end, we introduce Algorithm 2 in Appendix D to compute the privacy bound, enabling direct per-instance privacy budget calculation for DDM-generated datasets given particular training sets. Specifically, for each \(\mathbf{v}^{*}\), we determine \(\psi_{t}\), \(\eta_{t}\), and \(c_{t}^{*}\) to compute \(\delta(\mathbf{v}^{*},\mathcal{V}_{0})\) using Eq. (6). Using this algorithm, we can exclude sensitive points \(\mathbf{v}^{*}\) with high \(\delta(\mathbf{v}^{*},\mathcal{V}_{0})\), enhancing privacy protection. The efficacy of this approach is confirmed by the real-dataset experiments in Sec. 4.
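Algorithm 2 itself evaluates the full bound of Eq. (6); as a simplified stand-in, the sketch below (not the paper's algorithm) ranks points by the similarity term \(\text{Sim}\) of Eq. (7), motivated by the observation in Sec. 3.2 that points with low similarity to the rest of the dataset tend to have higher pDP leakage, and drops the least similar fraction before training the DDM. The value of \(\bar{R}\), the removal ratio, and the stand-in dataset are illustrative assumptions.

```python
import numpy as np

def similarity(v_star, data, R_bar):
    """Sim(v*, V) = sum over v in V of R_bar ** (-omega_bar(v, v*))."""
    diff = np.sum(data != v_star, axis=1)          # Hamming distances to v*
    return float(np.sum(R_bar ** (-diff.astype(float))))

def screen_sensitive_points(data, R_bar=2.0, removal_ratio=0.03):
    """Rank points by how dissimilar they are from the rest of the dataset
    (low Sim ~ high per-instance privacy leakage) and return the indices to drop."""
    s = len(data)
    sims = np.empty(s)
    for i in range(s):
        rest = np.delete(data, i, axis=0)
        sims[i] = similarity(data[i], rest, R_bar)
    n_drop = max(1, int(removal_ratio * s))
    return np.argsort(sims)[:n_drop]               # the least-similar points

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in categorical dataset: skewed columns plus a few injected atypical rows.
    data = rng.choice(5, size=(500, 8), p=[0.6, 0.1, 0.1, 0.1, 0.1])
    data[:5] = 4 - data[:5]                        # make the first five rows atypical
    drop = screen_sensitive_points(data, R_bar=2.0, removal_ratio=0.02)
    print("indices flagged for removal:", sorted(drop.tolist()))
    pruned = np.delete(data, drop, axis=0)         # train the DDM on `pruned` instead
```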
## 4 Experiments
We validate our theoretical findings via computational simulations on synthetic and real-world datasets.
### Synthetic Experiments
We first study the asymptotic behavior of privacy leakage with respect to the training dataset size \(s\). Given a DDM with 100 diffusion steps trained with a linear schedule \(\alpha_{t}=1-\frac{t}{T}\), we fix \(\mathbf{v}^{*}\) and increase the number of samples in the training set from \(1e4\) to \(1e7\), ensuring that the newly added samples satisfy \(\bar{\omega}(\mathbf{v},\mathbf{v}^{*})=n\), which puts \(\mathbf{v}^{*}\) at high privacy leakage risk. The results shown in Fig. 5 (LEFT, MIDDLE) confirm our theoretical prediction that, in the noise-free regime (\(t=1\), Fig. 5 (LEFT)), the **main privacy term** in Theorem 1 is \(\mathcal{O}_{s}(\frac{1}{s})\), which appears as an almost linear decay with a slope of \(-1\) in the logarithmic scale (all lines in the figure). On the other hand, in the noisy regime
(\(t=50\), Fig. 5 (MIDDLE)), the privacy leakage term decays faster, at the rate of \(\mathcal{O}_{s}(\frac{1}{s^{2}})\), which is evident from the linear decay with a slope around \(-2\). In the second experiment, we examine how the decay rate of the diffusion coefficients affects the privacy bound. Given a specific \(\mathbf{v}^{\star}\) (with non-majority categories along all entries), we sample the training set from the distribution with \(p=0.5\) in Sec. 3.3. We consider two noise schedules: a linear schedule and a sigmoid schedule. In Fig. 5 RIGHT, the red line denotes the linear schedule with decay rate \(\in\{0.1,0.3,0.5,0.7,0.9\}\) and the blue line denotes the sigmoid schedule where the decay rate increases from \(2.5\) to \(5\). \(\delta\) decreases along both lines as we increase the decay rate of the diffusion coefficients. This indicates that a faster decay rate in the diffusion coefficients implies better privacy.
Due to space constraints, we put more results discussing privacy leakage and the behaviors of data-dependent quantities under various DDM designs in Appendix J.
### Experiments on Real Datasets
In these experiments, our goal is to showcase the efficacy of our privacy algorithm (as detailed in Algorithm 2, Appendix. D) when used as a dataset preprocessing method. We show that our approach not only bolsters the privacy guarantee but also preserves comparable utility performance. We evaluate our algorithm on three benchmark datasets: Adult [61], German Credit [62], and Loan [63] with (# training samples, # feature dimensions, # categories) of (30718, 9, 5), (1000, 10, 5), and (480, 11, 4) (see Appendix I for details).
**Experimental Settings.** Our study varies the sensitive data removal ratio according to our per-instance privacy bound. Specifically, we calculate the privacy budget for every point in the dataset according to Eq. (6) via Algorithm 2 and remove the most sensitive points in the dataset amounting to a specific portion, which is controlled by the removal
Figure 5: **LEFT: Privacy Leakage at \(t=1\) (Noise-free Regime). MIDDLE: Privacy Leakage at \(t=50\) (Noisy Regime). RIGHT: Privacy Leakage w.r.t Decay Rate under Linear (\(\alpha_{t}=1-\) decay rate \(\ast\frac{t}{T}\)) and Sigmoid (\(\alpha_{t}=\) Sigmoid(3-decay rate) \(-\) Sigmoid(\(\frac{3}{2}\)-decay rate)) Schedules. Results are based on \(5\) independent tests.**
Figure 6: **First Row: Privacy-utility trade-offs with respect to data removal ratio. LEFT: Adult, MIDDLE: German Credit, RIGHT: Loan. Experimental Setup: DDM design: T = 10, Linear Schedule. Second Row: Visualizing privacy budget in relation to average feature overlap. LEFT: Adult, MIDDLE: German Credit, RIGHT: Loan.**
ratio. The removal ratio ranges from \(0.01\) to \(0.5\) for the Adult and German Credit datasets, and between \(0.001\) and \(0.05\) for the Loan dataset. We recalculate and report the mean privacy leakage (blue line) and the privacy budget of the most sensitive point (yellow line) after the data removal process in Fig. 6 (First Row). We further measure utility performance with respect to a downstream classification task by training a binary classifier on DDM-generated samples and evaluating its performance (red line) on the original dataset. We further illustrate the sensitive points (those removed from the dataset) by graphing their potential privacy leakage alongside the average overlap with the entire dataset across all feature dimensions, denoted by \(\bar{\omega}\). The visualizations are presented in Figure 6 (Second Row).
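The utility metric is a train-on-synthetic, test-on-real evaluation; a minimal sketch of that protocol (not code from the paper) is given below, with randomly generated arrays standing in for the DDM output and for the original labeled dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

def tstr_accuracy(X_synth, y_synth, X_real, y_real):
    """Train-on-synthetic, test-on-real: fit a binary classifier on DDM-generated
    samples and report its accuracy on the original (real) dataset."""
    enc = OneHotEncoder(handle_unknown="ignore")
    clf = LogisticRegression(max_iter=1000)
    clf.fit(enc.fit_transform(X_synth), y_synth)
    return clf.score(enc.transform(X_real), y_real)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in arrays; in practice X_synth/y_synth come from the (possibly pruned) DDM
    # and X_real/y_real are the original categorical dataset with its binary target.
    X_real = rng.integers(0, 5, size=(2000, 9))
    y_real = (X_real[:, 0] > 1).astype(int)
    X_synth = rng.integers(0, 5, size=(2000, 9))
    y_synth = (X_synth[:, 0] > 1).astype(int)
    print("TSTR accuracy:", tstr_accuracy(X_synth, y_synth, X_real, y_real))
```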
**Remove sensitive points for better privacy with comparable utility.** As depicted in Fig. 6 (First Row), eliminating a minor proportion of the most sensitive points from the dataset results in a decrease in privacy leakage. Meanwhile, the classification accuracy (red line) only gets slightly decreased: \(81\%\to 78\%\) for Adult, \(73\%\to 70\%\) for German Credit, \(81.1\%\to 79.8\%\) for Loan (note that for Loan we remove at most 5% data points as its size is too small). More interestingly, by removing a certain number of those most sensitive data points, the classification model trained over the pruned generated dataset may even achieve better performance over the original dataset, say removing 3% in Adult and 1% in German Credit. We attribute such gains to the fact that the most sensitive data points are often outliers in the dataset, which may be actually not good for training an ML model. In data visualization (Fig. 6, Second Row), we note that the data points prone to greater privacy leakage tend to have less feature overlap, indicating that these data points have a lower similarity to others in the dataset.
## 5 Conclusion
In this work, we analyzed a data-dependent privacy bound for the synthetic datasets generated by DDMs, which revealed a weak privacy guarantee of DDMs. Thus, to meet practical needs, other privacy-preserving techniques such as DP-SGD [48] and PATE [49] may have to be adopted. Alternatively, one can remove the sensitive points indicated by our bound from the dataset to enhance the privacy of the generated synthetic dataset. Our findings align well with empirical observations over synthetic and real datasets.
## Acknowledgement
RW, HY, PL's contribution to this work was funded in part by J.P. Morgan AI Research.
## Disclaimer
This paper was prepared for informational purposes and is contributed by the Artificial Intelligence Research group of JPMorgan Chase & Co and its affiliates ("J.P. Morgan"), and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
|
2302.02249 | Self-supervised Multi-view Disentanglement for Expansion of Visual
Collections | Image search engines enable the retrieval of images relevant to a query
image. In this work, we consider the setting where a query for similar images
is derived from a collection of images. For visual search, the similarity
measurements may be made along multiple axes, or views, such as style and
color. We assume access to a set of feature extractors, each of which computes
representations for a specific view. Our objective is to design a retrieval
algorithm that effectively combines similarities computed over representations
from multiple views. To this end, we propose a self-supervised learning method
for extracting disentangled view-specific representations for images such that
the inter-view overlap is minimized. We show how this allows us to compute the
intent of a collection as a distribution over views. We show how effective
retrieval can be performed by prioritizing candidate expansion images that
match the intent of a query collection. Finally, we present a new querying
mechanism for image search enabled by composing multiple collections and
perform retrieval under this setting using the techniques presented in this
paper. | Nihal Jain, Praneetha Vaddamanu, Paridhi Maheshwari, Vishwa Vinay, Kuldeep Kulkarni | 2023-02-04T22:09:17Z | http://arxiv.org/abs/2302.02249v1 | # Self-supervised Multi-view Disentanglement for Expansion of Visual Collections
###### Abstract.
Image search engines enable the retrieval of images relevant to a query image. In this work, we consider the setting where a query for similar images is derived from a _collection of images_. For visual search, the similarity measurements may be made along multiple axes, or _views_, such as style and color. We assume access to a set of feature extractors, each of which computes representations for a specific view. Our objective is to design a retrieval algorithm that effectively combines similarities computed over representations from multiple views. To this end, we propose a self-supervised learning method for extracting disentangled view-specific representations for images such that the inter-view overlap is minimized. We show how this allows us to compute the _intent_ of a collection as a distribution over views. We show how effective retrieval can be performed by prioritizing candidate expansion images that match the intent of a query collection. Finally, we present a new querying mechanism for image search enabled by composing multiple collections and perform retrieval under this setting using the techniques presented in this paper.1
Footnote 1: A version of this paper has been accepted at WSDM 2023.
## 1. Introduction
The task of image search (Bangalore, 2016) requires the definition of specific axes along which image similarities can be computed. The reference standard is the use of embeddings from intermediate layers of convolutional neural networks (CNNs) trained for supervised classification tasks (Krizhevsky et al., 2014), which have been shown to have superior effectiveness in visual search tasks (Bangalore, 2016). To enable retrieval along specialized notions of image similarity, multiple image feature extractors have been developed. Some examples include shapes within content (Sutskever et al., 2016), co-occurrences of objects and their relationships (Krizhevsky et al., 2014), or styles (Sutskever et al., 2016). We build on existing image representation methods (e.g. those described above), using the term 'view' to refer to a representation capturing one aspect of the content within an image.
We consider the setting of designers working within the context of a visual creation task. As part of ideation, designers typically compile a set of inspirational assets that represents the desired visual characteristics of the target creation - such a collection is referred to as a "Moodboard" (Sutskever et al., 2016). In this work, we focus on _Moodboard Expansion_, i.e., retrieving other visual assets from a corpus of images that match the user's intent as expressed by a moodboard; this is a version of image search where the query is a _collection of images_. Our proposed method for moodboard expansion infers the _intent_ of the query collection, and we show how this enables effective retrieval. The intent inference mechanism leverages the fact that, unlike typical retrieval settings where the query is often sparse (e.g. a short textual phrase), in the current setting, we have
access to a collection of images. It also provides a convenient visual querying mechanism for our target user personas, who operate in a domain where it might be difficult to express the information need in textual form.
A stylized image of our application setting is shown in Figure 2. On the left is an example moodboard. To surface new candidate additions to the moodboard, we are required to define a notion of similarity for retrieval from a corpus of images - there could be multiple visual characteristics that we want to consider (e.g., object information, color or style). We formalize each visual characteristic as a separate representation space in which similar images may be found - we have used the term _views_ to refer to these alternative representations. In each subplot of Figure 2, the point \(C\) indicates the collection-level representation of the moodboard along the corresponding view. We pictorially depict distances of images from this collection representation in each representation space. Solely using object representations for similarity surfaces the set \(\mathbf{A}\) on the right, whereas incorporating information along other views may allow the retrieval of content with greater style and color diversity (set \(\mathbf{B}\)), which may aid more effective visual exploration.
We describe a self-supervised model, the inputs into which are well known single-view representations, that provides disentangled view-specific representations for individual images. We develop an algorithm that utilizes these disentangled multi-view representations for inferring the intent of a collection and ranks candidate images utilizing the predicted intents. Finally, we provide an empirical study that evaluates our representation learning and image ranking setups. In addition to retrieving images relevant to a query collection, our algorithm ensures diversification of results in the absence of a strong signal along certain views. Figure 1 provides an illustrative example of our setup and highlights the desired characteristics of the results.
Learning view-specific representations for collections of images allows a novel use-case concerning image search: given two (or possibly more) collections of images, we can compose these to hallucinate a new set of images that has selective characteristics from each query collection; we can then retrieve relevant images from an index that match the intent of this hallucinated collection. We present an effective approach to enable this using the techniques presented in this paper. Section 6 discusses this in more detail and provides qualitative results achieved using our method.
Our main contributions are as follows:
1. We describe a self-supervised multi-view representation learning method for images. Our model provides a framework to disentangle view-specific information distinct from what is common across views.
2. We propose an approach to use the view-specific representations output by our model to compute the intent for a collection of images. We validate our intent prediction method via a simulation-based study.
3. We present experimental results that show how our proposed method leads to more effective visual retrieval than baseline approaches.
4. Finally, we propose a novel querying mechanism for image search driven by composing collections of images, and solve this task using other techniques presented in this paper.
## 2. Related Work
In this section, we review work related to the area of multi-view representation learning of images, as well as retrieval support for visual exploration and ideation activities.
### Multi-View Representation Learning
There has been significant recent interest in representation learning for multimodal content items. Methods for extracting representations from multiple views (Koch et al., 2017; Koch et al., 2017) usually focus on enforcing alignment across views. Specifically, representations from the different views of the same item are mapped to a shared space, with some notion of closeness enforced between them. These works motivate the need for alignment using a cross-modal task (Koch et al., 2018), e.g., matching a text caption to the image modality or text-to-image retrieval (Koch et al., 2017).
In contrast, our work deals with a naturally multi-view task: even though the views are expected to capture overlapping information, our focus is on extracting the information which is specific to a given view. This notion is closely related to the area of factorized representations. The authors of (Koch et al., 2017) motivate the need to separately model factors common across modalities from modality specific factors. We borrow this need, but our definition of _view_ is more general since these alternative representations can be derived from the same modality (images in our case). Other authors (e.g. (Koch et al., 2018; Koch et al., 2018)) invoke similar intuitions and refer to the desired behaviour as _disentanglement_ - we use this word and _orthogonalization_ interchangeably in the current paper.
### Visual Exploration
The sub-field of Content-based Image Retrieval (CBIR) (Koch et al., 2017) contains many works that are relevant to the topic of the current paper. This includes the need for extraction of the right feature representation for the images, as well as the definition of an appropriate notion of similarity to be used for retrieval. Closest in motivation to our work is "MindReader" (MindReader, 2017), where the authors focus on mechanisms that allow creatives to construct visual queries without resorting to keyword-based interfaces. So as to cover a range of plausible user requirements, MindReader utilizes multiple dimensions (shapes, textures, colors) with the user specifying the relative importance of each. They also account for the correlations between the dimensions, which is also the target of our orthogonalization procedure.
Our work attempts to tackle moodboard expansion by recommending visual variations that are relevant to the user. Our premise is that the variations that are related to existing assets can encourage ideation. Facilitating design ideation via moodboards has been explored by Rieuf _et al_(Rieuf et al., 2017), though their focus was an immersive interface into the corpus of assets. The recent work of Koch _et al_(Koch et al., 2017) recommends that the exploratory process of ideation be an interactive one, with the system each time refining its view of the user's intent. Solving for the needs of creatives involves a holistic treatment that includes interface, interaction and many more dimensions. The current paper focuses on the quantitative evaluation of a single retrieval iteration. This retrieval is via a weighted similarity across views, where the weights are a prediction of intent of the query collection. The disentangled/orthogonalized multi-view representations of images are central to this process, and are the outputs of our self-supervised model that we describe next.
### Compositional Representation Learning
Recent progress in representation learning has enabled combining representations from multiple simple individual elements to learn representations for complex entities. These simple elements may be from different modalities such as image and text (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017) or the same modality (Zhu et al., 2017; Chen et al., 2018). Our work focuses on the problem of learning effective representations for collections of images. We demonstrate further utility of these representations by showing how these can be used to achieve composition in a _zero-shot_ manner, i.e., without further fine-tuning for this task. Finally, we propose composing collections as a new way of querying images and solve this task using our representations in Section 6.
## 3. Retrieval using Disentangled Representations
The expansion of a moodboard requires us to understand the motivation behind collating the member images of the collection representing the moodboard. Towards this, we propose the use of representations for collections. We argue (and show empirically, in Section 5.2) that for effective retrieval, collection level representations need to have independent view-specific components. We next characterize view-specific collection level representations and outline our self-supervised approach to obtain them.
### View-Specific Representation Learning
Our primary premise is that each image can be described by multiple alternative descriptors of its content. For example, images can be projected into a space capturing color semantics such that nearby points have similar color composition; these images can also be projected onto a different space where closeness captures style properties of images. We refer to color and style as views, such that alternative aspects of image content are captured by different views. Our hypothesis is that collection-based retrieval requires effective view-specific understanding of the images comprising the collection. While obtaining shared information across views has been studied in other works (e.g. (Zhu et al., 2017)), in the following sections, we focus on recovering information that is unique to each view.
#### 3.1.1. Views and Out-of-the-Box Representations
The set of views we consider in this paper are specifically chosen based on their central role in visual exploration:
1. **Object**: The object view of images forms the essence of several computer vision tasks such as object detection and segmentation. The ResNet-152 model (He et al., 2016) was trained for an object classification task, so, embeddings from its penultimate layer are taken to be the object view; the expectation is that images with similar ResNet embeddings contain visually similar objects.
2. **Style**: Our data, described in Section 4, is derived from an artistic domain, which motivates us to consider the style view. We use the outputs of the ALADIN architecture (Song et al., 2016), which was developed to retrieve images based on artistic style similarity.
3. **Color**: Since our setting is one of visual discovery, we consider color due to its important role in image retrieval. We utilize the LAB space, with the '_L_' corresponding to luminance and the other two views representing chrominance. As with reference works in this domain (Song et al., 2016), the ranges of \(L\), \(A\) and \(B\) values are discretized into bins of width \((10,10,10)\), and a color embedding is obtained as a histogram over what fraction of pixels contain a particular LAB value.
We have considered three views for images and associated well-known and state-of-the-art feature extractors -- retrieval using these out-of-the-box representations provides our baselines. Finally, we note that while we have only described three views, our setup extends naturally to being able to consider a larger enumeration of visual axes. We leave a thorough study of the possible range of visual dimensions and their interactions to future work.

Figure 3. Our proposed model. Input features \(\mathbf{x_{*}}\) are transformed into two outputs: (1) \(\mathbf{z_{*}^{p}}\), which are specific to a view, with orthogonalization encouraged amongst them; (2) \(\mathbf{z_{*}^{a}}\), which are aligned to capture common information across views.

Figure 2. A reference example that is intended to motivate the use of multiple views for collection-based retrieval.
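As a concrete illustration of the color view, the sketch below computes a binned LAB histogram along the lines described above; the exact bin edges and normalization used by the original feature extractor are assumptions here.

```python
import numpy as np
from skimage.color import rgb2lab

def lab_histogram(image_rgb, bin_width=(10, 10, 10)):
    """Color-view embedding: fraction of pixels falling in each binned LAB cell."""
    pixels = rgb2lab(image_rgb).reshape(-1, 3)          # L in [0, 100]; a, b roughly in [-128, 127]
    edges = [np.arange(0, 100 + bin_width[0], bin_width[0]),
             np.arange(-128, 128 + bin_width[1], bin_width[1]),
             np.arange(-128, 128 + bin_width[2], bin_width[2])]
    hist, _ = np.histogramdd(pixels, bins=edges)
    hist = hist.ravel()
    return hist / hist.sum()                            # normalize to pixel fractions
```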
#### 3.1.2. Disentanglement Model
We intuit that there exist correlations amongst the views: for example, nearest neighbors obtained using ResNet features might also capture similarities in color. Several efforts that study the extraction of features common and specific to views from multi-view data have relied on reconstructing original features from factorized representations (Wang et al., 2017; Wang et al., 2018). Our approach, illustrated in Figure 3, relies on the same intuition. We factorize the input into two components - the view-aligned representations contain what is common across the views, and the view-specific representations contain information unique to each view. These are then combined to reconstruct the input.
We begin with input representations \(\mathbf{x_{m}}\in\mathbb{R}^{d_{m}}\) for each data point, where view \(m\in\mathcal{M}\) has input representations of size \(d_{m}\), and as described previously, we have \(\mathcal{M}=\{\text{object, style, color}\}\) in this paper. Our model processes the input through three neural network pathways:
1. **View-specific:**\(\mathbf{z_{m}^{p}}=\mathcal{F}_{m}^{p}\left(\mathbf{x_{m}}\right)\)
2. **View-aligned:**\(\mathbf{z_{m}^{a}}=\mathcal{F}_{m}^{a}\left(\mathbf{u}\right)\), where \(\mathbf{u}=\mathcal{F}^{u}(\left[\mathbf{x_{*}}\right])\), with \(\left[\mathbf{x_{*}}\right]\) being the concatenation of all input representations
3. **Reconstruction:**\(\mathbf{\bar{x}_{m}}=\mathcal{F}_{m}^{r}(\left[\mathbf{z_{m}^{p}};\mathbf{z_{m}^{a}}\right])\), where \(\left[\mathbf{z_{m}^{p}};\mathbf{z_{m}^{a}}\right]\) is a concatenation of \(\mathbf{z_{m}^{p}}\) and \(\mathbf{z_{m}^{a}}\)
Here, all \(\mathcal{F}_{m}^{a}\) and \(\mathcal{F}^{u}\) are two-layer and one-layer feed-forward networks respectively, where _ReLU_ non-linear activation (Krizhevsky et al., 2015) is used between layers.
For ease of notation, we stack \(\mathcal{B}\) individual feature vectors into a batch to form data matrices: \(\mathcal{X}_{m}=\left[\mathbf{x_{m}}\right]\) contains input representations, \(\mathcal{Z}_{m}^{p}=\left[\mathbf{z_{m}^{p}}\right]\) contains view-specific representations, \(\mathcal{Z}_{m}^{a}=\left[\mathbf{z_{m}^{a}}\right]\) contains view-aligned representations, and \(\hat{\mathcal{X}}_{m}=\left[\mathbf{\bar{x}_{m}}\right]\) contains reconstructed representations for view \(m\). All representations that are the output of our models are normalized to be of unit norm, and an inner product between them serves as the definition of similarity between the corresponding embeddings.
We note that \(\mathcal{F}_{m}^{a}\) is equivalent to multimodal models that project different views to the same underlying space to obtain aligned representations (Bang et al., 2017). Our intention in the current paper is to extract view-specific representations that capture what is uniquely contained within that view. The role of model components \(\mathcal{F}_{m}^{r}\) and \(\mathcal{F}_{m}^{a}\) is to ensure that there is minimal loss of information with respect to the input \(\mathbf{x_{m}}\). Therefore, though the focus in the current paper is on \(\mathcal{F}_{m}^{p}\), the complete architecture illustrated in Figure 3 is required to obtain robust and useful representations.
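For concreteness, the following PyTorch sketch mirrors the three pathways above. The layer widths, the shared embedding size, and the depth of \(\mathcal{F}_{m}^{p}\) are illustrative assumptions rather than the authors' exact configuration; the view-specific output is kept at the input dimensionality \(d_{m}\) so that the information-transfer loss below (a cosine with \(\mathbf{x_{m}}\)) is well defined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledMultiView(nn.Module):
    def __init__(self, input_dims, emb_dim=128):
        """input_dims: dict mapping view name -> input feature size d_m."""
        super().__init__()
        self.views = list(input_dims)
        # View-specific pathways F^p_m; output kept at d_m (assumed depth of two layers).
        self.f_spec = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
            for m, d in input_dims.items()})
        # One-layer shared encoder F^u over the concatenated inputs,
        # followed by two-layer per-view alignment heads F^a_m.
        self.f_shared = nn.Linear(sum(input_dims.values()), emb_dim)
        self.f_align = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                             nn.Linear(emb_dim, emb_dim))
            for m in input_dims})
        # Reconstruction heads F^r_m from [z^p_m ; z^a_m] back to the input space.
        self.f_rec = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d + emb_dim, emb_dim), nn.ReLU(),
                             nn.Linear(emb_dim, d))
            for m, d in input_dims.items()})

    def forward(self, x):
        """x: dict mapping view name -> (batch, d_m) tensor of input features."""
        u = self.f_shared(torch.cat([x[m] for m in self.views], dim=1))
        z_spec = {m: F.normalize(self.f_spec[m](x[m]), dim=1) for m in self.views}
        z_align = {m: F.normalize(self.f_align[m](u), dim=1) for m in self.views}
        x_rec = {m: self.f_rec[m](torch.cat([z_spec[m], z_align[m]], dim=1))
                 for m in self.views}
        return z_spec, z_align, x_rec
```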
#### 3.1.3. Model Fitting
We define the following loss function that is minimized to estimate the parameters of our proposed model:
\[\mathcal{L}=\lambda_{1}\cdot\mathcal{L}_{ali}+\lambda_{2}\cdot\mathcal{L}_{ spec}+\lambda_{3}\cdot\mathcal{L}_{inf}+\lambda_{4}\cdot\mathcal{L}_{rec} \tag{1}\]
The \(\lambda_{i}\)'s are hyperparameters that control the contribution of the various factors we are attempting to balance. The loss terms in Equation 1 are defined below with respect to batches of size \(\mathcal{B}\).
* **Inter-view Alignment Loss \(\left(\mathcal{L}_{ali}\right)\)**: We use the following objective to align representations from every pair of views \((m,m^{\prime})\): \[\mathcal{L}_{ali}=\frac{1}{\mathcal{B}}\sum_{(m,m^{\prime})}\left(\mathcal{B} -trace(\mathcal{Z}_{m}^{a}\times\mathcal{Z}_{m^{\prime}}^{a^{T}})\right)\]
This term encourages \(\mathcal{Z}_{m}^{a}\) and \(\mathcal{Z}_{m^{\prime}}^{a}\) to be aligned. It is designed to reward an increased similarity between aligned representations of the same data point from different views \(m\) & \(m^{\prime}\) - captured by the diagonal entries of the matrix \(\mathcal{Z}_{m}^{a}\times\mathcal{Z}_{m^{\prime}}^{a^{T}}\).
* **Inter-view Orthogonalization Loss \(\left(\mathcal{L}_{spec}\right)\)**: This is an orthogonality constraint minimizing the overlap between pairs \((m,m^{\prime})\): \[\mathcal{L}_{spec}=\sum_{(m,m^{\prime})}\frac{1}{d_{m}\ast d_{m^{\prime}}}\left\|\mathcal{Z}_{m}^{p^{T}}\times\mathcal{Z}_{m^{\prime}}^{p}\right\|_{F}^{2}\]
Since \(\mathcal{Z}_{m}^{p}\) contains unit norm vectors, \(\mathcal{L}_{spec}\) is the squared Frobenius norm of the cross-correlation matrix between pairs of views.
* **Intra-view Information Transfer Loss \(\left(\mathcal{L}_{inf}\right)\)**: To prevent degenerate view-specific representations, we introduce a regularization term to retain information content within a view: \[\mathcal{L}_{inf}=\frac{1}{\mathcal{B}}\sum_{m}\left(\mathcal{B}-trace( \mathcal{X}_{m}\times\mathcal{Z}_{m}^{p^{T}})\right)\]
Since \(\mathcal{X}_{m}\) and \(\mathcal{Z}_{m}^{p}\) contain unit norm vectors, minimizing \(\mathcal{L}_{inf}\) maximizes the cosine similarity between view-specific and input representations for each sample.
* **Intra-view Reconstruction Loss \(\left(\mathcal{L}_{rec}\right)\)**: The reconstruction loss is the mean squared error between input representations \(\mathbf{x_{m}}\) and their estimates \(\mathbf{\bar{x}_{m}}\). \[\mathcal{L}_{rec}=\sum_{m}\frac{1}{\mathcal{B}\ast d_{m}}\left\|\hat{\mathcal{X}}_{m}-\mathcal{X}_{m}\right\|_{F}^{2}\]
Alternative formulations for these losses are possible, and we leave such explorations to future work.
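A minimal sketch of the four loss terms, written against the batched matrices defined above (rows assumed unit-normalized where the text requires it). The default \(\lambda_{i}\) values are those quoted in Section 5.1; summing once per unordered view pair is an assumption.

```python
import torch

def disentanglement_loss(Z_spec, Z_align, X_in, X_rec,
                         lambdas=(0.001, 0.05, 0.001, 0.0001)):
    """All arguments are dicts view -> (B, d) tensors; X_in and Z_spec rows are
    assumed L2-normalized, as stated in the text."""
    l1, l2, l3, l4 = lambdas
    views = list(Z_spec)
    B = next(iter(Z_spec.values())).shape[0]
    pairs = [(m, mp) for i, m in enumerate(views) for mp in views[i + 1:]]

    # Inter-view alignment: reward high diagonal similarity between aligned reps.
    L_ali = sum(B - torch.trace(Z_align[m] @ Z_align[mp].T) for m, mp in pairs) / B
    # Inter-view orthogonalization: penalize cross-correlation of view-specific reps.
    L_spec = sum((Z_spec[m].T @ Z_spec[mp]).pow(2).sum()
                 / (Z_spec[m].shape[1] * Z_spec[mp].shape[1]) for m, mp in pairs)
    # Intra-view information transfer: keep each z^p close to its input x.
    L_inf = sum(B - torch.trace(X_in[m] @ Z_spec[m].T) for m in views) / B
    # Intra-view reconstruction: mean squared error of the reconstructed inputs.
    L_rec = sum((X_rec[m] - X_in[m]).pow(2).sum() / (B * X_in[m].shape[1])
                for m in views)

    return l1 * L_ali + l2 * L_spec + l3 * L_inf + l4 * L_rec
```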
### Collection-based Retrieval
In this section, we describe how the view-specific representations are used for inferring the intent of the collection of images and measuring how true a candidate image is to this intent.
#### 3.2.1. Representing a Collection
We denote a collection of \(N\) images as \(\mathcal{C}\). Let the view-specific representation for view \(m\) of the \(i^{th}\) image in \(\mathcal{C}\) be denoted as \(\mathbf{z_{m,i}^{p}}\) \(\forall i\in\{1,\dots,N\}\), which we obtain as outputs of our model described in Section 3.1. We define the collection-level representation of \(\mathcal{C}\) for view \(m\) as the mean of the view-specific representations over its member images: \(\mathbf{C_{m}^{p}}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{z_{m,i}^{p}}\). Computing a query in this manner is similar to how pseudo-relevance feedback is used for image retrieval (Song et al., 2017).
#### 3.2.2. Intent Computation
Given a collection \(\mathcal{C}\), we are interested in inferring why its member images were brought together. We model this by characterizing the intent as being proportional to the degree of homogeneity of images in the collection along that view. We obtain the raw intent of a collection with respect to view \(m\) as the average similarity along view \(m\) across all pairs of images in
the collection:
\[\hat{\beta}_{m}=\frac{1}{N\times(N-1)}\sum_{(i,j)}\mathbf{z}_{\text{m},i}^{\text{p}} \cdot\mathbf{z}_{\text{m},j}^{\text{p}} \tag{6}\]
Note that the summand in Equation 6 computes the average cosine similarity since the output view-specific vectors are normalized. To ensure that these raw intent weights are comparable across views, we standardize them using statistics obtained from each view's embedding space:
\[\beta_{m}=\frac{\hat{\beta}_{m}-\mu_{m}}{\sigma_{m}} \tag{7}\]
where \(\mu_{m}\) and \(\sigma_{m}\) are the mean and standard deviation of the pairwise similarities between all pairs of images from the dataset measured along view \(m\). Finally, our definition of intent is a normalization across views so that the intent weights sum to 1:
\[\alpha_{m}=\frac{exp(\beta_{m})}{\sum_{m^{\prime}}exp(\beta_{m^{\prime}})} \tag{8}\]
#### 3.2.3. Weighted Similarity for Retrieval
Given a collection \(\mathcal{C}\), we are interested in ranking a corpus of images \(\mathcal{D}\) in decreasing order of relevance to \(C\). Given a candidate image \(d\in\mathcal{D}\), we obtain its view-specific representations \(\mathbf{d}_{\text{m}}^{\text{p}}\) as outputs of our model. We then assign a score to \(d\) using a weighted similarity metric as:
\[score\left(C,d\right)=\sum_{m}\alpha_{m}\times sim\left(\mathbf{C}_{\text{m }}^{\text{p}},\mathbf{d}_{\text{m}}^{\text{p}}\right) \tag{9}\]
where \(\mathbf{C}_{\text{m}}^{\text{p}}\) is the view-specific representation of the collection, \(\alpha_{m}\) is computed as in Equation 8, and \(sim\left(\mathbf{a},\mathbf{b}\right)\) is a measure of similarity between \(\mathbf{a}\) and \(\mathbf{b}\). For input style and color representations, we use the inverse of the \(L2\) distance between \(\mathbf{a}\) and \(\mathbf{b}\) as the similarity measure (Golovolov et al., 2015; Zhang et al., 2017). For other representations, we use \(sim\left(\mathbf{a},\mathbf{b}\right)=\mathbf{a}\cdot\mathbf{b}\).
Equation 9 reflects our idea that views corresponding to \(\mathcal{C}\)'s intent should be given a higher weight while ranking by relevance. Finally, we obtain a ranked list \(\mathcal{R}\) by sorting \(\mathcal{D}\) in decreasing order of \(score(C,d)\) values \(\forall d\in\mathcal{D}\). We discuss the evaluation of \(\mathcal{R}\) in Section 5.2.
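Putting Equations 6-9 together, intent inference and weighted scoring can be sketched as below. The dataset-level statistics \(\mu_{m}\) and \(\sigma_{m}\) are assumed precomputed, and the dot-product similarity corresponds to the output-representation setting (the inverse-\(L2\) variant for input style/color features is omitted).

```python
import numpy as np

def intent_weights(Z_coll, mu, sigma):
    """Z_coll: dict view -> (N, d_m) unit-norm view-specific reps of a collection.
    mu, sigma: per-view mean/std of pairwise similarities over the dataset (Eq. 7)."""
    beta = {}
    for m, Z in Z_coll.items():
        S = Z @ Z.T
        N = len(Z)
        raw = (S.sum() - np.trace(S)) / (N * (N - 1))   # Eq. (6): mean pairwise similarity
        beta[m] = (raw - mu[m]) / sigma[m]              # Eq. (7): standardize per view
    z = np.array([beta[m] for m in Z_coll])
    alpha = np.exp(z) / np.exp(z).sum()                 # Eq. (8): normalize across views
    return dict(zip(Z_coll, alpha))

def score(C_rep, d_rep, alpha):
    """Eq. (9): intent-weighted similarity of candidate d to collection C,
    with dot products as the per-view similarity."""
    return sum(alpha[m] * float(C_rep[m] @ d_rep[m]) for m in alpha)
```

The ranked list \(\mathcal{R}\) then follows by sorting the corpus in decreasing order of these scores.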
## 4. Experimental Setup
We use the Behance-Artistic-Media dataset (BAM) (Zhu et al., 2017), a publicly-available dataset of artistic images. In particular, we use the crowd-annotated subset of BAM, containing 331,116 images (after filtering out broken links). Several images in BAM are annotated with one or more of 3 _attributes_ - (i) content (associated with 143,480 images), (ii) media (60,225 images), and (iii) emotion (24,844 images). Each attribute corresponds to multiple _attribute classes_ into which the image may be categorized. Table 1 shows the distribution of images among all attributes and classes in BAM. For each image in BAM, we obtain the outputs of the out-of-the-box feature extractors described earlier - ResNet (object), ALADIN (style), and LAB Histogram (color) - as our model inputs. We divide the dataset into train, validation, and test sets, maintaining a \(6:3:1\) ratio. While the validation set is used for hyperparameter tuning, images in the test set are set aside to enable the evaluation of our model on the collection expansion task.
### Simulating Collections
To evaluate the performance of our approach on the task of moodboard expansion, we simulate the gathering of moodboards of images with known intent using the attribute labels in BAM. By picking a subset of images that are all annotated with the same attribute class, we obtain a collection of images that are similar with respect to that characteristic. For example, gathering a collection of dog images may be simulated by picking a set of images from BAM with the label content = _'dog'_. We refer to such simulated collections as {attribute}-type collections, where {attribute} \(\in\) {content, media, emotion}. As another example, a sample of images tagged with emotion = _'happy'_ may be taken to represent an emotion-type collection. Given a collection of a known attribute type, we retrieve additional candidate images that are relevant to the collection using the method described in Section 3.2.3. To judge the relevance of retrieved results, we compute ranking metrics on the top-100 retrieved results using the attribute types as labels.
#### 4.1.1. Ground Truth Intents
There are implicit correlations between the attribute labels of BAM and the views we have considered. For each attribute, we identify the view that we expect the attribute to have a high correlation with. We state the following associations between views and types of collections:
* content-type collections have high object intent
* media and emotion-type collections have high style intent
Note that this is knowledge we possess about the dataset; these associations are not made available to the model, which only has access to the input feature extractors for each view. The purpose of intent inference would be to recover the view correlated with the collection's attribute-type as having the highest weight. We demonstrate this via empirical experiments in Section 5.2.1.
### Metrics
Our evaluation setup comprises two phases: (i) intrinsic: we assess the quality of view disentanglement achieved using our approach, and (ii) extrinsic: we evaluate the effectiveness of our model by computing relevance metrics for the retrieved results.
#### 4.2.1. **Evaluating Disentanglement**
We use two metrics to quantify the disentanglement of view-specific representations:
1. **Pearson correlation coefficient**: Let \(\mathcal{S}=matsim(\mathcal{P},\mathcal{Q})\in\mathbb{R}^{n_{1}\times n_{2}}\) represent a matrix of pairwise similarities between entries of \(\mathcal{P}\in\mathbb{R}^{n_{1}\times d}\) and \(\mathcal{Q}\in\mathbb{R}^{n_{2}\times d}\) such that \(\mathcal{S}(i,j)=sim(\mathcal{P}(i),\mathcal{Q}(j))\). We compute Pearson correlation coefficient between the rows of \(matsim(\mathcal{Z}_{m}^{\text{p}},\mathcal{Z}_{m}^{\text{p}})\) and \(matsim(\mathcal{Z}_{m}^{\text{p}},\mathcal{Z}_{m^{\prime}}^{\text{p}})\).
| **Attribute** | **# Images** | **Attribute Classes** |
| --- | --- | --- |
| content | 143,480 | bicycle, bird, building, cars, cat, dog, flower, people, tree |
| media | 60,225 | 3D graphics, vector art, watercolor, pencil sketch, comic, pen ink, oil paint |
| emotion | 24,844 | happy, gloomy, scary, peaceful |

Table 1. Attributes and Attribute Classes in the BAM dataset.
where \(m\neq m^{\prime}\). This quantifies the inter-view correlation between the pairwise similarities of data points in views \(m\) and \(m^{\prime}\). A high Pearson correlation coefficient indicates overlap between the two views, whereas low correlation values indicate that unique aspects are being captured by the two views individually. Similarly, by computing the inter-view correlation between the rows of \(matsim(X_{m},X_{m})\) and \(matsim(X_{m^{\prime}},X_{m^{\prime}})\), we obtain measures of overlap between the views when they are represented using input representations. Further, the intra-view correlation between the rows of \(matsim(X_{m},X_{m})\) and \(matsim(\mathcal{Z}_{m}^{p},\mathcal{Z}_{m}^{p})\) informs us about the deviation of the output view-specific representations from the input representations.
2. **Hilbert-Schmidt Independence Criterion (HSIC)**: We compute the normalized HSIC metric (Han et al., 2017) as a proxy for the mutual information (MI) between view representations: \[HSIC\left(\mathcal{Y}_{m},\mathcal{Y}_{m^{\prime}}\right)=\frac{trace\left( \mathbf{K_{m}HK_{m^{\prime}}H}\right)}{\left\|\mathbf{H}\mathbf{K_{m}H}\right\| _{2}\left\|\mathbf{H}\mathbf{K_{m^{\prime}}H}\right\|_{2}}\] (10) where \(\mathbf{K_{m}}=matsim(\mathcal{Y}_{m},\mathcal{Y}_{m})\), and \(\mathbf{H}=\mathbf{I}-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathbf{T}}\) if \(\mathbf{K_{m}}\in\mathbb{R}^{n\times n}\). Just as the case with the correlation measure above, we measure inter-view MI between output representations when \(\mathcal{Y}_{m}=\mathcal{Z}_{m}^{p}\) and between input representations when \(\mathcal{Y}_{m}=\mathcal{X}_{m}\), with \(m\neq m^{\prime}\). We can similarly measure the intra-view mutual information by substituting \(\mathcal{Y}_{m}=\mathcal{X}_{m}\) and \(\mathcal{Y}_{m^{\prime}}=\mathcal{Z}_{m}^{p}\).
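The normalized HSIC of Eq. (10) is simple to compute from similarity matrices. The sketch below uses a linear kernel \(\mathbf{K}=\mathcal{Y}\mathcal{Y}^{T}\) and reads \(\left\|\cdot\right\|_{2}\) as the Frobenius norm (as in CKA); both choices are assumptions about details not spelled out here.

```python
import numpy as np

def normalized_hsic(Y1, Y2):
    """Normalized HSIC (Eq. 10) between two representation matrices (rows = samples),
    using linear kernels K = Y Y^T and Frobenius norms for the normalization."""
    n = Y1.shape[0]
    K1, K2 = Y1 @ Y1.T, Y2 @ Y2.T
    H = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
    num = np.trace(K1 @ H @ K2 @ H)
    return num / (np.linalg.norm(H @ K1 @ H) * np.linalg.norm(H @ K2 @ H))
```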
#### Evaluating Expansion of Collections
To quantify the retrieval performance of our approach, we compute the relevance of the _top-k_ results of the ranked list \(\mathcal{R}\) described in Section 3.2. We compute the Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR) of \(\mathcal{R}\) by using the attribute labels in the BAM dataset (as discussed in Section 4.1) to indicate ground-truth relevance. Specifically, a retrieved image is considered relevant for a query collection if the image belongs to the same attribute class as that used for simulating the query collection.
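One common way to compute these metrics over a top-100 ranked list with binary relevance is sketched below; the normalization convention for average precision (dividing by the number of relevant items retrieved) is an assumption.

```python
import numpy as np

def average_precision_at_k(relevance, k=100):
    """relevance: binary relevance labels in ranked order."""
    rel = np.asarray(relevance[:k], dtype=float)
    if rel.sum() == 0:
        return 0.0
    prec_at_i = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((prec_at_i * rel).sum() / rel.sum())

def reciprocal_rank(relevance, k=100):
    hits = np.flatnonzero(np.asarray(relevance[:k]))
    return 1.0 / (hits[0] + 1) if len(hits) else 0.0

# MAP and MRR are the means of these per-query quantities over simulated collections.
```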
## 5. Experiments and Results
We validate our method using both intrinsic and extrinsic evaluations using view-specific representations.
### Model Training
Our model is trained using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0001, in batches of size 64. The other important hyperparameters associated with our model are the \(\lambda_{i}\)'s described in Section 3.1.3. Since our focus is on learning view-specific representations, which are directly influenced by \(\mathcal{L}_{spec}\), we study the effect of varying \(\lambda_{2}\) more closely. We consider \(\lambda_{2}\) from 0.0 to 5.0 while keeping the values of the other hyperparameters in the loss function fixed (\(\lambda_{1}=0.001,\lambda_{3}=0.001\), and \(\lambda_{4}=0.0001\)). By sweeping over this operating range for \(\lambda_{2}\), we observe its effect on the disentanglement of input representations.
Figure 4 visualizes the effect of increasing \(\lambda_{2}\) on the Pearson correlation and HSIC metrics discussed in Section 4.2.1. Firstly, when no disentanglement is enforced, i.e., \(\lambda_{2}=0\), the metrics computed using input representations and output view-specific representations are almost equal, across all pairs of views. This indicates a complete information transfer between input and output representations. As we increase \(\lambda_{2}\), the inter-view Pearson correlation and HSIC metrics decrease for all pairs of views; this trend is expected for inter-view disentanglement. The decrease in correlation indicates that each output view-specific representation is capturing less information about all other views than their input counterparts.
We are also interested in quantifying the intra-view information retained by our view-specific representations. We measure this as the Pearson correlation and mutual information between the input and output representations for each view. Figure 5(a) shows the trends as \(\lambda_{2}\) increases. Once again, the decrease in these metrics is expected because the output representations lose information common across views and their overlap with the input decreases. Thus, we note that the intra-view information transfer is influenced by the disentanglement of views because \(\lambda_{3}\), which directly influences the corresponding loss component, was held constant in these experiments. Similarly, the reconstruction loss (Figure 5(b)) increases despite \(\lambda_{4}\) being held constant in these experiments.

Figure 4. Inter-view correlations with varying \(\lambda_{2}\) and fixed \(\lambda_{i}\). As expected, both Pearson and HSIC drop with increasing \(\lambda_{2}\).

Figure 5. (a) Intra-view correlation and MI between input and output representations with increasing \(\lambda_{2}\); (b) Final reconstruction loss for different values of \(\lambda_{2}\). In both cases, varying \(\lambda_{2}\) gives us the control we require, even though the \(\lambda_{i}\) for the other training objective components are kept fixed.
We also observe anomalous behavior of the view-specific representations when using large values of \(\lambda_{2}\). For larger values of \(\lambda_{2}\), \(\mathcal{L}_{rec}\) converges at relatively higher values indicating difficulty in reconstructing the input representations using these view-specific representations. In Figure 7, we show an alternative view of the decreased correlation between input and output representations of the same view, by plotting a histogram of all pairwise similarities between images in the validation set (computed over the disentangled representations) as we sweep over \(\lambda_{2}\). When \(\lambda_{2}=0.0\), no disentangling has been enforced and the observed distribution closely resembles the distribution of similarities in the input embedding spaces. With increasing \(\lambda_{2}\) values, we notice a shift in similarity distributions for all the views, indicating departure from the information captured in the input representations. For \(\lambda_{2}=5.0\), we observe peaks at similarity values of \(-1.0\) and \(1.0\), indicating that most representations are either completely orthogonal or nearly identical to others. This is a degenerate scenario that we would like to avoid. The optimal value of \(\lambda_{2}\) would therefore be somewhere in the middle. To operate in higher disentanglement regimes, future work may incorporate recently proposed regularization methods (e.g. (Beng et al., 2019)) to prevent degenerate situations.
We would like to choose hyperparameters based on this intrinsic evaluation, and evaluate the model and learnt representations in the downstream collection-based retrieval task. Since only the relative values of the loss function components matter, we retain \(\lambda_{1}\), \(\lambda_{3}\) and \(\lambda_{4}\) as before and choose \(\lambda_{2}\) based on the trends described above. Choosing a very low value of \(\lambda_{2}\) would not allow us to investigate the benefits of disentanglement, while we have seen that larger values of \(\lambda_{2}\) lead to degenerate behavior. From the HSIC and Pearson correlation values of the inter/intra-view representations, we pick an intermediate value, \(\lambda_{2}=0.05\), to evaluate under the collection expansion task. We have also conducted a complete grid search of the hyper-parameters; since they do not add many more insights to the current findings, they have not been reported here.
### Expansion of Collections
In this section, we evaluate the performance of the output view-specific representations on the task of collection-based retrieval.
#### 5.2.1. Evaluating Intent Inference
The first step in expanding a moodboard involves predicting the intent of the collection of images, and we evaluate this component in isolation. We define a _pure_ collection where all member images belong to a single attribute class (e.g. emotion = '_scary_'). By injecting images belonging to other attribute classes into _pure_ collections, we obtain _impure_ collections. The fraction of images in a collection that belong to the attribute class used to simulate a collection provides a measure of its _purity_.
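A small sketch of this simulation protocol follows; how the impure remainder is sampled (here, uniformly over the pooled other attribute classes) is an assumption.

```python
import numpy as np

def simulate_collection(ids_by_class, target_class, purity, size, rng=None):
    """Sample `size` image ids with the given purity for `target_class`.

    ids_by_class maps each attribute class to an array of image ids; the impure
    remainder is drawn uniformly from the pooled other classes (an assumption).
    """
    rng = rng if rng is not None else np.random.default_rng()
    n_pure = int(round(purity * size))
    pure = rng.choice(ids_by_class[target_class], n_pure, replace=False)
    others = np.concatenate([v for c, v in ids_by_class.items() if c != target_class])
    noise = rng.choice(others, size - n_pure, replace=False)
    return np.concatenate([pure, noise])
```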
We vary the purity of collections and show that the computed intent weights respond as expected in Figure 6. Each subplot is obtained by simulating collections that contain images belonging to a specified attribute class (indicated as the subplot title) and a given purity level (\(x\)-axis). For a given collection, the view-specific intents are computed as defined in Section 3.2.2. For each purity value, we have plotted the average view-specific intent weights computed over 100 collection simulations.
Specifically, when \(purity=0\) (the leftmost points), a collection contains a uniform mixture of images from all attribute classes; the intent weights reflect this by being \(1/3\) across the three views. As the purities of media-type and emotion-type collections are increased, the style intents increase while color and object intents reduce. Similarly, as we increase the purity of content-type collections, the object intent consistently increases and reaches a maximum value for pure collections. These findings agree with the known attribute-view correlations discussed in Section 4.1, and therefore validate our method for inferring the intent. The disentanglement of view-specific representations is critical to having this behavior - a rising trend of intent with increasing purity is observed only for the view correlated with a collection's attribute class.
#### 5.2.2. Retrieving Images for Collection Queries
Our method for collection-based retrieval involves computation of intent weights and the subsequent retrieval of images using a weighted similarity. It is not necessary that the representations used for computing intent weights (in Equation 8) be the same as the ones used for computing _sim_ scores for collection-based retrieval (in Equation 9). We therefore consider the following variations in our experiments:

1. input-uniform - Input representations are used for \(sim\) score computation and intent weights are uniform across views. This is a multi-view setting without intent inference.
2. input-output - Input representations are used for \(sim\) score computation while view-specific (output) representations are used for intent weight inference.
3. output-output - View-specific (output) representations are used for both computing intent weights and \(sim\) scores.

Figure 6. View intents as a function of collection purity. Experiments with disentangled representations at \(\lambda_{2}=0.05\).
Table 2 presents MAP and MRR values obtained across our experiments. The first three rows are our baselines: for each view (object, style, and color), we use the corresponding out-of-the-box view representation (ResNet, ALADIN, and LAB Histogram) only for collection expansion via a simple nearest neighbor search without intent inference. The remaining three rows show the results for the variations discussed above. The Attribute-wise MAP & MRR columns are computed by fixing the attribute while simulating collection queries. Finally, Aggregate MAP & MRR values are computed by selecting the attribute label for each simulated collection at random and averaging across them. The results reported are the averages over 100 simulated collections for each configuration. In each case, the number of images in the query collection varies uniformly between 10 and 30. We make the following observations about the results shown:
1. Among the baselines, for each attribute, the correlated view representation scores the highest Attribute-wise MAP & MRR. This is especially noticeable in the performance of the ResNet representation for content-type collections, which we expect to have high object intent.
2. For input-uniform, the Aggregate MAP value is higher than two of three baselines, with Aggregate MRR higher than all three baselines; this indicates that each view provides incremental value when used with others, validating the use of multi-view representations for image retrieval.
3. input-output outperforms the preceding methods that do not use intent inference; this validates our intent prediction mechanism which is able to selectively invoke the relevant view based on the query collection.
4. Finally, disentangled multi-view representations combined with the inferred intent, (output-output), provides the best MAP and MRR in all cases, showing the utility of cross-view disentangled representations.
### Relevance - Diversity Trends
In Table 2, we show that the ranking effectiveness by using multiple views is on average better than what can be achieved from a single view. When the goal is to provide interesting additions to a designer's moodboard, a core requirement of the retrieval system is to ensure that the user has visibility into the full range of possibilities. This need for exploration is well-studied in the information retrieval community under the notion of diversity (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017).
We are interested in measuring the diversity of results in the returned list of candidate images \(\mathcal{R}\) (Section 3.2.3). Since we are operating in a multi-view space, we can make diversity measurements along each view. Specifically, we measure diversity along view \(m\) as \(\delta_{m}=1/\beta_{m}\), where \(\beta_{m}\) is computed as in Section 3.2.2 by treating \(\mathcal{C}=\mathcal{R}\) (Kang et al., 2019). This definition reflects our assumption that intent and diversity are inverse notions of each other. The results from our experiments are shown graphically in Figure 8, where each subplot shows the diversity with respect to the specified view on the \(x\)-axis, and MAP on the \(y\)-axis. We provide results for media-type collections and content-type collections.

| **Representation** | **MAP (content)** | **MAP (media)** | **MAP (emotion)** | **Aggregate MAP** | **MRR (content)** | **MRR (media)** | **MRR (emotion)** | **Aggregate MRR** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet | 0.823 | 0.400 | 0.305 | 0.523 | 0.922 | 0.557 | 0.409 | 0.668 |
| ALADIN | 0.565 | 0.697 | 0.483 | 0.585 | 0.736 | 0.880 | 0.708 | 0.800 |
| LAB Histogram | 0.202 | 0.149 | 0.108 | 0.158 | 0.371 | 0.237 | 0.213 | 0.269 |
| input-uniform | 0.685 | 0.617 | 0.440 | 0.581 | 0.913 | 0.845 | 0.660 | 0.809 |
| input-output | 0.813 | 0.692 | 0.497 | 0.685 | 0.950 | 0.899 | 0.715 | 0.889 |
| output-output | **0.857** | **0.719** | **0.513** | **0.713** | **0.983** | **0.924** | **0.797** | **0.896** |

Table 2. Evaluation of different representations on the collection expansion task. The first three representations correspond to the baselines of using single view (object, style and color respectively) representations for the images.

Figure 7. Similarities between representations of all pairs of images measured using the output representations of each view. The output representations are obtained by varying \(\lambda_{2}\).
We make the following observations concerning Figure 8(a):
1. ALADIN shows higher relevance scores than LAB Histogram or ResNet. As media-type collections are anticipated to be correlated with the style view, this behavior is expected. Further, ALADIN has greater diversities along object and color views, and the least diversity along the style view.
2. Among the variations that use intent weights while ranking, input-uniform has the least diversity along object and color views. This undesired behavior is also expected - by giving uniform intent, we weigh uncorrelated views more than necessary.
3. In a multi-view setting, using the output representations solely for intent computation (input-output) leads to higher relevance scores when compared to uniform intent weights. We observe higher diversities along uncorrelated views as desired.
4. As shown in Table 2, when using the disentangled view-specific representations for both similarity measurement and intent computation (output-output), we observe the highest relevance scores. In Figure 8(a), we additionally observe that this scenario produces the highest diversity along the object view, with comparable diversities to ALADIN along the other views.
Similar observations can be made from Figure 8(b) with respect to the object view as well. A minor difference is that ResNet obtains the highest diversities for uncorrelated views, indicating the alignment between ResNet representations and the object view.
Thus we show that our weighted nearest neighbor computation enables us to retrieve images that are similar along the view corresponding to the user's intent, while allowing diversity along the other views. Often, the relationship between relevance and diversity is described as a trade-off. The use of our model outputs for computing both intents and similarities leads to MAP values comparable to those observed with the correlated view but with increased diversity along uncorrelated views.
Figure 8. MAP-diversity trade-off for two attribute types.

## 6. Composing Collections

Deriving disentangled multi-view representations for a collection of images enables the novel use-case of composing multiple collections as a query to retrieve images that selectively adhere to the collections in the query. By picking desired view representations from each collection in the query, we can create a composite representation for a new (hypothetical) collection which can be used as a query for expansion. Figure 9 illustrates this idea qualitatively using two examples. Consider the top row from Figure 9. The query comprises a pair of collections (shown as outlined boxes) and the view that is relevant for each collection. Specifically, we consider the object view from the collection of bicycles and the style view from the collection of vector art images. By selecting the object representations from the former, style representations from the latter, and averaging out the color representations between the two, we obtain composite representations for a hypothetical collection that has the object features of the former and style features of the latter. Since we are only interested in the style and object views, we can split the intent weights between only these for the composite collection before ranking images from the index. Proceeding as described in Section 3.2, we obtain the ranked list of images shown on the top-right in Figure 9, which are images of bicycles styled in vector art form. The disentanglement process described in our paper is critical to enable this behavior. By ensuring that representations along different views are de-correlated, these views can be mixed and matched, allowing for a powerful visual querying mechanism. A similar example, representing the intent of flower images in oilpaint style, is shown in the bottom row of Figure 9.
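A sketch of the composition step described above follows. Averaging the unselected (color) view and splitting the intent evenly between the two selected views are assumptions consistent with, but not stated verbatim in, the example.

```python
import numpy as np

def compose_collections(coll_a, coll_b, view_from_a, view_from_b,
                        shared_views=("color",)):
    """coll_a, coll_b: dicts view -> collection-level view-specific vector C^p_m.
    Returns the composite collection representation and its intent weights."""
    composite = {view_from_a: coll_a[view_from_a], view_from_b: coll_b[view_from_b]}
    for v in shared_views:
        composite[v] = (coll_a[v] + coll_b[v]) / 2.0    # average the unselected view(s)
    # Split the intent between the two selected views only (even split assumed).
    alpha = {view_from_a: 0.5, view_from_b: 0.5, **{v: 0.0 for v in shared_views}}
    return composite, alpha
```

Ranking then proceeds exactly as in Section 3.2.3, using the composite representation and these intent weights.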
## 7. Conclusion and Discussion
In this work, we have introduced the notion of multi-view representations for image collections and enumerated a well-known set of image similarity axes as views - object, style, color. The baseline multi-view representation of an image is taken to be a union of popular feature extractors for each view. Our primary contribution is in transforming these input representations to minimize correlations among the views using a self-supervised approach. We have shown that this leads to output representations that better capture the overall characteristics of an image.
To illustrate the benefits of our approach, we have defined a novel collection level task involving retrieval of images relevant to a set of seed images. We have defined the intent of a collection of images to be a distribution over views - a higher weight assigned to a view indicates greater homogeneity for that view across images in the collection. Finally, we have shown that using these intent weights allows us to effectively score candidate images with respect to the query collection.
We have also proposed a new querying mechanism for image search driven by composing multiple collections of images. This is enabled using the ideas and techniques presented in this paper such as _views_ of images and representations and _intents_ of collections. While we have presented qualitative results here, future work can investigate this quantitatively on datasets more suited for this task.
The work described here is related to the active topic of representation learning. We have borrowed intuitions like factorized representations (Sohn et al., 2015) and disentanglement (Sohn et al., 2015) from the domain of NLP and applied it to the setting of image retrieval. As future work, we will look into training our model in an end-to-end manner customized for the retrieval task. We also intend to further evaluate the benefits of our approach via a thorough user study. From the perspective of the application that we have considered, extending our current setup to multiple iterations (Sohn et al., 2015; Krizhevsky et al., 2014) is a natural next step. While the treatment in the current paper is restricted to specific visual axes or views, the proposed framework is generalizable to a broader range of visual representation spaces. Finally, we intend to generalize the benefits of our approach to other datasets as well.
Figure 9. Qualitative results for composing collections using disentangled multi-view representations. (Top) The left subplot represents a collection with the _object_ intent of bicycles and the center subplot represents a collection with the _style_ intent of vector art images. Disentangled multi-view representations allows constructing a collection query resulting in a natural combined set of results – bicycles styled in vector art form. (Bottom) The left subplot is a collection with the _object_ intent of flowers, while the center subplot is a collection with the _style_ intent of oilpaint images. A query formed by composing the relevant disentangled views from these collections retrieves flowers styled in oilpaint form. |
2301.04368 | On the functional form of the radial acceleration relation | We apply a new method for learning equations from data -- Exhaustive Symbolic
Regression (ESR) -- to late-type galaxy dynamics as encapsulated in the radial
acceleration relation (RAR). Relating the centripetal acceleration due to
baryons, $g_\text{bar}$, to the total dynamical acceleration, $g_\text{obs}$,
the RAR has been claimed to manifest a new law of nature due to its regularity
and tightness, in agreement with Modified Newtonian Dynamics (MOND). Fits to
this relation have been restricted by prior expectations to particular
functional forms, while ESR affords an exhaustive and nearly prior-free search
through functional parameter space to identify the equations optimally trading
accuracy with simplicity. Working with the SPARC data, we find the best
functions typically satisfy $g_\text{obs} \propto g_\text{bar}$ at high
$g_\text{bar}$, although the coefficient of proportionality is not clearly
unity and the deep-MOND limit $g_\text{obs} \propto \sqrt{g_\text{bar}}$ as
$g_\text{bar} \to 0$ is little evident at all. By generating mock data
according to MOND with or without the external field effect, we find that
symbolic regression would not be expected to identify the generating function
or reconstruct successfully the asymptotic slopes. We conclude that the limited
dynamical range and significant uncertainties of the SPARC RAR preclude a
definitive statement of its functional form, and hence that this data alone can
neither demonstrate nor rule out law-like gravitational behaviour. | Harry Desmond, Deaglan J. Bartlett, Pedro G. Ferreira | 2023-01-11T09:26:16Z | http://arxiv.org/abs/2301.04368v2 | # On the functional form of the radial acceleration relation
###### Abstract
We apply a new method for learning equations from data--_Exhaustive Symbolic Regression_ (ESR)--to late-type galaxy dynamics as encapsulated in the radial acceleration relation (RAR). Relating the centripetal acceleration due to baryons, \(g_{\rm bar}\), to the total dynamical acceleration, \(g_{\rm obs}\), the RAR has been claimed to manifest a new law of nature due to its regularity and tightness, in agreement with Modified Newtonian Dynamics (MOND). Fits to this relation have been restricted by prior expectations to particular functional forms, while ESR affords an exhaustive and nearly prior-free search through functional parameter space to identify the equations optimally trading accuracy with simplicity. Working with the SPARC data, we find the best functions typically satisfy \(g_{\rm obs}\propto g_{\rm bar}\) at high \(g_{\rm bar}\), although the coefficient of proportionality is not clearly unity and the deep-MOND limit \(g_{\rm obs}\propto\sqrt{g_{\rm bar}}\) as \(g_{\rm bar}\to 0\) is little evident at all. By generating mock data according to MOND with or without the external field effect, we find that symbolic regression would not be expected to identify the generating function or reconstruct successfully the asymptotic slopes. We conclude that the limited dynamical range and significant uncertainties of the SPARC RAR preclude a definitive statement of its functional form, and hence that this data alone can neither demonstrate nor rule out law-like gravitational behaviour.
keywords: galaxies: kinematics and dynamics - dark matter - methods: data analysis
## 1 Introduction
Kinematic measurements of galaxies relate their visible and dynamical masses, affording constraints on the distribution of dark matter and/or the behaviour of gravity. These measurements are simplest to perform for late-type galaxies supported predominantly by rotation, as the enclosed dynamical mass may be calculated from the centripetal acceleration and the law of gravity. Such studies have revealed a striking correlation between the enclosed baryonic and total dynamical mass assuming Newtonian gravity, dubbed the mass discrepancy-acceleration (Sanders, 1990; McGaugh, 2004) or radial acceleration relation (RAR; Lelli et al., 2017). It has been claimed that the RAR indicates that at high accelerations the Newtonian dynamical mass follows the baryonic mass (indicating little dark matter and the validity of Newtonian mechanics), while as acceleration drops below a new constant of nature \(g_{0}\approx 10^{-10}\) m s\({}^{-2}\) the dynamical mass increasingly exceeds the baryonic mass in a regular way.
One may attempt to understand these observations from either a dark matter or modified gravity perspective. In \(\Lambda\)CDM the difference between the dynamical and baryonic mass is due to the dark matter that makes up most of the mass of the galaxy. The RAR must therefore be explained by the relative distributions of dark and visible mass established by the process of galaxy formation. Interactionless cold dark matter is influenced only gravitationally by the baryonic mass so the emergence of the RAR must be somewhat fortuitous; it is not established directly by a baryon-dark matter coupling (although see Blanchet & Le Tiec, 2008; Berezhiani & Khoury, 2015; Famaey et al., 2018 for alternative ideas). In contrast, the modified gravity (or modified inertia) interpretation posits a breakdown of Newtonian mechanics at low acceleration so that the dynamical mass inferred by a Newtonian analysis is not the true dynamical mass of the galaxy. The prototypical instantiation of this idea is Modified Newtonian Dynamics (MOND; Milgrom, 1983a, c, b), in which the kinematic acceleration \(g_{\rm obs}\) follows the square root of the Newtonian acceleration \(g_{\rm bar}\) in the weak-field regime. This enables the total dynamical mass of the galaxy to remain equal to the baryonic mass across galaxies' rotation curves, eliminating the need for dark matter in them. The MOND paradigm attempts to dispense with dark matter entirely, and has cosmologically viable relativistic extensions (most recently Skordis & Zlosnik, 2021). It is reviewed in Famaey & McGaugh (2012) and Banik & Zhao (2022).
Central to the dark matter-modified gravity debate in the context of galaxy dynamics is the functional form of the RAR. This is because MOND makes a very specific prediction (absent the external field effect: \(g_{\rm obs}=g_{\rm bar}\) in the high-acceleration "Newtonian regime" and \(g_{\rm obs}\propto g_{\rm bar}^{1/2}\) in the low-acceleration "deep-MOND regime") while dark matter could accommodate a range of possibilities depending on the effect of galaxy formation on halo density profiles, which remains highly uncertain (e.g. Duffy et al., 2010; Maccio et al., 2012; Grudic et al., 2020; Tenneti et al., 2018; Ludlow et al., 2017; Navarro et al., 2017; Keller and Wadsley, 2017). The only potentially unambiguous prediction is that the RAR tends to \(g_{\rm obs}=(\Omega_{\rm m}/\Omega_{\rm b})\,g_{\rm bar}\) at radii sufficiently large to encompass the cosmic baryon fraction, but it is unclear where or even if this occurs in galaxies. Thus, while the \(\Lambda\)CDM prediction for the full RAR can be tested only by applying potentially restrictive priors on galaxy formation effects (Di Cintio and Lelli, 2016; Desmond, 2017; Paranjape and Sheth, 2021), a more direct route towards informing the dark matter-modified gravity debate is to test the MOND prediction, specifically the limiting behaviour at \(g\ll g_{0}\) and \(g\gg g_{0}\), the small intrinsic scatter and the lack of residual correlations.
Here we focus on the asymptotic behaviour. This can be assessed to some extent by fitting a functional form with free power-law slopes at both ends (Lelli et al., 2017), but this assumes that the slope tends to a constant at each end and restricts to a specific part of the functional parameter space for which this is the case. These are in question when assessing the accuracy of the MOND prescription. A fully satisfactory fit should therefore make no such assumptions, eliminating potential confirmation bias and testing without any priors the assertion that the RAR implies no dynamically relevant dark matter at high \(g\) and the deep-MOND limit at low \(g\). We accomplish this here by means of a novel regression algorithm dubbed _Exhaustive Symbolic Regression_(ESR; Bartlett et al., 2022), and hence assess the degree to which the RAR supports the tenets of MOND. Within the MOND paradigm, this method also enables optimisation of the "interpolating function" (IF) \(g_{\rm obs}=\mathcal{F}(g_{\rm bar})\) between the two stipulated limits.
The structure of the paper is as follows. In Sec. 2 we describe the RAR data that we use, and in Sec. 3 our algorithm for generating functions and assessing their aptitude for describing the data. Sec. 4 presents the results. In Sec. 5 we discuss the broader ramifications, potential remaining uncertainties and ways in which the programme could be furthered in the future. Sec. 6 concludes. Full details on ESR are given in the companion paper Bartlett et al. (2022). Units not explicitly given are \(10^{-10}\) m s\({}^{-2}\), and all logarithms are natural.
## 2 Observational data
We use the SPARC data set (Lelli et al., 2016),1 a compilation of 175 rotation curves from the literature combined with _Spitzer_\(3.6\mu\)m photometry. We apply the same quality cuts as the RAR study of Lelli et al. (2017), removing galaxies with quality flag 3 (indicating large asymmetries, non-circular motions and/or offsets between stellar and HI distributions) and those with inclinations \(i<30\) deg, and points for which the quoted fractional uncertainty on the observed rotation velocity is greater than 10 per cent. This leaves 2,696 points from 147 galaxies.
Footnote 1: [http://astroweb.cwru.edu/SPARC/](http://astroweb.cwru.edu/SPARC/)
## 3 Method
We describe our method for generating and assessing trial functions in Sec. 3.1, and our likelihood function in Sec. 3.2. In Sec. 3.3 we outline our criteria for assessing whether a function displays MOND-like behaviour.
### Exhaustive Symbolic Regression
While algorithms for symbolic regression (SR)--the search for good functional descriptions of a dataset--are becoming mature, they remain fallible (La Cava et al., 2021). Unless the generating function of the data is known at the outset (in which case SR is not required), it is not possible to determine whether any SR algorithm has uncovered the best function. This motivated us to develop "Exhaustive Symbolic Regression" (ESR) which, given a set of basis functions, produces and evaluates _every_ possible function up to a given complexity of equation, defined here as the number of nodes in its tree representation. This enables a brute-force solution to relatively simple problems and provides a touchstone for assessing the results of stochastic algorithms at higher complexity. As shown in detail in Bartlett et al. (2022), stochastic searches regularly fail to find the best functions at even moderate complexity \(\sim\)6, so that we would not be confident of obtaining the functional form of the RAR through any algorithm besides ESR.
Presented in full in the companion paper, ESR has two main steps: _i)_ generating, and optimising the parameters of, all functions up to a given complexity, and _ii)_ ranking these functions using an information-theoretic metric combining accuracy and simplicity. For part \(i\), the steps are:
1. Generate all possible trees containing a given number of nodes (equal to the complexity of functions considered).
2. Generate the complete set of such functions by decorating these trees with all permutations of the operators from the operator list specified in advance, utilising the constraints on the arity of the operator that can occupy a given node.
3. Simplify the functions and remove duplicates. Variants of the same function (e.g. \(x(x+\theta_{0})\) and \(x^{2}+\theta_{0}x\)) are however retained as these may have different model complexities (used in step _ii_, below). For each unique function the variant is retained that minimises this.
4. Determine the values of the free parameters appearing in the functions that maximise the likelihood of the data (see Sec. 3.2).
5. Repeat for all complexities under consideration.
The only degrees of freedom in this procedure are the maximum complexity considered (here set at 9 as higher complexity is computationally prohibitive) and the set of operators of which the functions are composed. Here we choose:
* **Nullary:** \(g_{\rm bar}\), \(\theta\)
* **Unary:** \(\exp\), sqrt, square, inv
* **Binary:** +, \(-\), \(*\), /, pow
where \(\theta\) is a free parameter. We implicitly take the absolute value of the argument of any square root or power.
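As a rough illustration of part _i)_ above, the following is a minimal sketch (an illustrative stand-in, not the ESR implementation itself) that enumerates every decorated tree over this basis by recursing on the number of nodes. It counts expressions before the simplification and de-duplication of step (iii), so the totals exceed the number of unique functions quoted below.

```python
# Minimal sketch: enumerate all decorated expression trees over the
# nullary/unary/binary basis above, prior to simplification and de-duplication.
NULLARY = ["x", "theta"]                  # g_bar and a free-parameter placeholder
UNARY = ["exp", "sqrt", "square", "inv"]
BINARY = ["+", "-", "*", "/", "**"]

def expressions(n_nodes):
    """Yield string forms of every tree with exactly n_nodes nodes."""
    if n_nodes == 1:
        yield from NULLARY
        return
    for op in UNARY:                      # root is a unary operator
        for child in expressions(n_nodes - 1):
            yield f"{op}({child})"
    for op in BINARY:                     # root is binary: split the remaining nodes
        for k in range(1, n_nodes - 1):
            for left in expressions(k):
                for right in expressions(n_nodes - 1 - k):
                    yield f"({left} {op} {right})"

for c in range(1, 6):
    print(c, sum(1 for _ in expressions(c)))   # raw counts grow rapidly with complexity
```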
The result of this procedure is a list of all functions up to the maximum complexity (of which there are 2.24\(\times\)10\({}^{7}\)), along with the parameter values that maximise the likelihood of the RAR data. As in regular regression, using the maximum likelihood as the model selection criterion would favour overfitting, whereby a function fits the data near-perfectly but generalises or extrapolates poorly. SR therefore typically uses a two-objective optimisation, where the second objective is the "simplicity" of the function. In the absence of a metric for trading accuracy (the first objective) with complexity, optimal functions form a "Pareto front" where accuracy cannot be increased without reducing simplicity and vice versa. Simplicity has been defined analogously to model complexity (the number of nodes in the tree representation; e.g. in PySR; Cranmer et al.2020), among others, but such definitions are typically arbitrary and thus compromise the objectivity of the regression results.
To remedy this, part _ii_ of ESR implements the _minimum description length principle_ (MDL; Rissanen 1978; Grunwald & Roos 2019; Grunwald 2007) as a model selection criterion, which has an information-theoretic motivation and provides a natural framework for making commensurable the two objectives. MDL states that functions are preferred to the extent that they compress the data, i.e. minimise the number of bits required to communicate the data with the aid of the function. We implement this with a two-step code in which the description length (also called codelength) is comprised of a component describing the function and a component describing the residuals of the data around the function's expectation. We use the Shannon-Fano coding scheme for the latter (Cover & Thomas 1991), and for the former include contributions both from the structure of the function (penalising those employing more operators) and from the free parameters (penalising more parameters, especially ones that must be specified to high precision to achieve a high likelihood). The overall codelength of the compressed data, \(L(D)\), is derived in sec. 3 of Bartlett et al. (2022):
\[\begin{split} L(D)&=L(D|H)+L(H)\\ &=-\log(\hat{\mathcal{L}})+k\log(n)-\frac{p}{2}\log(3)\\ &\quad+\sum_{i}^{p}\left(\frac{1}{2}\log(\hat{I}_{ii})+\log(| \hat{\theta}_{i}|)\right)+\sum_{j}\log(c_{j}),\end{split} \tag{1}\]
where \(L\) is the description length, \(D\) the dataset, \(H\) the hypothesis (i.e. function in question), \(\mathcal{L}\) the likelihood, \(\theta\) a free parameter of the function, \(k\) the number of nodes in the function's tree representation, \(n\) the number of unique operators involved, \(p\) the total number of free parameters, \(I\) the Fisher information matrix of the parameters and \(c_{j}\) any constant natural numbers generated by simplifications. A hat denotes evaluation at the maximum-likelihood point. With all logarithms natural, this is the number of nats required to communicate the data with the aid of the function. \(L(D)\) supports a probabilistic interpretation over function space that generalises the likelihood: the relative probability of a function is \(\exp(-L(D))\)(Grunwald 2007).
The structure of the function alone determines the \(k\log(n)\) term, but the remaining terms require the free parameters to be numerically optimised to maximise the likelihood (which we use interchangeably with minimising the loss).3 We now describe our choice of likelihood for the RAR data.
Footnote 3: The \(p\log(3)/2\) term is only affected by the numerical optimisation if any parameters are set to 0 due to their maximum-likelihood values being less than 1 precision unit from 0 (see sec. 3 of Bartlett et al.2022).
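As a concrete reading of Eq. 1, the following sketch evaluates the codelength once the fitting step has supplied the maximum-likelihood value, the tree size, the operator count, the parameter values, the Fisher diagonal and any simplification constants. This is our own illustrative implementation of the formula, not the interface of the ESR code.

```python
import numpy as np

def description_length(neg_log_like, k, n, theta_hat, fisher_diag, constants=()):
    """Codelength L(D) = L(D|H) + L(H) of Eq. (1), in nats.

    neg_log_like : -log(L-hat) at the maximum-likelihood parameters
    k            : number of nodes in the function's tree representation
    n            : number of unique operators in the basis
    theta_hat    : maximum-likelihood parameter values
    fisher_diag  : diagonal of the Fisher information evaluated at theta_hat
    constants    : natural numbers generated by simplifications
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    fisher_diag = np.asarray(fisher_diag, dtype=float)
    p = theta_hat.size
    L_residuals = neg_log_like                     # data given the hypothesis
    L_structure = k * np.log(n)                    # functional form
    L_params = (-0.5 * p * np.log(3)
                + np.sum(0.5 * np.log(fisher_diag) + np.log(np.abs(theta_hat))))
    return L_residuals + L_structure + L_params + sum(np.log(c) for c in constants)
```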
### Loss function
As is typical (e.g. Lelli et al. 2017), we assume that \(g_{\rm bar}\), \(g_{\rm obs}\) and their uncertainties are uncorrelated across the dataset. We further assume that the true \(g_{\rm bar}\) and \(g_{\rm obs}\) values, denoted \(g_{\rm bar}^{\rm t}\) and \(g_{\rm obs}^{\rm t}\), generate the observed values with lognormal probability distributions centred at the true values with widths given by their uncertainties \(\delta g_{\rm bar}\) and \(\delta g_{\rm obs}\). Following Lelli et al. (2017), we fix the mass-to-light ratios \(\Upsilon_{\rm gas}=1.33\), \(\Upsilon_{\rm disk}=0.5\) and \(\Upsilon_{\rm bulge}=0.7\) and assign them 10, 25 and 25 per cent uncertainties respectively, summing these in quadrature to estimate \(\delta V_{\rm bar}\) and hence \(\delta g_{\rm bar}\) (assuming no uncertainty in radial position). We likewise assume the uncertainties on distance \(D\) and inclination \(i\) to be statistical and hence sum their contributions in quadrature with the quoted statistical uncertainty on \(V_{\rm obs}\) according to Lelli et al. (2017, eq. 2) to estimate \(\delta g_{\rm obs}\).
The likelihood of an observation given the function in question, \(f(g_{\rm bar}^{\rm t})\), is then:
\[\begin{split}&\mathcal{L}(\log(g_{\rm obs}))=\int_{-\infty}^{ \infty}\mathcal{L}(\log(g_{\rm obs})|\log(g_{\rm bar}^{\rm t}))\,\mathcal{L}( \log(g_{\rm bar}^{\rm t}))\,\mathrm{d}\log(g_{\rm bar}^{\rm t})\\ &=\frac{1}{2\pi\,\delta\log(g_{\rm bar})\,\delta\log(g_{\rm obs} )}\int_{-\infty}^{\infty}\exp\left(-\frac{\left(\log(g_{\rm obs})-\log(f(g_{ \rm bar}^{\rm t}))\right)^{2}}{2\,\delta\log(g_{\rm obs})^{2}}\right)\\ &\times\exp\left(-\frac{\left(\log(g_{\rm bar}^{\rm t})-\log(g_{ \rm bar})\right)^{2}}{2\,\delta\log(g_{\rm bar})^{2}}\right)\mathrm{d}\log(g_{ \rm bar}^{\rm t})\\ &\approx\frac{1}{\sqrt{2\pi\sigma_{\rm tot}^{2}}}\exp\left(-\frac{ \left(\log(g_{\rm obs})-\log(f(g_{\rm bar}))\right)^{2}}{2\sigma_{\rm tot}^{2} }\right),\end{split} \tag{2}\]
where
\[\sigma_{\rm tot}^{2}\equiv\delta\log(g_{\rm obs})^{2}+\left(\frac{\mathrm{d} \log(f(g_{\rm bar}))}{\mathrm{d}\log(g_{\rm bar})}\right)^{2}\,\delta\log(g_{ \rm bar})^{2}. \tag{3}\]
This is derived by keeping the leading-order term in the Taylor expansion of \(\log(f(g_{\rm bar}^{\rm t}))\) around \(\log(f(g_{\rm bar}))\) and therefore assumes this to be small relative to the rate of change of \(f\). An advantage of working in \(\log(g_{\rm bar})-\log(g_{\rm obs})\) space as opposed to \(g_{\rm bar}-g_{\rm obs}\) is that, the RAR being roughly a product of power-laws, this minimises the error due to the first-order approximation. We find this approximation to be good for all of the best functions. The likelihood is then the product over all data points. We discuss in Appendix A the limitations of this likelihood model and how it could be improved.
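A minimal numerical sketch of Eqs. 2-3 follows; it is our assumption of how the loss might be coded, with the logarithmic slope of \(f\) taken by central finite differences rather than analytically, and it is not the ESR code itself.

```python
import numpy as np

def neg_log_likelihood(theta, log_gbar, log_gobs, dlog_gbar, dlog_gobs, f, eps=1e-4):
    """First-order likelihood of Eqs. (2)-(3): Gaussian in log(g_obs) whose
    variance is inflated by the logarithmic slope of the candidate function f."""
    gbar = np.exp(log_gbar)
    mu = np.log(f(gbar, theta))
    # d log f / d log g_bar by central finite differences
    slope = (np.log(f(gbar * np.exp(eps), theta)) -
             np.log(f(gbar * np.exp(-eps), theta))) / (2 * eps)
    var = dlog_gobs**2 + slope**2 * dlog_gbar**2      # sigma_tot^2 of Eq. (3)
    return 0.5 * np.sum((log_gobs - mu)**2 / var + np.log(2 * np.pi * var))

# Example candidate: the RAR IF with theta = [g0], accelerations in units of 1e-10 m/s^2
rar_if = lambda gbar, theta: gbar / (1 - np.exp(-np.sqrt(gbar / theta[0])))
```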
### Assessing MOND
The core of the MOND paradigm is that \(g_{\rm obs}=g_{\rm bar}\) at \(g\gg g_{0}\) and \(g_{\rm obs}=\sqrt{g_{\rm bar}g_{0}}\) at \(g\ll g_{0}\) (Milgrom 1983a,c,b). This implies
\[s\equiv\frac{\mathrm{d}\log(g_{\mathrm{obs}})}{\mathrm{d}\log(g_{\mathrm{ bar}})}=\begin{cases}1,&\quad g_{\mathrm{bar}}\to\infty\\ 1/2,&\quad g_{\mathrm{bar}}\to 0.\end{cases} \tag{4}\]
Common choices for the IF covering the intermediate region \(g\approx g_{0}\) include
* "**Simple?**\(g_{\mathrm{obs}}=g_{\mathrm{bar}}/2+\sqrt{g_{\mathrm{bar}}^{2}/4+g_{\mathrm{bar}}g_{ 0}}\)(Famaey & Binney, 2005),
* "**Standard?**\(g_{\mathrm{obs}}=\frac{1}{\sqrt{2}}\sqrt{g_{\mathrm{bar}}^{2}+\sqrt{g_{\mathrm{ bar}}^{2}(g_{\mathrm{bar}}^{2}+4g_{0}^{2})}}\)(Milgrom, 1983c),
* "**RAR?**\(g_{\mathrm{obs}}=g_{\mathrm{bar}}/(1-\exp(-\sqrt{g_{\mathrm{bar}}/g_{0}}))\)(Lelli et al., 2017).
Fig. 1 plots these functions on top of the RAR data for the best-fit values on the SPARC data (shown in the lower rows of Table 1 in Sec. 4.1). The Simple and RAR IFs are distinguished from the Standard IF principally by a more gradual transition between the Newtonian and deep-MOND regimes, although the Standard IF also prefers a significantly higher value of \(g_{0}\). While the basic MOND framework is not committed to any particular IF, it is committed to Eq. 4 providing an optimal description of the data. Our assessment of the theory will therefore be based on the extent to which the best functions (those with lowest description length) conform to these limits: any function that does so, in addition to possessing a coefficient of proportionality of unity in \(g_{\mathrm{obs}}\propto g_{\mathrm{bar}}\) at high \(g_{\mathrm{bar}}\), may be considered a new MOND IF. (The low-\(g_{\mathrm{bar}}\) coefficient of proportionality is \(\sqrt{g_{0}}\) but \(g_{0}\) is unknown a priori, so this does not supply an additional requirement.) Following Lelli et al. (2017), we will also consider a double power law fit:
\[g_{\mathrm{obs}}=\theta_{1}\,\left(1+\frac{g_{\mathrm{bar}}}{\theta_{0}} \right)^{\theta_{2}-\theta_{3}}\,\left(\frac{g_{\mathrm{bar}}}{\theta_{0}} \right)^{\theta_{3}} \tag{5}\]
which has limiting logarithmic slopes of \(\theta_{3}\) and \(\theta_{2}\), and plot the best-fit in Fig. 1. We define \(s_{-}\equiv\lim_{g_{\mathrm{bar}}\to 0}s\) and \(s_{+}\equiv\lim_{g_{\mathrm{bar}}\to\infty}s\).
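For reference, a small numerical check (a sketch of our own, assuming accelerations in units of \(10^{-10}\) m s\(^{-2}\)) that the three IFs above satisfy Eq. 4 and that Eq. 5 has the quoted limiting slopes:

```python
import numpy as np

def simple_if(gbar, g0):
    return gbar / 2 + np.sqrt(gbar**2 / 4 + gbar * g0)

def standard_if(gbar, g0):
    return np.sqrt(gbar**2 + np.sqrt(gbar**2 * (gbar**2 + 4 * g0**2))) / np.sqrt(2)

def rar_if(gbar, g0):
    return gbar / (1 - np.exp(-np.sqrt(gbar / g0)))

def double_power_law(gbar, th0, th1, th2, th3):
    return th1 * (1 + gbar / th0)**(th2 - th3) * (gbar / th0)**th3

def log_slope(f, gbar, *args, eps=1e-3):
    """Numerical logarithmic slope s = d log f / d log g_bar."""
    return (np.log(f(gbar * (1 + eps), *args)) -
            np.log(f(gbar * (1 - eps), *args))) / (np.log(1 + eps) - np.log(1 - eps))

# The IFs approach s = 1/2 at low g_bar and s = 1 at high g_bar
for f in (simple_if, standard_if, rar_if):
    print(f.__name__, log_slope(f, 1e-6, 1.2), log_slope(f, 1e6, 1.2))

# The double power law tends to theta3 at low and theta2 at high g_bar
print(log_slope(double_power_law, 1e-6, 1.2, 1.0, 1.0, 0.5),
      log_slope(double_power_law, 1e6, 1.2, 1.0, 1.0, 0.5))
```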
The MOND interpretation of the RAR is complicated by the possibility of the external field effect (EFE), a breakdown of the strong equivalence principle due to the nonlinear, acceleration-based modification to Newtonian mechanics (Milgrom, 1983a). The EFE implies that otherwise identical galaxies in different external gravitational fields have different dynamics, which is a function of the external field strength \(g_{\mathrm{ex}}\) relative to \(g_{0}\) and the internal field \(g_{\mathrm{in}}\). In the quasi-Newtonian regime \(g_{\mathrm{in}}<g_{\mathrm{ex}}<g_{0}\), Kepler's laws are recovered with dynamical masses scaled by \(g_{\mathrm{ex}}/g_{0}\), while in the external field-dominated regime \(g_{\mathrm{in}}<g_{0}<g_{\mathrm{ex}}\), Newtonian mechanics are fully recovered (Famaey & McGaugh, 2012). This steepens the RAR at low \(g_{\mathrm{bar}}\).
The precise effect of the EFE is difficult to calculate in general because it depends both on the underlying MOND theory and on a galaxy's morphology and orientation with respect to the external field direction. The most sophisticated fitting functions to MOND simulations are currently to be found in Zonoozi et al. (2021) for QUMOND (Milgrom, 2010) and Chae & Milgrom (2022) for AQUAL (Bekenstein & Milgrom, 1984). Chae et al. (2022) tested these expectations by fitting the SPARC rotation curves for the average external field strength, finding this to be in good agreement with independent estimates based on the baryonic mass surrounding the SPARC galaxies (Chae et al., 2021) for AQUAL, but less so for QUMOND. We will therefore use AQUAL to explore the effect of the EFE on the expected low-\(g_{\mathrm{bar}}\) slope and functional form more generally. Chae & Milgrom (2022) eq. 15 gives
\[g_{\mathrm{obs}}=g_{\mathrm{bar}}\left(\frac{1}{2}+\left(\frac{ 1}{4}+\left(\left(\frac{g_{\mathrm{bar}}}{g_{0}}\right)^{2}+(1.1e_{\mathrm{N }})^{2}\right)^{-\frac{1}{2}}\right)^{\frac{1}{2}}\right)\times \tag{6}\] \[\left(1+\tanh\left(\frac{1.1\mathrm{e}_{\mathrm{N}}}{g_{\mathrm{ bar}}/g_{0}}\right)^{1.2}\times\left(-\frac{1}{3}\right)\times\right.\] \[\left.\frac{\left(\left(\left(\frac{g_{\mathrm{bar}}}{g_{0}} \right)^{2}+(1.1e_{\mathrm{N}})^{2}\right)^{-\frac{1}{2}}\right)\left(\frac{1} {4}+\left(\left(\frac{g_{\mathrm{bar}}}{g_{0}}\right)^{2}+(1.1e_{\mathrm{N}}) ^{2}\right)^{-\frac{1}{2}}\right)^{-\frac{1}{2}}}{1+\left(\frac{1}{2}+2\left( \left(\frac{g_{\mathrm{bar}}}{g_{0}}\right)^{2}+(1.1e_{\mathrm{N}})^{2} \right)^{-\frac{1}{2}}\right)^{\frac{1}{2}}}\right)}.\]
This allows for variable disk thickness and scale length, and is azimuthally averaged to reduce sensitivity to the orientation of the field relative to the disk axis. It recovers the Simple IF as \(e_{\mathrm{N}}\equiv g_{\mathrm{ex}}/g_{0}\to 0\) and hence we refer to it as "Simple IF + EFE".
Note that, while some form of the EFE is generically predicted by MOND, in modified inertia formulations it may be very different (e.g. a function of the entire past trajectory of an object) or negligible (Milgrom, 2011). While there is evidence for the EFE in many systems (McGaugh & Milgrom, 2013; Haghi et al., 2019; Chae et al., 2020), in others it appears conspicuously absent (Hernandez et al., 2019; Freundlich et al., 2022). The black curve in Fig. 1 shows the best fit to the data using Eq. 6.
Figure 1: The Simple, Standard and RAR IFs, a double power law, and the Simple IF with a global external field strength in AQUAL, overlaid on the SPARC data (blue points). The parameters are set to their maximum-likelihood values shown in Table 1. The dashed black line shows the one-to-one relation (Newtonian limit) and the cross in the lower right shows the average uncertainty size.

### Mock data generation

To shed light on the significance of our results we apply ESR also to two stacks of mock data sets. We generate each mock data set using exactly the same number of points as
the SPARC data, and with identical \(\log g_{\rm bar}\), \(\delta\log g_{\rm bar}\) and \(\delta\log g_{\rm obs}\) values, but with \(\log g_{\rm obs}\) generated using a MONDian function. This assumes \(g_{\rm bar}^{\rm t}\) equal to the SPARC \(g_{\rm bar}\) (the maximum a priori estimate), \(\log g_{\rm bar}\) for each mock realisation drawn from \(\mathcal{N}(\log g_{\rm bar}^{\rm t},\delta\log g_{\rm bar})\), and \(\log g_{\rm obs}\) from \(\mathcal{N}(\log\mathcal{F}(g_{\rm bar}^{\rm t}),\delta\log g_{\rm obs})\). To reduce the impact of noise in the mock data we apply ESR to a stack of 10 independent realisations.4 The only terms in the description length that depend on the dataset size are \(\log(\mathcal{L})\) and the Fisher matrix \(I\), both of which scale linearly. To make the results compatible with the real data we therefore divide these terms by 10.
Footnote 4: The reason not to do more is that the time required for parameter optimisation scales with the number of data points, and this is already expensive at complexity 9. We assess convergence by fitting the complexity 4-6 equations to an independent stack of 10 realisations. We find that the order of equations when sorted by \(L(D)\) is identical between the two stacks and that the difference between the description lengths of particular equations falls with complexity, from \(\sim 40\) at complexity 4 to \(\sim 5\) at complexity 6. This implies that the best equations, mostly at complexity 8 and 9, are not sensitive to the random number generation.
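A sketch of this procedure under the stated assumptions (the SPARC abscissae and uncertainties are reused, and the generating function \(\mathcal{F}\) is evaluated at the true \(g_{\rm bar}\)); the function and variable names are our own:

```python
import numpy as np

def make_mock_stack(log_gbar_true, dlog_gbar, dlog_gobs, F, n_real=10, seed=0):
    """Stack of mock RAR realisations: keep the SPARC abscissae and uncertainties,
    scatter log(g_bar) and log(g_obs) about the generating function F."""
    rng = np.random.default_rng(seed)
    stack = []
    for _ in range(n_real):
        log_gbar_obs = rng.normal(log_gbar_true, dlog_gbar)                  # observed baryonic acceleration
        log_gobs = rng.normal(np.log(F(np.exp(log_gbar_true))), dlog_gobs)   # observed total acceleration
        stack.append((log_gbar_obs, log_gobs))
    return stack
```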
The two mock data set stacks differ in the generating function \(\mathcal{F}\). For the first, we use the RAR IF with the best-fit value on the data \(g_{0}=1.127\) (see Sec. 4). This function is already known to describe the RAR well (Lelli et al., 2017) and has low enough complexity to be included (as a special case of a more general function, see below) in our function list. Since Eq. 4 is satisfied by construction in this case, evaluating it on the best functions from ESR will address the question of whether the dynamic range of the data is sufficiently high--and the uncertainties sufficiently low--to pick out unambiguously a correctly MONDian solution, as only in this case could one expect to obtain such behaviour for the real data were it generated by MOND.
The second stack is created using Eq. 6. We adopt \(g_{0}=1.2\) and \(\langle g_{\rm ex}\rangle=1.2\times 10^{-2}\) (\(e_{\rm N}=0.01\)), corresponding roughly to maximal clustering of unobserved baryons (as expected in a MOND cosmology and maximising agreement with the rotation curve fits; Chae et al., 2021) and hence providing an upper bound on the impact of the EFE. This is similar to the value inferred in Chae (2022) and Chae et al. (2022) from fits to the SPARC rotation curves, and from our fit to the SPARC data in Table 1.
Footnote 5: Chae & Milgrom (2022) argue that Eq. 6 is only reliable with the inner points of galaxies’ rotation curves removed. As we are interested only in the approximate effect of the EFE on the low-\(g_{\rm bar}\) slope of the RAR, mainly sourced by outer rotation curve points, we do not apply a cut.
## 4 Results
### SPARC data
We show in Table 1 the statistics of the best functions found by ESR on the SPARC data. We split the codelength of Eq. 1 into terms describing the residuals of the data around the functional expectation, the functional form and the parameter values as shown in the table footnotes. Below the horizontal line we give the results of the three MOND IFs, for which the free parameter corresponds to \(g_{0}\), the double power law (Eq. 5) and the Simple IF + EFE (Eq. 6). \(P(f)\equiv\exp(-L(D))/\sum(\exp(-L(D)))\) is the probability of the function given its description length, where the sum is over all functions up to complexity 9. (Note that these values would be changed by low-\(L(D)\) functions at higher complexity.) For reference, \(L(D)\) for the raw data (corresponding to the hypothesis \(\log g_{\rm obs}=0\)) is 53471, showing that significant compression is possible.
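The conversion from description lengths to function probabilities can be written compactly; the following is a sketch of our own, evaluated in log space to avoid underflow when the description lengths span a wide range:

```python
import numpy as np
from scipy.special import logsumexp

def function_probabilities(L):
    """P(f) = exp(-L(D)) / sum_f exp(-L(D)) over all functions considered."""
    L = np.asarray(L, dtype=float)
    return np.exp(-L - logsumexp(-L))
```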
Figure 2: The top 20 functions found by ESR overlaid on the SPARC data (blue points), colour-coded by their relative probability in the full function list. The top panel shows fits to the SPARC data, the middle panel to mock data generated by the RAR IF, and the bottom panel to mock data generated by the Simple IF with universal external field strength \(g_{\rm ex}=1.2\times 10^{-12}\) m s\({}^{-2}\). The mock datasets are 10 times larger than SPARC, although this is factored out in the description length calculation.

We find the best-fit \(g_{0}\) value for the RAR IF to be 1.13, somewhat lower than the 1.20 quoted by Lelli et al. (2017) although the data are the same. This is because Lelli et al. (2017) used scipy.odr to perform the optimisation rather than using the full first-order likelihood Eqs. 2-3, and also did the analysis in the \(\log(g_{\rm obs})-g_{\rm bar}\) rather than \(\log(g_{\rm bar})-\log(g_{\rm obs})\) plane (F. Lelli private communication). The double power law fit has much higher maximum likelihood than the MOND IFs, outweighing its increased codelength due to its four free parameters. Although the RAR IF has complexity
9 it is not explicitly produced by ESR due to the constant "1" appearing;6 a generalised form in which this is replaced by a free parameter appears at rank 17, with a probability \(4\times 10^{10}\) times lower than the top-ranked function (\(\Delta L(D)=24.5\)). When \(\theta_{0}\neq 0\) the low-\(g_{\rm bar}\) logarithmic slope \(s_{-}\) of this function is 1 rather than 1/2, so it does not function as a MOND IF. We refer to it as the "generalised RAR IF".
Footnote 6: This will be changed in a future version of ESR that performs “integer snap” of parameters where this reduces \(L(D)\).
Figure 3: The logarithmic slopes \(s\equiv\frac{\mathrm{d}\log(g_{\rm obs})}{\mathrm{d}\log(g_{\rm bar})}\) of the top 10 ESR functions on each dataset, for comparison with the low- and high-\(g_{\rm bar}\) MONDian expectations 1/2 and 1 respectively (blue and red vertical dashed lines). The blue and red points are the limiting slopes \(s_{-}\equiv\lim_{g_{\rm bar}\to 0^{+}}s\) and \(s_{+}\equiv\lim_{g_{\rm bar}\to\infty}s\), while cyan and magenta indicate the slopes at the minimum and maximum \(g_{\rm bar}\) of the SPARC data (0.0083 and 65.4). In case a slope depends on a parameter value we show the 95% confidence interval as a bar (often very thin), obtained from an MCMC fit. Arrowheads indicate points or bars beyond the range of the plot.

Figure 4: The Pareto fronts identified by ESR for the SPARC, RAR IF mock and Simple IF + EFE mock datasets, for both \(-\log(\mathcal{L})\) (blue) and total description length \(L(D)\) (red). The quantities plotted have the minimum values subtracted so that the best results appear at 0. Also shown are the results of the RAR, Simple and Standard IFs, Simple IF + EFE and double power law fits. ESR significantly outperforms these "by eye" guesses, even for mock data generated from them. Short diagonal lines on the \(x\)-axis indicate breaks. In the left and right panels both red and blue points for the Standard IF at complexity 14 lie above the top of the plot.

The best ESR functions are clearly superior to the MOND functions or double power law. While the best metric for this is \(L(D)\) (or equivalently \(P(f)\)), other statistics lead to the same conclusion. There are many functions more accurate (lower \(-\log(\mathcal{L})\)) than even the double power law. Although the functions at ranks 1-9 have more free parameters than the IFs this is more than compensated for by their greater accuracy: as an alternative metric, the Bayesian Information Criterion (BIC) of the rank 1 function is 108 lower than the Simple IF and even that of the rank 3 function with four free
parameters is 94.5 lower, corresponding to a very strong preference. In agreement with Chae et al. (2020, 2021, 2022), when fitting the Simple IF + EFE we find that \(e_{\rm N}>0\) is clearly preferred, and recovers a value around the large-scale structure expectation. Although this function is significantly more accurate than any IF on its own, more than compensating for its additional free parameter (\(\Delta{\rm BIC}=-35\) compared to the Simple IF), it has a poor description length due to the large functional contribution. The complexity is 59 using our current basis set of operators, although this would fall to 45 if \(\tanh\) were explicitly included. In general, the great improvement in accuracy and simplicity of the ESR functions demonstrates the advantage of this method over guessing functions "by eye".
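As a check on these numbers, the quoted BIC differences follow directly from the Table 1 entries. The back-of-the-envelope sketch below assumes the standard definition BIC \(=2(-\ln\hat{\mathcal{L}})+p\ln N\) with \(N=2696\) points and uses the residual codelengths of Table 1 for \(-\ln\hat{\mathcal{L}}\):

```python
import numpy as np

def bic(neg_log_like, p, n_points=2696):
    """BIC = 2(-ln L_hat) + p ln N."""
    return 2 * neg_log_like + p * np.log(n_points)

# Residual codelengths (-ln L_hat) and parameter counts from Table 1
print(bic(-1279.1, 3) - bic(-1217.3, 1))   # rank 1 vs Simple IF: approximately -108
print(bic(-1276.4, 4) - bic(-1217.3, 1))   # rank 3 vs Simple IF: approximately -94.5
```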
In the top panel of Fig. 2 we plot the best 20 functions on top of the SPARC data. The functions are colour-coded by \(P(f)\), with darker colouring indicating functions favoured by MDL. The top six functions have discontinuities in \(s\) at \(g_{\rm bar}\approx 0.02\). We have checked that this is not due to outlying points, does not invalidate the first-order approximation of Eq. 2, and does not appear to map onto any local feature of the data. Instead it is likely due to the relative simplicity (i.e. complexity \(\leq 9\)) of the functions considered, as we discuss further in Sec. 5. While such behaviour could be excluded by requiring no discontinuity in the derivative within the range of the data (or more generally weighting functions with an \(s\)-dependent prior), we see no principled reason to do so. The first function without a discontinuity is at rank 7, which has the Newtonian limit and \(s_{-}=0.52\) at maximum likelihood. Although this is 9.3 \(\sigma\) from \(s_{-}=1/2\), and hence this equation cannot function as a pure-MOND IF, the EFE leads to the expectation that \(s>1/2\) at low \(g_{\rm bar}\) as discussed in more detail below. Several of the remaining highly ranked functions have similar quasi-MONDian behaviour.
While some of the best functions have MONDian limiting behaviour, others do not, and in particular \(s_{-}<1/2\) is common. To explore this further we calculate \(s_{-}\) and \(s_{+}\) analytically for the top ten equations as a function of their free parameters, showing the results in Table B1. For each of the functions where \(s_{-}\) and/or \(s_{+}\) depend on \(\boldsymbol{\theta}\) (i.e. are not fixed by the functional form alone) we perform a Markov Chain Monte Carlo (MCMC) inference using the numpyro sampler (Phan et al., 2019; Bingham et al., 2019) with broad flat priors to constrain the parameters and hence derive the posterior predictive distributions of the limiting slopes. Fig. 3 (left panel) shows the results for the top ten functions, using a dot to indicate a limit fixed by the functional form, a bar to show the 95 per cent confidence interval in cases where the slope depends on the parameters, and an arrowhead to indicate a limit outside of the plotting range (including \(\pm\infty\)). In many cases the 95% confidence interval is extremely narrow.
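A sketch of how such limits can be obtained symbolically is given below, using sympy as a stand-in for whatever computer algebra was actually employed; the example functions are the double power law of Eq. 5 and the rank-8 form of Table 1, with parameters assumed positive so that the absolute values can be dropped.

```python
import sympy as sp

x = sp.symbols("g_bar", positive=True)
t0, t1, t2, t3 = sp.symbols("theta0:4", positive=True)

def limiting_slopes(f):
    """s = d log f / d log g_bar and its limits s_- (g_bar -> 0) and s_+ (g_bar -> oo)."""
    s = sp.simplify(x * sp.diff(f, x) / f)
    return sp.limit(s, x, 0, "+"), sp.limit(s, x, sp.oo)

# Double power law of Eq. (5): limiting slopes theta3 (low) and theta2 (high)
print(limiting_slopes(t1 * (1 + x / t0)**(t2 - t3) * (x / t0)**t3))
# Rank-8 form of Table 1, sqrt(theta0 + g_bar) + theta1*g_bar: slopes 0 and 1
print(limiting_slopes(sp.sqrt(t0 + x) + t1 * x))
```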
Although it is \(s_{-}\) and \(s_{+}\) that directly relate to the MOND hypothesis, they require extrapolation far beyond the range of the data. To understand how the slopes behave near the limits of the SPARC data we also calculate \(s\) at the minimum, \(g_{\rm bar,\;min}=8.32\times 10^{-13}\) m s\({}^{-2}\), and maximum, \(g_{\rm bar,\;max}=6.54\times 10^{-9}\) m s\({}^{-2}\), measured baryonic accelerations. These are plotted in cyan and magenta respectively in Fig. 3. At \(g_{\rm bar,\;max}\) the logarithmic slope is \(\lesssim 1\) for almost all functions, as expected for the Newtonian limit, since only at \(g_{\rm bar}\to\infty\) does \(s\) become 1. However, we find that the \(g_{\rm bar,min}\) slopes of the top five functions are not \(\sim 1/2\) but actually \(<0\) due to the aforementioned discontinuity. The remaining top functions have low-acceleration slopes typically slightly larger than \(1/2\).
These results show that the SPARC data do not unambiguously favour \(s_{-}=1/2\) and \(s_{+}=1\). The requirement for an interpolating function to be MONDian is in fact even more stringent than this, since the coefficient of proportionality in the limiting high-\(g_{\rm bar}\) power-law relation must be unity, i.e. \(g_{\rm obs}=g_{\rm bar}\). We find that among the functions in the top ten for which \(s_{+}=1\), four have such a coefficient (at ranks 2, 4, 7 and 10) while for the rank 1 function this is \(0.84\pm 0.006\) and at rank 8 it is \(0.72\pm 0.01\), where the uncertainties are obtained by fitting the functions with MCMC. At low \(g_{\rm bar}\) the coefficient in \(g_{\rm obs}\propto g_{\rm bar}^{1/2}\) should be \(\sqrt{g_{0}}\approx 1\). The only function with \(s_{-}=1/2\) (rank 6) has the coefficient 1.12 (close to the 1.10 expected from the canonical \(g_{0}=1.2\)), while those further down with \(s(g_{\rm bar,min})\approx 1/2\) have a coefficient of 1. The relative simplicity of the functions we consider here should favour a coefficient of 1, and hence again it is not clear to what extent the data may be said to be MONDian. The double power law has the limits \(g_{\rm obs}=0.81\,g_{\rm bar}^{1.03}\) at high \(g_{\rm bar}\) and \(g_{\rm obs}=1.57\,g_{\rm bar}^{0.60}\) at low \(g_{\rm bar}\).
Next, we show in the left panel of Fig. 4 the separate Pareto fronts of description length and negative log-likelihood, with the second ("simplicity") objective measured by functional complexity. Unlike the Pareto fronts produced by traditional SR algorithms, those of ESR are guaranteed to be optimal. \(L(D)\) and \(-\log(\mathcal{L})\) are minimised separately at each complexity, and have their minimum values over all complexity subtracted so that the globally best functions appear at 0. We show the MONDian functions and double power law as separate symbols, all of which we find to be strongly Pareto-dominated by the best ESR functions at lower complexity. Note that while the "knee" of the Pareto front (where \(L(D)\) or \(-\log(\mathcal{L})\) turns over) would appear to be at complexity 6-7, there is a significant improvement in going from complexity 8 to 9. This cautions against automatically selecting functions at the knee (the default for example in PySR), and indicates that further improvement would likely be achievable by going beyond complexity 9. This is beyond the scope of the present work; we are content here to have discovered simple functional forms for the RAR surpassing any that have been considered heretofore.
### Mock data
The above results suggest a Newtonian limit (\(g_{\rm obs}\to g_{\rm bar}\) as \(g_{\rm bar}\to\infty\)) is somewhat favoured by the data while a deep-MOND limit (\(g_{\rm obs}\to\sqrt{g_{0}\,g_{\rm bar}}\) as \(g_{\rm bar}\to 0\)) is questionable. However, given the limited dynamical range and significant uncertainties of the data it is unclear to what extent we should expect to find these limits even if the generating function were MONDian. In addition, the EFE would imply \(s>1/2\) at low \(g_{\rm bar}\). To investigate these issues we now apply ESR to the mock data of Sec. 3.4.
#### 4.2.1 RAR IF generating function
Table 2 shows the best functions found by ESR for the RAR IF mock data, along with the results for the IFs and double power law. The RAR IF is by construction a good fit to
this data, but there are 15 functions with lower \(L(D)\), including several at lower complexity. This indicates that the characteristics of the SPARC data (dynamic range and uncertainties) are insufficient to pick out the true generating function: ESR prefers simpler functions which may achieve slightly higher likelihoods. Since the \(-\log(\mathcal{L})\) term in \(L(D)\) becomes dominant at large dataset size, simply increasing the number of (mock) observations with otherwise identical properties would not be sufficient to push the generating function to the top of the list. For this dataset we find that the generalised RAR IF, appearing at rank 41, has higher \(L(D)\) than the RAR IF despite slightly higher likelihood, a success of MDL's penalisation of more complex functions. Note that the best-fit generalised RAR IF is \(x/(0.995-\exp(-\sqrt{x/1.068}))\), somewhat offset from the ground-truth values {1, 1.127}. This results from a combination of the limited dataset size (introducing random noise) and a small bias in the maximum-likelihood estimator which we discuss further in Appendix A. At rank 27 we find a close cousin of the RAR IF in which the "1" is free and \(g_{0}\) is pinned to 1. This performs only slightly worse than the RAR IF itself because the true \(g_{0}\) is close to 1.
The highest ranked ESR function is better by \(\Delta L(D)=6.3\) than the RAR IF. Although the relative probability between the best function and the best RAR-like function is smaller than in the observations (\(\sim 1600\) compared to \(4\times 10^{10}\)), it is interesting that the best RAR-like function appears further up the list in the real data (rank 17 vs 27). The double power law is disfavoured relative to the RAR IF and many ESR functions despite having the highest likelihood shown in the table, another success of the complexity penalisation. There are a few functions overall with lower \(-\log(\mathcal{L})\), the lowest being \(-2048.0\) at rank 51 (\((\theta_{0}+|\theta_{1}+\theta_{2}/x|^{1/2})^{-1}\), \(L(D)=-2018.4\)). Also as expected, the Simple IF + EFE has \(e_{\rm N}\) mapped to 0 and hence behaves identically to the Simple IF, albeit with larger functional complexity.
The top 20 ESR functions for the RAR IF mock are overplotted on that data in the middle panel of Fig. 2. We find a slightly reduced spread in \(s\) at both the high-\(g_{\rm bar}\) and low-\(g_{\rm bar}\) ends compared to the real data, without any discontinuities within the data range. However there is still significant uncertainty beyond the range of the data. This is quantified in the middle panel of Fig. 3, where the top ten functions are all observed to have slopes of approximately 1/2 at \(g_{\rm bar,min}\) and 1 at \(g_{\rm bar,max}\), although only in two cases is \(s_{-}=1/2\): for the others it is lower. This indicates that constraining the slope to be near 1/2 at \(g_{\rm bar,min}\) is insufficient to conclude that \(s_{-}\) takes a similar value, at least up to complexity 9. One would need to reduce the uncertainties, or preferably, lower \(g_{\rm bar,min}\). That this applies to a lesser extent at high-\(g_{\rm bar}\) is shown by the functions at rank 4 and 7 with \(s_{+}=\infty\).
Adding to the conclusion that the mock data characteristics are insufficient to pick out a MONDian generating function, we find that the coefficient in \(g_{\rm obs}\propto g_{\rm bar}\) at high \(g_{\rm bar}\) is only
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \multirow{2}{*}{Rank} & \multirow{2}{*}{Function} & \multirow{2}{*}{Comp.} & \multirow{2}{*}{\(P(f)\)} & \multicolumn{3}{c}{Parameters} & \multicolumn{3}{c}{Description length} \\ & & & & \(\theta_{0}\) & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) & Resid.\({}^{1}\) & Func.\({}^{2}\) & Param.\({}^{3}\) & Total \\ \hline
1 & \(\theta_{0}\) (\(|\theta_{1}+x|^{\theta_{2}}+x\)) & 9 & 9.3\(\times 10^{-1}\) & 0.84 & -0.02 & 0.38 & — & -1279.1 & 14.5 & 14.0 & -1250.6 \\
2 & \(|\theta_{1}|^{2}+\theta_{0}|^{\theta_{2}}+x\) & 9 & 6.4\(\times 10^{-2}\) & -0.99 & 0.64 & 0.36 & — & -1279.9 & 12.5 & 19.6 & -1247.9 \\
3 & \(|\theta_{0}|^{|\theta_{1}-x|^{\theta_{2}}-\theta_{3}}\) & 9 & 2.0\(\times 10^{-3}\) & -1.4\(\times 10^{2}\) & 0.02 & 0.14 & 0.89 & -1276.4 & 12.5 & 19.5 & -1244.4 \\
4 & \(|\theta_{0}(\theta_{1}+x)|^{\theta_{2}}+x\) & 9 & 1.4\(\times 10^{-4}\) & 0.35 & -0.02 & 0.34 & — & -1268.9 & 14.5 & 12.7 & -1241.7 \\
5 & \(|\theta_{0}-|\theta_{1}-x|^{\theta_{2}}|^{\theta_{3}}\) & 9 & 1.0\(\times 10^{-5}\) & -0.30 & 0.02 & 0.42 & 2.14 & -1271.1 & 12.5 & 19.5 & -1239.1 \\
6 & \(\sqrt{x}\exp\left(\frac{|\theta_{0}+x|^{\theta_{1}}}{x}\right)\) & 9 & 1.5\(\times 10^{-9}\) & -0.02 & 0.36 & — & — & -1257.9 & 17.5 & 10.0 & -1230.3 \\
7 & \(\left(\frac{|\theta_{0}|^{x}}{x}\right)^{\theta_{1}}+x\) & 9 & 2.4\(\times 10^{-10}\) & 1.87 & -0.52 & — & — & -1250.6 & 14.5 & 7.6 & -1228.5 \\
8 & \(\sqrt{|\theta_{0}+x|}+\theta_{1}x\) & 8 & 1.8\(\times 10^{-10}\) & -1.8\(\times 10^{-3}\) & 0.72 & — & — & -1245.6 & 12.9 & 4.5 & -1228.2 \\
9 & \(\left|\theta_{0}+\frac{1}{\sqrt{2}}\right|^{\theta_{1}}\) & 8 & 9.6\(\times 10^{-11}\) & -0.22 & -2.14 & — & — & -1251.1 & 14.3 & 9.2 & -1227.6 \\
10 & \(\left(\sqrt{x}+\frac{1}{x}\right)^{\theta_{0}}+x\) & 9 & 8.2\(\times 10^{-11}\) & -0.53 & — & — & — & -1248.3 & 16.1 & 4.8 & -1227.4 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\
17 & \(x/(\exp(\theta_{0})-|\theta_{1}|^{\sqrt{x}})\) & 9 & 2.2\(\times 10^{-11}\) & 0.03 & 0.44 & — & — & -1250.9 & 17.5 & 7.3 & -1226.1 \\ \hline — & Double power law & 11 & 9.7\(\times 10^{-16}\) & 4.65 & 3.96 & 1.03 & 0.60 & -1252.3 & 17.7 & 18.5 & -1216.1 \\ — & Simple IF & 10 & 5.5\(\times 10^{-25}\) & 1.11 & — & — & — & -1217.3 & 18.6 & 3.9 & -1194.8 \\ — & RAR IF & 9 & 6.7\(\times 10^{-26}\) & 1.13 & — & — & — & -1212.8 & 16.1 & 3.9 & -1192.7 \\ — & Simple IF + EFE & 59 & 5.0\(\times 10^{-69}\) & 1.16 & 6.8\(\times 10^{-3}\) & — & — & -1238.9 & 139.9 & 5.6 & -1093.4 \\ — & Standard IF & 14 & 9\(\times 10^{-150}\) & 1.54 & — & — & — & -939.5 & 27.9 & 4.1 & -907.5 \\ \hline \end{tabular} \({}^{1}-\log\mathcal{L}(\hat{\theta})\)
\end{table}
Table 1: Top functions found by ESR applied to the SPARC data, ranked by total description length, compared to four MOND IFs and a double power law below the horizontal line. \(x\equiv g_{\rm bar}/10^{-10}\) n s\({}^{-2}\). We include also the “generalised RAR IF” at rank 17, although this does not have a deep-MOND limit. The IFs and double power law are not produced explicitly by our implementation of ESR and hence their ranks are unknown, although they are clearly worse than the best ESR functions in both description length and likelihood. For the Simple, RAR and Standard IFs the parameter is \(g_{0}\), while for the “Simple IF + EFE” (Eq. 6), the first is \(g_{0}\) and the second \(e_{\rm N}\).
unity for one of the top-10 functions for which \(s_{+}=1\) (at rank 6). For all the others it is 0.64, with the exception of that at rank 1 where it is 0.63. These values have uncertainties \(\sim 0.003\) when constrained by MCMC, and vary by \(\sim 0.02\) over mock datasets differing only in the random seed. Thus the best functions almost always fail to recover the Newtonian limit even when it is the truth, presumably due to an insufficient \(g_{\rm bar,max}\). The origin of 0.64 is unclear, but presumably results from the way the lower-\(g_{\rm bar}\) behaviour is filtered through the forms of the functions found to be optimal. In cases where \(s_{-}=1/2\), the coefficient of proportionality is 1 (to be compared to \(\sqrt{g_{0}}\) in MOND), and the double power law limits are \(g_{\rm obs}=1.20\,g_{\rm bar}^{0.90}\) at high \(g_{\rm bar}\) and \(g_{\rm obs}=1.30\,g_{\rm bar}^{0.54}\) at low \(g_{\rm bar}\).
The middle panel of Fig. 4 shows the Pareto front for these data. We find more smooth behaviour than for the real data, with the optimum solution achieved already by complexity 8, reflecting the relatively simple nature of the generating function. This shows that if the RAR IF was generating the real data (and our likelihood and mock data generation method were accurate), we would have achieved the \(L(D)\) minimum on those data too. However, the MONDian functions (including the RAR IF itself) and double power law are Pareto-dominated by the ESR results even on these mock data, showing that one should not expect to be able to recover unambiguously even this simplest of MOND generating functions.
#### 4.2.2 Simple IF + EFE generating function
Analogous results for the mock data generated using the Simple IF with inclusion of the EFE are shown in Table 3 and the bottom/right panels of Figs. 2-4. This dataset behaves more similarly to the real data in terms of the relative ordering of the IFs, double power law and ESR functions, including the generalised RAR IF. Indeed, the best-fit parameters of all three non-EFE IFs are identical to the SPARC data to two decimal places, while those of the generalised RAR IF and the high-\(g_{\rm bar}\) slope of the double power law are the same to one. Here the IFs provide a significantly worse compression of the data than the best ESR functions, and the double power law also performs relatively poorly due to the curvature at low \(g_{\rm bar}\) (see Fig. 1). There is again a small bias between the maximum-likelihood (1.19 and 8.56\(\times 10^{-3}\)) and true (1.2 and 1\(\times 10^{-2}\)) \(g_{0}\) and \(e_{\rm N}\) values for the Simple IF + EFE fit. Although this function has among the highest likelihoods achievable by ESR up to complexity 9, its functional complexity makes it a poor compression of its own SPARC-like mock data. This reinforces the conclusion that the characteristics of these data are insufficient to identify a MONDian
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline Rank & Function & Comp. & \(P(f)\) & \multicolumn{4}{c}{Parameters} & \multicolumn{4}{c}{Description length} \\ & & & \(\theta_{0}\) & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) & Resid.\({}^{1}\) & Func.\({}^{2}\) & Param.\({}^{3}\) & Total \\ \hline
1 & \(\theta_{0}+\theta_{1}x+\sqrt{x}\) & 8 & 5.6\(\times 10^{-1}\) & 9.1\(\times 10^{-3}\) & 0.63 & — & — & -2045.2 & 12.9 & 4.9 & -2027.4 \\
2 & \(\sqrt{|\theta_{0}+x|}+\theta_{1}x\) & 8 & 2.8\(\times 10^{-1}\) & 3.0\(\times 10^{-3}\) & 0.64 & — & — & -2044.4 & 12.9 & 4.8 & -2026.7 \\
3 & \(\theta_{0}x+x^{0.1}\) & 7 & 8.2\(\times 10^{-2}\) & 0.64 & 0.49 & — & — & -2045.2 & 11.3 & 8.5 & -2025.5 \\
4 & \(\sqrt{x}\exp\left(\frac{x^{\theta_{0}}}{2}\right)\) & 7 & 3.5\(\times 10^{-2}\) & 0.36 & — & — & — & -2040.7 & 12.5 & 3.5 & -2024.7 \\
5 & \((\theta_{0}+x)\left(\theta_{1}+\frac{1}{\sqrt{x}}\right)\) & 9 & 1.1\(\times 10^{-2}\) & 1.3\(\times 10^{-3}\) & 0.64 & — & — & -2044.5 & 16.1 & 4.8 & -2023.5 \\
6 & \(\frac{1}{\sqrt{|\theta_{0}+\frac{1}{2}|}}+x\) & 8 & 8.8\(\times 10^{-3}\) & 1.74 & — & — & — & -2038.5 & 12.9 & 2.3 & -2023.3 \\
7 & \((x|\theta_{0})(x^{|\theta_{1}|\theta_{2}})^{2}\) & 9 & 3.1\(\times 10^{-3}\) & -2.09 & -1.4\(\times 10^{-4}\) & 0.04 & — & -2045.3 & 12.5 & 10.6 & -2022.2 \\
8 & \(\theta_{0}x+|\theta_{1}+x|^{\theta_{2}}\) & 9 & 2.4\(\times 10^{-3}\) & 0.64 & 1.4\(\times 10^{-3}\) & 0.49 & — & -2045.4 & 14.5 & 8.9 & -2022.0 \\
9 & \(x\left(|\theta_{0}-x|^{\theta_{1}}-\theta_{2}\right)\) & 9 & 2.3\(\times 10^{-3}\) & 1.2\(\times 10^{-3}\) & -0.51 & -0.64 & — & -2045.3 & 14.5 & 8.9 & -2021.9 \\
10 & \((\theta_{0}-x)\left(\theta_{1}-x^{\theta_{2}}\right)\) & 9 & 2.2\(\times 10^{-3}\) & -6.5\(\times 10^{-4}\) & -0.64 & -0.51 & — & -2045.4 & 14.5 & 9.0 & -2021.9 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\
27 & \(x/(\exp(\theta_{0})-\exp(-\sqrt{x}))\) & 9 & 3.2\(\times 10^{-4}\) & -0.01 & — & — & — & -2039.3 & 17.5 & 1.9 & -2020.0 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\
41 & \(x/(\exp(\theta_{0})-|\theta_{1}|^{\sqrt{x}})\) & 9 & 1.1\(\times 10^{-4}\) & -5.0\(\times 10^{-3}\) & 0.38 & — & — & -2042.1 & 17.5 & 5.7 & -2018.9 \\ \hline — & RAR IF & 9 & 1.0\(\times 10^{-3}\) & 1.14 & — & — & — & -2041.1 & 16.1 & 3.9 & -2021.1 \\ — & Double power law & 11 & 3.4\(\times 10^{-8}\) & 1.25 & 1.47 & 0.90 & 0.54 & -2047.2 & 17.7 & 18.7 & -2010.8 \\ — & Simple IF & 10 & 2.8\(\times 10^{-11}\) & 1.12 & — & — & — & -2026.2 & 18.6 & 3.9 & -2003.7 \\ — & Standard IF & 14 & 2.9\(\times 10^{-55}\) & 1.54 & — & — & — & -1934.4 & 27.9 & 4.1 & -1902.4 \\ — & Simple IF + EFE & 59 & 5.9\(\times 10^{-64}\) & 1.12 & 0 & — & — & -2026.2 & 139.9 & 3.9 & -1882.4 \\ \hline \end{tabular} \({}^{1}-\log\mathcal{L}(\hat{\boldsymbol{\theta}})\)
\end{table}
Table 2: As Table 1 but for the RAR IF mock data. We find the generalised RAR IF at rank 41, and a closely related function at rank 27 in which the free parameter appears in the other term in the denominator. For this dataset the RAR IF itself is superior to both of these modified forms, as would be expected given that it generated the data.
generating function: this one in particular would require far more data than the RAR IF to be favoured by MDL.
For the Simple IF + EFE mock all top-10 functions have \(s_{+}=1\) and \(s(g_{\rm bar,max})>0.9\). However, only six of them recover the true Newtonian limit \(g_{\rm obs}=g_{\rm bar}\): the others find \(g_{\rm obs}=0.71\;g_{\rm bar}\) or \(g_{\rm obs}=0.79\;g_{\rm bar}\), again with sub-percent uncertainty from MCMC. Thus under this model too one would not expect the Newtonian limit to be identified robustly. While \(s(g_{\rm bar,min})>1/2\), indicating the significant impact of the EFE, \(s_{-}\) is typically 0 as opposed to 1 as expected from Eq. 6. Thus \(g_{\rm bar,min}\) is too high to constrain \(s_{-}\) reliably, although this may also be a reflection of the relative simplicity of the functions we consider. The double power law limits are \(g_{\rm obs}=0.96\,g_{\rm bar}^{0.98}\) at high \(g_{\rm bar}\) and \(g_{\rm obs}=1.55\,g_{\rm bar}^{0.60}\) at low \(g_{\rm bar}\). The Pareto front indicates this dataset to be somewhat more complex than the RAR IF mock, as \(L(D)\) continues to fall to complexity 9, although the smoothness shows it to be simpler than the real data.
All functions considered achieve considerably higher likelihood on the mock datasets than the real data, showing that the mocks are simpler. This could be because the data do not conform to the MOND expectation--one would expect any given \(g_{\rm obs}=f(g_{\rm bar})\) to be relatively inaccurate in a chaotic \(\Lambda\)CDM galaxy formation scenario--or because the model for scattering the mock data points is overly simplistic. This is discussed further in Sec. 5. Relatedly, the \(P(f)\) values of the top functions are very closely spaced in the mock datasets, indicating that there is little to distinguish them. On the contrary, on the real data the integrated probability of all functions besides the top five is \(\lesssim\)\(10^{-8}\), suggesting that these functions perhaps ought not to be considered at all. The limiting slopes of the top ten equations as a function of the parameters can be found in Tables B2 and B3 for the RAR and EFE mocks respectively.
## 5 Discussion
Our main conclusion is that the SPARC data are insufficient to determine robustly the limiting behaviour of the RAR, and hence cannot verify or refute the MOND hypothesis. This is reached by studying mock data generated by MOND; in particular, when generating data according to the RAR IF we are not only unable to identify it as the generating function but, more seriously, cannot reconstruct \(s_{-}=1/2\). At the high-\(g_{\rm bar}\) end, the logarithmic slope of the Newtonian limit (\(s_{+}=1\)) is typically well recovered, although the coefficient of proportionality in \(g_{\rm obs}\propto g_{\rm bar}\) is not: in the RAR IF mock data this takes a value \(\sim 0.64\) far more often than 1.
Improving this situation requires increasing the dynamical range of the RAR. At the low-\(g_{\rm bar}\) end this may be achieved by studying ultra-diffuse galaxies (e.g. Freundlich et al., 2022), or local dwarf spheroidals (e.g. McGaugh and Wolf, 2010; McGaugh and Milgrom, 2013), some of which seem to indicate \(s_{-}\approx 0\), as found by many of the best ESR functions (Lelli et al., 2017). Alternatively, one may attempt to probe the outer regions of galaxies including the Milky Way (e.g. Oman et al., 2020). Particularly promising for a large gain is to use stacked weak lensing to probe galaxy outskirts that would have insufficient signal-to-noise on an individual-object basis (Brouwer et al., 2021). This appears to indicate
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Rank & Function & Comp. & \(P(f)\) & \multicolumn{4}{c}{Parameters} & \multicolumn{4}{c}{Description length} \\ & & & \(\theta_{0}\) & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) & Resid.\({}^{1}\) & Func.\({}^{2}\) & Param.\({}^{3}\) & Total \\ \hline
1 & \(\theta_{0}+\sqrt{x^{2}+2x}\) & 9 & 8.9\(\times 10^{-1}\) & -0.06 & — & — & — & -2017.7 & 14.5 & 3.1 & -2000.0 \\
2 & \(\theta_{0}+\sqrt{x|\theta_{1}+x|}\) & 8 & 9.3\(\times 10^{-2}\) & -0.06 & 1.97 & — & — & -2017.9 & 12.9 & 7.3 & -1997.8 \\
3 & \(-|\theta_{0}|^{\sqrt{x}}+\theta_{1}+x\) & 8 & 5.6\(\times 10^{-3}\) & 0.26 & 0.95 & — & — & -2017.9 & 12.9 & 10.1 & -1995.0 \\
4 & \((\theta_{0}-x)\left(\theta_{1}-x^{\theta_{2}}\right)\) & 9 & 3.3\(\times 10^{-3}\) & 3.1\(\times 10^{-3}\) & -0.71 & -0.53 & — & -2019.7 & 14.5 & 10.7 & -1994.4 \\
5 & \(x^{\theta_{0}}-\theta_{1}(\theta_{2}-x)\) & 9 & 2.4\(\times 10^{-3}\) & 0.39 & 0.79 & 0.12 & — & -2020.9 & 14.5 & 12.3 & -1994.1 \\
6 & \(|\theta_{0}-x|^{\theta_{1}}-\theta_{2}x\) & 9 & 2.0\(\times 10^{-3}\) & 5.5\(\times 10^{-3}\) & 0.48 & -0.71 & — & -2019.1 & 14.5 & 10.6 & -1994.0 \\
7 & \(x|\theta_{0}|^{-|\theta_{1}|^{\sqrt{x}}}\) & 9 & 1.7\(\times 10^{-3}\) & 0.04 & -0.16 & 0.33 & — & -2018.1 & 12.5 & 11.9 & -1993.8 \\
8 & \(x\left(\theta_{0}+|\theta_{1}+x|^{\theta_{2}}\right)\) & 9 & 1.5\(\times 10^{-3}\) & 0.71 & 0.01 & -0.53 & — & -2018.7 & 14.5 & 10.6 & -1993.7 \\
9 & \(|\theta_{0}|^{|\theta_{1}|^{6}\theta_{2}}+x\) & 9 & 6.5\(\times 10^{-4}\) & 7.0\(\times 10^{-6}\) & 0.03 & 0.17 & — & -2016.7 & 12.5 & 11.4 & -1992.8 \\
10 & \(\exp\left(\theta_{0}-\frac{1}{\sqrt{x}}\right)+x\) & 9 & 5.5\(\times 10^{-4}\) & 0.57 & — & — & — & -2014.0 & 17.5 & 3.9 & -1992.6 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\
21 & \(x/(\exp(\theta_{0})-|\theta_{1}|^{\sqrt{x}})\) & 9 & 1.8\(\times 10^{-5}\) & 0.03 & 0.44 & — & — & -2014.2 & 17.5 & 7.4 & -1989.3 \\ \hline — & Double power law & 11 & 3.4\(\times 10^{-11}\) & 3.53 & 3.31 & 0.98 & 0.60 & -2012.3 & 17.7 & 18.6 & -1976.0 \\ — & Simple IF & 10 & 1.2\(\times 10^{-22}\) & 1.11 & — & — & — & -1972.1 & 18.6 & 3.9 & -1949.6 \\ — & RAR IF & 9 & 7.0\(\times 10^{-24}\) & 1.13 & — & — & -1966.9 & 16.1 & 3.9 & -1946.8 \\ — & Simple IF + EFE & 59 & 3.8\(\times 10^{-57}\) & 1.19 & 8.6\(\times 10^{-3}\) & — & — & -2016.0 & 139.9 & 5.9 & -1870.2 \\ — & Standard IF & 14 & 2\(\times 10^{-141}\) & 1.54 & — & — & — & -1708.3 & 27.9 & 4.1 & -1676.3 \\ \hline & & & & & & & & & & \\ \({}^{1}-\log\mathcal{L}(\hat{\mathbf{\theta}})\) & \({}^{2}k\log(n)+\sum_{j}\log(c_{j})\) & \({}^{3}-\frac{p}{2}\log(3)+\sum_{i}^{p}(\log(I_{ii})^{1/2}+\log(|\hat{\theta}_{ i}|))\) & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: As Table 1 but for the Simple IF + EFE mock data.
\(s_{-}\approx 1/2\). Increasing \(g_{\rm bar,max}\) requires probing the central regions of high-mass ellipticals, as well as groups and clusters of galaxies (Chae et al., 2020, 2019; Gopika and Desai, 2021; Chan and Del Popolo, 2020; Tian et al., 2020; Pradyumna and Desai, 2021). Such data already exist and may readily be folded into our framework to increase its constraining power. A smaller information gain may be achieved by reducing the uncertainties in \(g_{\rm bar}\) and \(g_{\rm obs}\): in the limit of no uncertainty any generating function will be assigned \(P(f)=1\) by MDL. By generating mock data with different characteristics one could ascertain the requirements for various features of the functional form to be unambiguously determined.
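To make the description-length bookkeeping of Tables 1-3 explicit, the following short sketch (ours, not the authors' pipeline) re-assembles the total \(L(D)\) for the top-ranked function of Table 3 from its tabulated components, and illustrates the parameter-codelength formula of footnote 3 with a hypothetical Fisher information value.

```python
# A worked-arithmetic sketch (our own, not the authors' pipeline) of how the
# description lengths in Tables 1-3 are assembled.  The three components below
# are copied from the rank-1 row of Table 3; the Fisher information used at the
# end is a made-up placeholder, included only to illustrate the parameter
# codelength formula of footnote 3.
import numpy as np

residual_term = -2017.7    # -log L(theta_hat)
functional_term = 14.5     # k*log(n) + sum_j log(c_j)
parameter_term = 3.1       # -(p/2)*log(3) + sum_i (0.5*log(I_ii) + log|theta_i|)

print(residual_term + functional_term + parameter_term)   # ~ -2000.0 (Total column, up to rounding)

def parameter_codelength(theta, fisher_diag):
    """Footnote-3 parameter term evaluated at the maximum-likelihood parameters."""
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    fisher_diag = np.atleast_1d(np.asarray(fisher_diag, dtype=float))
    p = theta.size
    return -0.5 * p * np.log(3) + np.sum(0.5 * np.log(fisher_diag)
                                         + np.log(np.abs(theta)))

# theta0 = -0.06 as tabulated; the Fisher information value is hypothetical.
print(parameter_codelength([-0.06], [1.0e7]))
```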
It is likely that there exist functions at higher complexity superior to those of Tables 1-3, especially for the real data where \(L(D)\) drops significantly from complexity 8 to 9. While uncovering lower-\(L(D)\) functions at complexity \(>\)9 may update the optimal limiting behaviours of the functional form of the RAR, and hence its compatibility with MOND, it cannot compromise our discovery of simpler and more accurate functions than the IFs and double power law. Indeed, the fact that the (or at least _a_) knee of the Pareto front is reached around complexity 7 in the left panel of Fig. 4 shows that such functions already offer a powerful compression, and the commonalities between the top functions at complexity \(\lesssim\)9 suggest that similar features are likely to be present in more complex functions also.
Discovering such functions would likely be computationally prohibitive for the ESR algorithm, and thus a stochastic search (e.g. using a genetic algorithm) may be required. This search may be seeded by the ESR functions: the fact that many of the best-fitting functions have similar features (such as \(g_{\rm obs}\approx 0.64\;g_{\rm bar}\) as \(g_{\rm bar}\rightarrow\infty\) for the RAR IF mock) suggests these may be useful for higher-complexity functions also. Thus ESR may be used to validate the underlying assumption of stochastic searches--that there exist features of functions responsible for their fitness--and the identification of these features may be useful for tuning hyperparameters. It would also be possible to combine ESR with deterministic symbolic regression algorithms (e.g. Worm and Chiu, 2013; Rivero et al., 2022; Kammerer et al., 2021) to search systematically the neighbourhood of good functions towards higher complexity.
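As an illustration of how such a search might be seeded, the toy sketch below (our own; neither the seed expressions nor the mutation moves come from the ESR code) mutates low-complexity ESR-style functions into higher-complexity candidates with sympy. A real implementation would score each candidate by its description length rather than merely enumerating them.

```python
# A toy sketch (our own construction, not part of the ESR code) of seeding a
# stochastic search with good low-complexity ESR-style functions and mutating
# them towards higher complexity; the seeds and mutation moves are illustrative
# assumptions only.
import random
import sympy as sp

x, a, b = sp.symbols("x a b", positive=True)

# Seeds resembling the low-complexity results quoted in the text.
seeds = [a * x + sp.sqrt(x), a + sp.sqrt(x**2 + 2 * x)]

# Simple mutation moves that grow a candidate by a few nodes.
mutations = [
    lambda e: e + b * sp.sqrt(x),
    lambda e: e * (1 + b * sp.exp(-sp.sqrt(x))),
    lambda e: sp.sqrt(e**2 + b * x),
]

random.seed(0)
candidates = {sp.simplify(random.choice(mutations)(random.choice(seeds)))
              for _ in range(10)}
for expr in candidates:
    print(expr)
```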
Our best functions on the real data have a discontinuity in \(s\) around \(g_{\rm bar}=0.02\). This is likely due to the limited complexity of the equations we consider: a cusp is the simplest way of changing \(s\) sharply. It is probable that the optimal functions at higher complexity will have a smoothed form of this behaviour in which \(s\) does not become negative and may not tend to 0. We therefore doubt that the \(s_{-}\) and \(s(g_{\rm bar,min})\) values of the best functions in Table 1 are robust. One could attempt to construct more complex functions inspired by the ESR results with similar but not discontinuous behaviour and calculate their \(-\log(\mathcal{L})\) and \(L(D)\) separately, or feed them into a genetic algorithm as mentioned above. On the other hand, below complexity 9, there is only a single low description length function that is discontinuous, the third best function at complexity 8 (\(P(f)=4.2\times 10^{-11}\)). The best functions at lower complexity more frequently have \(s_{-}=1/2\) and \(s_{+}=1\), although again they rarely satisfy \(g_{\rm obs}=g_{\rm bar}\) as \(g_{\rm bar}\rightarrow\infty\). For example, the top function at complexity 6--marking the (first) knee of the \(L(D)\) Pareto front in Fig. 4--is \(g_{\rm obs}=0.70\;g_{\rm bar}+\sqrt{g_{\rm bar}}\), exhibiting similar high-\(g_{\rm bar}\) behaviour to the 1st- and 8th-ranked functions overall.
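As a quick consistency check of the limiting slopes quoted for this complexity-6 function, the short symbolic sketch below (ours, using sympy) computes the logarithmic slope \(s(g_{\rm bar})\) and its limits.

```python
# A quick symbolic check (our own, using sympy) of the limiting logarithmic
# slopes of the complexity-6 function quoted above, g_obs = 0.70*g_bar + sqrt(g_bar).
import sympy as sp

g = sp.symbols("g_bar", positive=True)
g_obs = sp.Rational(7, 10) * g + sp.sqrt(g)

# Logarithmic slope s = d log(g_obs) / d log(g_bar).
s = sp.simplify(g * sp.diff(g_obs, g) / g_obs)

print(sp.limit(s, g, 0))      # 1/2: deep-MOND-like low-acceleration slope
print(sp.limit(s, g, sp.oo))  # 1:   linear at high g_bar, but with coefficient 0.70
```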
To generate the mock data we assumed that all \(g_{\rm bar}\) values are uncorrelated. While this is likely true between galaxies, it is not within a single galaxy because the uncertainty in \(g_{\rm bar}\) is dominated by the mass-to-light ratio, a global galaxy parameter in the simplest approximation. This may be seen from the data in Fig. 1, where lines of points (e.g. scattering low around \(g_{\rm bar}=2\) or high around \(g_{\rm bar}=10\)) are all from the same galaxy. A more robust procedure may be to generate \(\Upsilon\) values for each mock galaxy by randomly drawing from their priors, use this to transform \(g_{\rm bar}\) and then add any other, random sources of noise (e.g. from the uncertainty in 3.6 \(\mu\)m luminosity). By enhancing inter-galaxy variations this may increase the complexity of the mock datasets, moving their ESR results towards those of the SPARC data. Alternatively, one may fit each galaxy separately to assess compatibility of their individual RARs (analogously to Li et al., 2018 but not just for the RAR IF). The assumption of uncorrelated data points is also present in our likelihood, as discussed further in Appendix A. A complete analysis would infer \(D\), \(i\), \(L_{3.6}\) and \(\Upsilon\) for each SPARC galaxy along with the parameters of the function being fitted.
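The sketch below (ours; the column structure, prior widths and noise levels are assumptions) illustrates the per-galaxy \(\Upsilon\) draw described above, which makes the mock \(g_{\rm bar}\) errors correlated within a galaxy but independent between galaxies.

```python
# A minimal sketch (assumed column structure and prior widths, not the authors'
# pipeline) of the more robust mock-generation procedure described above: one
# mass-to-light ratio Upsilon is drawn per galaxy.
import numpy as np

rng = np.random.default_rng(1)

# Toy catalogue: galaxy membership and gas/stellar contributions to g_bar.
galaxy_id = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
g_gas = rng.uniform(0.1, 1.0, size=galaxy_id.size)
g_star = rng.uniform(0.5, 5.0, size=galaxy_id.size)     # computed assuming Upsilon = 1

# One Upsilon per galaxy from a lognormal prior (width is an assumption).
upsilon = rng.lognormal(mean=0.0, sigma=0.1, size=galaxy_id.max() + 1)

# Correlated-within-galaxy g_bar, plus small uncorrelated point-wise noise
# (e.g. from the 3.6 micron luminosity uncertainty).
g_bar_mock = g_gas + upsilon[galaxy_id] * g_star
g_bar_mock *= rng.lognormal(mean=0.0, sigma=0.02, size=galaxy_id.size)
print(g_bar_mock)
```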
We have assumed no intrinsic scatter in the RAR, such that all deviations from the hypothetical functional expectation must come from the observational uncertainties. While this is expected in MOND, in \(\Lambda\)CDM the complex process of galaxy formation would lead to a significant and parameter-dependent effective intrinsic scatter (Desmond, 2017). Even the EFE would introduce some scatter due to galaxy-by-galaxy variation in \(g_{\rm ex}\)(Chae et al., 2021). It would be straightforward to add this (in some direction on the RAR plane) as an additional free parameter of all functions, which would alter the results. MDL naturally penalises the addition of this parameter, allowing one to determine whether it is justified for any given function. This would provide further evidence concerning the optimality of MOND by assessing the extent to which the data implies law-like modified gravity behaviour.
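The sketch below (ours; the Gaussian form and the choice to place the scatter in \(\log g_{\rm obs}\) are assumptions) shows how such an intrinsic scatter could enter the likelihood as one extra free parameter of every candidate function.

```python
# A minimal sketch (the Gaussian form and the scatter direction are our
# assumptions) of adding an intrinsic scatter sigma_int as an extra free
# parameter, which MDL then penalises.
import numpy as np

def neg_log_likelihood(log_gobs, log_gobs_pred, sigma_obs, sigma_int):
    """-log L for Gaussian errors in log10(g_obs), broadened by sigma_int."""
    var = sigma_obs**2 + sigma_int**2
    resid = log_gobs - log_gobs_pred
    return 0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

# Toy numbers, purely to show the call.
log_gobs = np.array([-10.20, -9.80, -9.50])
log_pred = np.array([-10.10, -9.90, -9.60])
sigma_obs = np.full(3, 0.05)
print(neg_log_likelihood(log_gobs, log_pred, sigma_obs, sigma_int=0.0))
print(neg_log_likelihood(log_gobs, log_pred, sigma_obs, sigma_int=0.08))
```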
Our current implementation of MDL treats the parameter values as part of the model and chooses them to maximise the likelihood. An alternative would be to treat the hypothesis in Eq. 1 as the functional form alone, assigning codelengths and probabilities to functions regardless of their parameter values. In a Bayesian formulation this corresponds to marginalising over the parameters, and enables a simpler one-part coding scheme where the description length is simply the negative logarithm of the model evidence including any functional prior. An even higher-level approach would be to group functions into sets with specific properties, e.g. limiting behaviour. This would enable calculation of the posterior predictive distribution of any feature of the functional representation of the dataset, and hence enable model comparison at any level of generality.
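A minimal sketch of this one-part alternative is given below; the Laplace approximation to the evidence is our own choice of estimator, and all numbers are placeholders rather than fits to any function in the tables.

```python
# A minimal sketch of the one-part coding alternative described above: the
# description length of a function is -log of its parameter-marginalised
# evidence.  The Laplace approximation is our own choice of estimator and the
# numbers are toy placeholders.
import numpy as np

def laplace_log_evidence(log_like_max, log_prior_at_max, hessian):
    """log Z ~ log L(theta_hat) + log pi(theta_hat) + (p/2) log(2 pi) - 0.5 log|H|,
    where H is the Hessian of -log(L * pi) at the maximum."""
    p = hessian.shape[0]
    _, logdet = np.linalg.slogdet(hessian)
    return (log_like_max + log_prior_at_max
            + 0.5 * p * np.log(2.0 * np.pi) - 0.5 * logdet)

# Two hypothetical functions with one and two parameters respectively.
H1 = np.array([[4.0e3]])
H2 = np.array([[4.0e3, 1.0e2], [1.0e2, 9.0e2]])
print(-laplace_log_evidence(2017.7, -2.0, H1))   # one-part codelength = -log Z
print(-laplace_log_evidence(2019.7, -4.0, H2))
```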
The relative simplicity of the RAR and conformity to the Newtonian and deep-MOND limits are the key differences between the expectations of MOND and the more chaotic galaxy formation scenario of \(\Lambda\)CDM: it is only under the simpler scenario that one would _expect_ to find a simple \(g_{\rm obs}=\mathcal{F}(g_{\rm bar})\). While our results are therefore not particularly supportive of the MOND hypothesis, this is not to say either that the data could not plausibly have been generated
by MOND or that it could plausibly have been generated under another hypothesis, as only MOND currently has sufficient predictivity for a test of this precision. We look to future SR studies with more data to establish the functional form of the RAR--if it exists--definitively.
## 6 Conclusion
The radial acceleration relation (RAR) has become central to debates about the mass discrepancy problem on astrophysical scales. Its tightness and regularity have been used to argue for a violation of Newtonian gravity in accordance with Modified Newtonian Dynamics (MOND), but the functions used to fit the data have been constructed to conform to this theory. As the first detailed application of the brand-new technique of Exhaustive Symbolic Regression, we rank objectively _all_ simple functions in terms of their aptitude for describing the SPARC RAR. We employ the minimum description length principle to trade accuracy with simplicity and hence perform model selection, and calibrate our method on mock MOND data generated both with and without the external field effect (EFE). Our conclusions are as follows:
* ESR discovers functions which are better descriptions, in both accuracy and simplicity and for both observed and simulated data, than MOND functions or a double power law.
* While the majority of best-fitting functions on the SPARC data recover \(g_{\mathrm{obs}}\propto g_{\mathrm{bar}}\) at high accelerations, not all have a best-fit coefficient of proportionality near unity. Thus the Newtonian limit is not clearly evidenced.
* The SPARC data do not prefer functions with the deep-MOND limit of \(g_{\mathrm{obs}}\propto\sqrt{g_{\mathrm{bar}}}\) as \(g_{\mathrm{bar}}\to 0\). Instead, we find that functions in which \(g_{\mathrm{obs}}\) tends to a constant as \(g_{\mathrm{bar}}\to 0\) typically compress the data more efficiently, albeit with considerable uncertainty.
* SPARC-like mock data generated assuming the MONDian RAR interpolating function do not unambiguously recover that function. Moreover, many of the best functions for those mock data have \(g_{\mathrm{obs}}\approx 0.64\,g_{\mathrm{bar}}\) rather than \(g_{\mathrm{obs}}=g_{\mathrm{bar}}\) at high \(g_{\mathrm{bar}}\), and most do not have a deep-MOND limit at all.
* The EFE in AQUAL greatly increases the logarithmic slope of the best-fitting functions at the low-\(g_{\mathrm{bar}}\) end of the data, but does not appreciably impact the limiting slope at \(g_{\mathrm{bar}}\to 0\). Incorporating the EFE in the mock data produces results generally more similar to those for the real data, so our analysis (within the MOND paradigm) hints at its presence.
* We conclude that the data have too small a dynamic range (and too large uncertainties) to unambiguously favour MOND even if it is in fact generating the data. The SPARC RAR alone, therefore, does not support that theory unambiguously. The best prospect for improving this situation is to increase the acceleration range of the data, e.g. using stacked weak lensing at low \(g_{\mathrm{bar}}\) and groups and clusters at high \(g_{\mathrm{bar}}\).
* Our results are a function of the maximum complexity of equation considered. Future symbolic regression algorithms--exhaustive or non-exhaustive--will reach the true description length minimum and hence uncover the optimal functional representation of the RAR and determine whether the relation implies novel law-like gravitational behaviour.
Exhaustive Symbolic Regression provides for the first time a guaranteed complete search through functional parameter space, making it the ideal tool to determine the analytic form of observed relations, extract physics from data theory-agnostically, and create fitting functions. We make the ESR and RAR codes, full function sets and the best 50 functions for each dataset we consider publicly available to facilitate future applications.
## 7 Data Availability
The code and data associated with ESR and its application to the RAR are publicly released, as described in Bartlett et al. (2022a). The SPARC data is available at [http://astroweb.cwru.edu/SPARC](http://astroweb.cwru.edu/SPARC). Other data may be shared on request to the corresponding authors.
## Acknowledgements
We thank Kyu-Hyun Chae, Andrei Constantin, Miles Cranmer, Mario Figueiredo, Gianluca Gregori, Thomas Harvey, Mark Kotanchek, Federico Lelli, Stacy McGaugh, Andre Lukas, Richard Stiskalek and Tariq Yasin for useful inputs and discussion.
HD is supported by a Royal Society University Research Fellowship (grant no. 211046). DJB is supported by the Simons Collaboration on "Learning the Universe" and was supported by STFC and Oriel College, Oxford. PGF acknowledges support from European Research Council Grant No: 693024 and the Beecroft Trust.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 693024).
This work used the DiRAC Complexity and DiRAC@Durham facilities, operated by the University of Leicester IT Services and Institute for Computational Cosmology, which form part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment is funded by BIS National E-Infrastructure capital grants ST/K000373/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, STFC DiRAC Operations grant ST/K0003259/1, and Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National E-Infrastructure.
For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
|
2308.08059 | Topological Properties of Almost Abelian Groups | An almost Abelian Lie group is a non-Abelian Lie group with a codimension 1
Abelian subgroup. We show that all discrete subgroups of complex simply
connected almost Abelian groups are finitely generated. The topology of
connected almost Abelian Lie groups is studied by expressing each connected
almost Abelian Lie group as a quotient of its universal covering group by a
discrete normal subgroup. We then prove that no complex connected almost
Abelian group is compact, and give conditions for the compactness of connected
subgroups of such groups. Towards studying the homotopy type of complex
connected almost Abelian groups, we investigate the maximal compact subgroups
of such groups. | Zhirayr Avetisyan, Oderico-Benjamin Buran, Andrew Paul, Lisa Reed | 2023-08-15T22:04:23Z | http://arxiv.org/abs/2308.08059v1 | # Topological properties of almost abelian Lie groups
###### Abstract.
An almost Abelian Lie group is a non-Abelian Lie group with a codimension \(1\) Abelian subgroup. We show that all discrete subgroups of complex simply connected almost Abelian groups are finitely generated. The topology of connected almost Abelian Lie groups is studied by expressing each connected almost Abelian Lie group as a quotient of its universal covering group by a discrete normal subgroup. We then prove that no complex connected almost Abelian group is compact, and give conditions for the compactness of connected subgroups of such groups. Towards studying the homotopy type of complex connected almost Abelian groups, we investigate the maximal compact subgroups of such groups.
## 1. Introduction
Almost Abelian Lie groups are prevalent in math and nature. Most Bianchi groups (those having Lie algebras Bi(II)-Bi(VII)), and therefore many cosmological models, are almost Abelian. Other applications include integrable systems, PDEs, and linear dynamical systems. Of particular import is the fact that the three dimensional Heisenberg group is almost Abelian. As the aforementioned Bianchi and Heisenberg groups indicate, almost Abelian groups include some of the most computationally friendly Lie groups.
Another area in pure mathematics where almost Abelian Lie groups appear is in the study of solvmanifolds. A solvmanifold is a quotient \(G/H\) of a simply connected solvable Lie group \(G\) and a discrete subgroup \(H\). Almost Abelian solvmanifolds have seen extensive study in recent years, and complex almost Abelian solvmanifolds have been of particular interest; see: [1], [2], [10], [11], [12], [13], [14], and [15].
General properties of almost Abelian Lie algebras over arbitrary fields were studied in [1] and [1]. Meanwhile, general properties of real almost Abelian Lie groups were studied in [1]. We now provide the complex analogue: we analyze various important structures of complex almost Abelian groups. Indeed, the observant reader will notice that some (but not all) of our proofs and results mirror those in [1]. We try to give explicit descriptions for as many important (topological and algebraic) structures as we can.
The main result of this paper is Theorem 7.5, where we prove that every discrete subgroup of a complex connected almost Abelian Lie group is finitely generated.
Recall (as we will also go over in Section 2) a _multiplicity function_\(\mathbf{N}\) completely determines an almost Abelian group via the Jordan matrix \(J(\mathbf{N})\) and its Lie algebra, which we denote by \(\mathcal{A}(\mathbf{N})\) (also see [1]).
In Proposition 3.3 we find that one faithful matrix representation for a simply connected almost Abelian Lie group \(G\) with multiplicity function \(\mathbf{N}\) is given by:
\[G\coloneqq\left\{\begin{pmatrix}1&0&0\\ v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}\right|\,(v,t)\in\mathbb{C}^{d}\oplus\mathbb{C}\right\},\]
and in Lemma 3.4 we calculate the exponential map \(\exp:{}^{a}\!\mathcal{A}(\mathbf{N})\longrightarrow G\) for a simply connected almost Abelian group \(G\) with the above representation.
In Proposition 4.2 we give a complete description of the center of a simply connected almost Abelian group \(G\):
\[Z(G)=\{(u,s)\in\mathbb{C}^{d}\rtimes\mathbb{C}\,\,\big{|}\,u\in\ker(J(\mathbf{N})),\,\,e^{sJ(\mathbf{N})}=\mathbb{1}\,\}.\]
In Proposition 5.2 we find that every discrete normal subgroup \(N\subseteq G\) of a simply connected almost Abelian group \(G\) with Lie algebra \({}^{a}\!{\mathcal{A}}(\mathbf{N})\) is free and finitely generated, and the rank \(k\) of a discrete normal subgroup is bounded:

\[k\leq\dim_{\mathbb{R}}\big{(}\ker(J(\mathbf{N}))\big{)}+2.\]
As mentioned earlier, we further prove in Theorem 7.5 that every (not necessarily normal) discrete subgroup of a simply connected almost Abelian group is also finitely generated.
In Proposition 6.2, we give the explicit form of all connected Lie subgroups of a simply connected almost Abelian group \(G\).
In Proposition 6.6 we find that there are no compact connected almost Abelian groups, and give a necessary and sufficient condition (Proposition 6.7) for a connected Lie subgroup of a connected almost Abelian group to be compact.
Lastly, in section 8 we lay the groundwork for future investigations into homogeneous spaces by proving that the intersection of complex connected Lie subgroups of a simply connected almost Abelian group is again a complex connected Lie subgroup (Lemma 8.2), and find that the maximal compact subgroup of a connected almost Abelian group \(G:=\widetilde{G}/\Gamma\) (where \(\widetilde{G}\) is the universal cover and \(\Gamma\) is a discrete subgroup) is exactly \(\mathcal{C}(\Gamma)/\Gamma\), where \(\mathcal{C}(\Gamma)\) is the minimal connected complex Lie subgroup of \(\widetilde{G}\) containing \(\Gamma\) (Proposition 8.3).
## 2. Preliminaries
An almost Abelian Lie algebra is a Lie algebra with a codimension 1 Abelian subalgebra. For a finite-dimensional almost Abelian Lie algebra, this data can be fully captured by a formal device known as an \(\mathbb{N}\)-graded multiplicity function, and we now summarize from [1] this correspondence.
Let \(\mathcal{C}\) be the class of cardinals, and let \(\mathbb{F}\) be a field. An \(\mathbb{N}\)_-graded multiplicity function_ \(\mathbf{N}\) is a map \(\mathbf{N}:\mathbb{F}\times\mathbb{N}\to\mathcal{C}\). For our purposes, we take \(\mathbf{N}:\mathbb{C}\times\mathbb{N}\to\mathcal{C}\). It is known (Prop. 1 in [1]) that an almost Abelian Lie algebra is necessarily of the form \(\mathbb{V}\rtimes_{\operatorname{ad}_{e_{0}}}\mathbb{C}\,e_{0}\). A multiplicity function \(\mathbf{N}\) completely and uniquely determines the structure of a complex almost Abelian Lie algebra by determining a Jordan matrix \(J(\mathbf{N})\) that serves as a matrix representation for \(\operatorname{ad}_{e_{0}}\). We now give the details of how \(J(\mathbf{N})\) is defined.

**Definition 2.1**.: Define \(\operatorname{supp}\left(\mathbf{N}\right)\coloneqq\{p\in\mathbb{C}[X]\,\,\big{|}\,\exists n\in\mathbb{N}\,\,\,\text{s.t.}\,\,\,\mathbf{N}(p,n)\neq 0\}\).
Then, we define \(J(p,n)=\lambda_{p}\mathbb{1}+N_{n}\), where \(\lambda_{p}\) is the complex number identified with the monic irreducible polynomial \(p\in\mathbb{C}[x]\) that has it as a root, and where \(N_{n}\) is the \(n\times n\) matrix with 1's on the superdiagonal and zeroes everywhere else. Then
\[J(\mathbf{N})\coloneqq\bigoplus_{p\in\operatorname{supp}(\mathbf{N})}\bigoplus_{n=1}^{\infty}\bigoplus_{\mathbf{N}(p,n)}J(p,n).\]

For the entirety of this paper, we only consider (finite-dimensional) complex almost Abelian Lie groups with a (finite-dimensional) complex almost Abelian Lie algebra, which then corresponds to a finitely-supported multiplicity function \(\mathbf{N}\). We represent the Lie algebra uniquely determined by \(\mathbf{N}\) as:

**Definition 2.2**.: We define \({}^{a}\!{\mathcal{A}}(\mathbf{N})\coloneqq{}^{a}\!{\mathcal{A}}_{\mathbb{C}}(\mathbf{N})\coloneqq\mathbb{V}\rtimes_{\operatorname{ad}_{e_{0}}}\mathbb{C}\,e_{0}\) where \(\operatorname{ad}_{e_{0}}=J(\mathbf{N}),\,\,\mathbb{V}=\mathbb{C}^{\dim_{\mathbb{C}}(\mathbf{N})}\).
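As a concrete illustration of Definitions 2.1 and 2.2, the following short numerical sketch (ours; the dictionary encoding of \(\mathbf{N}\) is only a convenience, not the paper's notation) assembles \(J(\mathbf{N})\) for a small finitely-supported multiplicity function.

```python
# A small numerical illustration (our own) of Definitions 2.1-2.2: assembling
# the Jordan matrix J(N) from a finitely-supported multiplicity function, with
# supp(N) identified with the eigenvalues {i, 2i}.
import numpy as np
from scipy.linalg import block_diag

def jordan_block(eigenvalue, n):
    """J(p, n) = lambda_p * Id + N_n, with N_n having ones on the superdiagonal."""
    return eigenvalue * np.eye(n, dtype=complex) + np.eye(n, k=1, dtype=complex)

def jordan_matrix(multiplicity):
    """multiplicity maps (eigenvalue, block size) -> number of such blocks."""
    blocks = []
    for (eigenvalue, n), count in multiplicity.items():
        blocks.extend(jordan_block(eigenvalue, n) for _ in range(count))
    return block_diag(*blocks)

# One 1x1 block for each of i and 2i, and one 2x2 block for eigenvalue i.
print(jordan_matrix({(1j, 1): 1, (2j, 1): 1, (1j, 2): 1}))
```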
Using \(\mathbf{N}\), we can also define the following sets.
**Definition 2.3**.: We define \(T_{\mathbf{N}}\coloneqq\left\{z\in\mathbb{C}\,|\;e^{zJ(\mathbf{N})}=\mathbb{1}\right\}\).
**Definition 2.4**.: We define \(\mathcal{X}_{\mathbf{N}}\coloneqq\left\{\omega\in\mathbb{C}\colon x_{p}\in i\omega\mathbb{Z}\ \text{for all}\ p\in\operatorname{supp}\left(\mathbf{N}\right)\right\}\), where \(x_{p}\) denotes the root of the monic irreducible polynomial \(p\).
**Lemma 2.1**.: _For a given finitely-supported multiplicity function \(\mathbf{N}\), \(T_{\mathbf{N}}\neq\left\{0\right\}\) if and only if one of the following two conditions holds._
1. \(\mathbf{N}(p,n)=0\) _for all_ \(p\in\operatorname{supp}\left(\mathbf{N}\right)\) _and_ \(n>1\)_, and_ \(\mathcal{X}_{\mathbf{N}}\neq\varnothing\)_. In this case,_ \[T_{\mathbf{N}}=z_{0}\mathbb{Z},\] _where_ \(z_{0}=2\pi/\omega_{0}\) _and_ \(\omega_{0}\) _is an element of_ \(\mathcal{X}_{\mathbf{N}}\) _such that_ \(\left|\omega_{0}\right|=\max\left\{\left|\omega\right|\colon\omega\in\mathcal{ X}_{\mathbf{N}}\right\}\)_._
2. \(\operatorname{supp}\left(\mathbf{N}\right)=\left\{p_{0}\right\}\) _and_ \(x_{p_{0}}=0\)_. In this case_ \(T_{\mathbf{N}}=\mathbb{C}\)_._
Proof.: The second case follows immediately from the definitions of \(T_{\mathbf{N}}\) and the matrix exponential. We prove that the first case is the only remaining case.
Note that it is clear that \(0\in T_{\mathbf{N}}\). We can decompose the exponential \(e^{zJ(\mathbf{N})}\) as a direct sum:
\[e^{zJ(\mathbf{N})}=\bigoplus_{p\in\operatorname{supp}\left(\mathbf{N}\right)} \bigoplus_{n=1}^{\infty}\bigoplus_{\mathbf{N}(p,n)}e^{zJ(p,n)}.\]
Recall that \(J(p,n)\) is the \(n\times n\) matrix \(x_{p}\mathbb{1}+N_{n}\), where \(x_{p}\) is a root of the polynomial \(p\) and \(N_{n}\) is nilpotent with ones above the main diagonal and zeros elsewhere. Since the commutator of \(x_{p}\mathbb{1}\) and \(N_{n}\) vanishes, we have
\[e^{zJ(p,n)} =e^{z(x_{p}1+N_{n})}\] \[=e^{zx_{p}}e^{zN_{n}} \tag{1}\] \[=e^{zx_{p}}\left(\mathbb{1}+zN_{n}+\frac{z^{2}}{2!}N_{n}^{2}+ \cdots+\frac{z^{n-1}}{(n-1)!}N_{n}^{n-1}\right).\]
Note that \(e^{zJ(\mathbf{N})}\) is the identity if and only if the exponential of each Jordan block \(e^{zJ(p,n)}\) is itself the identity. By the expansion (1), we can see that the exponentials of the Jordan blocks are the identity precisely when the higher order terms vanish and \(e^{zx_{p}}=1\). First, we study when the higher order terms vanish.
When \(n=1\), we have that \(N_{1}\) is the \(1\times 1\) matrix [0], so the higher order terms vanish irrespective of our choice of \(z\). Suppose that \(n>1\) and there exists \(z\) such that the higher order terms vanish:
\[zN_{n}+\frac{z^{2}}{2!}N_{n}^{2}+\cdots+\frac{z^{n-1}}{(n-1)!}N_{n}^{n-1}=[0]_ {n}. \tag{2}\]
Since \(n>1\), the second column of \(N_{n}\) consists of \(1\) in the first component and zeros elsewhere. It follows that for the entry in the first row, second column of both sides of (2) to match, we must have \(z\cdot 1=z=0\). Hence, nontrivial solutions to \(e^{zJ(\mathbf{N})}=\mathbb{1}\) can exist only if higher order terms vanish independently of \(z\), which can only occur if \(\mathbf{N}\) vanishes for \(n>1\) so that the only nilpotent matrix we deal with is \(N_{1}\).
Restricting ourselves to \(\mathbf{N}\) that vanishes for \(n>1\) and equating (1) with \(\mathbb{1}\) gives us
\[e^{zx_{p}}\mathbb{1}=\mathbb{1}.\]
So we must have \(e^{zx_{p}}=1\). In particular, we must have this equation hold _for all_ \(p\in\operatorname{supp}\left(\mathbf{N}\right)\) so that all of the Jordan blocks are the identity. Symbolically, we have established that \(T_{\mathbf{N}}\neq\left\{0\right\}\) if and only if
\[\forall n>1,\mathbf{N}(p,n)=0\text{ and }\exists z\neq 0\text{ s.t. }\forall p\in \operatorname{supp}\left(\mathbf{N}\right),\ e^{zx_{p}}=1.\]
Now we show that
\[\exists z\neq 0\text{ s.t. }\forall p\in\operatorname{supp}\left(\mathbf{N} \right),e^{zx_{p}}=1\Longleftrightarrow\mathcal{X}_{\mathbf{N}}\neq\emptyset.\]
In one direction, suppose that there exists \(z\neq 0\) such that \(e^{zx_{p}}=1\) for all \(p\in\operatorname{supp}\left(\textbf{N}\right)\). This implies that for any \(p\in\operatorname{supp}\left(\textbf{N}\right)\), we can find an integer \(N_{p}\) such that \(zx_{p}=2\pi iN_{p}\). Hence \(\frac{2\pi}{z}\in\mathcal{X}_{\textbf{N}}\) and so \(\mathcal{X}_{\textbf{N}}\) is nonempty.
In the other direction, suppose \(\mathcal{X}_{\textbf{N}}\) is nonempty, with \(\omega\in\mathcal{X}_{\textbf{N}}\). Observe that \(\omega=0\) would imply that \(\operatorname{supp}\left(\textbf{N}\right)=\{0\}\). Since we restrict ourselves to **N** that vanishes for \(n>1\), we would have that \(J(\textbf{N})\) is the zero matrix, which is impossible since our Lie algebra is almost Abelian. Thus \(\omega\neq 0\) and we can set \(z=\frac{2\pi}{\omega}\). By definition, for every \(p\in\operatorname{supp}\left(\textbf{N}\right)\) there exists integers \(N_{p}\) such that
\[x_{p}=\frac{2\pi iN_{p}}{z}\Longrightarrow zx_{p}=2\pi iN_{p}\Longrightarrow e ^{zx_{p}}=1,\]
which completes the last direction.
Observe that the map \(f\colon z\mapsto e^{zJ(\textbf{N})}\) is a Lie group homomorphism and \(T_{\textbf{N}}\) is precisely the kernel of this homomorphism. Since \(\{1\}\) is discrete and \(f\) is continuous, we must have that \(T_{\textbf{N}}=f^{-1}(\{1\})\) is a discrete subgroup of \(\mathbb{C}\). So \(T_{\textbf{N}}\) is a lattice of the form
\[T_{\textbf{N}}=z_{0}\mathbb{Z}\oplus w_{0}\mathbb{Z},\]
where \(z_{0}\) and \(w_{0}\) are \(\mathbb{R}\)-linearly independent as long as both are nonzero. Since \(z_{0}=w_{0}=0\) yields the degenerate case \(T_{\textbf{N}}=\{0\}\), if \(T_{\textbf{N}}\neq\{0\}\), at least one of \(z_{0}\) and \(w_{0}\) is nonzero, so \(T_{\textbf{N}}\neq\{0\}\) is isomorphic to either \(\mathbb{Z}\) or \(\mathbb{Z}^{2}\).
Suppose \(p\in\operatorname{supp}\left(\textbf{N}\right)\) and \(z\in T_{\textbf{N}}\) is nonzero (so \(T_{\textbf{N}}\) is nontrivial). We have \(e^{zx_{p}}=1\). So there exists a nonzero integer \(N\) such that \(zx_{p}=2\pi iN\). Now pick \(w\in T_{\textbf{N}}\) nonzero. Since \(e^{wx_{p}}=1\), there exists an integer \(M\) such that \(w=\frac{2\pi iM}{x_{p}}\). It follows that \(w=\frac{M}{N}z\). Therefore, if \(T_{\textbf{N}}\) is nontrivial, all of its elements are colinear in the complex plane. In particular, \(T_{\textbf{N}}\ncong\mathbb{Z}^{2}\). Nontrivial \(T_{\textbf{N}}\) thus take the form
\[T_{\textbf{N}}=z_{0}\mathbb{Z},\]
where \(z_{0}\) is a complex number in \(T_{\textbf{N}}\) that has the smallest positive magnitude. Since \(|z_{0}|\) is minimal amongst the nonzero elements of \(T_{\textbf{N}}\), \(\frac{2\pi}{|z_{0}|}\) is maximal amongst elements of \(\mathcal{X}_{\textbf{N}}\). Hence,
\[z_{0}=\frac{2\pi}{\omega_{0}},\quad\omega_{0}\in\mathcal{X}_{\textbf{N}}\text{ such that }|\omega_{0}|=\max\left\{|\omega|\colon\omega\in\mathcal{X}_{\textbf{N}}\right\},\]
and we are done.
The upshot of Lemma 2.1 is that in the interesting cases, \(\operatorname{rank}T_{\textbf{N}}\leq 1\). This condition comes into play in the proof of Prop 6.7, which characterizes when a connected Lie subgroup of an almost Abelian Lie group is compact.
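The dichotomy of Lemma 2.1 can be checked numerically; the sketch below (ours, with arbitrarily chosen eigenvalues) verifies that \(z_{0}=2\pi/\omega_{0}\) lies in \(T_{\mathbf{N}}\) when all blocks have size \(1\) and the eigenvalues lie in \(i\omega_{0}\mathbb{Z}\), and that a single block of size \(2\) rules out any nonzero solution.

```python
# A quick numerical sanity check (our own, using scipy) of Lemma 2.1.
import numpy as np
from scipy.linalg import expm

omega0 = 1.0
z0 = 2 * np.pi / omega0

# Case 1 of Lemma 2.1: all blocks of size 1, eigenvalues i*omega0 and 2i*omega0.
J_diag = np.diag([1j * omega0, 2j * omega0])
print(np.allclose(expm(z0 * J_diag), np.eye(2)))   # True: z0 = 2*pi/omega0 lies in T_N

# A single 2x2 Jordan block for eigenvalue i*omega0: the nilpotent part survives,
# so no nonzero z gives the identity (here the off-diagonal entry equals z0).
J_block = np.array([[1j * omega0, 1.0], [0.0, 1j * omega0]])
print(np.allclose(expm(z0 * J_block), np.eye(2)))  # False
```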
## 3. Group Representations and Corresponding Exponential Maps
The core results of this paper depend on some convenient matrix representations of almost Abelian Lie groups. Given an almost Abelian Lie algebra \({}^{a}\!\mathcal{A}(\textbf{N})\) of dimension \(d+1\), we recall from Prop. 2 in [1] that we have the matrix representation
\[{}^{a}\!\mathcal{A}(\textbf{N})=\left\{\begin{pmatrix}0&0\\ v&tJ(\textbf{N})\end{pmatrix}:(v,t)\in\mathbb{C}^{d}\oplus\mathbb{C}\right\}. \tag{3}\]
Looking at the exponential of this matrix representation, we can conjecture a matrix representation (that of Prop. 3.1) for a _connected_ almost Abelian Lie group. However, as Proposition 3.1 will show, this representation unfortunately is often not _simply connected_. So, we will use this representation and the calculation of the corresponding matrix exponential (Lemma 3.2) as intuition for the simply connected representation (Prop. 3.3) and corresponding matrix exponential (Lemma 3.4) which we need.
**Proposition 3.1**.: _For a finitely-supported multiplicity function \(\mathbf{N}\), let_

\[G\coloneqq\left\{\begin{pmatrix}1&0\\ v&e^{tJ(\mathbf{N})}\end{pmatrix}\Bigg{|}\ (v,t)\in\mathbb{C}^{d}\oplus\mathbb{C}\right\}.\]

_Then \(G\) is a connected Lie group with Lie algebra \({}^{a}\!\mathcal{A}(\mathbf{N})\), and it is simply connected if and only if \(T_{\mathbf{N}}=\{0\}\)._
Proof.: That \(G\) is a connected Lie group is clear from the definition, since every element is path connected to the identity. Then for all \((u,s)\in\mathbb{C}^{d}\oplus\mathbb{C}\), let \(\gamma_{(u,s)}:(-1,1)\to G\) be a smooth curve defined by

\[\gamma_{(u,s)}(\tau)\coloneqq\begin{pmatrix}1&0\\ v(\tau)&e^{t(\tau)J(\mathbf{N})}\end{pmatrix},\]
with
\[(v(0),t(0))=(0,0),\qquad(v^{\prime}(0),t^{\prime}(0))=(u,s).\]
Then, splitting the derivative of \(e^{t(\tau)J(\mathbf{N})}\) into its real and imaginary parts, we may calculate

\[\frac{\mathrm{d}}{\mathrm{d}\tau}\begin{pmatrix}1&0\\ v(\tau)&e^{t(\tau)J(\mathbf{N})}\end{pmatrix}\bigg{|}_{\tau=0}=\begin{pmatrix}0&0\\ u&sJ(\mathbf{N})\end{pmatrix}\in{}^{a}\!\mathcal{A}(\mathbf{N}),\]

where the last inclusion follows from Prop. 3 of [1]. Thus \({}^{a}\!\mathcal{A}(\mathbf{N})\) is the Lie algebra of \(G\).
Consider the map \(\varphi:\mathbb{C}^{d}\times\mathbb{C}\to G\), defined by:
\[(v,t)\mapsto\begin{pmatrix}1&0\\ v&e^{tJ(\mathbf{N})}\end{pmatrix}.\]

Let \(\pi\colon\mathbb{C}^{d}\times\mathbb{C}\to\mathbb{C}^{d}\times(\mathbb{C}\,/T_{\mathbf{N}})\) be the natural quotient map. In particular, we define an equivalence relation \(\sim\) on \(\mathbb{C}\) where \(t\sim t^{\prime}\) if and only if \(t-t^{\prime}\in T_{\mathbf{N}}\). Then \(\pi\) maps \((v,t)\mapsto(v,[t])\) where \([t]\) is the equivalence class of \(t\) under this relation.

Suppose that \(t\sim t^{\prime}\). Then \(e^{tJ(\mathbf{N})}=e^{t^{\prime}J(\mathbf{N})}\) so that \(\varphi(v,t)=\varphi(v,t^{\prime})\). Now, we may define the map \(\psi\colon\mathbb{C}^{d}\times(\mathbb{C}\,/T_{\mathbf{N}})\to G\) that maps

\[(v,[t])\mapsto\begin{pmatrix}1&0\\ v&e^{tJ(\mathbf{N})}\end{pmatrix}.\]

\(\psi\) is smooth with a smooth inverse. So \(G\) is diffeomorphic to \(\mathbb{C}^{d}\times(\mathbb{C}\,/T_{\mathbf{N}})\).

Suppose \(T_{\mathbf{N}}\) is trivial. Then \(G\) is diffeomorphic to \(\mathbb{C}^{d+1}\), which is simply connected.

On the other hand, suppose that \(G\) is simply connected. By Lemma 2.1, we have that either \(T_{\mathbf{N}}\) is trivial or \(T_{\mathbf{N}}\cong\mathbb{Z}\).

If \(T_{\mathbf{N}}\cong\mathbb{Z}\), we have

\[\mathbb{C}\,/T_{\mathbf{N}}\cong\mathbb{C}\,/\,\mathbb{Z}\cong\mathbb{R}\times(\mathbb{R}\,/\,\mathbb{Z})\]

But the map \(t\mapsto e^{2\pi it}\) is a homomorphism from \(\mathbb{R}\) to \(S^{1}\) and \(\mathbb{Z}\) is the kernel of the homomorphism, hence \(\mathbb{R}\,/\,\mathbb{Z}\cong S^{1}\) and so \(G\) is diffeomorphic to \(\mathbb{R}^{2d+1}\times S^{1}\). In particular, the fundamental group of \(G\) is \(\pi_{1}(G)\cong\pi_{1}(S^{1})\cong\mathbb{Z}\), so \(G\) is not simply connected, a contradiction. Therefore, \(T_{\mathbf{N}}\not\cong\mathbb{Z}\).

We conclude that \(G\) is simply connected if and only if \(T_{\mathbf{N}}\) is trivial.
We now calculate the matrix exponential on the almost Abelian Lie algebra representation (3) and see that it lands in the group representation of Prop. 3.1.
**Lemma 3.2**.: _The matrix exponential map of the matrix Lie algebra \({}^{a}\!\mathcal{A}(\mathbf{N})\) represented as in (3) is given by_

\[\exp\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix}=\begin{pmatrix}1&0\\ \frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}v&e^{tJ(\mathbf{N})}\end{pmatrix}.\]
Proof.: First, we show that
\[\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix}^{n}=\begin{pmatrix}0&0\\ [tJ(\mathbf{N})]^{n-1}v&[tJ(\mathbf{N})]^{n}\end{pmatrix}, \tag{4}\]
for all \(n\in\mathbb{N}\) by inducting on \(n\). For \(n=1\), we indeed have
\[\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix}^{1}=\begin{pmatrix}0&0\\ [tJ(\mathbf{N})]^{0}v&[tJ(\mathbf{N})]^{1}\end{pmatrix}, \tag{5}\]
and thus (5) is our inductive base case. Assume (4) is true for \(n=k\in\mathbb{N}\). We show that (4) holds for \(k+1\):
\[\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix}^{k+1}=\begin{pmatrix}0&0\\ [tJ(\mathbf{N})]^{k-1}v&[tJ(\mathbf{N})]^{k}\end{pmatrix}\begin{pmatrix}0&0 \\ v&tJ(\mathbf{N})\end{pmatrix}=\begin{pmatrix}0&0\\ [tJ(\mathbf{N})]^{k}v&[tJ(\mathbf{N})]^{k+1}\end{pmatrix}.\]
Thus by induction (4) holds for all \(n\in\mathbb{N}\).
Now, by the series expansion of the matrix exponential, we have:
\[\exp\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix} =\sum_{n=0}^{\infty}\frac{1}{n!}\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix}^{n}\] \[=\mathbb{1}+\sum_{n=1}^{\infty}\frac{1}{n!}\begin{pmatrix}0&0\\ [tJ(\mathbf{N})]^{n-1}v&[tJ(\mathbf{N})]^{n}\end{pmatrix}\] \[=\begin{pmatrix}1&0\\ \frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}v&e^{tJ(\mathbf{N})}\end{pmatrix},\]
where the last equality comes from the component-wise series expansions, and the term \(\frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}\) denotes the series of the matrix exponential, subtracted by the identity matrix, and with one less power of the argument, \(tJ(\mathbf{N})\), in each summed term.
We now find a representation for the simply connected almost Abelian Lie group corresponding to a given almost Abelian Lie algebra \(\mathcal{A}(\mathbf{N})\).
**Proposition 3.3**.: _For a finitely-supported multiplicity function \(\mathbf{N}\), let_
\[G\coloneqq\left\{\begin{pmatrix}1&0&0\\ v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}\Bigg{|}\ (v,t)\in\mathbb{C}^{d}\oplus\mathbb{C}\right\}.\]
_Then \(G\) is a complex simply connected Lie group with Lie algebra isomorphic to \(\mathcal{A}(\mathbf{N})\)._
Proof.: Note that Prop. 3 in [1] showed that a finite-dimensional almost Abelian Lie algebra \(\mathcal{A}(\mathbf{N})\) corresponding to a finite dimensional multiplicity function \(\mathbf{N}:\mathbb{C}\times\mathbb{N}\to\mathbb{N}\) has a faithful matrix representation:
\[\mathcal{A}(\mathbf{N})\cong\mathbb{C}^{d}\rtimes\mathbb{C}\ni(v,t)\mapsto \begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix}. \tag{6}\]
Moreover, note that the map \(\Phi\) defined by
\[\mathcal{A}(\mathbf{N})\ni\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix}\mapsto\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix} \tag{7}\]
is a complex Lie algebra isomorphism, and so we have another faithful matrix representation of \({}^{a}\!\mathcal{A}(\mathbf{N})\). For completeness, we check that this is indeed a Lie algebra isomorphism. It is clear that the map is bijective, so it suffices to check that it preserves Lie brackets. We compute
\[\Phi\left[\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix},\begin{pmatrix}0&0\\ u&sJ(\mathbf{N})\end{pmatrix}\right] =\Phi\left(\begin{pmatrix}0&0\\ tJ(\mathbf{N})u&ts(J(\mathbf{N}))^{2}\end{pmatrix}-\begin{pmatrix}0&0\\ sJ(\mathbf{N})v&ts(J(\mathbf{N}))^{2}\end{pmatrix}\right)\] \[=\Phi\begin{pmatrix}0&0\\ tJ(\mathbf{N})u-sJ(\mathbf{N})v&0\end{pmatrix}\] \[=\begin{pmatrix}0&0&0\\ tJ(\mathbf{N})u-sJ(\mathbf{N})v&0&0\\ 0&0&0\end{pmatrix}\] \[=\begin{pmatrix}0&0&0\\ tJ(\mathbf{N})u&ts(J(\mathbf{N}))^{2}&0\\ 0&0&0\end{pmatrix}-\begin{pmatrix}0&0&0\\ sJ(\mathbf{N})v&ts(J(\mathbf{N}))^{2}&0\\ 0&0&0\end{pmatrix}\] \[=\begin{bmatrix}\Phi\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix},\Phi\begin{pmatrix}0&0\\ u&sJ(\mathbf{N})\end{pmatrix}\end{bmatrix}.\]
We define \(\Phi^{-1}\) by

\[\Phi^{-1}\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}=\begin{pmatrix}0&0\\ v&tJ(\mathbf{N})\end{pmatrix}.\]

Then we check:

\[\Phi^{-1}\left[\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix},\begin{pmatrix}0&0&0\\ u&sJ(\mathbf{N})&0\\ s&0&0\end{pmatrix}\right]=\Phi^{-1}\begin{pmatrix}0&0&0\\ tJ(\mathbf{N})u-sJ(\mathbf{N})v&0&0\\ 0&0&0\end{pmatrix}=\begin{pmatrix}0&0\\ tJ(\mathbf{N})u-sJ(\mathbf{N})v&0\end{pmatrix}\]
\[=\begin{pmatrix}0&0\\ tJ(\mathbf{N})u&ts(J(\mathbf{N}))^{2}\end{pmatrix}-\begin{pmatrix}0&0\\ sJ(\mathbf{N})v&ts(J(\mathbf{N}))^{2}\end{pmatrix}=\left[\Phi^{-1}(v,t),\Phi^{-1}(u,s)\right],\]

where \((v,t)\) and \((u,s)\) abbreviate the corresponding matrices in the representation (7).
That \(G\) is a closed subset of \(\operatorname{GL}_{n}(\mathbb{C})\) is apparent from its definition. That it is closed under multiplication is verified in the course of the proof of Prop. 4.2 below. That every element has an inverse is seen by observing that:
\[\begin{pmatrix}1&0&0\\ v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\ e^{-tJ(\mathbf{N})}(-v)&e^{-tJ(\mathbf{N})}&0\\ -t&0&1\end{pmatrix}=1.\]
Thus \(G\) is a complex Lie group as a closed subgroup of \(\operatorname{GL}_{n}(\mathbb{C})\). Consider the map \(\varphi:\mathbb{C}^{d}\oplus\mathbb{C}\to G\) given by
\[\varphi(v,t)\coloneqq\begin{pmatrix}1&0&0\\ v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}.\]
This is certainly injective because \((v,t)\) can be read off from the image. Furthermore, it is easily seen to be surjective by the definition of \(G\). Since \(\varphi\) and its inverse are clearly holomorphic, \(\varphi\) is a biholomorphism. Thus \(G\cong_{\operatorname{biholo}}\mathbb{C}^{d}\oplus\mathbb{C}\), so \(G\) is simply connected.
Now consider a path \(\gamma_{(u,s)}:(-1,1)\to G\) defined by
\[\gamma(\tau)\coloneqq\begin{pmatrix}1&0&0\\ v(\tau)&e^{t(\tau)J(\mathbf{N})}&0\\ t(\tau)&0&1\end{pmatrix},\]

with \((v(0),t(0))=(0,0)\) and \((v^{\prime}(0),t^{\prime}(0))=(u,s)\). Then

\[\frac{\mathrm{d}}{\mathrm{d}\tau}\begin{pmatrix}1&0&0\\ v(\tau)&e^{t(\tau)J(\mathbf{N})}&0\\ t(\tau)&0&1\end{pmatrix}\bigg{|}_{\tau=0}=\begin{pmatrix}0&0&0\\ u&sJ(\mathbf{N})&0\\ s&0&0\end{pmatrix}\in{}^{a}\!\mathcal{A}(\mathbf{N}),\]

where the inclusion at the end follows from (7). Thus \(\operatorname{Lie}(G)\cong{}^{a}\!\mathcal{A}(\mathbf{N})\) by the faithful representation of (7).
Having found a faithful matrix representation for simply connected almost Abelian Lie groups, it is convenient for us to find the exponential map corresponding to this faithful representation. In the complex case, different matrix representations of the Lie algebra may yield different identities between the geometric exponential maps and the matrix exponential.
**Lemma 3.4**.: _For a complex simply connected almost Abelian group \(G\) with Lie algebra \({}^{a}\!\mathcal{A}(\mathbf{N})\), the exponential map \(\exp:{}^{a}\!\mathcal{A}(\mathbf{N})\to G\) is given by_

\[\exp\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}=\begin{pmatrix}1&0&0\\ \frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}\in G.\]
Proof.: We first show that
\[\begin{pmatrix}0&0&0\\ v&tJ(\mathbb{N})&0\\ t&0&0\end{pmatrix}^{n}=\begin{pmatrix}0&0&0\\ [tJ(\mathbb{N})]^{n-1}v&[tJ(\mathbb{N})]^{n}&0\\ 0&0&0\end{pmatrix}, \tag{8}\]
for all integers \(n\geq 2\). We proceed by inducting on \(n\). When \(n=2\), we have
\[\begin{pmatrix}0&0&0\\ v&tJ(\mathbb{N})&0\\ t&0&0\end{pmatrix}^{2}=\begin{pmatrix}0&0&0\\ v&tJ(\mathbb{N})&0\\ t&0&0\end{pmatrix}\begin{pmatrix}0&0&0\\ v&tJ(\mathbb{N})&0\\ t&0&0\end{pmatrix}=\begin{pmatrix}0&0&0\\ [tJ(\mathbb{N})]^{1}v&[tJ(\mathbb{N})]^{2}&0\\ 0&0&0\end{pmatrix}, \tag{9}\]
so the base case holds. Next, assume (9) is true when \(n=k\) for some \(k\in\mathbb{N}\). When \(n=k+1\),
\[\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}^{k+1} =\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}^{k}\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}\] \[=\begin{pmatrix}0&0&0\\ [tJ(\mathbf{N})]^{k-1}v&[tJ(\mathbf{N})]^{k}&0\\ 0&0&0\end{pmatrix}\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}\] \[=\begin{pmatrix}0&0&0\\ [tJ(\mathbf{N})]^{k}v&[tJ(\mathbf{N})]^{k+1}&0\\ 0&0&0\end{pmatrix}.\]
This completes the induction.
By the series expansion of the matrix exponential we have,
\[\exp\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix} =\sum_{n=0}^{\infty}\frac{1}{n!}\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}^{n}\] \[=1+\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}+\sum_{n=2}^{\infty}\frac{1}{n!}\begin{pmatrix}0&0&0&0\\ [tJ(\mathbf{N})]^{n-1}v&[tJ(\mathbf{N})]^{n}&0\\ 0&0&0\end{pmatrix}\] \[=\begin{pmatrix}1&0&0\\ \frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}.\]
The last equality comes from the component-wise series expansions, and the term \(\frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}\) denotes the series of the matrix exponential, subtracted by the identity matrix, and with one less power of the argument, \(tJ(\mathbf{N})\), in each summed term. Note that \(\frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}v\in\mathbb{C}^{d}\), so it follows that
\[\begin{pmatrix}1&0&0\\ \frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}\in G.\]
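The closed form above can be spot-checked numerically; the sketch below (ours, with a randomly chosen \(J(\mathbf{N})\) and \(v\)) compares the matrix exponential of the representation (7) with the stated formula, evaluating \(\frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}\) as its defining power series.

```python
# A numerical spot check (our own, not from the paper) of the closed form above:
# the matrix exponential of the representation (7) agrees with the stated
# formula, where (e^{tJ} - 1)/(tJ) is the series sum_{n>=1} (tJ)^{n-1}/n!,
# well defined even when tJ is singular.
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 3
Jmat = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))   # stands in for J(N)
v = rng.normal(size=(d, 1)) + 1j * rng.normal(size=(d, 1))
t = 0.7 - 0.3j

def phi(A, terms=60):
    """Power series sum_{n>=1} A^{n-1}/n!, i.e. (e^A - 1) A^{-1} for invertible A."""
    out = np.zeros_like(A)
    power = np.eye(A.shape[0], dtype=A.dtype)
    for n in range(1, terms + 1):
        out += power / math.factorial(n)
        power = power @ A
    return out

# The Lie algebra element (v, t) in the representation (7), block sizes (1, d, 1).
X = np.zeros((d + 2, d + 2), dtype=complex)
X[1:d + 1, 0:1] = v
X[1:d + 1, 1:d + 1] = t * Jmat
X[d + 1, 0] = t

# The right-hand side of the closed form.
rhs = np.eye(d + 2, dtype=complex)
rhs[1:d + 1, 0:1] = phi(t * Jmat) @ v
rhs[1:d + 1, 1:d + 1] = expm(t * Jmat)
rhs[d + 1, 0] = t

print(np.allclose(expm(X), rhs))   # True
```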
Finally, we note that as a consequence of Lemma 3.4, the exponential map is particularly simple to understand on the Abelian subalgebra of an almost Abelian Lie algebra.
**Remark 3.5**.: _Let \(G\) be the simply connected group that has Lie algebra \(\mathcal{A}(\mathbf{N})\). It follows that on the Abelian Lie subalgebra \(\ker(J(\mathbf{N}))\oplus\mathbb{C}\) the exponential map \(\exp:\mathcal{A}(\mathbf{N})\to G\) associated with \(G\) is given by:_
\[\exp(v,t)=[v,t],\qquad\forall(v,t)\in\ker(J(\mathbf{N}))\oplus\mathbb{C}\,.\]
Proof.: If \(v\in\ker(J(\mathbf{N}))\) then:
\[\frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}v =\left(\sum_{n=1}^{\infty}\frac{1}{n!}(tJ(\mathbf{N}))^{n-1} \right)v\] \[=v+\sum_{n=2}^{\infty}\frac{1}{n!}t^{n-1}\big{(}(J(\mathbf{N}))^{ n-1}v\big{)}\] \[=v.\]
## 4. The Center of a Complex Almost Abelian Group
We recall the following standard fact from Lie group theory.
**Lemma 4.1**.: _Let \(\mathfrak{g}\) be an arbitrary Lie algebra, \(G\) be a connected matrix Lie group that has Lie algebra \(\mathfrak{g}\), and let \(\exp_{G}:\mathfrak{g}\to G\) be the corresponding exponential map (specific to \(G\)). Then \(\exp_{G}(Z(\mathfrak{g}))\subseteq Z(G)\)._
**Proposition 4.2**.: _Let \(G\) be a simply connected almost Abelian Lie group with Lie algebra \(\mathcal{A}(\mathbf{N})\). Recall Definition 2.3. The center of \(G\) is given by:_
\[Z(G) =\exp_{G}[Z(\mathcal{A}(\mathbf{N}))]\times T_{\mathbf{N}}\] \[=\exp_{G}[Z(\mathcal{A}(\mathbf{N}))\times T_{\mathbf{N}}]\] \[=\{(u,s)\in\mathbb{C}^{d}\rtimes\mathbb{C}\,\left|\,u\in\ker(J( \mathbf{N})),\ e^{sJ(\mathbf{N})}=\mathbb{1}\,\right.\}\]
_where \(\exp_{G}:\mathcal{A}(\mathbf{N})\to G\) is the associated exponential map with \(G\)._
_Also, the preimage under the exponential map (associated with \(G\)) of the identity component of the center is:_
\[\exp_{G}^{-1}[Z(G)_{0}]=Z({}^{a}\!\mathcal{A}(\mathbf{N})).\]
Proof.: We represent the standard matrix exponential that is a matrix series as \(e^{A}\) where \(A\) is understood to be a matrix. By Prop. 3.3 we may use the representation:
\[G\coloneqq\left\{\begin{pmatrix}1&0&0\\ v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}\right|\,(v,t)\in\mathbb{C}^{d}\oplus\mathbb{C}\right\}.\]
For simplicity, we represent an element of \(G\) with this matrix representation by a bracket-tuple as follows:
\[[v,t]\coloneqq\begin{pmatrix}1&0&0\\ v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix}.\]
Thus we may compactly represent the multiplication of group elements by:
\[[v,t][u,s]=[v+e^{tJ(\mathbf{N})}u,t+s]. \tag{10}\]
Now suppose \([u,s]\in Z(G)\), so \([v,t][u,s]=[u,s][v,t]\). Then by (10):
\[[u,s][v,t]=[u+e^{sJ(\mathbf{N})}v,s+t].\]
Thus the condition \([v,t][u,s]=[u,s][v,t]\) is equivalent to \(v+e^{tJ(\mathbf{N})}u=u+e^{sJ(\mathbf{N})}v\). We can then rewrite this latter expression as
\[(e^{sJ(\mathbf{N})}-\mathbb{1}\,)v=(e^{tJ(\mathbf{N})}-\mathbb{1}\,)u. \tag{11}\]
Setting \(v=0\) we have that \((e^{tJ(\mathbf{N})}-\mathbb{1})u=0\) and thus we must have \(J(\mathbf{N})u=0\), that is, we have \(u\in\ker(J(\mathbf{N}))\), as desired.
Since we are supposing \([u,s]\in Z(G)\), equation (11) must hold for all \(v\). So if \(u\in\ker(J(\mathbf{N}))\), then \((e^{sJ(\mathbf{N})}-\mathbb{1})v=0\) for all \(v\), which means that \(e^{sJ(\mathbf{N})}=\mathbb{1}\). Thus we have proven
\[Z(G)\subseteq\{[u,s]\in\mathbb{C}^{d}\rtimes\mathbb{C}\,\left|\,u\in\ker(J( \mathbf{N})),\ e^{sJ(\mathbf{N})}=\mathbb{1}\right.\}.\]
For notational convenience, define
\[X\coloneqq\{(u,s)\in\mathbb{C}^{d}\oplus\mathbb{C}=\mbox{${}^{a}$}\mathcal{A} (\mathbf{N})\,\left|\,u\in\ker(J(\mathbf{N}))=Z(\mathcal{A}(\mathbf{N})),\ e^{sJ(\mathbf{N})}= \mathbb{1}\},\]
where we recall (Remark 2 in [1]) that \(\ker(J(\mathbf{N}))=Z(\mathcal{A}(\mathbf{N}))\).
Now suppose \((u,s)\in X\). By the conditions of the set \(X\) on the last component of an element belonging to it, \(s\in T_{\mathbf{N}}\) by definition. Thus \(X=Z({}^{a}\!\mathcal{A}(\mathbf{N}))\times T_{\mathbf{N}}\). By Remark 3.5, \(\exp_{G}((v,t))=[v,t]\;\;\forall(v,t)\in Z({}^{a}\!\mathcal{A}(\mathbf{N}))\times T_{\mathbf{N}}\). Notice that \([v,t]=[v,0][0,t]\), thus

\[\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\times T_{\mathbf{N}}\right)=\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\right)\times T_{\mathbf{N}}.\]

Now by Lemma 4.1, we have \(\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\right)\subseteq Z(G)\). Then we calculate: let \([v,t]\in G,\;[u,s]\in\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\right)\times T_{\mathbf{N}}\). Then
\[[v,t][u,s] =[v+e^{tJ}u,t+s]\] \[=[v+\left(\sum_{n=0}^{\infty}\frac{1}{n!}(tJ)^{n}\right)u,t+s]\] \[=[v+\left(\sum_{n=0}^{\infty}\frac{1}{n!}(tJ)^{n}u\right),t+s]\] \[=[v+u,t+s]=[u+v,s+t]=[u,s][v,t].\]
Thus \(\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\right)\times T_{\mathbf{N}}\subseteq Z(G)\).
By Remark 3.5, we have that as sets
\[\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\right)=Z({}^{a}\!\mathcal{A}(\mathbf{N}))=\ker(J(\mathbf{N})).\]
Thus by bidirectional inclusion,
\[Z(G) =\{[u,s]\in\mathbb{C}^{d}\rtimes\mathbb{C}\;\big{|}\,u\in\ker(J( \mathbf{N})),\;e^{sJ(\mathbf{N})}=1\}.\]
Thus we have:
\[Z(G)=\{[u,s]\in\mathbb{C}^{d}\rtimes\mathbb{C}\;\big{|}\,u\in\ker(J(\mathbf{N})),\;e^{sJ(\mathbf{N})}=\mathbb{1}\}=\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\right)\times T_{\mathbf{N}}=\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\times T_{\mathbf{N}}\right).\]

Finally, if \(\exp(v,t)=[u,s]\in Z(G)_{0}\), then since we have \(Z(G)=\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\right)\times T_{\mathbf{N}}\), we have by connectedness that \(Z(G)_{0}=\exp_{G}\left(Z({}^{a}\!\mathcal{A}(\mathbf{N}))\right)\times\{0\}\). Hence \(v=u\) and \(t=s=0\). This concludes the proof.
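Proposition 4.2 can also be illustrated numerically; the sketch below (ours, for the concrete choice \(J(\mathbf{N})=\operatorname{diag}(0,i)\)) checks that elements of the asserted form commute with arbitrary group elements under the product rule (10).

```python
# A numerical check (our own) of Proposition 4.2 on a small example: for
# J = diag(0, i) we have ker(J) = C x {0} and T_N = 2*pi*Z, and an element
# [u, s] with u in ker(J) and s in T_N commutes with arbitrary group elements.
import numpy as np
from scipy.linalg import expm

Jmat = np.diag([0.0 + 0.0j, 1.0j])

def mult(a, b):
    """Group law [v, t][u, s] = [v + e^{tJ} u, t + s]."""
    (v, t), (u, s) = a, b
    return (v + expm(t * Jmat) @ u, t + s)

rng = np.random.default_rng(2)
center_elt = (np.array([1.5 - 0.5j, 0.0]), 2 * np.pi)   # u in ker(J), s in T_N

for _ in range(3):
    g = (rng.normal(size=2) + 1j * rng.normal(size=2),
         complex(rng.normal(), rng.normal()))
    left, right = mult(g, center_elt), mult(center_elt, g)
    print(np.allclose(left[0], right[0]) and np.isclose(left[1], right[1]))
```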
## 5. Discrete Normal Subgroups
We study discrete subgroups extensively because we will want to take quotients of simply connected complex almost Abelian groups in order to study connected complex almost Abelian groups.
The following Lemma is adapted from Lemma 11.3 in [10], which is stated and proven for the reals, but the same proof shows the result holds over \(\mathbb{C}\) as well.
**Lemma 5.1**.: _Let \(V\) be a finite-dimensional inner product space over \(\mathbb{C}\), viewed as a group under vector addition, and let \(\Gamma\) be a discrete subgroup of \(V\). Then there exist \(\mathbb{R}\)-linearly independent vectors \(v_{1},\ldots,v_{k}\) in \(V\) such that \(\Gamma\) is precisely the set of vectors of the form \(\sum_{i=1}^{k}m_{i}v_{i}\) with each \(m_{i}\in\mathbb{Z}\)._
Armed with the above Lemma, we now provide a bound on the rank of discrete normal subgroups of a simply connected almost Abelian group in terms of the data of \(J(\mathbf{N})\), which completely and uniquely determines the simply connected almost Abelian group.
**Proposition 5.2**.: _Every discrete normal subgroup \(N\subseteq G\) of a simply connected almost Abelian group \(G\) with Lie algebra \({}^{a}\!{\mathcal{A}}({\mathbf{N}})\) is a free group of rank_
\[k\leq\dim_{\mathbb{R}}\big{(}\ker(J({\mathbf{N}}))\big{)}+2,\]
_generated by \(\mathbb{R}\)-linearly independent elements \([v_{1},t_{1}],\ldots,[v_{k},t_{k}]\in Z(G)\subseteq G=\mathbb{C}^{d}\rtimes \mathbb{C}\)._
Proof.: It is known that any discrete normal subgroup is central. In Prop. 4.2 we proved that \(Z(G)=\exp\big{(}Z({}^{a}\!{\mathcal{A}}({\mathbf{N}}))\big{)}\times T_{{ \mathbf{N}}}\). Also, recall from the proof of Prop. 4.2 that for any \([v,t],[u,s]\in G\), we may express the product as:
\[[v,t][u,s]=[v+e^{tJ({\mathbf{N}})}u,t+s]. \tag{12}\]
Now by Prop. 4.2 if \([u,s]\in Z(G)\) then \(u\in\ker(J({\mathbf{N}}))\) implies \(J({\mathbf{N}})u=0\), which in turn implies \(e^{tJ({\mathbf{N}})}u=u\). Thus by (12), we have that for \([v,t],[u,s]\in Z(G)\),
\[[v,t][u,s]=[v+u,t+s]. \tag{13}\]
Thus if we define \(f:G\to\mathbb{C}^{d+1}\) to be the homeomorphism \(f([v,t])=(v,t)\), then \(f|_{Z(G)}\) is a group homomorphism as well by (13), and since we are working with matrix Lie groups, a Lie group homomorphism. Moreover, since \(f\) is a homeomorphism onto its image, it maps the discrete subgroup \(N\) to a discrete subgroup \(f(N)\) of \(\mathbb{C}^{d+1}\).
Thus by Lemma 5.1, \(f(N)\) is a free Abelian group generated by \(\mathbb{R}\)-linearly independent elements \(v_{1},\ldots,v_{k}\in\mathbb{C}^{d+1}\), and their span satisfies
\[\mathbb{C}\{v_{i}\}_{i=1}^{k}\subseteq\mathbb{C}\{f(Z(G))\},\]
which implies that
\[k\leq\dim_{\mathbb{R}}\big{(}\,\mathbb{C}\{f(Z(G))\}\big{)}. \tag{14}\]
What remains to be shown is that \(k\leq\dim_{\mathbb{R}}\big{(}\ker(J({\mathbf{N}}))\big{)}+2\), which we will show by proving

\[\dim_{\mathbb{R}}\big{(}\,\mathbb{C}\{f(Z(G))\}\big{)}\leq\dim_{\mathbb{R}}\big{(}\ker(J(\mathbf{N}))\big{)}+2.\tag{$*$}\]

Recall from Prop. 4.2 that \(Z(G)=\exp_{G}\big{(}Z({}^{a}\!{\mathcal{A}}({\mathbf{N}}))\big{)}\times T_{{\mathbf{N}}}\). Now \(\dim_{\mathbb{R}}(T_{{\mathbf{N}}})\leq 2\) implies that \((*)\) holds if and only if
\[\dim_{\mathbb{R}}\big{(}\,\mathbb{R}\{f(\exp_{G}[Z({}^{a}\!{\mathcal{A}}({ \mathbf{N}}))])\}\big{)}\leq\dim_{\mathbb{R}}(\ker(J({\mathbf{N}}))) \tag{15}\]
holds. Let \(\{w_{i}\}_{i=1}^{m}\) be a basis for \(\mathbb{C}\{(f\circ\exp_{G})(Z({}^{a}\!{\mathcal{A}}({\mathbf{N}})))\}\). Without loss of generality, we may suppose \(\{w_{i}\}_{i=1}^{m}\subseteq(f\circ\exp)(Z({}^{a}\!{\mathcal{A}}({\mathbf{N}})))\). Recall from Remark 3.5 that \(\exp_{G}(v,t)=[v,t]\) for all \((v,t)\in\ker(J({\mathbf{N}}))\oplus\mathbb{C}\supseteq Z({}^{a}\!{\mathcal{A} }({\mathbf{N}}))\). Then (15) follows from the fact that \(\exp_{G}((v,t))=[v,t]\) implies \((f\circ\exp_{G})(v,t)=(v,t)\).
## 6. Subgroups and Subalgebras
We classify connected subgroups, prove the nonexistence of compact connected subgroups of a simply connected group \(\widetilde{G}\), and study some relationships between subgroups of \(\widetilde{G}\) and quotients \(G\coloneqq\widetilde{G}/N\) of \(\widetilde{G}\) by discrete normal subgroups \(N\).
**Remark 6.1**.: _Let \(G\) be a simply connected almost Abelian Lie group with Lie Algebra \({}^{a}\!{\mathcal{A}}({\mathbf{N}})=\mathbb{C}^{d}\rtimes\mathbb{C}\). Then by Proposition 4 in [1] every Lie subalgebra \({\mathbf{L}}\subset{}^{a}\!{\mathcal{A}}({\mathbf{N}})\) takes one of the following two forms:_
1. \({\mathbf{L}}={\mathbf{W}}\subset\mathbb{C}^{d}\) _is an Abelian Lie subalgebra._
2. \(\mathbf{L}\) _is of the form_ \[\mathbf{L}=\left\{(w+tv_{0},t)\in\mathbb{C}^{d}\rtimes\mathbb{C}\left|w\in \mathbf{W},t\in\mathbb{C}\right\},\right.\] _where_ \(v_{0}\in\mathbb{C}^{d}\) _is fixed and_ \(\mathbf{W}\subset\mathbb{C}^{d}\) _is an_ \(\mathrm{ad}_{e_{0}}\)_-invariant vector subspace. Here_ \(\mathbf{L}\) _is Abelian if and only if_ \(\mathbf{W}\subset Z(\mathrm{\text{\textcteq}}(\mathbf{N}))\)_._
Recall that\({}^{1}\) to every Lie subalgebra \(\mathbf{L}\) of any Lie algebra there exists a unique connected Lie subgroup \(H_{\mathbf{L}}\) with Lie algebra \(\mathbf{L}\). We now find explicit forms for these connected Lie subgroups.
Footnote 1: Theorem 5.20 in [11]
**Proposition 6.2**.: _The connected Lie subgroup \(H_{\mathbf{L}}\subset G\) of the simply connected almost Abelian Lie group \(G\) with Lie algebra \(\mathbf{L}\) as in Remark 6.1 is given by either of the following two forms, accordingly:_
1. \[H_{\mathbf{L}}=\left\{[w,0]\in\mathbb{C}^{d}\rtimes\mathbb{C}\,\mid w\in \mathbf{W}\right\}=\exp(\mathbf{L}),\]
2. \[H_{\mathbf{L}}=\left\{\left[w+\frac{e^{tJ(\mathbf{N})}-\mathbbm{1}}{J( \mathbf{N})}v_{0},t\right]\in\mathbb{C}^{d}\rtimes\mathbb{C}\,\mid w\in \mathbf{W},\,\,\,t\in\mathbb{C}\right\}\cong\,\,\,\exp(\mathbf{W})\rtimes \mathbb{C}\,.\]
Proof.: That \(H_{\mathbf{L}}\) is indeed a Lie group can be checked via the faithful matrix representation in Prop. 3.3 together with the product rule (10) given in the proof of Prop. 4.2. To show closure under the group operation in Case 2, we observe that for any
\[\left[w+\frac{e^{tJ(\mathbf{N})}-\mathbbm{1}}{J(\mathbf{N})}v_{0},t\right], \left[u+\frac{e^{sJ(\mathbf{N})}-\mathbbm{1}}{J(\mathbf{N})}v_{0},s\right] \in H_{\mathbf{L}},\quad w,u\in W,\quad t,s\in\mathbb{C},\]
we have
\[\left[w+\frac{e^{tJ(\mathbf{N})}-\mathbbm{1}}{J(\mathbf{N})}v_{0},t\right]\left[u+\frac{e^{sJ(\mathbf{N})}-\mathbbm{1}}{J(\mathbf{N})}v_{0},s\right]\] \[=\left[(w+e^{tJ(\mathbf{N})}u)+\frac{e^{(t+s)J(\mathbf{N})}- \mathbbm{1}}{J(\mathbf{N})}v_{0},t+s\right].\]
For this to be in \(H_{\mathbf{L}}\) we need \(e^{tJ(\mathbf{N})}u\in\mathbf{W}\), which is guaranteed because \(\mathbf{W}\) is \(J(\mathbf{N})\)-invariant.
For Case 1, the exponential map as given in Lemma 3.4 gives the desired result directly. For Case 2, take some \((w_{0}+t_{0}v_{0},t_{0})\in\mathbf{L}\). Consider a path \(\gamma:(-1,1)\longrightarrow\mathbf{W}\oplus\mathbb{C}\) defined by
\[\gamma(\tau)=(w(\tau),t(\tau)),\]
where
\[(w(0),t(0))=(0,0),\qquad(w^{\prime}(0),t^{\prime}(0))=(w_{0},t_{0})\in \mathbf{W}\oplus\mathbb{C}\,.\]
Then we have,
\[\left.\frac{\mathrm{d}}{\mathrm{d}\tau}\left[w(\tau)+\frac{e^{t(\tau)J(\mathbf{N})}-\mathbbm{1}}{J(\mathbf{N})}v_{0},t(\tau)\right]\right|_{\tau=0}=(w_{0}+t_{0}v_{0},t_{0}).\]
Thus the Lie algebra of \(H_{\mathbf{L}}\) is \(\mathbf{L}\).
Next, consider the map \(\Phi:H_{\mathbf{L}}\to\exp(\mathbf{W})\rtimes\mathbb{C}\) given by
\[\Phi[v,t]=\left[-\frac{e^{tJ(\mathbf{N})}-\mathbbm{1}}{J(\mathbf{N})}v_{0}+v, t\right].\]
It can be easily shown that \(\Phi\) is a Lie group isomorphism and we provide the details for completeness. First, we check that \(\Phi\) is bijective by showing its inverse is given by:
\[\Phi^{-1}[v,t]=\left[\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}+v,t\right].\]
Indeed,
\[\Phi^{-1}\circ\Phi[v,t]=\left[\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}+\left(-\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}+v\right),t\right]=[v,t].\]
We next show \(\Phi\) is a Lie group homomorphism,
\[\Phi[v,t]\cdot\Phi[u,s] =\left[-\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}+v,t\right]\cdot\left[-\frac{e^{sJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}+u,s\right]\] \[=\left[-\frac{e^{(t+s)J(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}+(v+e^{tJ(\mathbf{N})}u),t+s\right]\] \[=\Phi[v+e^{tJ(\mathbf{N})}u,t+s]=\Phi([v,t]\cdot[u,s]).\]
Lastly note that for \(\left[w+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0},t\right]\in H _{\mathbf{L}}\),
\[\Phi\left[w+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0},t\right] =\left[-\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}+w+\frac{e^{tJ( \mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0},t\right]=[w,t],\]
which is indeed an element of \(\exp(\mathbf{W})\rtimes\mathbb{C}\), showing that \(\Phi\) maps \(H_{\mathbf{L}}\to\exp(\mathbf{W})\rtimes\mathbb{C}\) as desired.
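The closure computation in Case 2 rests on the identity \(\frac{e^{tJ}-\mathbb{1}}{J}+e^{tJ}\frac{e^{sJ}-\mathbb{1}}{J}=\frac{e^{(t+s)J}-\mathbb{1}}{J}\); the sketch below (ours) checks it numerically for a singular \(J\), where the quotient must be read as a power series.

```python
# A numerical spot check (our own) of the identity used for closure in Case 2:
#   (e^{tJ} - 1)/J + e^{tJ} (e^{sJ} - 1)/J = (e^{(t+s)J} - 1)/J,
# with (e^{tJ} - 1)/J read as the series sum_{n>=1} t^n J^{n-1}/n!, which is
# well defined even for singular J.
import math
import numpy as np
from scipy.linalg import expm

def series(t, J, terms=60):
    """sum_{n>=1} t^n J^{n-1} / n!  =  (e^{tJ} - 1) J^{-1} when J is invertible."""
    out = np.zeros_like(J)
    power = np.eye(J.shape[0], dtype=J.dtype)
    for n in range(1, terms + 1):
        out += (t ** n) * power / math.factorial(n)
        power = power @ J
    return out

rng = np.random.default_rng(3)
J = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
J[:, 0] = 0.0                      # make J singular, so the series form matters
t, s = 0.4 + 0.2j, -0.1 + 0.6j

lhs = series(t, J) + expm(t * J) @ series(s, J)
print(np.allclose(lhs, series(t + s, J)))   # True
```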
**Remark 6.3**.: _Prop. 6.2 implies that no connected subgroup \(H\) of a simply connected almost abelian group \(G\) is compact._
**Lemma 6.4**.: _Let \(\widetilde{G}\) be a simply connected almost Abelian Lie group and \(N\subseteq\widetilde{G}\) a normal subgroup. Let \(G\coloneqq\widetilde{G}/N\) be the resultant connected almost Abelian Lie group. Then every connected subgroup \(H\subseteq G\) is the projection \(H=\widetilde{H}/N\) of a unique connected Lie subgroup \(\widetilde{H}\subseteq\widetilde{G}\)._
Proof.: Since we have a simply connected almost abelian group, we may use the matrix representation given in Prop. 3.3, and thus our almost abelian group is a matrix Lie group, which is to say it is a closed subgroup of \(GL_{n}(\mathbb{C})\).
Let \(\mathbf{L}_{\widetilde{G}}\) be the Lie algebra of \(\widetilde{G}\), and let \(\mathbf{L}_{G}\) be the Lie algebra of \(G\). The quotient map \(q_{N}:\widetilde{G}\to G\) is a surjective complex Lie group homomorphism, and its derivative \(dq_{N}:\mathbf{L}_{\widetilde{G}}\to\mathbf{L}_{G}\) is a surjective Lie algebra homomorphism. The preimage \(dq_{N}^{-1}(\mathbf{L}_{H})\) of the Lie algebra \(\mathbf{L}_{H}\) of \(H\) is a Lie subalgebra of \(\mathbf{L}_{\widetilde{G}}\), and thus is the Lie algebra of the unique connected subgroup \(\widetilde{H}\leq\widetilde{G}\). The image \(q_{N}(\widetilde{H})\leq G\) is a connected subgroup with Lie algebra \(\mathbf{L}_{H}\), which by uniqueness must be \(q_{N}(\widetilde{H})=H\). Finally, if \(\widetilde{H}^{\prime}\leq\widetilde{G}\) is another connected subgroup with \(q_{N}(\widetilde{H}^{\prime})=H\) then \(\mathbf{L}_{\widetilde{H}^{\prime}}=\mathbf{L}_{H}\), so that again by uniqueness \(\widetilde{H}^{\prime}=\widetilde{H}\).
**Lemma 6.5**.: _Let \(G\) be a simply connected almost Abelian group, \(N\subseteq G\) a discrete normal subgroup and \(H\subseteq G\) a connected subgroup. Then there exists a subgroup \(B\subseteq N\) such that \(N=(N\cap H)\times B\)._
Proof.: We use Prop. 6.2 to write:
\[H\cong\begin{cases}\exp(\mathbf{W})\\ \exp(\mathbf{W})\times\mathbb{C}\\ \exp(\mathbf{W})\rtimes\mathbb{C}\end{cases} \tag{16}\]
where \(\mathbf{W}\subseteq\mathbb{C}^{d}\) is a vector subspace.
We will prove that \(N\cap H\) is a pure subgroup, and use Corollary 28.3 in [10] to reach our desired conclusion that \(N\cap H\) is a direct factor. Let \([v,t]\in N\) and \(n\in\mathbb{N}\) be such that \([v,t]^{n}=[nv,nt]\in N\cap H\). Then \(nv\in\mathbf{W}\) implies \(v\in\mathbf{W}\). Furthermore, if we are in the first case in (16), then \(nt=0\) implies \(t=0\), while in the second or third case in (16) there is no restriction on the \(t\)-coordinate. In either case we therefore have \([v,t]\in H\). Since \([v,t]\in N\) by assumption, we conclude \([v,t]\in N\cap H\). By Corollary 28.3 in [10], \(N\cap H\) is therefore a direct factor of \(N\).
**Proposition 6.6**.: _Let \(G\coloneqq\widetilde{G}/\Gamma\) be a connected almost Abelian Lie group (where \(\widetilde{G}\) is the simply connected universal cover). Then \(G\) is never compact._
Proof.: Let \({}^{a}\!\mathcal{A}(\mathbf{N})\) be the Lie algebra of \(G\) and \(\widetilde{G}\), and let \({}^{a}\!\mathcal{A}(\mathbf{N})=\mathbb{C}^{d}\oplus\mathbb{C}\) (where \(\mathbb{C}^{d}\) is an Abelian subalgebra). Recall from [1] that for any almost Abelian Lie algebra
\[\ker(\mathrm{ad}_{e_{0}})=Z({}^{a}\!\mathcal{A}(\mathbf{N})),\]
and so in particular
\[\dim_{\mathbb{C}}(\ker(\mathrm{ad}_{e_{0}}))=\dim_{\mathbb{C}}(\ker(J( \mathbf{N})))=\dim_{\mathbb{C}}(Z({}^{a}\!\mathcal{A}(\mathbf{N}))).\]
Assume for contradiction \(\dim_{\mathbb{C}}(\ker(J(\mathbf{N})))=d\) (note that we in general cannot have \(\dim_{\mathbb{C}}(\ker(J(\mathbf{N})))=d+1\), as that would force the algebra to be Abelian). First, notice that if \(\mathbf{V}\) is the orthogonal space to the dimension \(d\) central subspace \(\ker(J(\mathbf{N}))=Z({}^{a}\!\mathcal{A}(\mathbf{N}))\) of \({}^{a}\!\mathcal{A}(\mathbf{N})\), then \(\mathbf{V}\) is Abelian because it has dimension \(1\). Now any element \(W\in{}^{a}\!\mathcal{A}(\mathbf{N})\) can be linearly decomposed as \(W=W_{1}+W_{2}\), such that \(W_{1}\in\mathbf{V}\) and \(W_{2}\in Z({}^{a}\!\mathcal{A}(\mathbf{N}))\). By linearity of the Lie bracket, we then have that for all \(\alpha V\in\mathbf{V},\ [\alpha V,W]=\alpha[V,W_{1}]+\alpha[V,W_{2}]=0+0=0\). Thus \(\mathbf{V}\subseteq Z({}^{a}\!\mathcal{A}(\mathbf{N}))\), which implies \(Z({}^{a}\!\mathcal{A}(\mathbf{N}))={}^{a}\!\mathcal{A}(\mathbf{N})\), and thus \({}^{a}\!\mathcal{A}(\mathbf{N})\) is Abelian, a contradiction (in fact, \(\mathbf{V}\subseteq Z({}^{a}\!\mathcal{A}(\mathbf{N}))\) is already a contradiction).
So there exists \(X\in\mathbb{C}^{d}\) such that \(X\notin\ker(\mathrm{ad}_{e_{0}})=Z({}^{a}\!\mathcal{A}(\mathbf{N}))\). Consider the one parameter subgroup \(H_{X}=\{\exp_{\widetilde{G}}(\tau X)\,|\,\tau\in\mathbb{C}\}\). By Prop. 5.2, we know that \(Z(\widetilde{G})=\exp_{\widetilde{G}}(Z({}^{a}\!\mathcal{A}(\mathbf{N})))\times T _{\mathbf{N}}\). By construction, \(\exp_{\widetilde{G}}(X)\notin\exp_{\widetilde{G}}(Z({}^{a}\!\mathcal{A}( \mathbf{N})))\).
Now assume for contradiction there exists \(\tau\in\mathbb{C}^{\times}\) such that \(\exp_{\widetilde{G}}(\tau X)\) is an element of \(\exp_{\widetilde{G}}(Z({}^{a}\!\mathcal{A}(\mathbf{N})))\). Then \(\tau X\in Z({}^{a}\!\mathcal{A}(\mathbf{N}))\) implies \([\tau X,Y]=0\ \forall\,Y\in{}^{a}\!\mathcal{A}(\mathbf{N})\), which implies \([X,Y]=0\ \forall\,Y\in{}^{a}\!\mathcal{A}(\mathbf{N})\) and thus \(X\in Z({}^{a}\!\mathcal{A}(\mathbf{N}))\), a contradiction. Thus \(\exp_{\widetilde{G}}(\tau X)\notin\exp_{\widetilde{G}}(Z({}^{a}\!\mathcal{A}( \mathbf{N})))\ \forall\,\tau\in\mathbb{C}^{\times}\). Thus \(H_{X}\cap Z(G)=\{1\}\), and thus \(H_{X}\cap\Gamma=\{1\}\) because all discrete normal subgroups of a Lie group are central.
Now consider the quotient map
\[\pi|_{H_{X}}:H_{X}\to\widetilde{G}/\Gamma\]
which, when not restricted, is a surjective complex Lie group homomorphism.
Now due to our result that \(H_{X}\) intersects \(\Gamma\) only at the identity of \(\widetilde{G}\), we have that \(H_{X}\cong H_{X}/(H_{X}\cap\Gamma)\). Simultaneously, we know that \(\ker(\pi|_{H_{X}})=H_{X}\cap\Gamma\). Thus, \(\pi(H_{X})\cong H_{X}\).
Consider the algebra representation for \({}^{a}\!\mathcal{A}(\mathbf{N})\) as:
\[{}^{a}\!\mathcal{A}(\mathbf{N})=\left\{\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}\middle|(v,t)\in\mathbb{C}^{d}\times\mathbb{C}\right\}\]
From Prop. 3.3, we know that the matrix exponential takes this representation of the Lie algebra to the simply connected Lie group that has \({}^{a}\!\mathcal{A}(\mathbf{N})\) as its Lie algebra. By Lemma 3.4,
we see that, specifically, the exponential of an element of \({}^{a}\!\mathcal{A}(\mathbf{N})\) under this representation is:
\[\exp_{\widetilde{G}}\begin{pmatrix}0&0&0\\ v&tJ(\mathbf{N})&0\\ t&0&0\end{pmatrix}=\begin{pmatrix}1&0&0\\ \frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}v&e^{tJ(\mathbf{N})}&0\\ t&0&1\end{pmatrix} \tag{17}\]
Now recall we defined \(X\) to be an element of \({\mathbb{C}}^{d}\). So plugging in \((\tau X,0)\) for \((v,t)\) into (17), we get
\[\exp_{\widetilde{G}}\begin{pmatrix}0&0&0\\ \tau X&0&0\\ 0&0&0\end{pmatrix}=\begin{pmatrix}1&0&0\\ \tau X&1&0\\ 0&0&1\end{pmatrix}\]
From this, it is apparent that \(H_{X}\) is closed (contains all its limit points) and path connected (and thus connected).
Since any connected subgroup of \(\widetilde{G}\) is not compact by Remark 6.3, \(H_{X}\) is not compact. Then the image of \(H_{X}\) under \(\pi\) is not compact, and therefore \(G\) contains a noncompact Lie subgroup. If \(G\) were compact, then any closed subgroup would be compact as well. However, we showed that \(H_{X}\) and thus\({}^{2}\) \(\pi(H_{X})\) is a closed non-compact Lie subgroup, so \(G\) cannot be compact.
Footnote 2: Since a quotient map is an open map.
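Equation (17) can also be spot-checked numerically. The sketch below is an illustration only: a randomly chosen matrix stands in for \(J(\mathbf{N})\), the values of \(v\) and \(t\) are arbitrary, and the closed-form block \(\frac{e^{tJ}-1}{tJ}\) is computed by matrix inversion, which is valid whenever \(tJ\) is invertible.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 3
J = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))   # stand-in for J(N)
v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
t = 0.4 - 0.3j

# Lie-algebra element in the faithful (d+2) x (d+2) representation.
X = np.zeros((d + 2, d + 2), dtype=complex)
X[1:d + 1, 0] = v
X[1:d + 1, 1:d + 1] = t * J
X[d + 1, 0] = t

# Right-hand side of Eq. (17); (e^{tJ}-1)/(tJ) computed by inversion (tJ invertible here).
F = (expm(t * J) - np.eye(d)) @ np.linalg.inv(t * J)
G = np.eye(d + 2, dtype=complex)
G[1:d + 1, 0] = F @ v
G[1:d + 1, 1:d + 1] = expm(t * J)
G[d + 1, 0] = t

print(np.allclose(expm(X), G))   # True
```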
Prop. 6.6 establishes that connected almost Abelian Lie groups cannot be compact, but it says nothing about the compactness of subgroups of connected almost Abelian Lie groups. Our next result gives a necessary and sufficient condition for the compactness of a subgroup of a connected almost Abelian Lie group.
**Proposition 6.7**.: _Let \(G\) be a connected almost Abelian group and \(H\subseteq G\) a connected Lie subgroup. Let \(\widetilde{G}\) be the simply connected almost Abelian group such that \(G=\widetilde{G}/\Gamma\) (where \(\Gamma\) is a discrete normal subgroup), and let \(\widetilde{H}\subseteq\widetilde{G}\) be the connected Lie subgroup such that \(H=\widetilde{H}/\Gamma\). Then \(\operatorname{rank}\left(\Gamma\cap\widetilde{H}\right)=\dim_{\mathbb{R}}(\widetilde{H})=\dim_{\mathbb{R}}(H)\) if and only if \(H\) is compact._
Proof.: First we note that since \(\widetilde{H}\) is the universal covering group, the equality \(\dim_{\mathbb{R}}(H)=\dim_{\mathbb{R}}(\widetilde{H})\) holds regardless.
By Lemma 6.5, we know there exists \(B\subseteq\Gamma\) such that \(\Gamma=(\Gamma\cap\widetilde{H})\times B\). Define \(\Gamma^{\prime}\coloneqq\Gamma\cap\widetilde{H}\). Since \(B\cap\widetilde{H}=\{0\}\) by construction, we may write:
\[H=\widetilde{H}/\Gamma=\widetilde{H}/(\Gamma^{\prime}\times B)=\widetilde{H} /\Gamma^{\prime}\]
Thus we have obtained a discrete subgroup \(\Gamma^{\prime}\) contained in \(\widetilde{H}\) yielding the same (isomorphic) quotient as when viewing \(\widetilde{H}\) as a subgroup of \(\widetilde{G}\) and taking the quotient by \(\Gamma\).
Since \(H=\widetilde{H}/\Gamma^{\prime}\) and \(\widetilde{H}\) is connected, \(H\) is connected. By Prop. 4 in [1], all subalgebras of an almost Abelian Lie algebra are either Abelian or almost Abelian. Since there is a one-to-one correspondence between connected subgroups and Lie subalgebras, \(H\) is either almost Abelian (by definition) or Abelian: having an Abelian Lie algebra implies that the connected component of the identity is Abelian for real Lie groups, and complex Lie groups are in particular real Lie groups. We analyze the two cases:
**Case 1:** First we will consider the case where \(H\) is almost Abelian.
For the forward implication, assume for contradiction that \(\operatorname{rank}\left(\Gamma^{\prime}\right)=\dim_{\mathbb{R}}(\widetilde{H})=\dim_{\mathbb{R}}(H)\) (i.e., we will show that the implication is vacuous in this case). Now \(\Gamma^{\prime}\subseteq Z(\widetilde{H})\cong\mathbb{C}^{\ell}\cong\mathbb{R}^{2\ell}\) for some \(\ell\in\mathbb{N}\), \(\ell<d\). Let \(k=2\ell+2=\dim_{\mathbb{R}}(\widetilde{H})\). Thus there exists a minimal generating set \(\{[v_{1},t_{1}],\ldots,[v_{k},t_{k}]\}\) of \(\Gamma^{\prime}\).
By Lemma 2.1, we have that either \(T_{\mathbf{N}}\cong\{0\}\) or \(T_{\mathbf{N}}\cong\mathbb{Z}\). In either case, \(\operatorname{rank}T_{\mathbf{N}}\leq 1\), and thus there exists some \([u_{1},t_{0}]\in\Gamma^{\prime}\) with \(t_{0}\in\mathbb{C}\) minimal in magnitude such that \(t_{j}=n_{j}t_{0},\ 1\leq j\leq k,\ \{n_{j}\}_{j=1}^{k}\subseteq\mathbb{N}\). Then by Lemma 7 in [1], or equivalently by row operations, there is a generating set \(\{[u_{1},t_{0}],[u_{2},0],\ldots,[u_{k},0]\}\subseteq\Gamma^{\prime}\). By presupposition, \(\{[v_{j},t_{j}]\}_{j=1}^{k}\) was a minimal generating set for \(\Gamma^{\prime}\). Now note that \(\big{|}\{u_{j}\}_{j=2}^{k}\big{|}=2\ell+1>2\ell\), while \(\{[u_{j},0]\}_{j=2}^{k}\subseteq\mathbb{R}^{2\ell}\). Thus the set \(\{u_{j}\}_{j=2}^{k}\) must be \(\mathbb{R}\)-linearly dependent, so we may find a minimal generating set of \(\Gamma^{\prime}\) of cardinality \(<k\), a contradiction to \(\operatorname{rank}\Gamma^{\prime}=\dim_{\mathbb{R}}(\widetilde{H})=k\). Thus it is impossible that \(\operatorname{rank}(\Gamma^{\prime})=\dim_{\mathbb{R}}(\widetilde{H})\), and this direction of the implication holds vacuously.
Now for the reverse implication. Since \(H\) is almost Abelian, we have that \(H\) is a connected almost Abelian Lie group, and therefore by Prop. 6.6, \(H\) cannot be compact, and thus the reverse implication is vacuous as well, and so we are done with this case.
**Case 2:** We proceed to check the Abelian case. Note that \(H\) is Abelian if and only if \(\widetilde{H}\) is Abelian because they have the same Lie algebra. If \(\widetilde{H}\) is Abelian, then \(\widetilde{H}\cong\mathbb{C}^{n}\) by Prop. 6.2. By assumption, \(\Gamma^{\prime}\) is a discrete subgroup, so it is generated by \(k\leq 2n\)\(\mathbb{R}\)-linearly independent elements. Therefore there exists an isomorphism \(\varphi:\Gamma^{\prime}\to\mathbb{Z}^{k}\). So we have:
\[H\cong\widetilde{H}/\Gamma^{\prime}\cong\mathbb{C}^{n}\,/\varphi(\Gamma^{ \prime})\cong\mathbb{C}^{n}\,/\,\mathbb{Z}^{k}\cong\mathbb{T}^{(2\lfloor\frac {k}{2}\rfloor)}\times(\mathbb{C}\,/\,\mathbb{Z})^{\epsilon}\times\mathbb{C}^{\eta} \tag{18}\]
where
\[\epsilon=\begin{cases}0&k\equiv 0\mod 2\\ 1&k\equiv 1\mod 2\end{cases}\qquad\eta=\begin{cases}n-\frac{k}{2}&k\equiv 0\mod 2 \\ n-\lfloor\frac{k}{2}\rfloor-1&k\equiv 1\mod 2.\end{cases}\]
For the forward direction, assume \(\operatorname{rank}\left(\Gamma^{\prime}\right)=\dim_{\mathbb{R}}(\widetilde{H})\). Then \(k=2n\), and by (18) we have \(H\cong\mathbb{T}^{2\lfloor\frac{k}{2}\rfloor}=\mathbb{T}^{2n}\), which is compact.
Now for the reverse implication, assume instead that \(H\) is compact. Then it is apparent from (18) that the only way for this to occur is for \(\epsilon=\eta=0\) and so \(n-\frac{k}{2}=0\), implying \(k=2n\) and \(\operatorname{rank}\left(\Gamma^{\prime}\right)=\dim_{\mathbb{R}}(\widetilde{H})\), as desired.
While the necessary and sufficient condition described by Prop. 6.7 appears unwieldy, the condition can be a powerful technical tool in the proofs of more concrete results, such as Prop. 8.3 later on.
## 7. Discrete Subgroups
To study discrete subgroups, it is useful to study their images under a certain map which we will call the _projection homomorphism_. In the next lemma, we give the definition and check that the map is indeed a well-defined homomorphism.
**Lemma 7.1**.: _Let \(G\) be an \(n\)-dimensional simply connected almost Abelian Lie group with Lie algebra \(\mathcal{A}(\mathbf{N})\). Recalling that \(G=\mathbb{C}^{d}\rtimes\mathbb{C}=:N\rtimes H\), define \(P:G\to\mathbb{C}\) by_
\[P([v,t])\coloneqq t.\]
\(P\) _is a group homomorphism._
Proof.: Observe that
\[[v,0][0,t]=[v+e^{0}0,t]=[v,t].\]
Thus, we may represent \([v,t]\) as \(nh\) for some \(n\in N\) and \(h\in H\). Note that by the definition of the semidirect product, \(N\unlhd\widetilde{G}\). Then for \(n_{1},n_{2}\in N\) and \(h_{1},h_{2}\in H\), we have
\[P[n_{1}h_{1}n_{2}h_{2}] =P[n_{1}(h_{1}n_{2}h_{1}^{-1})h_{1}h_{2}]\] \[=P[n_{3}h_{1}h_{2}]\] \[=\pi_{2}(h_{1}h_{2})\] \[=P[n_{1}h_{1}]P[n_{2}h_{2}].\]
where \(\pi_{2}\) is the projection onto the second factor.
The utility of this projection homomorphism is shown in the next result. In particular, we show that the finite generation of \(D\) is equivalent to that of its image under the projection homomorphism.
**Proposition 7.2**.: _Let \(G=\mathbb{C}^{d}\rtimes\mathbb{C}=:N\rtimes H\) be a simply connected almost Abelian group, and \(D\subseteq G\) a discrete subgroup. Then \(D\) is finitely generated if and only if \(P(D)\subseteq H\) is finitely generated._
Proof.: We identify \(N\) with an internal subgroup of \(G\).
In one direction, if \(D\) is finitely generated, then so is \(P(D)\) because \(P\) is a homomorphism by Lemma 7.1.
Conversely, suppose \(P(D)\subseteq H\) is finitely generated. So there exists \(k\in\mathbb{N}\) and \(\{\alpha_{i}\}_{i=1}^{k}\subseteq D\) such that \(\mathbb{Z}\{P(\alpha_{i}):1\leq i\leq k\}=P(D)\). Let \(x\in D\) be arbitrary. Then we know \(P(x)=P(\alpha)\) for some \(\alpha\in\mathbb{Z}\{\alpha_{i}\}_{i=1}^{k}\). Then \(x\alpha^{-1}\in\ker(P)=N\). Therefore \(x\alpha^{-1}\in D\cap N\) so,
\[D=(D\cap N)(\mathbb{Z}\{\alpha_{i}\}_{i=1}^{k}) \tag{19}\]
Now \(N=\mathbb{C}^{d}\cong\mathbb{R}^{2d}\), hence \(D\cap N\) is a discrete additive subgroup of \(\mathbb{R}^{2d}\), so it is finitely generated. Thus by (19) we know \(D\) is also finitely generated.
When the projection of a discrete subgroup is discrete, it is finitely generated, so the discrete subgroup itself must be finitely generated by Prop. 7.2. The following lemma tells us what can be ascertained if the projection of a discrete subgroup fails to be discrete.
**Lemma 7.3**.: _Let \(G=N\rtimes H=\mathbb{C}^{d}\rtimes\mathbb{C}\) be a simply connected almost Abelian group, and let \(D\subseteq G\) be a discrete subgroup. If \(P(D)\) is not discrete, then \(N\cap D\subseteq(Z(G))_{0}\)._
Proof.: We identify \(N\) and \(H\) with internal subgroups of \(G\).
Note that \([v,t][u,s]=[v+e^{tJ(\mathbf{N})}u,t+s]\). Thus, if we can show that \(J(\mathbf{N})\) vanishes on the first components of the elements of \(N\cap D\), we will have shown \(N\cap D\subseteq Z(G)\).
Recall that
\[\operatorname{ad}_{X}=\frac{\operatorname{d}}{\operatorname{d}\tau} \operatorname{Ad}_{e^{\tau X}}\bigg{|}_{\tau=0}. \tag{20}\]
In (20), let \(X=e_{0}\) (where \(H=\exp_{G}(\mathbb{C}\,e_{0})\)). So we have
\[\operatorname{ad}_{e_{0}}=\frac{\operatorname{d}}{\operatorname{d}\tau} \operatorname{Ad}_{e^{\tau e_{0}}}\bigg{|}_{\tau=0}. \tag{21}\]
Also recall from Remark 3.5 that \(\exp_{G}|_{\ker(J(\mathbf{N}))\times\mathbb{C}}((v,t))=[v,t]\). Thus we may rewrite (21) as:
\[\operatorname{ad}_{e_{0}}=\frac{\operatorname{d}}{\operatorname{d}\tau} \operatorname{Ad}_{\tau}\bigg{|}_{\tau=0}. \tag{22}\]
using the fact that the matrix exponential and the exponential associated with our Lie group coincide when we use the Lie algebra representation associated with \(G\), as in Prop. 3.3.
We note that
\[\exp_{G}((v,0))=I+\sum_{n=1}^{\infty}\frac{1}{n!}\begin{pmatrix}0&0&0\\ v&0&0\\ 0&0&0\end{pmatrix}^{n}=\begin{pmatrix}1&0&0\\ v&1&0\\ 0&0&1\end{pmatrix}=[v,0]. \tag{23}\]
Thus for all \([n,0]\in N\), we have that there exists \((n,0)\in\operatorname{Lie}(\widetilde{G})\eqqcolon\mathfrak{g}\) such that \(\exp_{\widetilde{G}}((n,0))=[n,0]\). So we may identify \(D\cap N\) with a subset \(\mathfrak{d}\coloneqq\{(d,0):[d,0]\in D\cap N\}\subseteq\mathfrak{g}\). Note that \(\mathfrak{d}\) is also discrete.
Recall that the continuous action of a connected group on a discrete set is trivial. Now, \(H\) is a connected group, and we can consider the action of conjugation by elements of \(H\) on the discrete set \(\mathfrak{d}\). Denote this action by \(C:H\times\mathfrak{d}\to\mathfrak{d}\). Conjugation is a continuous action, thus the action \(C\) must be trivial. That is, \(C_{h}\equiv\operatorname{id}_{\mathfrak{d}}\) for every \(h\in H\). In particular, the action \(C|_{P(D)\times\mathfrak{d}}\) is trivial.
Because we are working in a Hausdorff space, the limit in (22) can be computed using any sequence \((t_{n})_{n=1}^{\infty}\) with \(t_{n}\to 0\). Assume \(P(D)\) is not discrete. Then there is a sequence of \(t_{n}\in P(D)\) such that \(t_{n}\to 0\). Now \(\operatorname{Ad}_{\tau}\) is just conjugation by \(\tau\in H\), which we know to be trivial when acting on \(\mathfrak{d}\). Since there exists our sequence \((t_{n})_{n=1}^{\infty}\) that is contained in \(P(D)\) by construction, we have that
\[\operatorname{ad}_{e_{0}}|_{\mathfrak{d}}=\frac{\operatorname{d}}{ \operatorname{d}\tau}\operatorname{Ad}_{\tau}|_{\mathfrak{d}}\bigg{|}_{ \tau=0}=\lim_{t_{n}\to 0}\frac{\operatorname{Ad}_{t_{n}}|_{\mathfrak{d}}- \operatorname{Ad}_{0}|_{\mathfrak{d}}}{t_{n}-0}=\lim_{t_{n}\to 0}\frac{I-I}{t_{n}}=0.\]
Because \(\operatorname{ad}_{e_{0}}|_{\mathfrak{d}}=J(\mathbf{N})|_{\mathfrak{d}}\), we have that \(J(\mathbf{N})|_{\mathfrak{d}}=0\). Since \([v,t][u,s]=[v+e^{tJ(\mathbf{N})}u,t+s]\;\;\forall[v,t],[u,s]\in G\), we can conclude that \(D\cap N\subseteq Z(G)\).
Now that we know \(\operatorname{ad}_{e_{0}}|_{\mathfrak{d}}=0\), we calculate that for all \((d,0)\in\mathfrak{d}\) and for arbitrary \((v,t)\in\mathfrak{g}\),
\[[(d,0),(v,t)] =[(d,0),(v,0)]+[(d,0),(0,t)]\] \[=0-t\operatorname{ad}_{e_{0}}((d,0))\] \[=0.\]
Thus \(\mathfrak{d}\subseteq Z(\mathcal{A}(\mathbf{N}))\). So\({}^{3}\), we have that \(\exp_{G}(\mathfrak{d})\subseteq Z(G)_{0}\). As \(\exp_{G}(\mathfrak{d})=D\cap N\) by construction, we conclude \(D\cap N\subseteq Z(G)_{0}\).
Footnote 3: E.g. by exercise 9.1 in [1]
**Remark 7.4**.: _For a finitely supported multiplicity function \(\mathbf{N}\), we define_
\[\mathfrak{k}\coloneqq\left\{t\in\mathbb{C}\;\Big{|}\;\ker\left(\frac{e^{tJ( \mathbf{N})}-1}{tJ(\mathbf{N})}\right)\neq\{0\}\right\}.\]
_Then,_
\[\mathfrak{k}=\left\{\frac{2\pi im}{x_{p}}\bigg{|}m\in\mathbb{Z},\ p\in \operatorname{supp}\left(\mathbf{N}\right)\cap\left(\mathbb{C}-\{0\}\right) \right\}. \tag{24}\]
Proof.: We abbreviate the set on the right-hand side of (24) as \(S\).
In one direction, let \(t\in\mathbb{C}\) be such that \(\ker\left(\frac{e^{tJ(\mathbf{N})}-1}{tJ(\mathbf{N})}\right)\neq\{0\}\). Then there is some \(p\in\operatorname{supp}(\mathbf{N})\) and block size \(n\) for which the corresponding block has nontrivial kernel, so by rank-nullity \(\det\left[\frac{e^{tJ(p,n)}-1}{tJ(p,n)}\right]=0\). Now, because \(\frac{e^{tJ(p,n)}-1}{tJ(p,n)}\) is an upper triangular matrix, \(\det\left[\frac{e^{tJ(p,n)}-1}{tJ(p,n)}\right]=\left(\frac{e^{tx_{p}}-1}{tx_{p}}\right)^{n}\), so \(\left(\frac{e^{tx_{p}}-1}{tx_{p}}\right)^{n}=0\) and hence \(\frac{e^{tx_{p}}-1}{tx_{p}}=0\). If \(t=0\), we can think of \(\frac{e^{tx_{p}}-1}{tx_{p}}\) as \(\lim_{z\to 0}\frac{e^{z}-1}{z}=1\), so \(\det\left[\frac{e^{tJ(p,n)}-1}{tJ(p,n)}\right]\neq 0\), a contradiction; hence \(t\neq 0\). Thus \(e^{tx_{p}}-1=0\), so \(e^{tx_{p}}=1\) and \(tx_{p}\in 2\pi i\,\mathbb{Z}\). As \(x_{p}\neq 0\), we can divide and obtain \(t\in\frac{2\pi i\,\mathbb{Z}}{x_{p}}\). Hence, \(\mathfrak{k}\subseteq S\).
In the other direction, let \(t\in S\). Then \(tx_{p}\in 2\pi i\,\mathbb{Z}\), so \(e^{tx_{p}}=1\) and thus \(e^{tx_{p}}-1=0\). This means \(\frac{e^{tx_{p}}-1}{tx_{p}}=0\), so \(\det\left[\frac{e^{tJ(p,n)}-1}{tJ(p,n)}\right]=\left(\frac{e^{tx_{p}}-1}{tx_{p}}\right)^{n}=0\). So by rank-nullity,
\[\ker\left(\frac{e^{tJ(p,n)}-1}{tJ(p,n)}\right)\neq\{0\}.\]
Note that
\[\mathfrak{k}=\bigcup_{p\in\operatorname{supp}(\mathbf{N})}\bigcup_{n=1}^{ \infty}\left\{t\in\mathbb{C}\Big{|}\ker\left(\frac{e^{tJ(p,n)}-1}{tJ(p,n)} \right)\neq\{0\}\right\}\]
Therefore, \(t\in\mathfrak{k}\). Hence, \(S\subseteq\mathfrak{k}\).
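Remark 7.4 can be illustrated numerically on a single Jordan block. In the sketch below the eigenvalue \(x_{p}\), the block size \(n\), and the sample values of \(t\) are hypothetical choices; the determinant of \(\frac{e^{tJ(p,n)}-1}{tJ(p,n)}\) vanishes (up to floating-point error) at points of the form \(t=2\pi im/x_{p}\) and is nonzero at generic \(t\).

```python
import numpy as np
from scipy.linalg import expm

def jordan_block(x, n):
    """n x n Jordan block with eigenvalue x, standing in for J(p, n)."""
    return x * np.eye(n, dtype=complex) + np.eye(n, k=1, dtype=complex)

def block_function(t, J):
    """(e^{tJ} - 1)/(tJ); tJ is invertible here since x_p != 0 and t != 0."""
    return (expm(t * J) - np.eye(J.shape[0])) @ np.linalg.inv(t * J)

x_p, n = 1.5 - 0.7j, 4                      # hypothetical eigenvalue and block size
J = jordan_block(x_p, n)

t_in = 2j * np.pi * 3 / x_p                 # of the form 2*pi*i*m/x_p (m = 3)
t_out = 0.8 + 0.1j                          # a generic value of t

print(abs(np.linalg.det(block_function(t_in, J))))    # ~ 0: nontrivial kernel, so t_in lies in k
print(abs(np.linalg.det(block_function(t_out, J))))   # clearly nonzero: trivial kernel
```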
We now come to one of our main results: every discrete subgroup of a simply connected almost Abelian group is finitely generated. This complements Lemma 7.3 and tells us that even when the projection of a discrete subgroup fails to be discrete, the subgroup itself is still finitely generated.
**Theorem 7.5**.: _Let \(G\) be a simply connected almost Abelian group. Every discrete subgroup \(D\subseteq G\) is finitely generated._
Proof.: If \(P(D)\subseteq\mathbb{C}\) is discrete then it is finitely generated, and so by Proposition 7.2, \(D\) is finitely generated and we are done. So assume \(P(D)\) is not discrete.
**Case 1:** Suppose \(P(D)\subseteq\mathfrak{k}\). Consider the group
\[H\coloneqq\left\langle\left\{\frac{2\pi i}{x_{p}}\,\middle|\,p\in \operatorname{supp}\left(\mathbf{N}\right)\cap\left(\mathbb{C}-\{0\}\right) \right\}\right\rangle.\]
Since \(\mathbf{N}\) is a finitely supported multiplicity function, \(|\{x_{p}\,:\,p\in\operatorname{supp}\left(\mathbf{N}\right)\cap\left(\mathbb{C}-\{0\}\right)\}|\in\mathbb{N}\). Thus \(H\) is finitely generated. Note that \(H\) is Abelian. By assumption and construction:
\[P(D)\subseteq\mathfrak{k}\subseteq H.\]
Since all subgroups of a finitely generated Abelian group are finitely generated, it follows that \(P(D)\) is finitely generated. By Prop. 7.2, we are done.
**Case 2:** Suppose \(P(D)\not\subseteq\mathfrak{k}\). Then there exists \([v_{0},t_{0}]\in D\) with \(t_{0}\in P(D)\cap\mathfrak{k}^{c}\). Since \(t_{0}\in\mathfrak{k}^{c}\), we have by definition that
\[\ker\left(\frac{e^{t_{0}J(\mathbf{N})}-1}{t_{0}J(\mathbf{N})}\right)=\{0\}. \tag{25}\]
Now if \(v\in\ker(J(\mathbf{N}))\), then of course \(v\in\ker(e^{t_{0}J(\mathbf{N})}-1)\). For the reverse inclusion, assume that \(v\in\ker(e^{t_{0}J(\mathbf{N})}-1)\) and that \(t_{0}\in\mathfrak{k}^{c}\). Then, recalling \(t_{0}\neq 0\) by construction,
\[\left(\sum_{n=1}^{\infty}\frac{1}{n!}t_{0}^{n}J(\mathbf{N})^{n}\right)v=0,\]
if and only if
\[J(\mathbf{N})\left(\sum_{n=1}^{\infty}\frac{1}{n!}t_{0}^{n}J(\mathbf{N})^{n-1}\right)v=0,\]
if and only if
\[t_{0}\left(\sum_{n=1}^{\infty}\frac{1}{n!}t_{0}^{n-1}J(\mathbf{N})^{n-1}\right)J(\mathbf{N})v=0,\]
and thus, by (25), \(v\in\ker(J(\mathbf{N}))\). Hence \(t_{0}\) also satisfies
\[\ker\left(e^{t_{0}J(\mathbf{N})}-\mathbb{1}\right)=\ker(J(\mathbf{N})). \tag{26}\]
Because of (25) and the rank-nullity theorem, we have that \(\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}\) is invertible, and we denote the inverse by \(\frac{J(\mathbf{N})}{e^{t_{0}J(\mathbf{N})}-1}\). Hence, \(\gamma=\frac{J(\mathbf{N})}{e^{t_{0}J(\mathbf{N})}-1}v_{0}\in\mathbb{C}^{d}\) is well-defined.
As shown in the proof of Prop. 5., we can select \(\Phi\in\operatorname{Aut}\left(G\right)\) given by
\[\Phi([v_{0},t_{0}])=\left[v_{0}-\frac{e^{t_{0}J(\mathbf{N})}-\mathbb{1}}{J( \mathbf{N})}\gamma,t_{0}\right]=[0,t_{0}].\]
By considering \(\Phi(D)\) instead of \(D\) we can assume without loss of generality that \([0,t_{0}]\in D\). Using the formula for matrix multiplication like that in Prop. 4.2, we get:
\[[v,t][u,s][v,t]^{-1}[u,s]^{-1}=\left[\left(e^{tJ(\mathbf{N})}-\mathbb{1}\right)u-\left(e^{sJ(\mathbf{N})}-\mathbb{1}\right)v,0\right],\]
for all \([v,t],[u,s]\in G\). It follows in particular that the commutator subgroup satisfies \([G,G]\subseteq\mathbb{C}^{d}\), and hence \([D,D]\subseteq\mathbb{C}^{d}\cap D\).
Consider the map \(\varphi_{[0,t_{0}]}:D\to\mathbb{C}^{d}\cap D\) given by:
\[\varphi_{[0,t_{0}]}([u,s])=[0,t_{0}][u,s][0,t_{0}]^{-1}[u,s]^{-1}=\left[\left(e^{t_{0}J(\mathbf{N})}-\mathbb{1}\right)u,0\right], \tag{27}\]
for all \([u,s]\in D\). Since \(P(D)\cong D/(\mathbb{C}^{d}\cap D)\) is non-discrete by assumption, Lemma 7.3 gives \(\mathbb{C}^{d}\cap D\subseteq Z(G)_{0}\), and together with the earlier observation that \([D,D]\subseteq\mathbb{C}^{d}\cap D\) we obtain \([D,D]\subseteq Z(G)_{0}\). So in particular, \(\varphi_{[0,t_{0}]}([u,s])\) commutes with all elements of \(G\). Hence we may calculate:
\[\varphi_{[0,t_{0}]}([v,t][u,s]) =[0,t_{0}][v,t][u,s][0,t_{0}]^{-1}([v,t][u,s])^{-1}\] \[=[0,t_{0}][v,t][u,s][0,t_{0}]^{-1}[u,s]^{-1}[v,t]^{-1}\] \[=[0,t_{0}][v,t][0,t_{0}]^{-1}\left([0,t_{0}][u,s][0,t_{0}]^{-1}[u,s]^{-1}\right)[v,t]^{-1}\] \[=[0,t_{0}][v,t][0,t_{0}]^{-1}\,\varphi_{[0,t_{0}]}([u,s])[v,t]^{-1}\] \[=\varphi_{[0,t_{0}]}([v,t])\varphi_{[0,t_{0}]}([u,s]),\]
and thus \(\varphi_{[0,t_{0}]}\) is a homomorphism. From (26) we have that
\[\ker(\varphi_{[0,t_{0}]})=\left\{[u,s]\in D\ \big{|}\ u\in\ker(J(\mathbf{N}))\right\}=(Z(G)_{0}\times\mathbb{C})\cap D.\]
From Prop. 4.2, we know that \(Z(G)_{0}\times\mathbb{C}\cong\mathbb{C}^{1+\dim(Z(G))}\), and thus \((Z(G)_{0}\times\mathbb{C})\cap D\) is finitely generated. Since both \(\ker(\varphi_{[0,t_{0}]})\) and \(\varphi_{[0,t_{0}]}(D)\) are finitely generated, the result follows from the same logic as in the proof of Prop. 7.2.
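The commutator identity used in the proof above can be verified numerically in the faithful representation. The following sketch is an illustration only: the matrix standing in for \(J(\mathbf{N})\) and the sample values of \(v,u,t,s\) are arbitrary, and the script simply compares \([v,t][u,s][v,t]^{-1}[u,s]^{-1}\) with \(\left[(e^{tJ(\mathbf{N})}-\mathbb{1})u-(e^{sJ(\mathbf{N})}-\mathbb{1})v,0\right]\).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
d = 4
J = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))   # stand-in for J(N)

def mult(x, y):
    """[v,t][u,s] = [v + e^{tJ}u, t+s]."""
    (v, t), (u, s) = x, y
    return (v + expm(t * J) @ u, t + s)

def inv(x):
    """[v,t]^{-1} = [-e^{-tJ}v, -t]."""
    v, t = x
    return (-expm(-t * J) @ v, -t)

v, u = rng.standard_normal(d) + 0j, rng.standard_normal(d) + 0j
t, s = 0.6 - 0.2j, -0.3 + 0.5j
x, y = (v, t), (u, s)

commutator = mult(mult(x, y), mult(inv(x), inv(y)))
expected = (expm(t * J) - np.eye(d)) @ u - (expm(s * J) - np.eye(d)) @ v
print(np.allclose(commutator[0], expected) and np.isclose(commutator[1], 0))   # True
```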
## 8. Homogeneous Spaces
In this section, we describe a characterization of the maximal compact subgroup of a connected almost Abelian group. Such a characterization is of interest, since the homotopy type of a Lie group is given by that of its maximal compact subgroup. We begin with a few technical lemmas.
**Lemma 8.1** (Covering Space of a Homogeneous Space).: _Let \(G\) be a simply connected almost Abelian Lie group, \(H\subseteq G\) be a closed subgroup with \(H_{0}\) as its identity component. Then \(G/H_{0}\) is the universal cover of \(G/H\)._
Proof.: By Prop. 1.94(b) in [11], the natural map of \(G/H_{0}\) onto \(G/H\) is a covering map. By Prop. 1.94(e) in [11], \(G/H_{0}\) is simply connected.
**Lemma 8.2**.: _The intersection of complex connected Lie subgroups of a simply connected complex almost Abelian group is again a complex connected Lie subgroup._
Proof.: Let \(H\) and \(H^{\prime}\) be connected subgroups of a complex almost Abelian group \(G\). By Prop. 6.2, we have that \(H\) and \(H^{\prime}\) are one of the two forms:
1. \[H=\left\{[w,0]\in\mathbb{C}^{d}\rtimes\mathbb{C}\ |\ w\in\mathbf{W}\right\},\]
2. \[H=\left\{\left[w+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0},t \right]\in\mathbb{C}^{d}\rtimes\mathbb{C}\ |\ w\in\mathbf{W},\ \ t\in\mathbb{C}\right\},\]
where \(\mathbf{W}\) is an \(\mathrm{ad}_{e_{0}}\)-invariant subspace, and \(v_{0}\in\mathbb{C}^{d}\) is an arbitrary fixed element. Thus the intersection \(H\cap H^{\prime}\) can be broken into three cases.
**Case 1:** Assume \(H\) and \(H^{\prime}\) are of type (i). Then their intersection is the intersection of subspaces of \(\mathbb{C}^{d}\), and thus is a subspace, which is of course connected.
**Case 2:** Assume \(H\) is of type (i) and \(H^{\prime}\) is of type (ii). Thus we may say
\[H \coloneqq\left\{[w,0]\ |\ w\in\mathbf{W}\right\},\] \[H^{\prime} \coloneqq\left\{\left[w^{\prime}+\frac{e^{tJ(\mathbf{N})}- \mathbb{1}}{J(\mathbf{N})}v^{\prime}_{0},t\right]\in\mathbb{C}^{d}\rtimes \mathbb{C}\ \bigg{|}\ w^{\prime}\in\mathbf{W}^{\prime},t\in\mathbb{C}\right\}.\]
But then any element \([w,t]\) of \(H\cap H^{\prime}\) is in particular an element of \(H\), and so \(t=0\). So the elements of \(H^{\prime}\) with \(t=0\) are of the form
\[\left[w^{\prime}+\frac{e^{0\cdot J(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v^{ \prime}_{0},0\right]=[w^{\prime},0],\]
and thus \(H\cap H^{\prime}\) is another subspace, and so again is connected.
**Case 3:** Assume \(H\) and \(H^{\prime}\) are both of form (ii). So we define them as:
\[H \coloneqq\left\{\left[w+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J( \mathbf{N})}v_{0},t\right]\in\mathbb{C}^{d}\rtimes\mathbb{C}\ \bigg{|}\ w\in\mathbf{W},t\in\mathbb{C}\right\},\] \[H^{\prime} \coloneqq\left\{\left[w^{\prime}+\frac{e^{tJ(\mathbf{N})}- \mathbb{1}}{J(\mathbf{N})}v^{\prime}_{0},t\right]\in\mathbb{C}^{d}\rtimes \mathbb{C}\ \bigg{|}\ w^{\prime}\in\mathbf{W}^{\prime},t\in\mathbb{C}\right\},\]
where \(v_{0},v^{\prime}_{0}\in\mathbb{C}^{d}\) are arbitrary fixed elements, and \(\mathbf{W},\mathbf{W}^{\prime}\) are \(\mathrm{ad}_{e_{0}}\)-invariant subspaces. Note that if the intersection \(H\cap H^{\prime}\) is empty then we are done, so we may assume the intersection is nonempty.
Observe that for \(w\in\mathbf{W}\), we have that \(\left[w+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0},t\right]\in H \cap H^{\prime}\) if and only if there exists \(w^{\prime}\in\mathbf{W}^{\prime}\) such that \(w+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}=w^{\prime}+\frac{e ^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v^{\prime}_{0}\) or \(w=w^{\prime}+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}(v^{\prime}_{ 0}-v_{0})\). Thus the first component of \(H\cap H^{\prime}\) consists of elements \(w+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}v_{0}\) such that \(w\in\mathbf{W}\cap\left(\mathbf{W}^{\prime}+\frac{e^{tJ(\mathbf{N})}-\mathbb{ 1}}{J(\mathbf{N})}(v^{\prime}_{0}-v_{0})\right)\). We have two subcases: either \(v_{0}=v^{\prime}_{0}\), or \(v_{0}\neq v^{\prime}_{0}\).
**Subcase 1:** Suppose \(v_{0}=v^{\prime}_{0}\). Then, \([w,t]\in H\cap H^{\prime}\) implies, as above,
\[w\in\mathbf{W}\cap\left(\mathbf{W}^{\prime}+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}(v^{\prime}_{0}-v_{0})\right)=\mathbf{W}\cap\left(\mathbf{W}^{\prime}+\frac{e^{tJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}\,\vec{0}\right)=\mathbf{W}\cap\mathbf{W}^{\prime}.\]
As in Case 1, we have an intersection of subspaces which we know to be connected.
**Subcase 2:** Suppose \(v_{0}\neq v_{0}^{\prime}\). We prove that \(H\cap H^{\prime}\) is path connected, and so in particular connected.
In order to prove path-connectedness, it is sufficient to prove that there exists a path between any two of the affine spaces
\[\mathbb{A}_{t}:=\left\{\left[w+\frac{e^{tJ(\mathbf{N})}-1}{J(\mathbf{N})}v_{0},t\right]\biggm{|}w\in\mathbf{W}\text{ s.t. }\exists w^{\prime}\in\mathbf{W}^{\prime}\text{ s.t. }w^{\prime}+\frac{e^{tJ(\mathbf{N})}-1}{J(\mathbf{N})}v_{0}^{\prime}=w+\frac{e^{tJ(\mathbf{N})}-1}{J(\mathbf{N})}v_{0}\right\}\]
Now consider \((H\cap H^{\prime})_{0}\). Being a connected subgroup of \(G\), we have that it is once again of the form (i) or (ii). If it is in the form (ii), we have that there are elements \([*,t]\in(H\cap H^{\prime})_{0}\) for \(|\mathbb{R}|\) distinct nonzero \(t\).
If it is instead of the form (i), there are two more options: either there are no \(t\neq 0\) coordinates, or there exists \(g\coloneqq\left[w+\frac{e^{rJ(\mathbf{N})}-1}{J(\mathbf{N})}v_{0},r\right]\in H\cap H^{\prime}\) such that \(g\notin(H\cap H^{\prime})_{0}\). However, since in particular \(g\in H\), there exists a neighborhood \(B_{\delta_{1}}(g)\subseteq H\) in the subspace topology on \(H\). Similarly, there exists a neighborhood \(B_{\delta_{2}}(g)\subseteq H^{\prime}\). Now by the assumptions on \(H\), we have that \(\left[w+\frac{e^{sJ(\mathbf{N})}-1}{J(\mathbf{N})}v_{0},s\right]\in H\) for all \(s\in\mathbb{C}\). Since the function \(\frac{e^{sJ(\mathbf{N})}-1}{J(\mathbf{N})}\) is continuous in \(s\), and the last component function is obviously continuous, we have that there is a neighborhood \(B_{\epsilon_{1}}(r)\subseteq\mathbb{C}\) such that \(\left[w+\frac{e^{sJ(\mathbf{N})}-1}{J(\mathbf{N})}v_{0},s\right]\in B_{\delta_{1}}(g)\) for all \(s\in B_{\epsilon_{1}}(r)\). It can be easily seen that an analogous statement holds for \(H^{\prime}\), with corresponding neighborhood \(B_{\epsilon_{2}}(r)\). Let \(\delta\coloneqq\min(\delta_{1},\delta_{2})\), and let \(\epsilon=\min(\epsilon_{1},\epsilon_{2})\). Then it is apparent that \(B_{\delta}(g)\subseteq H\cap H^{\prime}\) contains elements \([*,s]\) for all \(s\in B_{\epsilon}(r)\). Thus in this case as well there are \(|\mathbb{R}|\) elements with distinct \(t\)-coordinates contained in \(H\cap H^{\prime}\).
Suppose that \([w,t]\in H\cap H^{\prime}\) implies that \(t=0\). Then, we have
\[w\in\mathbf{W}\cap\left(\mathbf{W}^{\prime}+\frac{e^{0\cdot J(\mathbf{N})}-1}{J(\mathbf{N})}(v_{0}^{\prime}-v_{0})\right)=\mathbf{W}\cap\left(\mathbf{W}^{\prime}+0\cdot(v_{0}^{\prime}-v_{0})\right)=\mathbf{W}\cap\mathbf{W}^{\prime},\]
so this subcase reduces to Case 1 where we had intersecting subspaces of \(\mathbb{C}^{d}\), which is clearly connected.
Assume now that \(H\cap H^{\prime}\) contains \(|\mathbb{R}|\) elements with distinct \(t\)-values, which we showed must be the case if \(t\) is not always zero for all elements of \(H\cap H^{\prime}\). Then there are uncountably many points \([*,t]\in H\cap H^{\prime}\), while by Remark 7.4 there can only be a countable number of points \([*,t]\) such that \(\frac{e^{tJ(\mathbf{N})}-1}{J(\mathbf{N})}\) is not invertible. So choose some \(h\coloneqq\left[w_{0}+\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}v_{0},\ t_{0}\right]\in H\cap H^{\prime}\) such that \(\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}\) is invertible. Note that since \(h\in H\cap H^{\prime}\), we know that there exists \(w_{0}^{\prime}\in\mathbf{W}^{\prime}\) such that \(w_{0}=w_{0}^{\prime}+\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}(v_{0}^{\prime}-v_{0})\). Then consider the path
\[\gamma(s)\coloneqq\left[\left(\frac{e^{sJ(\mathbf{N})}-1}{J(\mathbf{N})}\right)\left(\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}\right)^{-1}\left(w_{0}^{\prime}+\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}(v_{0}^{\prime}-v_{0})\right)+\frac{e^{sJ(\mathbf{N})}-1}{J(\mathbf{N})}v_{0},\ s\right].\]
Clearly, \(\gamma\) is a continuous function. Now we note,
\[\left(\frac{e^{sJ(\mathbf{N})}-1}{J(\mathbf{N})}\right)\left(\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}\right)^{-1}\left(w_{0}^{\prime}+\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}(v_{0}^{\prime}-v_{0})\right)=\\ \underbrace{\left(\frac{e^{sJ(\mathbf{N})}-1}{J(\mathbf{N})}\right)\left(\frac{e^{t_{0}J(\mathbf{N})}-1}{J(\mathbf{N})}\right)^{-1}w_{0}^{\prime}}_{\in\mathbf{W}^{\prime}\text{ by }\mathrm{ad}_{e_{0}}\text{-invariance of }\mathbf{W}^{\prime}}+\frac{e^{sJ(\mathbf{N})}-1}{J(\mathbf{N})}(v_{0}^{\prime}-v_{0}).\]
So
\[\left(\frac{e^{sJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}\right)\left(\frac{e^{t_ {0}J(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}\right)^{-1}\left(w_{0}^{\prime}+ \frac{e^{t_{0}J(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}(v_{0}^{\prime}-v_{0}) \right)\in\mathbf{W}^{\prime}+\frac{e^{sJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N })}(v_{0}^{\prime}-v_{0}).\]
Simultaneously, we have that
\[\left(\frac{e^{sJ(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}\right)\left(\frac{e^{t_{0}J(\mathbf{N})}-\mathbb{1}}{J(\mathbf{N})}\right)^{-1}w_{0}\in\mathbf{W},\]
by the \(\operatorname{ad}_{e_{0}}\)-invariance of \(\mathbf{W}\). Thus the image of \(\gamma\) is in \(H\cap H^{\prime}\). In particular, \(\gamma\) connects \(h=\gamma(t_{0})\in\mathbb{A}_{t_{0}}\) to a point of \(\mathbb{A}_{s}\) for every \(s\), so any two of the affine spaces are connected by a path through \(h\), and \(H\cap H^{\prime}\) is path connected.
**Definition 8.1**.: Let \(X\) be a subset of a complex almost Abelian Lie group \(G\). We define \(\mathcal{C}(X)\) to be the minimal connected complex Lie subgroup containing \(X\) (defined as the intersection of all such connected complex Lie groups). Note that by Lemma 8.2, this is well-defined.
The main result is that the maximal compact subgroup of an almost Abelian Lie group \(G=\widetilde{G}/\Gamma\) is intimately related to \(\mathcal{C}(\Gamma)\).
**Proposition 8.3**.: _Let \(G=\widetilde{G}/\Gamma\) be a connected almost Abelian Lie group. The maximal compact subgroup \(K\subseteq G\) is given by \(K=\mathcal{C}(\Gamma)/\Gamma\)._
Proof.: Recall that all subgroups of \(G\) are either Abelian or almost Abelian. If a subgroup of \(G\) is almost Abelian, by Prop. 6.6, it cannot be compact. Hence, all compact subgroups of \(G\) are Abelian and \(\operatorname{Lie}\left(K\right)\) is an Abelian subalgebra of \(\operatorname{Lie}\left(G\right)\).
We claim that \(\operatorname{Lie}\left(K\right)\subseteq\mathbb{C}\{\log\Gamma\}\). Suppose to the contrary that this is not true. Then, there exists \(X\in\operatorname{Lie}\left(K\right)-\mathbb{C}\{\log\Gamma\}\). Since \(X\notin\mathbb{C}\{\log\Gamma\}\), there is no \(\tau\in\mathbb{C}^{*}\) and \(\gamma\in\Gamma-\{\mathbb{1}\}\) such that \(X=\frac{1}{\tau}\log\gamma\). That is, we cannot write \(\gamma=e^{\tau X}\) for any choice of \(\gamma\in\Gamma-\{\mathbb{1}\}\) and \(\tau\in\mathbb{C}^{*}\). It follows that \((H_{X}-\{\mathbb{1}\})\cap(\Gamma-\{\mathbb{1}\})=\emptyset\). Since subgroups intersect at least at the identity element, we must have \(H_{X}\cap\Gamma=\{\mathbb{1}\}\).
Now consider the quotient map \(q_{\Gamma}\colon\widetilde{G}\to G\). Since this is a covering map, it is continuous and open. By the definition of quotient topology, \(S\subseteq G\) is open if and only if \(q_{\Gamma}^{-1}(S)\subseteq\widetilde{G}\) is open. Since \(q_{\Gamma}^{-1}(S^{C})=q_{\Gamma}^{-1}(S)^{C}\), we immediately have that \(S\subseteq G\) is closed if and only if \(q_{\Gamma}^{-1}(S)\subseteq\widetilde{G}\) is closed. Now observe that \(q_{\Gamma}^{-1}(q_{\Gamma}(S))=S\cdot\Gamma\). If \(S\) is closed, then since \(\Gamma\) is also closed, we have that \(S\cdot\Gamma\) is closed. But by our observation above, \(S\cdot\Gamma=q_{\Gamma}^{-1}(q_{\Gamma}(S))\) is closed if and only if \(q_{\Gamma}(S)\) is closed. Consequently, \(q_{\Gamma}\) is also a closed map.
Since \(H_{X}\) is closed, \(q_{\Gamma}(H_{X})\subseteq G\) is also closed. However, by construction, \(q_{\Gamma}(H_{X})\subseteq K\) is not compact since \(q_{\Gamma}\) is continuous and \(H_{X}\) is not compact. Since closed subsets of compact spaces are themselves compact, \(K\) is not compact, a contradiction. Hence, \(\operatorname{Lie}\left(K\right)\subseteq\mathbb{C}\{\log\Gamma\}\).
By Prop. 6.7, the compactness of \(K\) implies that \(\dim K=\dim\widetilde{K}=\operatorname{rank}\left(\Gamma\cap\widetilde{K} \right)\leq\operatorname{rank}\Gamma\), where \(K=\widetilde{K}/\Gamma\). Observe that if \(\widetilde{K}=\mathcal{C}(\Gamma)\), we have by construction \(\widetilde{K}\cap\Gamma=\Gamma\), thus \(\dim K\) obtains the upper bound \(\operatorname{rank}\Gamma\). Moreover, since the Lie algebra of \(K\) is in the complex span of the logarithm of \(\Gamma\), we conclude that we must have \(\widetilde{K}=\mathcal{C}(\Gamma)\).
Hence, we have a construction of the maximal compact subgroup of a connected complex almost Abelian Lie group. The relative simplicity of this construction suggests that it may be much easier to probe the homotopy type of such Lie groups by instead studying the homotopy type of their maximal compact subgroups.
## Acknowledgments
This work was done as part of the University of California, Santa Barbara Mathematics Summer Research Program for Undergraduates and was supported by NSF REU Grant DMS 1850663. We are very grateful to both UCSB and the NSF for making this opportunity possible, and for the enriching, challenging, and fun experiences we had in the course of the program.
|
2301.13614 | Synchronized states in dissipatively coupled harmonic oscillator
networks | The question under which conditions oscillators with slightly different
frequencies synchronize appears in various settings. We show that
synchronization can be achieved even for harmonic oscillators that are
bilinearly coupled via a purely dissipative interaction. By appropriately tuned
gain/loss stable dynamics may be achieved where for the cases studied in this
work all oscillators are synchronized. These findings are interpreted using the
complex eigenvalues and eigenvectors of the non-Hermitian matrix describing the
dynamics of the system. | Juan N. Moreno, Christopher W. Wächtler, Alexander Eisfeld | 2023-01-30T18:14:03Z | http://arxiv.org/abs/2301.13614v1 | # Synchronized states in dissipatively coupled harmonic oscillator networks
###### Abstract
The question under which conditions oscillators with slightly different frequencies synchronize appears in various settings. We show that synchronization can be achieved even for _harmonic_ oscillators that are bilinearly coupled via a purely dissipative interaction. By appropriately tuned gain/loss, stable dynamics may be achieved where, for the cases studied in this work, all oscillators are synchronized. These findings are interpreted using the complex eigenvalues and eigenvectors of the non-Hermitian matrix describing the dynamics of the system.
## I Introduction
Synchronization is a fascinating phenomenon, which can be interpreted as a display of cooperative behavior appearing in many complex systems [1; 2]. Since the first observation by Huygens in the late 1600s [3], it has been studied in diverse communities, where it plays an important role in our understanding of, for example, electric networks in engineering, circadian rhythms in biology, pattern formation in statistical mechanics, and chemical reactions in chemistry [4; 5; 6]. By now, it is seen as a universal phenomenon that is important both in fundamental studies and in technical applications, ranging from laser networks [7] to phase-locked loops [8], Josephson junction arrays [9; 10], spin-torque resonators [11], and power grids [12]. Even today, the originally observed phenomenon of clock synchronization remains a crucial application for modern communication networks [13; 14].
Typically synchronization is viewed in terms of the adjustment of rhythms of autonomous oscillators, which attain stable periodic orbits without active regulation from the outside [15] and thus require nonlinearities in the governing equations of motion. Far less common is the investigation of synchronization in models that are linear in both the oscillators and the couplings. Without dissipation, coupled harmonic oscillators form collective eigenmodes, where the individual oscillators perform motion with a fixed phase relation. However, a system not initialized in an eigenmode usually stays in a superposition of several eigenmodes with different eigenfrequencies resulting in a beating pattern. Moreover, if the number of coupled oscillators is large, the system dynamics does not need to exhibit perfect revivals in general and synchronized motion is absent. Hence in a closed system of oscillators, only for an eigenmode as initial condition one obtains a time-independent phase relation between the oscillators. However if the system is not closed, but subject to gain and loss, the open system dynamics allow for a situation where all eigenmodes but one are damped. Then, synchronization is possible as long as the respective eigenstate is present in the initial state. However, in order to achieve a situation where all but one mode are damped, one needs to carefully balance gain and loss.
In contrast to a self-sustained system, where the nonlinearity counteracts the dissipation (or gain) in order to stabilize periodic orbits, a _single_ linear harmonic oscillator only exhibits the following dynamics in the absence of periodic driving: Either the dissipation exceeds the gain, such that the amplitude of the dissipative system shrinks and eventually reaches a single point in phase space, or, the other way around, the gain exceeds the dissipation and the oscillation amplitude grows without bound. In the special case where both are equal, the system is effectively described by closed system dynamics with infinitely many closed orbits in phase space, depending on the initial energy of the system. However, when couplings between linear oscillators are introduced, many more solutions are possible.
Here, we investigate a network of linear harmonic oscillators subject to gain and loss. Generally, one would consider each oscillator to be coupled to its own environment, together with direct couplings between two or more entities in the network. However, a purely dissipative coupling leads to intriguing phenomena also for self-sustained oscillators, such as oscillator death [1]. In our model of linear oscillators, it allows for the emergence of dissipation-free subspaces in parameter space. Within these subspaces we find periodic motion of all oscillators in the network, that is, starting from a (nearly) arbitrary initial state the system reaches a regime during time propagation in which all oscillators exhibit synchronized motion for a long time. At this point, let us specify the notion of synchronization we use throughout this work:
-- With 'long time' we mean times long compared to the inverse eigenfrequencies of the individual oscillators, and we focus on the case where all oscillators have small deviations from a common 'mean frequency'. In the ideal case they oscillate forever.
-- With 'synchronized' we mean that the oscillators have a fixed phase relation. Ideally we want that all oscillators have the same amplitude. If this is the case,
then we denote it as _full synchronization_. If the system is not in a fully synchronized state, we will characterize its _degree of synchronization_ by a suitable measure.
-- With 'arbitrary' initial state we mean that for most initial states synchronization is achieved, yet there exist some special initial conditions that do not lead to synchronization.
We note that, within the above definitions, uncoupled oscillators only synchronize when there is no gain or loss and all oscillators have the same frequency.
The remainder of the paper is organized as follows: In Sec. II.1 we summarize some general considerations of synchronization for linearly coupled harmonic oscillators important for our work, followed by the specific model under investigation in Sec. II.2. In the subsequent Sec. III we discuss our results, which includes the special case of two coupled oscillators in Sec. III.1 and the more general case of many oscillators in Sec. III.2. Finally, we conclude in Sec. IV.
## II Model and basic formalism
### General considerations of synchronization in linear oscillator models
To introduce the basic concepts and notation, we consider \(N\) harmonic oscillators in a network, each labeled by a subscript \(n=1,...,N\). The motional state of each oscillator is characterized by a time dependent complex amplitude \(a_{n}(t)=|a_{n}(t)|\mathrm{e}^{\mathrm{i}\phi_{n}(t)}\). If all oscillators in the network oscillate with a common real frequency \(\omega_{\mathrm{syn}}\) while their relative amplitudes remain constant, we will refer to it as synchronization. Using a vector notation \(\tilde{a}(t)=[a_{1}(t),...,a_{N}(t)]^{\intercal}\), such synchronized motion may be expressed as
\[\tilde{a}(t)=f(t)\tilde{a}_{\mathrm{syn}}\mathrm{e}^{-\mathrm{i}\omega_{ \mathrm{syn}}t}, \tag{1}\]
where \(f(t)\) is a real function that takes into account the possibility that the amplitudes decay (or grow) over time, which we will discuss in Sec. II.2 in more detail. In the case of \(f(t)=1\) the motion represents a periodic steady state, which we refer to as _ideal synchronized motion_.
The above notion is not sufficient to fully characterize synchronized motion since, for example, a single oscillating site in the network (with all other oscillators at rest) also fulfills Eq. (1). It is thus necessary to also quantify the _degree of synchronization_ of a vector \(\tilde{a}\), which we denote by \(\mathcal{S}(\tilde{a})\). To this end, we use the inverse participation ratio [16]
\[\mathcal{S}(\tilde{a})=\frac{1}{\sum_{n=1}^{N}|a_{n}|^{4}}, \tag{2}\]
which takes values between \(1\) and \(N\). Here, a value of \(\mathcal{S}=1\) corresponds to the aforementioned case of a single oscillator in motion, whereas a value of \(\mathcal{S}=N\) indicates _fully synchronized motion_, i.e. all nodes have the same amplitude (without phase). Values of \(\mathcal{S}=\tilde{N}<N\) correspond to _partial synchronization_ of approximately \(\tilde{N}\) oscillators. In Fig. 1, we illustrate different degrees of synchronization and their respective dynamics in a network of three oscillators.
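For concreteness, the measure of Eq. (2) can be computed as follows. This is a minimal sketch (the function name and sample amplitudes are our hypothetical choices), assuming the amplitude vector is normalized to \(\sum_{n}|a_{n}|^{2}=1\), as is the case for the eigenvectors normalized via Eq. (6).

```python
import numpy as np

def degree_of_synchronization(a):
    """Inverse participation ratio of Eq. (2); the amplitude vector is normalized
    so that sum_n |a_n|^2 = 1, as for the eigenvectors normalized via Eq. (6)."""
    a = np.asarray(a, dtype=complex)
    a = a / np.linalg.norm(a)
    return 1.0 / np.sum(np.abs(a) ** 4)

# The three situations sketched in Fig. 1 for N = 3:
print(degree_of_synchronization([1, 0, 0]))              # 1.0: a single oscillator in motion
print(degree_of_synchronization([1, 1, 0]))              # 2.0: partial synchronization
print(degree_of_synchronization([1, 1j, np.exp(0.5j)]))  # 3.0: equal amplitudes, full synchronization
```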
The time evolution of a linearly coupled network of harmonic oscillators in the presence of gain and loss is generally expressed as
\[\frac{d}{dt}\tilde{a}=-\mathrm{i}W\tilde{a}, \tag{3}\]
where we assume the non-Hermitian matrix \(W\) to be time-independent. Then, the state of the system at time \(t\) is simply given by
\[\tilde{a}(t)=e^{-\mathrm{i}Wt}\tilde{a}(0), \tag{4}\]
where \(\tilde{a}(0)\) denotes the initial state at time \(t=0\). Thus, the dynamics of the network is fully characterized by the matrix \(W\), in particular by its eigenvalues and eigenvectors. Since \(W\) is (in general) non-Hermitian, there exist right and left eigenvectors defined via
\[W\tilde{c}_{j}= w_{j}\tilde{c}_{j}\quad\mathrm{and}\quad\tilde{z}_{j}^{ \dagger}W=\tilde{z}_{j}^{\dagger}w_{j}. \tag{5}\]
Figure 1: Illustration of potentially attainable synchronized motion in a network of \(N=3\) oscillators. The inverse participation ratio \(S(\tilde{a})\) increases from top to bottom in accordance with the transition from partially to fully synchronized motion.
Here, \(\dagger\) indicates the complex conjugated and transpose, and the eigenvectors are normalized according to
\[\tilde{c}_{j}^{\dagger}\tilde{c}_{j}=1\quad\text{and}\quad\tilde{z}_{j^{\prime}}^ {\dagger}\tilde{c}_{j}=\delta_{j^{\prime}j}. \tag{6}\]
Note that in general \(\tilde{c}_{j}^{\dagger}\neq\tilde{z}_{j}^{\dagger}\). The matrix \(W\) can now be expressed as \(W=\sum_{j}w_{j}\tilde{c}_{j}\tilde{z}_{j}^{\dagger}\), such that the time evolution of Eq. (4) is conveniently given by
\[\tilde{a}(t)=\sum_{j}\tilde{c}_{j}e^{-\mathrm{i}w_{j}t}\,\tilde{z}_{j}^{ \dagger}\tilde{a}(0), \tag{7}\]
where \(\tilde{z}_{j}^{\dagger}\tilde{a}(0)\) is the initial weight of the eigenstate \(j\). While the real part of the complex eigenvalue \(w_{j}\) determines the oscillation frequency of eigenmode \(j\), the imaginary part \(\mathrm{Im}[w_{j}]\) determines whether the oscillatory motion is damped (\(\mathrm{Im}[w_{j}]<0\)), growing (\(\mathrm{Im}[w_{j}]>0\)) or oscillates forever (\(\mathrm{Im}[w_{j}]=0\)).
In order to obtain a time evolution of the form of Eq. (1) with \(f(t)=1\) after some initial transient time, i.e. dynamically reach the eigenstate with \(\mathrm{Im}[w_{\mathrm{sync}}]=0\), the initial state needs to have non-vanishing overlap with the synchronized eigenstate \([\tilde{z}_{\mathrm{sync}}^{\dagger}a(0)\neq 0]\). Furthermore, all other eigenstates present in the initial state need to have \(\mathrm{Im}[w_{j}]<0\), such that they are damped. In the following, we will therefore search for conditions and parameters under which _one_ eigenstate fulfills \(\mathrm{Im}[w_{\mathrm{sync}}]=0\) while all other eigenstates fulfill \(\mathrm{Im}[w_{j}]<0\). Subsequently, we will characterize the degree of synchronization of the resulting state in terms of \(\mathcal{S}\); cf. Eq. (2).
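A minimal numerical sketch of the decomposition in Eq. (7) is given below. It uses SciPy's left and right eigenvectors and rescales the left eigenvectors so that \(\tilde{z}_{j}^{\dagger}\tilde{c}_{j}=1\), as required by Eq. (6); the random matrix \(W\), the initial state, and the function name are hypothetical, and the comparison with Eq. (4) assumes \(W\) is diagonalizable.

```python
import numpy as np
from scipy.linalg import eig, expm

def evolve(W, a0, t):
    """Propagate a(t) = sum_j c_j e^{-i w_j t} (z_j^dag a(0)) as in Eq. (7), with the
    left eigenvectors rescaled so that z_j^dag c_j = 1."""
    w, vl, vr = eig(W, left=True, right=True)       # vl[:, j]^H W = w[j] vl[:, j]^H
    a_t = np.zeros(len(a0), dtype=complex)
    for j in range(len(w)):
        c = vr[:, j]
        z = vl[:, j] / np.vdot(vl[:, j], c).conj()  # enforce z^dag c = 1
        a_t += c * np.exp(-1j * w[j] * t) * np.vdot(z, a0)
    return a_t

# Consistency check against Eq. (4) for a random (diagonalizable) W.
rng = np.random.default_rng(3)
W = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
a0 = rng.standard_normal(4) + 0j
print(np.allclose(evolve(W, a0, 1.3), expm(-1j * W * 1.3) @ a0))   # True
```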
### Linear oscillators with purely dissipative coupling
After the general considerations of the previous Sec. II.1, let us now specify the network of interest throughout the remainder of this work: The individual oscillators have frequencies \(\Omega_{n}\in\mathbb{R}\) and are arranged on a ring. Each oscillator is subject to gain/loss mediated via the rate \(\gamma\in\mathbb{R}\) and interacts with its two nearest neighbors via a purely dissipative coupling \(v\in\mathbb{R}\). For simplicity we assume that the coupling and dissipation is equal for all oscillators; we are interested in the possibility of synchronization when the frequency of each oscillator is different, which corresponds to the notion of synchronization as an adjustment of rhythms due to the presence of interactions. The equation of motion of the \(n\)-th oscillator is then given by
\[\frac{d}{dt}a_{n}=(-\mathrm{i}\Omega_{n}-\gamma)a_{n}-v(a_{n+1}+a_{n-1}), \tag{8}\]
with \(a_{0}\equiv a_{N}\) and \(a_{N+1}\equiv a_{1}\) to fulfill periodic boundary conditions. Note that positive values of \(\gamma\) represent loss whereas negative values correspond to gain. To simplify notation we express all energies in units of \(v\) and take \(v\) to be positive (the case of negative \(v\) will be discussed later), i.e. \(\omega_{n}=\Omega_{n}/v\), \(g=\gamma/v\) and \(\tau=tv\). Furthermore, we parameterize the frequencies as \(\omega_{n}=\bar{\omega}+\Delta_{n}\). Then, Eq. (8) becomes
\[\frac{d}{d\tau}a_{n}=[-\mathrm{i}(\bar{\omega}+\Delta_{n})-g]a_{n}-(a_{n+1}+a_ {n-1}). \tag{9}\]
Our goal in the following is to determine the values of \(g\) for a given set of frequency differences \(\Delta_{n}\), such that the oscillators perform synchronized motion in the sense discussed in Sec. II.1.
As the term \((-\mathrm{i}\bar{\omega}-g)\) is independent of the oscillator index \(n\), it only trivially contributes to the overall dynamics; specifically oscillations with frequency \(\bar{\omega}\) and damping/growing with rate \(g\). In matrix representation, Eq. (9) can be written in the form of Eq. (3) with \(t\to\tau\) and \(W=(\bar{\omega}-\mathrm{i}g)\mathbb{I}+M\), where
\[M=\begin{pmatrix}\Delta_{1}&-\mathrm{i}&0&\ldots&-\mathrm{i}\\ -\mathrm{i}&\Delta_{2}&-\mathrm{i}&\ldots&0\\ 0&-\mathrm{i}&\Delta_{3}&\ddots&\vdots\\ \vdots&&\ddots&\ddots&-\mathrm{i}\\ -\mathrm{i}&0&\ldots&-\mathrm{i}&\Delta_{N}\end{pmatrix}. \tag{10}\]
Note that the (left and right) eigenvectors of \(W\) and \(M\) are identical and their eigenvalues are simply shifted, i.e., if \(M\tilde{c}_{j}=\lambda_{j}\tilde{c}_{j}\) then \(W\tilde{c}_{j}=w_{j}\tilde{c}_{j}\) with
\[w_{j}=\bar{\omega}+\mathrm{Re}[\lambda_{j}]+\mathrm{i}(-g+\mathrm{Im}[\lambda_ {j}]),\quad v>0. \tag{11}\]
Moreover, as \(M\) only depends on \(\Delta_{n}\), the eigenvectors and thus the degree of synchronization \(\mathcal{S}(\tilde{c})\) are independent of \(g\).
Let us summarize the general conditions of the previous Sec. II.1 for synchronized motion tailored to the specifics of our system discussed here:
1. There exists a single eigenstate \(\tilde{c}_{\mathrm{sync}}\) of \(W\) with purely real eigenvalue. This corresponds to a state \(\tilde{c}_{\mathrm{sync}}\) that fulfills \(-g+\mathrm{Im}[\lambda_{\mathrm{sync}}]=0\), where \(M\tilde{c}_{\mathrm{sync}}=\lambda_{\mathrm{sync}}\tilde{c}_{\mathrm{sync}}\).
2. All other eigenstates of \(W\) have negative imaginary part for the set of parameters determined in (i). That corresponds to \(-g+\mathrm{Im}[\lambda_{j}]<0\) for all \(j\neq\mathrm{sync}\).
3. The synchronization measure \(\mathcal{S}(\tilde{c}_{\mathrm{sync}})\) should be as large as possible. Ideally \(\mathcal{S}(\tilde{c}_{\mathrm{sync}})=N\).
So far, we have taken \(v\) to be positive. For negative values of \(v\) we define the scaled energies in terms of \(-v\) such that \(\omega_{n}=\bar{\omega}+\Delta_{n}=-\Omega_{n}/v\), \(g=-\gamma/v\), and \(\tau=-tv\). Then, Eq. (9) becomes
\[\frac{d}{d\tau}a_{n}=[-\mathrm{i}(\bar{\omega}+\Delta_{n})-g]a_{n}+(a_{n+1}+a_ {n-1}), \tag{12}\]
where the first term remains identical while the sign changes in front of the oscillator couplings. As a result, the eigenvalues of \(W\) [cf. Eqs. (10) and (11)] are given by
\[w_{j}=\bar{\omega}+\mathrm{Re}[\lambda_{j}]+\mathrm{i}(-g-\mathrm{Im}[\lambda_ {j}]),\quad v<0. \tag{13}\]
Here, the real part of the eigenvalues (as well as the corresponding eigenstates and thus the measure \(\mathcal{S}\)) remains unchanged, while the imaginary part simply changes its sign. Thus, eigenstates that are decaying for \(v>0\), are growing for \(v<0\) and vice versa.
## III Results
In the following we first discuss the case of \(N=2\) in Sec. III.1, which provides a clear picture of the basic mechanism underlying synchronization of linear oscillators interacting via dissipative couplings. Subsequently in Sec. III.2, we consider a ring of \(N>2\) oscillators and show that also in this case synchronized motion may be achieved and follows similar arguments as before.
### Two coupled oscillators (\(N=2\))
Without loss of generality, we may choose the scaled frequency differences of the two oscillators to be \(\Delta_{1}=+\Delta\) and \(\Delta_{2}=-\Delta\), such that matrix \(M\) governing the dynamics [cf. Eq. (10)] is given by
\[M=\begin{pmatrix}\Delta&-\mathrm{i}\\ -\mathrm{i}&-\Delta\end{pmatrix} \tag{14}\]
Here, we have chosen \(v>0\). However, from the discussion in Sec. II.2 we know that a negative value of \(v\) simply results in a change of sign of the imaginary part of the eigenvalues. The two eigenvalues and corresponding right eigenvectors of \(M\) are given by
\[\lambda_{\pm} =\pm\sqrt{\Delta^{2}-1} \tag{15}\] \[\vec{c}_{\pm} =\frac{1}{\sqrt{1+|\Delta\pm\sqrt{\Delta^{2}-1}|^{2}}}\begin{pmatrix} \mathrm{i}(\Delta\pm\sqrt{\Delta^{2}-1})\\ 1\end{pmatrix} \tag{16}\]
If \(|\Delta|<1\) (\(|\Delta|>1\)) the eigenvalues \(\lambda_{\pm}\) are both purely imaginary (real) and non-degenerate. In contrast, for \(\Delta=\pm 1\) not only are the eigenvalues degenerate but also the corresponding eigenvectors coalesce, i.e., these values of \(\Delta\) correspond to exceptional points. The impact of exceptional points on synchronization goes beyond the scope of the present work and we will focus in the following on the cases \(|\Delta|>1\) and \(|\Delta|<1\).
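A quick numerical cross-check of Eqs. (15) and (16), and of the eigenvector coalescence at the exceptional points \(\Delta=\pm 1\), can be sketched as follows (our illustration, not part of the original analysis):

```python
import numpy as np

def eig_two(delta):
    """Eigen-decomposition of the 2x2 matrix M of Eq. (14)."""
    return np.linalg.eig(np.array([[delta, -1j], [-1j, -delta]]))

# Numerical eigenvalues versus the closed form of Eq. (15).
for delta in (0.5, 2.0):
    lam, _ = eig_two(delta)
    lam_exact = np.array([1, -1]) * np.lib.scimath.sqrt(delta**2 - 1)
    print(delta, np.sort_complex(lam), np.sort_complex(lam_exact))

# Approaching the exceptional point: the normalized eigenvectors coalesce,
# i.e. their overlap |<c_+|c_->| tends to one as |Delta| -> 1.
for delta in (0.5, 0.9, 0.99, 0.999):
    _, C = eig_two(delta)
    print(f"Delta = {delta}: overlap = {abs(np.vdot(C[:, 0], C[:, 1])):.4f}")
```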
Overview: As discussed in Sec. II.2, the eigenenergies \(w_{\pm}=\bar{\omega}+\mathrm{Re}[\lambda_{\pm}]+\mathrm{i}(-g+\mathrm{Im}[\lambda_{\pm}])\) describe the overall possibility of long lasting synchronized motion in terms of oscillation frequency and damping, while \(\mathcal{S}\) quantifies the degree of synchronization. Let us start by considering the imaginary part of the eigenenergies \(w_{\pm}\) given by \(\mathrm{Im}[w_{\pm}]=-g+\mathrm{Im}[\lambda_{\pm}]\), which determines the (exponential) damping or growth. In Figs. 2(a) and (b) we show \(\mathrm{Im}(w_{-})\) and \(\mathrm{Im}(w_{+})\), respectively, as a function of the frequency difference \(\Delta\) and the dissipation strength \(g\). Note that \(\Delta\) as well as \(g\) can take on positive and negative values. The red areas in Fig. 2(a) and (b) indicate positive values corresponding to amplitude growth, whereas the blue areas indicate negative values and thus amplitude damping. The two regions are separated by a white region, where amplitudes neither increase nor decrease. We discuss this region, which is the most relevant one for dissipation free synchronization, in more detail below.
Figure 2: Top row: Density plots of the imaginary part \(\mathrm{Im}(w_{\pm})\) as a function of the frequency difference \(\Delta\) and the dissipation strength \(g\): (a) \(w_{-}\) and (b) \(w_{+}\). Dissipation-free synchronization is found along the white line. Middle row: Corresponding real part (c) \(\mathrm{Re}(w_{-})\) and (d) \(\mathrm{Re}(w_{+})\) as a function of \(\Delta\), which corresponds to the oscillation frequency of the respective eigenvector. Last row: Degree of synchronization \(\mathcal{S}\) as a function of \(\Delta\) of the eigenvector (e) \(\vec{c}_{-}\) and (f) \(\vec{c}_{+}\). The largest value is found for \(|\Delta|<1\), corresponding to fully synchronized motion.

As expected from the discussion above, quite different behavior of \(\mathrm{Im}[w_{\pm}]\) is observed depending on whether \(|\Delta|>1\) or \(|\Delta|<1\). Similarly, a pronounced difference is found in the behavior of the real part \(\mathrm{Re}[w_{\pm}]=\bar{\omega}+\mathrm{Re}[\lambda_{\pm}]\), which describes the oscillation frequency of the eigenmodes and is shown in Fig. 2(c) and (d). For \(|\Delta|<1\) the frequency remains unchanged and both eigenstates oscillate with the mean frequency \(\bar{\omega}\). However, for \(|\Delta|>1\) the frequency of the \(-\) state [cf. Fig. 2(c)] decreases, while that of the \(+\) state [cf. Fig. 2(d)] increases; both follow the functional form of a square root with opposite sign, cf. Eq. (15). Lastly, in Fig. 2(e) and (f) we show the degree of synchronization \(\mathcal{S}\) as a function of \(\Delta\), which is given by [cf. Eq. (16)]
\[\mathcal{S}(\tilde{c}_{\pm},\Delta)=\left\{\begin{array}{ll}2&,\,|\Delta|<1\\ 2\frac{\Delta^{2}}{2\Delta^{2}-1}&,\,|\Delta|>1\end{array}\right.. \tag{17}\]
As expected, the maximum value lies within the range of \(|\Delta|<1\) and rapidly decreases as \(|\Delta|\) increases, indicating the absence of synchronization. After this broad overview we will in the following discuss in more detail the potential of synchronized motion in the system of \(N=2\) oscillators, focusing on the three criteria (i)-(iii) formulated in Sec. II.2.
Detailed discussion of the regime \(|\Delta|>1\): In this case, the eigenvalues \(\lambda_{\pm}\) become purely real [cf. Eq. (15)], such that the eigenenergies take the simple form \(w_{\pm}=(\bar{\omega}\pm\sqrt{\Delta^{2}-1})-\mathrm{i}g\). Most importantly, the imaginary part is solely given by \(-g\) for both states and is independent of \(\Delta\), which can also be seen in Figs. 2(a) and (b). Thus, both eigenstates show the same dynamical response to dissipation, i.e., either both are dissipation free (\(g=0\)) or the amplitudes decay/increase with the same rate given by \(-g\). Although there exists a dissipation free subspace for \(g=0\), and thus requirement (i) is fulfilled, requirement (ii) cannot be fulfilled simultaneously. The reason is that both states have different oscillation frequencies \(\bar{\omega}\pm\sqrt{\Delta^{2}-1}\) and none of them is decaying, resulting in a beating pattern. We show an example of such a time evolution of the real amplitudes \(\mathrm{Re}(a_{n})\) governed by Eq. (9) in Fig. 3(a) for \(\Delta=1.1\) and \(g=0\).
Detailed discussion of the regime \(|\Delta|<1\): After we have ruled out the possibility of synchronization [according to our conditions (i)-(iii)] in the previous regime, we now discuss the case of \(|\Delta|<1\), where dissipation free synchronized motion is indeed possible. For \(|\Delta|<1\) the eigenvalues \(\lambda_{\pm}\) are purely imaginary [cf. Eq. (15)] and dissipation free states are determined by \(0=-g\pm\sqrt{|1-\Delta^{2}|}\), such that condition (i) may be fulfilled. In contrast to the previous case, we need to differentiate between the two states: Dissipation vanishes for the \(+\) state if \(g=g_{+}\equiv\sqrt{|1-\Delta^{2}|}\), and for the \(-\) state if \(g=g_{-}\equiv-\sqrt{|1-\Delta^{2}|}\). Each of these solutions describes a half circle with radius one, cf. Figs. 2(a) and (b).
We now examine whether condition (ii) is also fulfilled in this regime. When the \(-\) state is dissipation free, the amplitude of the \(+\) state is growing exponentially as \(\mathrm{Im}[w_{+}(g_{-})]=-g_{-}+\sqrt{1-\Delta^{2}}=2\sqrt{1-\Delta^{2}}>0\). This is also verified by Fig. 2: Along the white region in panel (a) within the regime \(|\Delta|<1\), the area in panel (b) is red. In contrast, along the white region in panel (b), the area in panel (a) is blue, i.e. while the \(+\) state is dissipation free, the \(-\) state is damped. Specifically, \(\mathrm{Im}[w_{-}(g_{+})]=-g_{+}-\sqrt{1-\Delta^{2}}=-2\sqrt{1-\Delta^{2}}<0\). Thus, synchronized motion for \(|\Delta|<1\) is found whenever the condition \(g=\sqrt{1-\Delta^{2}}\) is fulfilled. Moreover, this state has a degree of synchronization of \(\mathcal{S}=2\) and is therefore fully synchronized for all \(|\Delta|<1\).
In Fig. 3(b) we show the dynamics for the parameters \(\Delta=0.6\) and \(g=0.8\) when starting in the initial state \(\bar{a}(0)=(1,0)^{\top}\). As discussed previously, we expect to find synchronized motion for these parameters. Indeed, after a short transient time of \(\tau\gtrsim 2\) a stationary oscillatory motion emerges where both oscillators have the same amplitude. Note the phase shift between the two oscillators, which may be understood as follows: Considering the \(+\) state \(\tilde{c}_{+}\) [cf. Eq. (16)], the long time dynamics is given by \(\bar{a}_{\mathrm{sync}}(t)=\tilde{c}_{+}\exp[-\mathrm{i}\omega_{+}t]\); cf. Eq. (7). Then,
\[\mathrm{Re}[\bar{a}_{\mathrm{sync}}(t)]=\mathcal{N}\begin{pmatrix}\cos(\omega _{+}t+\phi)\\ \cos(\omega_{+}t)\end{pmatrix}, \tag{18}\]
where the phase difference \(\phi\) fulfills \(\tan(\phi)=-\sqrt{1-\Delta^{2}}/\Delta\) and \(\mathcal{N}=(1+|\Delta+\sqrt{\Delta^{2}-1}|^{2})^{-1/2}\) is the normalization constant from Eq. (16).
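For concreteness, the synchronized dynamics of Fig. 3(b) can be reproduced by propagating Eq. (9) through the eigendecomposition of \(W\). The following minimal sketch (ours, not the authors' code) uses the parameters quoted in the text and checks that, at late times, only one eigenmode survives and both oscillators share the same amplitude:

```python
import numpy as np

# Parameters of Fig. 3(b): Delta = 0.6, g = 0.8 = sqrt(1 - Delta^2),
# mean frequency omega_bar = 10, v > 0, initial state a(0) = (1, 0)^T.
delta, g, omega_bar = 0.6, 0.8, 10.0
M = np.array([[delta, -1j], [-1j, -delta]])
W = (omega_bar - 1j * g) * np.eye(2) + M

w, C = np.linalg.eig(W)                    # eigenvalues w_j, right eigenvectors
coeff = np.linalg.solve(C, [1.0, 0.0])     # expansion coefficients of a(0)

taus = np.linspace(0.0, 10.0, 2001)
a = (np.exp(-1j * np.outer(taus, w)) * coeff) @ C.T   # a_n(tau), shape (T, 2)

print("Im[w_j] =", np.round(w.imag, 6))    # one mode is dissipation free, one damped
late = np.abs(a[taus > 5.0])
print("late-time amplitude ratio:", round(late[:, 0].max() / late[:, 1].max(), 3))
```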
### Many coupled oscillators on a ring
In this section, we generalize our results from the previous Sec. III.1 for the case of two coupled oscillators to large numbers of oscillators arranged on a ring. Also for the case of \(N\) oscillators, the dynamics is governed by Eqs. (9)-(11). In the following we will first discuss the case of equal frequencies of all oscillators. Afterwards, we discuss the more relevant case of frequency differences.
Figure 3: Examples of different dissipation free dynamics found for the case of \(N=2\) oscillators. We plot the real amplitude \(Re(a_{n}(\tau))\) of the first oscillator in red (\(n=1\)) and the second one in blue (\(n=2\)). (a) For \(\Delta=1.1\) and \(g=0\), the presence of two oscillation frequencies within the dissipation free subspace leads to beating. (b) For \(\Delta=0.6\) and \(g=0.8\), only a single eigenstate with its respective oscillation frequency is dissipation free, while the other is damped, leading to a periodic steady state of both oscillators, i.e., synchronization. Parameters: \(\bar{\omega}=10\), \(\bar{a}(0)=(1,0)^{\top}\). These results are obtained by direct integration of the differential equation and agree perfectly with the results obtained via diagonalization.
Identical frequencies of all oscillators
To gain a basic understanding of the eigenstates and eigenvector structure we now consider the case when all frequencies are identical, i.e. \(\Delta_{n}=\Delta\). Then, the eigenvalues and (right) eigenvectors of \(W\) are given by
\[w_{j} =\left(\bar{\omega}+\Delta\right)-\mathrm{i}\left(g\pm 2\cos\left(\frac{2\pi j}{N}\right)\right),\quad v\gtrless 0, \tag{19}\] \[\tilde{c}_{j} =\frac{1}{\sqrt{N}}\sum_{n=1}^{N}e^{\mathrm{i}\frac{2\pi j}{N}n}\,\tilde{e}_{n}, \tag{20}\]
where \(\tilde{e}_{n}\) is the \(n\)th unit vector. Note that the eigenstates are independent of \(\Delta\) and \(g\), and that most of the eigenvalues are degenerate: for even \(N\) only the eigenvalues with \(j=N\) and \(j=N/2\) are non-degenerate, while for odd \(N\) only the one with \(j=N\) is non-degenerate. Moreover, the real part of the eigenenergies \(w_{j}\), i.e. the oscillation frequencies, is simply shifted by \(\Delta\) for all eigenstates. However, the imaginary part of \(w_{j}\), which dictates the dissipation and more importantly the possibility of dissipation free dynamics, requires a more careful analysis.
Positive \(v\): The imaginary part of the \(j\)th eigenvalue vanishes, \(\mathrm{Im}[w_{j}]=0\), if \(g=g_{j}\equiv-2\cos(2\pi j/N)\). Then, all other eigenvalues \(w_{j^{\prime}}\) with \(j^{\prime}\neq j\) have imaginary part given by
\[\mathrm{Im}[w_{j^{\prime}}(g_{j})]=2\cos\left(\frac{2\pi j}{N}\right)-2\cos \left(\frac{2\pi j^{\prime}}{N}\right). \tag{21}\]
Furthermore, we need to distinguish the two cases of odd and even \(N\): For an _odd_ number of oscillators and \(j\neq(N\pm 1)/2\) there is always at least one \(j^{\prime}\) with \(\mathrm{Im}[w_{j^{\prime}}(g_{j})]>0\), and thus condition (ii) is not fulfilled. On the other hand, if \(j=(N\pm 1)/2\) all other eigenstates are damped except for \(j^{\prime}=j\mp 1\). Yet, this state is also dissipation free and condition (ii) cannot be fulfilled. For _even_ \(N\), however, there exists a non-degenerate eigenstate \(j=N/2\) that fulfills (i) and (ii). Then, \(g=2\) and \(\tilde{c}_{\mathrm{syn}}\equiv\tilde{c}_{N/2}=\frac{1}{\sqrt{N}}(-1,1,\ldots,-1,1)^{\top}\), which corresponds to anti-phase synchronization between nearest neighbors with the same frequency \(\bar{\omega}+\Delta\).
Negative \(v\): In contrast to the previous case, the imaginary part of the \(j\)th eigenvalue now is equal to zero if \(g=g_{j}\equiv+2\cos(2\pi j/N)\) and thus Eq. (21) becomes
\[\mathrm{Im}[w_{j^{\prime}}(g_{j})]=-2\cos\left(\frac{2\pi j}{N}\right)+2\cos \left(\frac{2\pi j^{\prime}}{N}\right) \tag{22}\]
for all other eigenvalues \(w_{j^{\prime}}\) with \(j^{\prime}\neq j\). Here, only if \(j=N\) are all other states damped and conditions (i) and (ii) fulfilled. The corresponding eigenstate is \(\tilde{c}_{\mathrm{syn}}\equiv\tilde{c}_{N}=\frac{1}{\sqrt{N}}(1,\ldots,1)^{\top}\), i.e., in-phase synchronization of all oscillators with frequency \(\bar{\omega}+\Delta\).
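These two cases can be checked with a few lines of code (a sketch of ours, not the authors' implementation): for each mode \(j\) of Eq. (19) one asks whether choosing \(g\) such that this mode is dissipation free leaves all other modes damped.

```python
import numpy as np

def sync_mode(N, v_sign):
    """Identical detunings on a ring, Eq. (19): return (j, g) such that mode j
    is dissipation free while all other modes are damped, or None if no such
    unique mode exists."""
    j = np.arange(1, N + 1)
    # Dissipation-free value of g for mode j:
    # g_j = -2 cos(2 pi j / N) for v > 0 and +2 cos(2 pi j / N) for v < 0.
    g_j = -2 * np.cos(2 * np.pi * j / N) if v_sign > 0 else 2 * np.cos(2 * np.pi * j / N)
    order = np.argsort(g_j)[::-1]                  # candidate with largest g first
    if np.isclose(g_j[order[0]], g_j[order[1]]):   # degenerate candidates: no unique mode
        return None
    return int(j[order[0]]), float(g_j[order[0]])

for N in (4, 5, 6):
    print(N, "v>0:", sync_mode(N, +1), " v<0:", sync_mode(N, -1))
```

For even \(N\) with \(v>0\) this returns the anti-phase mode \(j=N/2\) with \(g=2\), while for \(v<0\) it returns the in-phase mode \(j=N\) with \(g=2\), in line with the discussion above; for odd \(N\) with \(v>0\) no unique dissipation-free mode exists.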
#### iii.1.2 Oscillators with different frequencies
In this section, we discuss the case of arbitrary frequency differences \(\Delta_{n}\) for each oscillator on the ring. In this case, the matrix \(M\) [cf. Eq. (10)] can no longer be diagonalized analytically. Therefore, we discuss the basic behavior along a few examples of \(\Delta_{n}\) and solve the eigenvalue problem numerically. Yet, these examples demonstrate that dissipation free synchronized motion also exists in such a general setup.
A convenient way to investigate how the properties of synchronization are affected by changes of \(\Delta_{n}\), is to parametrize the frequency difference according to
\[\Delta_{n}=s_{n}\Delta, \tag{23}\]
and analyze the behavior of the eigenvalues and eigenvectors of \(W\) as a function of \(\Delta\) for a given (and fixed) set of \(s_{n}\). Furthermore, we choose \(v\) to be _negative_, such that for \(\Delta=0\) there exists a fully synchronized eigenstate if \(g=2\) (see the discussion in Sec. III.2.1b). Note that a negative value of \(v\) implies \(g_{j}=\mathrm{Im}[\lambda_{j}]\).
In the following we consider as example the case of \(N=5\) oscillators and show in Fig. 4 the results of the numerical diagonalization of the matrix \(M\) for three different realizations of \(\bar{s}=(s_{1},...,s_{5})\) (different columns). We choose the largest difference between neighboring values of \(s_{n}\) to be equal to one, i.e. \(\max[s_{n}-s_{n+1}]=1\). Then, for \(\Delta<1\) all frequency differences between neighboring oscillators are always smaller than the dissipative coupling between them (which has magnitude one).
The case of \(N=2\) in our network of oscillators allows us to represent the full parameter space as shown in Fig. 2 and identify the dissipation free subspaces and synchronization within. However, for larger system sizes (as considered now) a representation similar to Fig. 2 becomes impractical. Yet, a dissipation free subspace is always necessary for synchronization, which corresponds to the white lines in Figs. 2(a) and (b). Thus, in order to determine whether conditions (i)-(iii) are fulfilled, it is sufficient to only search along the parameters for which each eigenstate becomes dissipation free. In particular, the relevant information of Fig. 2(a) and (b) may be conveniently combined to contain only \(g_{\pm}=\mathrm{Im}[\lambda_{\pm}]\) as a function of \(\Delta\). Accordingly, the top row of Fig. 4 shows the imaginary part of all eigenvalues \(\mathrm{Im}[\lambda_{j}]\) as a function of the parameter \(\Delta\) and the middle row shows the respective real parts \(\mathrm{Re}[\lambda_{j}]\). Lastly, in the bottom row we plot the degree of synchronization \(\mathcal{S}\) of each eigenvector, also as a function of \(\Delta\). The eigenvalues of \(M\) are sorted in descending order of their imaginary parts, i.e. \(\mathrm{Im}[\lambda_{1}]>\mathrm{Im}[\lambda_{2}]>\cdots>\mathrm{Im}[\lambda_{N}]\).
In the following we discuss different regimes of \(\Delta\) and their impact on the possibility of synchronized motion in accordance with conditions (i)-(iii). We focus on the eigenstate \(\tilde{c}_{1}\) with the largest imaginary part \(\mathrm{Im}[\lambda_{1}]\) (highlighted as thick blue lines in Fig. 4). The reason is that for \(g=\mathrm{Im}[\lambda_{1}]\) the eigenstate \(\tilde{c}_{1}\) becomes dissipation free while all other eigenstates are simultaneously damped. In contrast, if we chose \(g\) such that another eigenstate \(\tilde{c}_{j\neq 1}\) became dissipation free, at least one eigenstate would grow exponentially. It is thus sufficient to only analyze the possibility of synchronization of \(\tilde{c}_{1}\) in the following.
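This selection of \(g\) can be sketched numerically (our illustration, not the authors' code). We assume that, for \(v<0\), the non-trivial part of the dynamical matrix carries \(+\mathrm{i}\) couplings as in Eq. (12), so that the dissipation-free condition indeed reads \(g=\mathrm{Im}[\lambda_{1}]\); with the first disorder realization of Fig. 4 this recovers \(g=2\) and \(\mathcal{S}=5\) at \(\Delta=0\), and for \(\Delta=0.5\) and \(1.1\) it should reproduce the values \(g=1.91\) and \(g=1.51\) quoted below for Fig. 5.

```python
import numpy as np

def analyze(s, delta):
    """Disordered ring with v < 0 [cf. Eq. (12)]: choose g so that the least
    damped eigenstate becomes dissipation free, check that all others are
    damped [conditions (i)-(ii)], and report its degree of synchronization."""
    N = len(s)
    M = np.diag(np.asarray(s, dtype=complex) * delta)
    for n in range(N):                       # +i couplings (sign convention of Eq. (12))
        M[n, (n + 1) % N] += 1j
        M[n, (n - 1) % N] += 1j
    lam, C = np.linalg.eig(M)
    order = np.argsort(lam.imag)[::-1]       # sort by imaginary part, largest first
    lam, C = lam[order], C[:, order]
    g = lam[0].imag                          # condition (i): Im[w_1] = 0
    others_damped = bool(np.all(lam[1:].imag < g))   # condition (ii)
    p = np.abs(C[:, 0]) ** 2
    S = p.sum() ** 2 / np.sum(p ** 2)        # condition (iii): S close to N?
    return g, others_damped, S

s = [1.14, 0.20, 1.20, -0.46, -1.1]          # first column of Fig. 4
for delta in (0.0, 0.5, 1.1):
    g, ok, S = analyze(s, delta)
    print(f"Delta = {delta}: g = {g:.2f}, others damped: {ok}, S = {S:.2f}")
```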
No frequency difference (\(\Delta=0\)): This means that there are no variations in the oscillator frequencies and the situation is exactly the same as discussed in Sec. III.2.1b. Consequently, the eigenvalues of \(W\) are given by Eq. (19). From the discussion in Sec. III.2.1b, we know that if \(g=2=\mathrm{Im}[\lambda_{\mathrm{syn}}]\) there exists a dissipation free synchronized state \(\tilde{c}_{\mathrm{syn}}\equiv\frac{1}{\sqrt{5}}(1,\ldots,1)^{\top}\) with associated real eigenvalue \(w_{\mathrm{syn}}=\bar{\omega}\), i.e. all oscillators are in phase and oscillate with frequency \(\bar{\omega}\). This is exactly what we observe in Fig. 4: the eigenvalue with the largest imaginary part has \(\mathrm{Im}[\lambda_{1}]=2\) (thick blue lines in the top row). Note that \(\mathrm{Im}[\lambda_{2}]=\mathrm{Im}[\lambda_{3}]\) and \(\mathrm{Im}[\lambda_{4}]=\mathrm{Im}[\lambda_{5}]\). Furthermore, \(\mathrm{Re}[\lambda_{j}]=0\) (middle row) which implies an oscillation frequency of \(\bar{\omega}\).
Small frequency differences (\(0<\Delta<1\)): In this regime, the disorder in the frequency differences between nearest neighboring oscillators always remains smaller than the coupling between them (which is 1). We thus expect that the degree of synchronization also remains large [\(\mathcal{S}(\tilde{c}_{1})\approx N\)], i.e. the full delocalization of the eigenstate \(\tilde{c}_{1}\) persists. In the bottom row of Fig. 4 we observe exactly this behavior of the thick blue line corresponding to \(\tilde{c}_{1}\): For small values of \(\Delta\), \(\mathcal{S}(\tilde{c}_{1})\) is maximal and slowly decreases as \(\Delta\) approaches the value of 1. Thus, the synchronized state remains close to fully synchronized within this regime [condition (iii)]. Note that the value of \(\Delta\) at which \(\mathcal{S}(\tilde{c}_{1})\) starts to decrease depends on the specific realization of disorder \(\vec{s}\).
The imaginary part of the corresponding eigenvalue (top row) continues to be the largest value of all eigenvalues (thick blue line), \(\mathrm{Im}[\lambda_{1}]>\mathrm{Im}[\lambda_{j\neq 1}]\). Thus, for \(g=\mathrm{Im}[\lambda_{1}]\) the eigenstate \(\tilde{c}_{1}\) becomes dissipation free while all other eigenstates are damped, i.e. conditions (i) and (ii) are fulfilled. As \(\Delta\) increases, \(\mathrm{Im}[\lambda_{1}]\) decreases, resulting from the larger amount of frequency disorder. Simultaneously, the real part \(\mathrm{Re}[\lambda_{1}]\) remains close to 0 such that the oscillation frequency of the synchronized state \(\tilde{c}_{1}\) also continues to be close to \(\bar{\omega}\). Note that the value of \(\mathrm{Re}[\lambda_{1}]\) only affects the oscillation frequency.
Figure 4: Examples of dissipation free and (fully) synchronized dynamics in a ring of \(N=5\) oscillators with random frequency disorder. The three different columns correspond to three different sets of (scaled) frequency realizations \(\vec{s}\). The value of \(v\) is taken to be negative. In the top row we show the imaginary part \(\mathrm{Im}[\lambda_{j}]\) of the eigenvalues \(\lambda_{j}\) of the matrix \(M\) as a function of \(\Delta\). The middle row shows the corresponding real part \(\mathrm{Re}[\lambda_{j}]\) and the bottom row the degree of synchronization \(\mathcal{S}(\tilde{c}_{j})\) of the corresponding eigenstates \(\tilde{c}_{j}\). For all three considered realizations, there exists an eigenstate (blue) with the maximum value of \(\mathcal{S}\) (bottom row) for small values of \(\Delta\lesssim 1\). This eigenstate also has the largest imaginary part of its associated eigenvalue (top row), which allows tuning \(g\) in such a way that it becomes dissipation free while all other eigenstates are damped.

Large frequency differences (\(\Delta\geq 1\)): As \(\Delta\) is increased further, the frequency difference exceeds the nearest-neighbor interaction such that - similar to (Anderson) localization in finite systems [17] - the degree of synchronization \(\mathcal{S}(\tilde{c}_{1})\) of the previously delocalized eigenstate \(\tilde{c}_{1}\) rapidly decreases as \(\Delta\) increases; see the thick blue lines in the bottom row of Fig. 4. Hence, only partial synchronization is possible in this regime and condition (iii) is not fulfilled.
At the same time, the largest imaginary value \(\mathrm{Im}[\lambda_{1}]\) continues to decrease as a function of \(\Delta\). Yet, close to \(\Delta=1\) it remains well separated from the second largest imaginary value \(\mathrm{Im}[\lambda_{2}]\) such that a suitable choice of \(g\) still allows for dissipation free dynamics with a single oscillation frequency. However, \(\mathrm{Im}[\lambda_{1}]\) may coalesce with \(\mathrm{Im}[\lambda_{2}]\) for larger values of \(\Delta\) depending on the specific realization of \(\tilde{s}\). An example of such a degeneracy is observed for \(\Delta\approx 1.6\) in the top right panel of Fig. 4. As a result, both eigenstates would be dissipation free, resulting in the beating pattern discussed previously in Sec. III.1. However, as mentioned above, only partial synchronization is possible in this regime anyway.
Very large frequency differences (\(\Delta\gg 1\)): In the regime of very large frequency differences, we expect that the degree of synchronization takes its minimum value \(\mathcal{S}(\tilde{c}_{j})=1\) for all eigenstates \(j\), since in this regime the detunings \(\Delta_{n}=\Delta s_{n}\) are much larger than the dissipative coupling strength. Then, \(M\) is approximately diagonal and the eigenvectors \(\tilde{c}_{j}\) are nearly localized. Note that in this limit there is no synchronized state. We have checked numerically that for \(\Delta\) larger than the smallest difference between the \(s_{n}\) the synchronization measure of all eigenstates approaches one, as expected (not shown here).
Lastly, to demonstrate that the dynamics of the system of oscillators is consistent with our discussion of the different regimes above (obtained from analyzing the eigenvectors and eigenfrequencies), we show in Fig. 5 examples of \(\mathrm{Re}[a_{n}(\tau)]\) as a function of the scaled time \(\tau\) for \(\tilde{s}=(1.14,0.20,1.20,-0.46,-1.1)\) (corresponding to the first column of Fig. 4) for three different values of \(\Delta\). In all cases, we choose the initial state \(\tilde{a}_{0}=(1,1,2,-1,-1)\).
Panel (a) corresponds to the case of vanishing frequency difference, i.e. \(\Delta=0\). We choose the dissipation \(g=2\) such that only the eigenstate with largest imaginary part is dissipation free. As expected after a short transient time of \(\tau\approx 2.5\) all oscillators are in-phase synchronized.
In panel (b), we increase the frequency difference to be \(\Delta=0.5\). Hence, the synchronized state is dissipation free for \(g=1.91\). Analogous to the previous case (a), a stationary synchronized motion emerges after a transient time of \(\tau\approx 2.5\), yet now with a small phase shift between the oscillators. Importantly, all oscillators have the same amplitude, consistent with the finding of Fig. 4 that the degree of synchronization is maximal [\(\mathcal{S}(\tilde{c}_{1})=5\) for this value of \(\Delta\)].
Contrarily, in panel (c) where \(\Delta=1.1\) (and \(g=1.51\) to match the condition of dissipation free dynamics) the amplitudes vary among the oscillators. This is in accordance with \(\mathcal{S}(\tilde{c}_{1})<5\). However, still only a single oscillation frequency is present (after some transient time). This is an example of partial synchronization.
## IV Conclusions
In this work we have investigated the possibility of long-lived synchronized motion in networks of harmonic oscillators, which are subject to gain/loss and interact via nearest neighbor dissipative couplings. In this context, we refer to synchronization as the existence of a single eigenstate of the dynamical matrix, which is dissipation free. Furthermore, if it attains the maximum value of the (inverse) participation ratio we refer to it as 'fully synchronized'. We find that in the case of only two coupled oscillators, synchronization may always be achieved by tuning the gain appropriately as long as the frequency difference between the two oscillators is smaller than their interaction strength.
Figure 5: Dynamical behavior of \(\mathrm{Re}(a_{n}(\tau))\) given by Eq. (12) for different values of the scaling factor \(\Delta\). In all three cases the mean frequency of the oscillators is \(\bar{\omega}=10\) and the disorder is the same as in the first column of Fig. 4, namely \(\tilde{s}=(1.14,\ 0.20,\ 1.20,\ -0.46,\ -1.1)\). The coupling strength \(v\) is taken to be negative and all frequencies are given in units of \(|v|\). The initial condition is \(\tilde{a}_{0}=(1,1,2,-1,-1)\). Panels (a) and (b) show fully synchronized motion, while panel (c) is an example of partial synchronization.

A similar behavior may be observed in larger networks, i.e. many oscillators arranged on a ring with nearest neighbor interactions, yet the possibility of synchronization then depends on the specifics of the system at hand: If all oscillators are identical, synchronized collective motion may be achieved for an even number of sites with repulsive dissipative couplings (\(v\) positive) _or_ an odd number of sites with attractive dissipative interactions (\(v\) negative). For small frequency differences compared to the coupling between the oscillators, this behavior remains, which we show specifically for the case of \(N=5\), yet it should also hold for larger networks. However, as the number of coupled oscillators increases, it becomes increasingly difficult to achieve full synchronization, which may only be observed for very small frequency differences. For larger frequency differences, the (inverse) participation ratio decreases significantly such that only partial synchronization may be achieved. This is in accordance with Anderson localization, where on-site disorder results in localized eigenstates. However, as the dynamical matrix in this work is non-Hermitian, Anderson localization is not directly applicable. Here, future work is needed to study the interplay of synchronization and localization, in particular in the thermodynamic limit and for arbitrarily small frequency perturbations.
Synchronization as discussed in this work is intimately related to the existence of dissipation free dynamics and thus isolated points/submanifolds in parameter space. Hence, they require a very precise tuning of gain and loss in order to obtain periodic steady states. This is however hard to achieve in any realistic experiment and the synchronized state will experience some gain or loss. We can relax the condition \(\mathrm{Im}[w_{j}]=0\) by solely requiring \(|\mathrm{Im}[w_{j}]|\ll|\mathrm{Re}[w_{j}]|\), which means that the change of amplitude of oscillation is small over many oscillations. In addition, we then require \(\mathrm{Im}[w_{j}]\ll\mathrm{Im}[w_{\mathrm{sync}}]\), which means that all other eigenstates decay much faster than the 'synchronized' one. In principle, one may relax the condition even further and demand that there exists only one state with \(\mathrm{Im}[w_{j}]>0\), while all other states fulfill \(\mathrm{Im}[w_{i}]\leq 0\). Then the synchronized state would grow while all other states are exponentially damped.
###### Acknowledgements.
C.W.W. acknowledges support from the Max-Planck Gesellschaft via the MPI-PKS Next Step fellowship and is financially supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project No. 496502542 (WA 5170/1-1). A.E. acknowledges support from the DFG via a Heisenberg fellowship (Grant No EI 872/10-1).
|
2305.10832 | Knowledge-based Integration of Multi-Omic Datasets with Anansi:
Annotation-based Analysis of Specific Interactions | Motivation: Studies including more than one type of 'omics data sets are
becoming more prevalent. Integrating these data sets can be a way to solidify
findings and even to make new discoveries. However, integrating multi-omics
data sets is challenging. Typically, data sets are integrated by performing an
all-vs-all correlation analysis, where each feature of the first data set is
correlated to each feature of the second data set. However, all-vs-all
association testing produces unstructured results that are hard to interpret,
and involves potentially unnecessary hypothesis testing that reduces
statistical power due to false discovery rate (FDR) adjustment.
Implementation: Here, we present the anansi framework, and accompanying R
package, as a way to improve upon all-vs-all association analysis. We take a
knowledge-based approach where external databases like KEGG are used to
constrain the all-vs-all association hypothesis space, only considering
pairwise associations that are a priori known to occur. This produces
structured results that are easier to interpret, and increases statistical
power by skipping unnecessary hypothesis tests. In this paper, we present the
anansi framework and demonstrate its application to learn metabolite-function
interactions in the context of host-microbe interactions. We further extend our
framework beyond pairwise association testing to differential association
testing, and show how anansi can be used to identify associations that differ
in strength or degree based on sample covariates such as case/control status.
Availability: https://github.com/thomazbastiaanssen/anansi | Thomaz F. S. Bastiaanssen, Thomas P. Quinn, John F. Cryan | 2023-05-18T09:21:30Z | http://arxiv.org/abs/2305.10832v1 | # Knowledge-based Integration of Multi-Omic Datasets with Anansi: Annotation-based Analysis of Specific Interactions
###### Abstract
**Motivation:** Studies including more than one type of 'omics data sets are becoming more prevalent. Integrating these data sets can be a way to solidify findings and even to make new discoveries. However, integrating multi-omics data sets is challenging. Typically, data sets are integrated by performing an all-vs-all correlation analysis, where each feature of the first data set is correlated to each feature of the second data set. However, all-vs-all association testing produces unstructured results that are hard to interpret, and involves potentially unnecessary hypothesis testing that reduces statistical power due to false discovery rate (FDR) adjustment.
**Implementation:** Here, we present the anansi framework, and accompanying R package, as a way to improve upon all-vs-all association analysis. We take a knowledge-based approach where external databases like KEGG are used to constrain the all-vs-all association hypothesis space, only considering pairwise associations that are _a priori_ known to occur. This produces structured results that are easier to interpret, and increases statistical power by skipping unnecessary hypothesis tests. In this paper, we present the anansi framework and demonstrate its application to learn metabolite-function interactions in the context of host-microbe interactions. We further extend our framework beyond pairwise association testing to differential association testing, and show how anansi can be used to identify associations that differ in strength or degree based on sample covariates such as case/control status.
**Availability:**[https://github.com/thomazbastiaanssen/anansi](https://github.com/thomazbastiaanssen/anansi)
Introduction
Techniques that aim to measure the totality of a certain type of biological molecules are known as 'omics. The most prevalent types of 'omics include (meta)genomics, (meta)transcriptomics, (meta)proteomics and metabolomics. In recent years, there has been an increase in studies that feature multiple types of 'omics data, which are referred to as multi-omics (Subramanian et al., 2020). For instance, in host-microbiome studies, it has become more common to measure both the microbial metagenome and the serum and/or gut metabolome. 'Omics approaches enable a broader and more exploratory avenue of doing research, potentially allowing the researcher to uncover complex patterns that would otherwise not have been discovered (Yanai and Lercher, 2019). However, dealing with big data sets comes with new challenges, especially with regard to the interpretation of results and the preservation of statistical power.
Often, multi-omics analysis takes the form of pairwise all-vs-all association testing between the features of the data sets, an approach which inherits the same two challenges. First, an all-vs-all association procedure will produce unstructured results that are typically presented as a list of "significant" findings or a heatmap of associations. These lists and heatmaps can be difficult to interpret or generate new hypotheses from because the results are not put in the context of established biological knowledge. Second, the method can be wasteful in terms of statistical power. As every statistical test produces a p-value that ought to be adjusted (e.g., through false discovery rate (FDR) adjustment), if it is biologically unfeasible for an association to be real, testing for it anyway could be considered a waste of power and may result in false negatives.
In this article, we present the anansi framework as an alternative approach to all-vs-all association testing that leverages knowledge databases to address the aforementioned challenges:
* **Knowledge databases help structure results and improve interpretability by giving context to the results.** It is difficult to form hypotheses about results that are presented without context. For instance, in the case of the microbiome, relating the levels of metabolites to the abundance of microbial species may result in _significantly associated_ metabolite-microbe pairs that cannot be explained from a biological perspective. Rather than assessing microbes on a taxonomical level, one could instead assess the genes within those microbes that might encode for enzymes, receptors or other proteins that interact with those metabolites. By re-framing the analysis as a problem of protein-metabolite interactions we gain the ability to leverage our extensive knowledge of metabolic pathways when interpreting results and generating new hypotheses.
* **Knowledge databases help improve statistical power by restricting the total number of tested hypotheses.** When performing an all-vs-all association analysis, many features that do not interact will still be tested for a statistically significant association. Calculating additional p-values will result in a higher number of p-values to adjust by post-hoc methods like Benjamini-Hochberg's and Storey's q-value procedure, which will in turn lead to a loss of power (Benjamini and Hochberg, 1995; Storey and Tibshirani, 2003). Conversely, assessing interactions in pairs of features that do not biologically interact risks encountering spuriously significant associations that do not lead to fruitful hypotheses.
Re-framing the point, if the goal of a multi-omics integration analysis is to identify real associations between the features of two biologically related data sets in order to formulate testable hypotheses, all-vs-all analysis may not always be the most appropriate approach. Here, we present the anansi (Annotation-based Analysis of Specific Interactions) framework and accompanying R package which uses knowledge databases to reduce unnecessary hypothesis testing, giving context to the results and improving statistical power. Although the method is general, we demonstrate its application on a microbiome-metabolome integration data set.
Related Work
### Knowledge databases
Anansi relies on the structure provided by knowledge databases. Typically, these are databases that contain knowledge on features and how they interact, for example in the form of a molecular interaction network. Notable databases include **KEGG** (Kanehisa and Goto, 2000), **MetaCyc** (Caspi et al., 2014), **CAZy** (Drula et al., 2022), **HMDB** (Wishart et al., 2007) and **EggNOG** (Huerta-Cepas et al., 2019). The main difference between these databases is the specific focus of their content and in many cases identifiers can be mapped from one database to another.
### Measures of association
Numerous methods exist to measure an association between two features. The most common are undoubtedly **Pearson's** and **Spearman's Rank** correlation coefficients. Both of these metrics can be thought of as special cases of a **linear model** (Kenney and Keeping, 1962). These methods are well-understood and perform well in general, though many types of 'omics data are compositional and inherently display negative correlations, for which these methods may yield spurious results (Gloor et al., 2017). Some methods have been introduced to specifically address these traits of compositional data, including **proportionality** (Lovell et al., 2015; Quinn et al., 2017), a collection of metrics that investigate whether the ratio between two features remains stable. Other compositional methods include **SparCC** (Friedman and Alm, 2012) and **SPIEC-EASI** (Kurtz et al., 2015), both of which are microbiome-oriented and assume a sparse underlying association network.
### All-vs-all approaches
The _Hierarchical All-against-All association testing_ (**HAllA**) framework aims to cluster features from the same data set in a data-driven manner before analysis in order to reduce the number of tests, thus substantially improving power (Ghazi et al., 2021). Because HAllA relies on data-driven clustering to reduce the number of tests performed, the biological interpretation of these clusters is not guaranteed. Notably, HAllA is designed for large population studies and confounding factors are thus expected to be _regressed out_ before analysis.
### Ordination and learning-based approaches
On a different axis, the **MINT** and **DIABLO** frameworks in the mixOmics suite respectively use a sparse-PLS (Le Cao et al., 2011) and PLS-based approach to identify those associations between features of two or more 'omics data sets that are the most informative to discriminate between phenotypes (Singh et al., 2019; Rohart et al., 2017). Analogously, the _microbe-metabolite vectors_ (**mmvec**) algorithm intends to identify and estimate associations between microbes and metabolites using a neural network approach that explicitly addresses the compositional nature of the microbiome data (Morton et al., 2019). However, these multivariate frameworks are data-driven and thus do not consider pre-existing knowledge structures, meaning that the most discriminative associations could be biologically meaningless.
Methods
### Motivation
The anansi framework relies on a _knowledge-based binary adjacency matrix_ to only assess associations between pairs of features that are known to interact in some fashion. The knowledge-based binary adjacency matrix is used to mask associations that are not previously documented so that they can be skipped entirely for the purpose of downstream analysis, including visualisation, interpretation and indeed even hypothesis testing and subsequent multiple testing corrections. This application is key to addressing the aforementioned challenges of multi-omics integration:
* It solves the challenge of interpretability because the resulting analysis is easier to interpret due to the structure imposed by the knowledge database. All remaining feature pairs will be structured and contextualized by their corresponding metabolic pathway in the knowledge database.
* It solves the challenge of statistical power because power will be improved by avoiding unnecessary hypothesis testing in feature pairs that would be impossible to interpret due to the lack of a corresponding metabolic pathway in the knowledge database. By skipping non-canonical interactions in this manner, we preserve statistical power when applying FDR.
Next, we will demonstrate how an all-vs-all association approach can be enhanced by introducing a knowledge-based binary adjacency matrix.
### All-vs-all associations
First, let us review an all-vs-all association analysis.
Suppose we have two 'omics data sets, \(\mathbf{Y}\) and \(\mathbf{X}\), with \(1...M_{Y}\) and \(1...M_{X}\) features, respectively, both with \(1...N\) samples. Each column within these data sets would contain the measured abundance of an 'omics feature, for which we represent the j-th feature of data set \(\mathbf{Y}\) by
\[\mathbf{f}_{j}^{(Y)}=[Y_{1j},...,Y_{Nj}] \tag{1}\]
and analogously for \(\mathbf{X}\).
Thus, we can view the data set \(\mathbf{Y}\) as
\[\mathbf{Y}=\begin{bmatrix}\mathbf{f}_{1}^{(Y)}&\mathbf{f}_{2}^{(Y)}&\dots&\mathbf{f}_{M_{Y}}^{(Y)}\\ Y_{11}&Y_{12}&\dots&Y_{1M_{Y}}\\ Y_{21}&Y_{22}&\dots&Y_{2M_{Y}}\\ \vdots&\vdots&\ddots&\vdots\\ Y_{N1}&Y_{N2}&\dots&Y_{NM_{Y}}\end{bmatrix} \tag{2}\]
and analogously for \(\mathbf{X}\).
Here, columns represent different features and rows represent different samples or measurements. Then, the all-vs-all association matrix for these two data sets can be calculated as the association, \(\rho\), between a column in \(\mathbf{Y}\) and another column in \(\mathbf{X}\):
\[\rho(\mathbf{Y},\mathbf{X}):=\begin{bmatrix}\rho(\mathbf{f}_{1}^{(Y)},\mathbf{f}_{1}^{(X)})&\rho(\mathbf{f}_{1}^{(Y)},\mathbf{f}_{2}^{(X)})&\dots&\rho(\mathbf{f}_{1}^{(Y)},\mathbf{f}_{M_{X}}^{(X)})\\ \rho(\mathbf{f}_{2}^{(Y)},\mathbf{f}_{1}^{(X)})&\rho(\mathbf{f}_{2}^{(Y)},\mathbf{f}_{2}^{(X)})&\dots&\rho(\mathbf{f}_{2}^{(Y)},\mathbf{f}_{M_{X}}^{(X)})\\ \vdots&\vdots&\ddots&\vdots\\ \rho(\mathbf{f}_{M_{Y}}^{(Y)},\mathbf{f}_{1}^{(X)})&\rho(\mathbf{f}_{M_{Y}}^{(Y)},\mathbf{f}_{2}^{(X)})&\dots&\rho(\mathbf{f}_{M_{Y}}^{(Y)},\mathbf{f}_{M_{X}}^{(X)})\end{bmatrix} \tag{3}\]
Notice that the rows of our all-vs-all association matrix \(\rho(\mathbf{Y},\mathbf{X})\) correspond to the columns (features) in data set \(\mathbf{Y}\), while the columns of our association matrix \(\rho(\mathbf{Y},\mathbf{X})\) correspond to the columns from data set \(\mathbf{X}\). Further, notice that the resulting all-vs-all association matrix can easily be converted to a heatmap by depicting the resulting association coefficients as a colour gradient.
### Binary adjacency matrix
A binary adjacency matrix is a matrix whose elements indicate which pairs of rows and columns are linked, or adjacent, to each other. It can be generated from an adjacency list, which in turn can be generated from a network graph such as those used to formulate a metabolic pathway network. Pairs of features between data sets that do interact, for instance based on whether they are connected in a _knowledge database_ like a metabolic pathway, will return a value of 1, whereas pairs of features that are not connected in such a manner will be depicted as 0. Put symbolically, a binary adjacency matrix \(\mathbf{A}\) associates two sets \(\mathbf{u}=\{1,...,U_{max}\}\) and \(\mathbf{t}=\{1,...,T_{max}\}\), where
\[A_{ij}=\begin{cases}1,&\text{if $u_{i}$ associates with $t_{j}$}\\ 0,&\text{otherwise}\end{cases} \tag{4}\]
If we let each element in \(\mathbf{u}\) represent one of the \(M_{Y}\) features in \(\mathbf{Y}\), and each element in \(\mathbf{t}\) represent one of the \(M_{X}\) features in \(\mathbf{X}\), then \(\mathbf{A}\) represents a binary adjacency matrix describing the relationship _between_ two data sets \(\mathbf{Y}\) and \(\mathbf{X}\).
For example, suppose data set \(\mathbf{Y}\) contains metabolites from the KEGG database, whereas data set \(\mathbf{X}\) contains molecular functions (KEGG orthologues). An adjacency list then contains information on which metabolites from \(\mathbf{Y}\) are known to interact (i.e. are synthesised, catabolised, or act as cofactors) with a function in \(\mathbf{X}\). This list, retrievable from publicly available databases, can be used to create the binary adjacency matrix, as shown below:
\[\text{Adjacency list:}\qquad\begin{array}{c|l}\mathbf{u}&\mathbf{t}\\ \hline u_{1}&\to t_{1},t_{2},t_{3}\\ u_{2}&\to t_{1},t_{3},t_{4}\\ u_{3}&\to t_{1},t_{4}\\ u_{4}&\to t_{2},t_{5}\\ u_{5}&\to t_{1},t_{3},t_{4},t_{5}\end{array}\quad\rightarrow\]

Binary adjacency matrix:

\[\begin{array}{c|ccccc}&t_{1}&t_{2}&t_{3}&t_{4}&t_{5}\\ \hline u_{1}&1&1&1&0&0\\ u_{2}&1&0&1&1&0\\ u_{3}&1&0&0&1&0\\ u_{4}&0&1&0&0&1\\ u_{5}&1&0&1&1&1\end{array} \tag{5}\]
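Anansi itself is implemented in R. Purely as a language-neutral illustration of Eqs. (4) and (5) - and not of the anansi API - the step from an adjacency list to a binary adjacency matrix can be sketched as follows, using the placeholder feature names \(u_{i}\), \(t_{j}\) from the example above:

```python
import pandas as pd

# Adjacency list of Eq. (5): which features of Y (u_i) are linked to which
# features of X (t_j) according to the knowledge database.
adjacency = {
    "u1": ["t1", "t2", "t3"],
    "u2": ["t1", "t3", "t4"],
    "u3": ["t1", "t4"],
    "u4": ["t2", "t5"],
    "u5": ["t1", "t3", "t4", "t5"],
}

rows = sorted(adjacency)                                             # features of Y
cols = sorted({t for linked in adjacency.values() for t in linked})  # features of X

A = pd.DataFrame(0, index=rows, columns=cols, dtype=int)             # Eq. (4)
for u, linked in adjacency.items():
    A.loc[u, linked] = 1

print(A)
```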
### Masked association matrix
Notice that since all elements from data set \(\mathbf{Y}\) are listed as row names and all elements from data set \(\mathbf{X}\) are listed as columns, the binary adjacency matrix will have the same dimensions as the association matrix. Because the dimensions are the same, the association matrix can be multiplied element-wise against the adjacency matrix to "mask" any associations that are not documented by the knowledge database. The masked association matrix, \(\mathbf{R}\), can thus be derived as follows:
\[\mathbf{R}=\rho(\mathbf{Y},\mathbf{X})*\mathbf{A} \tag{6}\]
where
\[R_{ij}=\begin{cases}\rho(\mathbf{f}_{i}^{(Y)},\mathbf{f}_{j}^{(X)}),&\text{if $u _{i}$ associates with $t_{j}$}\\ 0,&\text{otherwise}\end{cases} \tag{7}\]
The masked association matrix \(\mathbf{R}\) should be seen as the primary output of the anansi package and can serve as the basis for follow-up analyses such as differential association analysis, network analysis, and functional enrichment analysis.
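As a minimal sketch of the masking idea on simulated data (ours; not the anansi R implementation, which additionally supports arbitrary fixed- and mixed-effect models), associations and p-values are computed only where \(\mathbf{A}\) permits it, and FDR adjustment is applied only to those tests:

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_samples, n_y, n_x = 40, 6, 8
Y = rng.normal(size=(n_samples, n_y))              # toy 'omics data set Y
X = rng.normal(size=(n_samples, n_x))              # toy 'omics data set X
A = rng.integers(0, 2, size=(n_y, n_x))            # toy knowledge-based adjacency

R = np.zeros((n_y, n_x))                           # masked association matrix, Eq. (6)
pvals, pairs = [], []
for i in range(n_y):
    for j in range(n_x):
        if A[i, j]:                                # only a priori documented pairs
            r, p = pearsonr(Y[:, i], X[:, j])
            R[i, j] = r
            pvals.append(p)
            pairs.append((i, j))

# Benjamini-Hochberg adjustment over the tested pairs only; skipping the
# masked pairs is where the gain in statistical power comes from.
reject, qvals, _, _ = multipletests(pvals, method="fdr_bh")
print(f"tested {len(pvals)} of {n_y * n_x} possible pairs")
```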
### Implementation
The anansi framework was written in the R language. The general workflow of this package can be conceptualized as a three-step process:
* Generate a binary adjacency matrix with the appropriate dimensions and features by cross-referencing the feature table(s) with the knowledge database.
* Compute the masked association matrix \(\mathbf{R}\) by multiplying \(\rho(\mathbf{Y},\mathbf{X})\) with \(\mathbf{A}\) (See Equation 6).
* Perform follow-up analyses on the resulting masked association matrix \(\mathbf{R}\), such as differential association analysis.
An overview of the internal architecture can be seen in Figure 1. Statistics are computed using the base R stats package and the lme4 package for linear mixed effects models (R Core Team, 2022; Bates et al., 2015). "Getter" functions are available to parse output files into the wide format as well as the long format, designed to be compatible with the ggplot2 plotting software (Wickham, 2014, 2016). Anansi is parallelizable by using the futures framework (Bengtsson, 2021).
Figure 1: Diagram of the anansi workflow. The anansi workflow relies on three steps, each with their own function and custom S4 object classes. In step \(\mathbf{A}\), input ’omics data sets are collected and compared to a knowledge-based adjacency list, referred to here as a dictionary. The adjacency lists can be based on known interaction, such as in a metabolic network, or based on membership of a shared overarching category such as a gene pathway. Features that are not part of at least one pair are omitted here. An anansiWeb S4 object is returned, which is the input for the main anansi function. In \(\mathbf{B}\), the masked association matrix \(\mathbf{R}\) is computed. Optionally, differential associations are assessed. All results are collected and adjusted p-values are computed, after which an S4 anansiYarn object is returned, which can be parsed into different formats by the functions in \(\mathbf{C}\). Results can be parsed to a long format directly compatible with the ggplot2 plotting software as well as to a publication-ready wide format table.
Results
### Differential association testing
In order to assess differences in associations based on one or more variables (such as phenotype or treatment), we make use of the emergent and disjointed association paradigm introduced in the context of proportionality (Erb et al., 2017; Quinn et al., 2017; Erb, 2020) and apply it outside of the simplex. Briefly, disjointed associations refer to the scenario where the _slope_ of an association is dependent on a variable. On the other hand, emergent associations refer to the scenario where the _strength_ of the association is dependent on a variable. See Figure 2 for an illustrated hypothetical example. Anansi supports arbitrarily complex linear models as well as longitudinal models using formula syntax from the base R stats package and the lme4 package for linear mixed effects models, respectively (R Core Team, 2022; Bates et al., 2015).
Figure 2: An example of differential associations between hypothetical features Y and X. In all cases, phenotype C illustrates the differential association compared to phenotypes A & B. Disjointed associations describe the scenario where there is a detectable association in all cases, but the quality of the association differs. Emergent associations describe the case where an association can be detected in one case but not in another. In scenario **A**, the features Y and X are from different datasets and differential associations can be assessed using classical linear models: \(lm(Y\sim X\times Phenotype)\) and \(lm(residuals(lm(Y\sim X))\sim Phenotype)\) for disjointed and emergent associations, respectively. In scenario **B**, the features are from the same compositional dataset. Differential proportionality can be assessed by applying similar models on log-ratios: \(lm(log(\frac{Y}{X})\sim\underline{Phenotype})\) and \(lm(residuals(lm(log(\frac{Y}{X})\sim 1))\sim\underline{Phenotype})\) for disjointed and emergent proportionality, respectively. In all cases, the \(R^{2}\) and p-value for the underlined part of the equation are considered to estimate differential associations.
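As a language-neutral sketch of these two tests on toy data (ours; the anansi R package expresses them with the formula syntax quoted in the caption of Fig. 2), the disjointed case can be phrased as a nested-model F-test for the interaction term, and the emergent case as a regression of the pooled residuals on the phenotype:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 90
df = pd.DataFrame({
    "X": rng.normal(size=n),
    "Phenotype": np.repeat(["A", "B", "C"], n // 3),
})
# Toy data in which the slope of Y on X is inverted in phenotype C (disjointed).
slope = np.where(df["Phenotype"] == "C", -1.0, 1.0)
df["Y"] = slope * df["X"] + rng.normal(scale=0.3, size=n)

# Disjointed association: does the slope of Y on X depend on Phenotype?
full = smf.ols("Y ~ X * Phenotype", data=df).fit()
reduced = smf.ols("Y ~ X + Phenotype", data=df).fit()
print(anova_lm(reduced, full))            # F-test for the X:Phenotype interaction

# Emergent association: does the residual structure depend on Phenotype?
df["resid"] = smf.ols("Y ~ X", data=df).fit().resid
emergent = smf.ols("resid ~ Phenotype", data=df).fit()
print(emergent.rsquared, emergent.f_pvalue)
```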
### FMT Ageing
An early version of the anansi framework was used in a recent publication, assessing the associations between hippocampal metabolites and microbial functions (Boehme et al., 2021, Extended Data Fig. 7). Briefly, the aim of the study was to investigate whether faecal microbiota transplantation from young donor mice could restore symptoms of ageing in aged recipient mice. As part of the study, the functional metagenome was inferred from 16S rRNA sequencing data and compared to hippocampal metabolite levels. Hippocampal metabolites were first linked to microbial functions that either produce or metabolise these metabolites. Then, association strength was assessed for each of these pairs, both per treatment group and between treatment groups. Strikingly, the slope of the associations between feature pairs, such as lactate vs lactate dehydrogenase, was completely inverted, implying that the relation between these features is dependent on the treatment received (Figure 3). The specific nature of these results enables researchers to formulate follow-up hypotheses. The anansi package contains curated snippets of this dataset for tutorial purposes. Full analysis is available online: [https://github.com/thomazbastiaanssen/anansi/](https://github.com/thomazbastiaanssen/anansi/)
Figure 3: Figure showing the associations between hippocampal metabolite levels and related microbial functions. The X-axis shows Pearson correlation coefficients for the metabolite-function pairs. The red vertical dashed lines depict a Pearson correlation coefficient of 0. Colours depict the treatment groups, whereas grey points represent the correlation coefficients after pooling all three groups. Opaque points with black borders display significantly disjointed associations. Non-opaque points display the correlation coefficient for the associations where the full model fitted sufficiently well after FDR, but where no disjointed associations were detected.
Discussion
### Towards Interpretability
We have argued that there are two main challenges when using multi-omics integration analysis to formulate testable hypotheses, namely 1) interpretation of results and 2) the preservation of statistical power. The anansi framework addresses both of these challenges by constraining the all-vs-all association hypothesis space, only considering pairwise associations that are _a priori_ known to occur. This constraint guarantees that all resulting associations occur in the knowledge database and that no statistical power is wasted by unnecessarily extending FDR adjustment to undocumented -and thus likely uninterpretable- associations.
### Limitations
There are a few limitations to consider when using anansi. First, functional metagenomics data, such as the output from PICRUSt2 (Douglas et al., 2020) and HUMAnN3 (Beghini et al., 2021), is nested in the sense that the abundance of each function is directly dependent on the abundance of the respective taxa that contain those genes, which may lead to violations of independence between features for the purpose of controlling the false discovery rate (FDR) and lead to spurious associations. A move towards metatranscriptomics and/or metaproteomics rather than metagenomics would alleviate the by-taxon dependence between functions.
Second, the accuracy of anansi is highly dependent on the accuracy of the databases used to generate an adjacency matrix. Feature pairs that interact in reality but are not catalogued as such will not be assessed. Interestingly, in a recent study attempting to link the levels of serum metabolites to a variety of factors including the microbiome, many unknown metabolites and xenobiotics were linked to the microbiome (Bar et al., 2020). It stands to reason that, especially when it comes to interkingdom communication, many associations have simply not been mapped. That said, it is unlikely that associations between feature pairs that in reality do interact, yet have not been catalogued as such, will lead to fruitful hypotheses even if they were assessed in the context of an integratomics analysis.
Third, anansi currently only supports a binary adjacency matrix, but in biology, interaction is often on a spectrum. For instance, different ligands bind to their respective receptors at different efficiencies. Future implementations using a knowledge-based adjacency matrix may expand on this principle by allowing for continuous interaction scores.
### Conclusions
While the anansi framework was designed with microbiome and metabolomics data in mind, it could feasibly be applied to any field where one needs to assess interactions between two large data sets in which only some feature pairs meaningfully interact. Example applications include phage-bacterium, immune-metabolite or receptor-ligand interaction analysis.
As 'omics data sets increase in number and complexity, there is a dire need for tools and approaches to process and parse this data in such a way that meaningful and testable hypotheses can be formulated. The microbiome field is in need of methods to investigate causality (Bastiaanssen and Cryan, 2021; Cryan and Mazmanian, 2022) and we view anansi as one of many approaches to move towards this goal.
## 6 Acknowledgements
APC Microbiome Ireland is a research centre funded by Science Foundation Ireland (SFI), through the Irish Governments' national development plan (grant no. 12/RC/2273_P2).
We are grateful for the helpful comments and encouragement of Aonghus Lavelle, Benjamin Valderrama, Frank Snijders, Ionas Erb and Sarah-Jane Leigh. The anansi hexagon sticker was designed by Johanna Snijders (nightllu).
## 7 Declarations
TFSB and TPQ declare no competing interests. JFC has been an invited speaker at conferences organized by Mead Johnson, Ordesa, and Yakult, and has received research funding from Reckitt, Nutricia, Dupont/IFF, and Nestle. This did not influence this manuscript in any way.
## 8 Code Availability
Anansi is open source and freely available under the GPL-3 licence. The code implementing the anansi algorithm as well as a tutorial demonstrating the analysis performed for the FMT Ageing manuscript can be found on GitHub: [https://github.com/thomazbastiaanssen/anansi](https://github.com/thomazbastiaanssen/anansi)
|
2310.05737 | Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation | While Large Language Models (LLMs) are the dominant models for generative
tasks in language, they do not perform as well as diffusion models on image and
video generation. To effectively use LLMs for visual generation, one crucial
component is the visual tokenizer that maps pixel-space inputs to discrete
tokens appropriate for LLM learning. In this paper, we introduce MAGVIT-v2, a
video tokenizer designed to generate concise and expressive tokens for both
videos and images using a common token vocabulary. Equipped with this new
tokenizer, we show that LLMs outperform diffusion models on standard image and
video generation benchmarks including ImageNet and Kinetics. In addition, we
demonstrate that our tokenizer surpasses the previously top-performing video
tokenizer on two more tasks: (1) video compression comparable to the
next-generation video codec (VVC) according to human evaluations, and (2)
learning effective representations for action recognition tasks. | Lijun Yu, José Lezama, Nitesh B. Gundavarapu, Luca Versari, Kihyuk Sohn, David Minnen, Yong Cheng, Vighnesh Birodkar, Agrim Gupta, Xiuye Gu, Alexander G. Hauptmann, Boqing Gong, Ming-Hsuan Yang, Irfan Essa, David A. Ross, Lu Jiang | 2023-10-09T14:10:29Z | http://arxiv.org/abs/2310.05737v3 | # Language Model Beats Diffusion
###### Abstract
While Large Language Models (LLMs) are the dominant models for generative tasks in language, they do not perform as well as diffusion models on image and video generation. To effectively use LLMs for visual generation, one crucial component is the visual tokenizer that maps pixel-space inputs to discrete tokens appropriate for LLM learning. In this paper, we introduce MAGVIT-v2, a video tokenizer designed to generate concise and expressive tokens for both videos and images using a common token vocabulary. Equipped with this new tokenizer, we show that LLMs outperform diffusion models on standard image and video generation benchmarks including ImageNet and Kinetics. In addition, we demonstrate that our tokenizer surpasses the previously top-performing video tokenizer on two more tasks: (1) video compression comparable to the next-generation video codec (VVC) according to human evaluations, and (2) learning effective representations for action recognition tasks.
## 1 Introduction
Large transformer-based language models, commonly referred to as LMs or LLMs, are the de facto models for natural language generation (OpenAI, 2023; Google, 2023). Over time, LMs have expanded their capabilities to generate content in various modalities, asserting their dominance in other domains like audio (Agostinelli et al., 2023), speech (Rubenstein et al., 2023), code generation (Li et al., 2023), medical applications (Singhal et al., 2023) and robotics (Zitkovich et al., 2023).
LMs are capable of generating images and videos. To do so, the image pixels are mapped into a sequence of discrete tokens by a visual tokenizer (_c.f._ Section 2). These tokens are then fed into the LM transformer, as if they were lexical words, for generative modeling. Despite notable advancements in employing LMs for visual generation (Esser et al., 2021; Chang et al., 2022), LMs still do not perform as well as diffusion models (Rombach et al., 2022). For instance, when evaluating on the ImageNet dataset, a gold standard benchmark for image generation, the best language model (Lee et al., 2022) underperforms the diffusion model (Gao et al., 2023) by a substantial 48% margin (FID 3.41 _vs._ 1.79 when generating images at the 256\(\times\)256 resolution).
_Why do language models lag behind diffusion models in visual generation?_ This paper suggests that a primary reason is the lack of a good visual representation, resembling our natural language system, for effectively modeling the visual world. To substantiate this hypothesis, this paper shows that, when utilizing a good visual tokenizer, the masked language model (Devlin et al., 2019; Chang et al., 2022; Yu et al., 2023) surpasses the state-of-the-art diffusion models in terms of both generation fidelity and efficiency across image and video benchmarks, given the same training data, comparable model size, and training budget. To the best of our knowledge, this provides the first evidence that language models beat diffusion models on the hallmark ImageNet benchmark.
It is worth emphasizing that our intention is not to assert whether the language model is superior to others, but to promote the exploration of visual tokenization methods for LLMs. A fundamental difference of LLMs from other models, such as diffusion models, is that LLMs utilize a discrete latent format: tokens obtained from a visual tokenizer. We show that the values of these discrete visual tokens should not be overlooked considering their distinct advantages as follows. **(1) Compatibility with LLMs.** The main advantage of a token representation is that it shares the same form
as language tokens, making it straightforward to leverage the optimizations our community has developed over many years for LLMs. This includes faster training and inference speeds (Shazeer, 2019; Lester et al., 2021), advancements in model infrastructure (Dao et al., 2022; Du et al., 2022), learning recipes for model scaling (Brown et al., 2020; Chowdhery et al., 2022), and GPU/TPU optimization, among other innovations. Unifying vision and language by the same token space could set the stage for a true multimodal LLM that can understand, generate, and reason within our visual environment. **(2) Compressed representation.** The discrete token may offer a fresh perspective on video compression. The visual tokens can serve as a new video compression format to reduce disk storage and bandwidth during internet transfers. Unlike compressed RGB pixels, these tokens can be fed directly into generative models, bypassing the conventional decompression and latent encoding steps. This allows for faster processing in generative video applications, especially beneficial in edge computing cases. **(3) Visual understanding benefits**. Prior research has shown that the discrete tokens are valuable as a pre-training target in self-supervised representation learning, as discussed in BEiT (Bao et al., 2021) and BEVT (Wang et al., 2022). Additionally, research finds that using tokens as the model inputs improves the robustness and generalization (Mao et al., 2021).
In this paper, we introduce MAGVIT-v2, a video tokenizer designed to map videos (and images) into compact discrete tokens. Our model is built on the state-of-the-art video tokenizer, MAGVIT (Yu et al., 2023a), within the VQ-VAE framework (Van Den Oord et al., 2017). We propose two new techniques. First, a novel lookup-free quantization method enables the learning of a large vocabulary that is able to improve generation quality of the language model. Second, through extensive empirical analyses, we have identified modifications to the tokenizer that not only enhance generation quality but also enable the tokenization of both images and videos using a shared vocabulary.
We empirically demonstrate that our model outperforms the previously top-performing video tokenizer, MAGVIT, in three key areas. First, our model significantly improves the generation quality of MAGVIT, establishing the state of the art on the common image and video benchmarks. Second, user studies indicate that its compression quality exceeds that of MAGVIT and the current video compression standard, HEVC (Sullivan et al., 2012). Moreover, it is on par with the next-generation video codec, VVC (Bross et al., 2021). Finally, we show that, compared to MAGVIT, our new tokens are stronger for video understanding tasks across two setups and three datasets. The main contributions of this work are:
* A new video tokenizer that outperforms the previously best-performing video tokenizer in three areas: visual generation, video compression, and action recognition.
* A novel lookup-free quantization approach that enables improving the visual generation quality of language models by learning a large vocabulary.
* To the best of our knowledge, the first evidence suggesting that a language model can outperform diffusion models on ImageNet when provided with the same training data, an equivalent model size, and a similar training budget.
* A video compressor with better quality than HEVC and VVC, at similar bit rates, according to user studies. To our knowledge, this is the first successful attempt of a visual tokenizer designed for video generation to achieve comparable results to standard codecs.
## 2 Background
Language Model (LM) for visual generation.LMs have been extended to generate images and videos. A visual tokenizer \(f\) is used to first map visual inputs into a sequence of discrete tokens. A video \(\mathbf{V}\in\mathbb{R}^{T\times H\times W\times 3}\) (or image when \(T=1\)) is tokenized into a discrete representation \(\mathbf{X}=f(\mathbf{V})\in\{1,2,\cdots,K\}^{T^{\prime}\times H^{\prime}\times W ^{\prime}}\), where \(K\) is the codebook (vocabulary) size of the visual tokenizer. \(\mathbf{X}\) is flattened into a 1D token sequence obtained using raster scan ordering and then fed into an LM transformer for generative modeling.
Two types of LMs are commonly used for visual generation. The _Autoregressive LM (AR-LM)_ includes ImageGPT (Chen et al., 2020), DALL-E (Ramesh et al., 2021), Parti (Yu et al., 2022b), _etc_. An AR-LM predicts the next token given the previous tokens along with additional conditioning information \(\mathbf{c}\) using a categorical distribution for \(p_{\theta}(\mathbf{x}_{i}\mid\mathbf{x}_{<i};\mathbf{c})\). During inference, AR-LMs use the standard autoregressive decoding over the tokens. Finally, the tokens are converted back to pixels by a decoder associated with the visual tokenizer.
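To make the interface between tokenizer and LM concrete, the short sketch below (illustrative only, not code from any of the cited works) flattens a token grid in raster-scan order and accumulates the autoregressive log-likelihood \(\log p(\mathbf{x})=\sum_{i}\log p_{\theta}(\mathbf{x}_{i}\mid\mathbf{x}_{<i};\mathbf{c})\); `dummy_lm_logits` is a placeholder for a transformer forward pass, and the vocabulary size is an arbitrary toy value.

```python
import numpy as np

K = 1024                                             # assumed toy vocabulary size
tokens = np.random.randint(0, K, size=(4, 16, 16))   # hypothetical (T', H', W') token grid

sequence = tokens.reshape(-1)                        # raster-scan flattening into a 1D sequence

def dummy_lm_logits(prefix, K):
    """Placeholder for the AR transformer; returns uniform logits over the vocabulary."""
    return np.zeros(K)

log_p = 0.0
for i, tok in enumerate(sequence):                   # chain rule: p(x) = prod_i p(x_i | x_<i)
    logits = dummy_lm_logits(sequence[:i], K)
    log_p += logits[tok] - np.log(np.exp(logits).sum())
```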
The _Masked LM (MLM)_ is another type of language model for visual generation, such as: MaskGIT (Chang et al., 2022), MAGVIT (Yu et al., 2023a), Phenaki (Villegas et al., 2022), and MUSE (Chang et al., 2023), among others. An MLM is trained using a masked token objective (Devlin et al., 2019), where some tokens in the sequence are randomly masked and need to be predicted given the observed tokens. Let \(\mathbf{m}\in\{0,1\}^{n}\) be a random binary sequence where \(\mathbf{m}^{\top}\mathbf{1}\in[0,n-1]\). The MLM learns \(p_{\theta}(\mathbf{x}_{i}\mid\{\mathbf{x}_{j}:\mathbf{m}_{j}=1,\forall j\};\mathbf{c})\) for all \(i\) where \(\mathbf{m}_{i}=0\). To generate a video or image during inference, the MLM uses the non-autoregressive decoding algorithms for images and videos (Chang et al., 2022; Yu et al., 2023). The decoding starts with a fully masked sequence, which is iteratively filled by repeating two steps: (1) sample the whole sequence \(\hat{\mathbf{x}}^{(t)}\) from \(p_{\theta}\) given the non-masked tokens from the previous step, (2) re-mask the \(\left\lfloor\lambda(t)\cdot n\right\rfloor\) tokens in \(\hat{\mathbf{x}}^{(t)}\) with the lowest probability, following a decreasing masking ratio schedule \(\lambda(t)\), according to timestamp \(t\).
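The two alternating steps can be sketched as a simple loop (an illustrative MaskGIT-style implementation, not the exact MAGVIT code; the cosine schedule and the `predict_fn` stand-in are assumptions of this sketch):

```python
import numpy as np

def masked_decode(predict_fn, n, K, steps=12, seed=0):
    """Schematic MaskGIT-style non-autoregressive decoding loop (illustrative only).
    predict_fn(tokens, known) must return an (n, K) array of per-position probabilities."""
    rng = np.random.default_rng(seed)
    tokens = np.zeros(n, dtype=int)
    known = np.zeros(n, dtype=bool)                  # start from a fully masked sequence
    for t in range(1, steps + 1):
        probs = predict_fn(tokens, known)            # step (1): predict every position
        sampled = np.array([rng.choice(K, p=p) for p in probs])
        tokens = np.where(known, tokens, sampled)    # previously kept tokens stay fixed
        conf = probs[np.arange(n), tokens]
        conf[known] = np.inf                         # never re-mask tokens that are already kept
        n_mask = int(np.floor(np.cos(np.pi * t / (2 * steps)) * n))  # decreasing schedule lambda(t)
        known[:] = True
        if n_mask > 0:
            known[np.argsort(conf)[:n_mask]] = False # step (2): re-mask lowest-confidence tokens
    return tokens
```

With a trained MLM in place of `predict_fn`, this loop reproduces the two alternating steps described above.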
**Denoising Diffusion Models (DDM).** DDMs (Sohl-Dickstein et al., 2015; Song and Ermon, 2019) are regarded as the state-of-the-art in visual generation due to their high-quality image (Dhariwal and Nichol, 2021; Ho et al., 2022) and video generation (Ho et al., 2022). For instance, DDPM (Ho et al., 2020) learns a denoising process parameterized as conditional Gaussian distributions over image pixels. Recently, diffusion models and language models have displayed a significant overlap. Recent DDMs diffuse over latents rather than raw pixels. These latents are obtained using models similar to the visual tokenizer used by LMs. In fact, the very first latent in diffusion, proposed by Rombach et al. (2022), is derived from a visual tokenizer. Additionally, the diffusion model's architecture has been shifting from the U-Net to the transformer architecture (Peebles and Xie, 2022). Consequently, the boundaries between diffusion and language models in visual generation have become less distinct. Yet, a fundamental difference between DDMs and LMs lies in the latent format, _i.e._, continuous _vs._ discrete. We have discussed the benefits of having discrete tokens in Section 1 and will show that the proposed tokenizer improves in these aspects.
**Visual tokenization.** Visual tokenization plays an essential role in mapping pixels into a discrete representation suitable for generative modeling. VQ-VAE (Van Den Oord et al., 2017) is a cornerstone work in image tokenization. A VQ-VAE model consists of a convolutional neural network (CNN) encoder, a vector-quantization (VQ) bottleneck, and a CNN decoder. Given a video \(\mathbf{V}\in\mathbb{R}^{T\times H\times W\times 3}\), the VQ-VAE's encoder \(E\) produces latent embeddings \(\mathbf{Z}=E(\mathbf{V})\in\mathbb{R}^{T^{\prime}\times H^{\prime}\times W^{ \prime}\times d}\). Each embedding vector \(\mathbf{z}\in\mathbb{R}^{d}\) in \(\mathbf{Z}\) is then passed through the vector quantizer \(q\), which assigns it to the closest entry \(\mathbf{c}\in\mathbb{R}^{d}\) in the learned codebook embedding \(\mathbf{C}\in\mathbb{R}^{K\times d}\):
\[q(\mathbf{z})=\mathbf{c}_{i},\text{ where }i=\operatorname*{arg\,min}_{j\in\{1,2, \cdots,K\}}\|\mathbf{z}-\mathbf{c}_{j}\|_{2}. \tag{1}\]
To get discrete tokens, we drop the embedding dimension and represent \(\mathbf{Z}\) by its indices \(\mathbf{X}\in\{1,2,\cdots,K\}^{T^{\prime}\times H^{\prime}\times W^{\prime}}\). For decoding, embeddings of all image tokens are given as input to the decoder \(D\) to reconstruct the input \(\hat{\mathbf{V}}=D(\mathbf{Z})\). Following VQ-VAE, VQGAN (Esser et al., 2021) introduces an adversarial loss and feature-level perceptual losses to enhance the image quality.
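For concreteness, Eq. (1) amounts to a nearest-neighbour assignment in the codebook, as in the following sketch (illustrative only; the shapes and the toy codebook are assumptions, not values from any cited model):

```python
import numpy as np

def vq_quantize(z, codebook):
    """Nearest-codebook-entry assignment as in Eq. (1).
    z: (..., d) encoder outputs; codebook: (K, d). Returns quantized vectors and integer tokens."""
    flat = z.reshape(-1, z.shape[-1])                               # (N, d)
    d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # squared distance to every entry
    tokens = d2.argmin(axis=1)                                      # token indices in {0, ..., K-1}
    quantized = codebook[tokens].reshape(z.shape)                   # embeddings passed to the decoder
    return quantized, tokens.reshape(z.shape[:-1])

# Toy usage with an assumed latent grid of shape (T', H', W', d) = (2, 4, 4, 8) and K = 1024
rng = np.random.default_rng(0)
zq, X = vq_quantize(rng.normal(size=(2, 4, 4, 8)), rng.normal(size=(1024, 8)))
```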
Video tokenization is more challenging and VQGAN has been adapted to meet this purpose (Ge et al., 2022; Villegas et al., 2022; Yu et al., 2023). The state of the art in video tokenization is MAGVIT (Yu et al., 2023), which introduces a better 3D architecture, an inflation technique for initialization using image pre-training, and robust training losses. With MAGVIT, the LMs achieve leading generation quality across multiple video benchmarks. However, MAGVIT struggles to tokenize images and often results in noticeable flickering in longer videos.
## 3 Method
We introduce a new **video tokenizer** designed to map the spatial-temporal dynamics from a visual scene into compact discrete tokens suitable for language models. Our approach builds upon the state-of-the-art video tokenizer, MAGVIT, as detailed in Yu et al. (2023). This section highlights two new designs: a lookup-free quantizer and a collection of enhancements to the tokenizer model.
### Lookup-Free Quantizer
Although the community has made great progress in developing VQ-VAEs, the relationship between improvements in the reconstruction quality and subsequent generation quality is still not well understood. A common misconception is that improving reconstruction equates to improving the generation of the language model. For example, enlarging the vocabulary can improve reconstruction quality. However, such improvement only extends to generation when the vocabulary size is small, and a very large vocabulary can actually hurt the performance of the language model.
As illustrated by the dashed curves in Fig. 1, the reconstruction FID, indicated by the right \(y\)-axis (where a lower value is better), improves as the vocabulary size (the \(x\)-axis) increases. The orange
solid curve in Fig. 1 represents the LM's generation quality (the left \(y\)-axis). The generation FID initially improves but deteriorates for larger vocabulary. This may shed light on why the vocabulary size of most language models for visual generation is around 1-8k (Esser et al., 2021; Villegas et al., 2022), which is significantly smaller than the size of natural language vocabulary, over 200k.
A simple trick for training a larger codebook involves decreasing the code embedding dimension when increasing the vocabulary size (Yu et al., 2022). This trick captures the intuition of limiting the representational capacity of individual tokens, which in turn facilitates learning over the distribution of a large vocabulary.
**Lookup-Free Quantization (LFQ).** Motivated by the above observation, we reduce the VQ-VAE codebook's embedding dimension to zero. Formally, the codebook \(\mathbf{C}\in\mathbb{R}^{K\times d}\) is replaced with an integer set \(\mathbb{C}\) where \(\left\lvert\mathbb{C}\right\rvert=K\). Recall that in VQ-VAE models, the quantizer must look up all \(K\)\(d\)-dimensional embeddings in the codebook, where \(d\) is typically \(256\), when computing the closest codebook entry to the encoder output. This new design eliminates the need for such embedding lookup entirely hence we call it _lookup-free quantization (LFQ)_. We found that LFQ can grow the vocabulary size in a way benefiting the generation quality of language models. As shown by the blue curves in Fig. 1, both reconstruction and generation consistently improves as the vocabulary size increases - a property not observed in current VQ-VAE methods.
While various LFQ methods are available, this paper discusses a straightforward variant that assumes independent codebook dimensions and binary latents. Specifically, the latent space of LFQ is decomposed as the Cartesian product of single-dimensional variables, as \(\mathbb{C}=\times_{i=1}^{\log_{2}K}C_{i}\). Given a feature vector \(\mathbf{z}\in\mathbb{R}^{\log_{2}K}\), each dimension of the quantized representation \(q(\mathbf{z})\) is obtained from:
\[q(\mathbf{z}_{i})=C_{i,j},\text{ where }j=\arg\min_{k}\left\lvert\mathbf{z}_{ i}-C_{i,k}\right\rvert, \tag{2}\]
where \(C_{i,j}\) is the \(j\)-th value in \(C_{i}\). With \(C_{i}=\left\{-1,1\right\}\), the \(\arg\min\) can be computed by the sign function as
\[q(\mathbf{z}_{i})=\operatorname{sign}(\mathbf{z}_{i})=-\mathbb{1}\left\{ \mathbf{z}_{i}\leq 0\right\}+\mathbb{1}\left\{\mathbf{z}_{i}>0\right\}. \tag{3}\]
With LFQ, the token index for \(q(\mathbf{z})\) is given by:
\[Index(\mathbf{z})=\sum_{i=1}^{\log_{2}K}\arg\min_{k}\left\lvert\mathbf{z}_{i}- C_{i,k}\right\rvert\prod_{b=0}^{i-1}\left\lvert C_{b}\right\rvert=\sum_{i=1}^{ \log_{2}K}2^{i-1}\mathbb{1}\left\{\mathbf{z}_{i}>0\right\}, \tag{4}\]
where \(\left\lvert C_{0}\right\rvert=1\) sets the virtual basis.
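Eqs. (3)-(4) reduce quantization to a sign operation followed by a binary-to-integer conversion, as the following sketch illustrates (a minimal reimplementation of the stated formulas, not the authors' released code):

```python
import numpy as np

def lfq_quantize(z):
    """Lookup-free quantization with independent binary dimensions C_i = {-1, 1} (Eqs. 3-4).
    z: (..., log2 K) real-valued features; returns quantized codes and integer token indices."""
    codes = np.where(z > 0, 1, -1)                       # q(z_i) = sign(z_i), no codebook lookup
    bits = (z > 0).astype(np.int64)                      # indicator 1{z_i > 0}
    weights = 2 ** np.arange(z.shape[-1], dtype=np.int64)
    index = (bits * weights).sum(axis=-1)                # Index(z) = sum_i 2^(i-1) * 1{z_i > 0}
    return codes, index

# With log2 K = 18 dimensions, token indices range over a vocabulary of K = 2^18 = 262,144
codes, tokens = lfq_quantize(np.random.default_rng(0).normal(size=(4, 18)))
```

Note that no \(K\times d\) embedding table is stored or searched; the binary code itself plays the role of the embedding.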
We add an entropy penalty during training to encourage codebook utilization:
\[\mathcal{L}_{entropy}=\mathbb{E}[H(q(\mathbf{z}))]-H[\mathbb{E}(q(\mathbf{z}))]. \tag{5}\]
This penalty is inspired by a similar loss used in image VQGAN model (Chang et al., 2022), which is also found in entropy-based clustering (Jansen et al., 2020). In LFQ, given the independence among dimensions, we rewrite \(H(q(\mathbf{z}))=\sum_{i=1}^{\log_{2}K}H(q(\mathbf{z}_{i}))\). The \(H[\mathbb{E}(q(\mathbf{z}))]\) term can be approximated with sub-groups of dimensions for \(K>2^{18}\) where direct estimation is memory bound.
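A possible differentiable estimate of Eq. (5) for the binary case is sketched below; the sigmoid relaxation and the temperature `tau` are assumptions of this sketch rather than details taken from the paper:

```python
import numpy as np

def lfq_entropy_penalty(z, tau=1.0, eps=1e-9):
    """Illustrative estimate of Eq. (5) for binary LFQ dimensions (a relaxation, not the paper's exact loss)."""
    p = 1.0 / (1.0 + np.exp(-2.0 * z / tau))            # soft P(q(z_i) = +1), shape (batch, log2 K)
    H = lambda q: -(q * np.log(q + eps) + (1 - q) * np.log(1 - q + eps))
    per_sample = H(p).sum(axis=-1).mean()               # E[H(q(z))]: low when assignments are confident
    marginal = H(p.mean(axis=0)).sum()                  # H[E(q(z))]: high when code usage is uniform
    return per_sample - marginal

penalty = lfq_entropy_penalty(np.random.default_rng(0).normal(size=(32, 18)))
```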
We note that there are various other variants of LFQ, _e.g._, opting for the multivariant over the binary codebook \(C_{i}\) or employing other quantization techniques such as Agustsson et al. (2019). As the first paper to introduce this concept, we focus on the simplest form with independent binary dimensions, which shows promising improvements. Other LFQ methods merit further research.
### Visual Tokenizer Model Improvement
Joint image-video tokenization.A desirable feature of visual tokenization is the capability to tokenize images and videos using a shared codebook. However, the MAGVIT tokenizer, which utilizes the 3D CNN, faces challenges in tokenizing images due to the temporal receptive field.
Figure 1: **Reconstruction and generation quality curves** in FID on ImageNet when scaling the tokenizer’s vocabulary size with Vector Quantization (VQ) and Lookup-Free Quantization (LFQ). Comparison is done at 128\(\times\)128 resolution using an MLM with 306-372M parameters.
To build a joint image-video tokenizer, a new design is needed. We begin our discussion by revisiting an existing method, C-ViViT (Villegas et al., 2022). As depicted in Fig. 2(a), C-ViViT employs full spatial transformer blocks combined with causal temporal transformer blocks. This approach performs reasonably well but has two drawbacks. First, unlike CNNs, the positional embeddings make it difficult to tokenize spatial resolutions that were not seen during training. Second, empirically we found that 3D CNNs perform better than spatial transformers and produce tokens with better spatial causality for the corresponding patch.
To tackle these drawbacks, we explore two plausible designs. Fig. 2(b) combines C-ViViT and MAGVIT. Assuming a temporal compression ratio of 4, a 3D CNN processes blocks of 4 frames followed by a causal transformer. In Fig. 2(c), we use a temporally causal 3D convolution to replace the regular 3D CNN. Specifically, the temporal padding scheme for a regular 3D convolution layer with kernel size \((k_{t},k_{h},k_{w})\) includes \(\lfloor\frac{k_{t}-1}{2}\rfloor\) frames before and \(\lfloor\frac{k_{t}}{2}\rfloor\) frames after the input frames. In contrast, a causal 3D convolution layer pads with \(k_{t}-1\) frames before the input and nothing after, so that the output for each frame depends only on the previous frames. As a consequence, the first frame is always independent of other frames, allowing the model to tokenize single images.
Temporal convolutional subsampling with stride \(s\) is sufficient for \(s\times\) down-sampling by mapping \(1+s\times t\) frames into \(1+t\). After a regular \(s\times\) up-sampling, we drop the first \(s-1\) resulting frames, which maps \(1+t\) frames into \(1+s\times t\) and allows for the tokenization of a single image. Tab. 4(a) empirically compares the designs in Fig. 2, and we find that the causal 3D CNN performs the best.
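The causal padding rule can be written down directly; the following PyTorch-style sketch (an illustration under the stated padding scheme, not the MAGVIT-v2 implementation) pads \(k_{t}-1\) frames before the input and none after, so that a temporal stride \(s=2\) maps \(1+s\,t\) frames to \(1+t\):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv3d(nn.Module):
    """Temporally causal 3D convolution sketch: pad k_t - 1 frames before the input and
    none after, so each output frame depends only on current and past frames."""
    def __init__(self, c_in, c_out, kernel=(3, 3, 3), stride=(1, 1, 1)):
        super().__init__()
        self.kt, kh, kw = kernel
        # "same" padding in space; the causal temporal padding is applied in forward()
        self.conv = nn.Conv3d(c_in, c_out, kernel, stride=stride, padding=(0, kh // 2, kw // 2))
    def forward(self, x):                                 # x: (N, C, T, H, W)
        x = F.pad(x, (0, 0, 0, 0, self.kt - 1, 0))        # (W_l, W_r, H_l, H_r, T_before, T_after)
        return self.conv(x)

# With temporal stride s = 2 and k_t = 3, 1 + s*t input frames map to 1 + t output frames
x = torch.randn(1, 3, 9, 16, 16)                          # 9 = 1 + 2*4 frames
y = CausalConv3d(3, 8, stride=(2, 1, 1))(x)
print(y.shape)                                            # torch.Size([1, 8, 5, 16, 16]); 5 = 1 + 4
```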
**Architecture modifications.** In addition to using causal 3D CNN layers, we made several other architectural modifications to improve upon the MAGVIT model. First, we change the encoder downsamplers from average pooling into strided convolutions to leverage learned kernels, and replace the decoder upsamplers from nearest resizing followed by convolution with a depth-to-space operator. Second, we defer the temporal downsampling from the first few encoder blocks to the last ones. In addition, the downsampling layer in the discriminator now utilizes 3D blur pooling (Zhang, 2019) to encourage shift invariance. Finally, we add one adaptive group normalization layer before the residual blocks at each resolution in the decoder to pass in the quantized latents as the control signal following StyleGAN (Karras et al., 2019). Tabs. 4(b) and 4(c) empirically verify these designs.
**Token factorization for efficient prediction.** The output tokens can be fed into language models to generate videos. To assist smaller transformers in predicting over a large vocabulary, we can factorize the LFQ token's latent space into equal subspaces. For instance, rather than predicting using a codebook of size \(2^{18}\), we can predict in two concatenated codebooks, each of size \(2^{9}\). We embed each subspace token separately and use their embedding summation as the token embedding for the transformer input. For the output layer with weight tying (Press and Wolf, 2017), we use the embedding matrix for each subspace to obtain logits with separate prediction heads.
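The factorization can be sketched as follows (illustrative PyTorch code; the class name, default dimensions, and the modulo-based split into sub-tokens are assumptions of this sketch):

```python
import torch
import torch.nn as nn

class FactorizedTokenHead(nn.Module):
    """Sketch of token factorization: an 18-bit index is split into two 9-bit sub-tokens,
    sub-token embeddings are summed at the input, and tied embedding matrices give
    one prediction head per subspace."""
    def __init__(self, bits=18, groups=2, dim=512):
        super().__init__()
        self.sub_vocab = 2 ** (bits // groups)                       # e.g. 2^9 = 512 per subspace
        self.embeds = nn.ModuleList([nn.Embedding(self.sub_vocab, dim) for _ in range(groups)])
    def embed(self, index):                                          # index: (B, L) in [0, 2^bits)
        out = 0
        for e in self.embeds:                                        # split into sub-tokens, sum embeddings
            out = out + e(index % self.sub_vocab)
            index = index // self.sub_vocab
        return out
    def logits(self, h):                                             # h: (B, L, dim) transformer outputs
        return [h @ e.weight.t() for e in self.embeds]               # weight-tied heads, each (B, L, sub_vocab)

head = FactorizedTokenHead()
inputs = head.embed(torch.randint(0, 2 ** 18, (2, 16)))              # summed sub-token embeddings
logits_lo, logits_hi = head.logits(torch.randn(2, 16, 512))
```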
## 4 Experiments
This section empirically verifies the proposed tokenizer across three distinct tasks: video and image generation, video compression, and action recognition. Fig. 3 visually compares the reconstruction quality of our tokenizer with prior works. More qualitative samples are shown at [https://magvit.cs.cmu.edu/v2](https://magvit.cs.cmu.edu/v2).
Figure 2: **Causal tokenizer architecture comparison**. The decoders, which are omitted from the figure, employ an architecture that is symmetric to the encoder.
### Experimental Setups
Datasets.We use Kinetics-600 (K600) (Carreira et al., 2018) and UCF-101 (Soomro et al., 2012) for video generation experiments, along with ImageNet (Deng et al., 2009) for image generation. In addition, MCL-JCV (Wang et al., 2016) is used as the testbed for video compression, with Kinetics-400 (K400) (Kay et al., 2017) and SSv2 (Goyal et al., 2017) for video understanding.
Implementation details.We follow the tokenizer training setting and hyperparameters in (Yu et al., 2023a), unless stated otherwise. LFQ, which eliminates the codebook embedding, is used to increase the default codebook size to \(K=2^{18}\). The weight of \(\mathcal{L}_{entropy}\) follows an annealing schedule with a 3\(\times\) higher starting point that linearly decays to a fixed value of \(0.1\) within 2k steps. We defer details regarding the evaluation setup of each subsection to the Appendix.
### Visual Generation
The masked language model (MLM) (Devlin et al., 2019) is used in image and video generation. To verify the tokenizer, we employ the same MLM transformers in MAGVIT (Yu et al., 2023). As we use a smaller MLM (\(\sim\)300M parameters) with a large codebook (\(2^{18}\approx\)262K), the token factorization as discussed in Section 3.2 is applied using two heads with each predicting from a codebook of size \(2^{9}\).
Video generation.We consider two standard video benchmarks, UCF-101 for class-conditional generation and K600 for frame prediction with 5-frame condition. FVD (Unterthiner et al., 2018) is used as our primary evaluation metric. Tab. 1 shows that our model surpasses all prior arts in both benchmarks. Specifically, it outperforms the previous best model MAGVIT by a large margin, while using the same MLM transformer backbone. These results demonstrate the essential role of a good visual tokenizer in enabling LMs to generate high-quality videos. Fig. 4 shows qualitative samples from the model.
Image generation on ImageNet.We evaluate MAGVIT-v2 on image generation under the standard ImageNet class-conditional setting. We present results for resolution 512\(\times\)512 in Tab. 2 and
Figure 3: **Image reconstruction samples with different tokenizers. We compare the VQGAN used in MaskGIT (Chang et al., 2022) with two of our models trained on ImageNet and web images (Chen et al., 2022). Original images are by Eric TERRADE and Barth Bailey on Unsplash.**
refer to the Appendix for 256\(\times\)256 results. FID (Heusel et al., 2017) and Inception Score (IS) (Salimans et al., 2016) are used as evaluation metrics. Our model surpasses the best performing diffusion models both in sampling quality (w.r.t. FID and IS), and inference-time efficiency (w.r.t. sampling steps).
It is worth noting that all the models compared are trained using the same ImageNet training data, with a comparable model size and training budget. Therefore, the performance primarily evaluates the model's capabilities. The masked language model, equipped with our tokenizer, exhibits a notable improvement in FID over the best diffusion model baseline at 512\(\times\)512 (FID=1.91 _vs._ 2.65, 28% \(\downarrow\)). While this margin narrows at 256\(\times\)256 resolution, the MLM uses a model half the size and needs far fewer decoding steps (_e.g._, 64 _vs._ 250) to reach this image generation quality. Qualitative samples in comparison with other models are shown in Fig. 5.
### Video Compression
We conduct a subjective rater study to assess the compression quality of MAGVIT-v2. The study is conducted on the 30 videos of the MCL-JCV dataset, resized to a resolution of 640\(\times\)360. Sixteen raters are engaged, each providing responses to an average of roughly 800 pairwise-preference questions.
We calculate Elo scores (Elo and Sloan, 2008) based on pairwise preferences to quantify the relative visual quality between the models. The study compares our model with MAGVIT as well as the current video compression standard HEVC (H.265) video codec (Sullivan et al.,
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Type & Method & K600 FVD\(\downarrow\) & UCF FVD\(\downarrow\) & \#Params & \#Steps \\ \hline GAN & TrIVD-GAN-FP (Luc et al., 2020) & 25.7\(\pm\)0.7 & & 1 \\ Diffusion & Video Diffusion (Ho et al., 2022c) & 16.2\(\pm\)0.3 & & 1.1B & 256 \\ Diffusion & RIN (Jabri et al., 2023) & 10.8 & & 411M & 1000 \\ \(\text{AR-LM}+\text{VQ}\) & TATS (Ge et al., 2022) & \multirow{2}{*}{33\(\pm\)18} & 321M & 1024 \\ MLM + VQ & Phenaki (Villegas et al., 2022) & & 227M & 48 \\ MLM + VQ & MAGVIT (Yu et al., 2023a) & 9.9\(\pm\)0.3 & 76\(\pm\)2 & 306M & 12 \\ \hline MLM + LFQ & _MAGVIT-v2 (this paper)_ & 5.2\(\pm\)0.2 & & & 12 \\ & **4.3\(\pm\)**0.1 & **58\(\pm\)**3 & 307M & 24 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Video generation results**: frame prediction on Kinetics-600 and class-conditional generation on UCF-101. We adopt the evaluation protocol of MAGVIT.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline Type & Method & \multicolumn{2}{c}{w/o guidance} & \multicolumn{2}{c}{w/ guidance} & \multicolumn{2}{c}{\#Params & \#Steps \\ & FID\(\downarrow\) & IS\(\uparrow\) & FID\(\downarrow\) & IS\(\uparrow\) & \#Params & \#Steps \\ \hline GAN & StyleGAN-XL (Sauer et al., 2022) & & 2.41 & 267.8 & 168M & 1 \\ Diff. + VAE* & DiT-XL/2 (Peebles and Xie, 2022) & 12.03 & 105.3 & 3.04 & 240.8 & 675M & 250 \\ Diffusion & ADM+Upsample (Dhariwal and Nichol, 2021) & 9.96 & 121.8 & 3.85 & 221.7 & 731M & 2000 \\ Diffusion & RIN (Jabri et al., 2023) & 3.95 & 216.0 & & & 320M & 1000 \\ Diffusion & simple diffusion (Hoogeboom et al., 2023) & 3.54 & 205.3 & 3.02 & 248.7 & 2B & 512 \\ Diffusion & VDM++ (Kingma and Gao, 2023) & 2.99 & 232.2 & 2.65 & 278.1 & 2B & 512 \\ \(\text{MLM}+\text{VQ}\) & MaskGIT (Chang et al., 2022) & 7.32 & 156.0 & & & 227M & 12 \\ MLM + VQ & DPC+Upsample (Lezama et al., 2023) & 3.62 & 249.4 & & 619M & 72 \\ \hline MLM + LFQ & _MAGVIT-v2 (this paper)_ & 4.61 & 192.4 & & & 307M & 12 \\ & & 3.07 & 213.1 & **1.91** & **324.3** & & 64 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Image generation results**: class-conditional generation on ImageNet 512\(\times\)512. Guidance indicates the classifier-free diffusion guidance (Ho and Salimans, 2021). * indicates usage of extra training data. We adopt the evaluation protocol and implementation of ADM.
Figure 6: **Video compression rater study**.
2012) and the next-generation codec VVC (H.266) (Bross et al., 2021). As shown in Fig. 6, raters prefer our model to the compared methods at multiple bit rates.
We also compare the compression quality using common distortion metrics (LPIPS, PSNR, and MS-SSIM) at 0.0384 bpp, the bit rate of MAGVIT. The results in Tab. 3 show that our model outperforms MAGVIT on all metrics, and it outperforms all methods on LPIPS, a metric which correlates more closely with subjective quality assessments than PSNR or MS-SSIM.
### Video Understanding
In this subsection, we assess the tokenizer's capability to learn a video understanding model for action recognition. Two setups are examined: (1) using tokens as prediction targets for the transformer's output, and (2) using tokens as the input to the transformer. For the former setup, we use a similar architecture following the BEVT (Wang et al., 2022) pre-training. For the tokens as inputs, to work with the ViViT backbone (Arnab et al., 2021), we detokenize the tokens to pixels before feeding them to the ViViT transformers.
Tab. 4 shows that MAGVIT-v2 outperforms the previous best MAGVIT in these evaluations. Specifically, when using the decoded tokens as input, the performance approaches that of the model trained with ground-truth pixels using the same ViViT backbone. While these numbers are still worse than the state-of-the-art in action recognition, they represent solid improvements credited to the new tokenizer.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Token as transformer’s: & Output & \multicolumn{3}{c}{Input} \\ Tokenizer & SSv2 & SSv2 & K400 & K600 \\ \hline
3D VQ-VAE & 64.13 & 41.27 & 44.44 & 45.67 \\ MAGVIT (Yu et al., 2023a) & 67.22 & 57.34 & 72.29 & 74.65 \\ _MAGVIT-v2 (this paper)_ & **67.38** & **62.40** & **75.34** & **77.93** \\ Raw pixel & n/a & 63.08 & 76.13 & 78.92 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Video action recognition performance** (classification accuracy\(\uparrow\times\)100).
Figure 4: **Frame prediction samples on Kinetics-600.**
Figure 5: **Class-conditional generation samples on ImageNet 512\(\times\)512.** We compare with each of the previous works with a random sample from the same image class.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & MS-SSIM\(\uparrow\) \\ \hline HEVC (Sullivan et al., 2012) & 0.199 & 30.10 & 0.943 \\ VVC (Bross et al., 2021) & 0.153 & **32.65** & **0.966** \\ \hline MAGVIT (Yu et al., 2023a) & 0.144 & 23.70 & 0.846 \\ _MAGVIT-v2 (this paper)_ & **0.104** & 26.18 & 0.894 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Video compression metrics.**
### Ablation Study
In Fig. 1, we have ablated LFQ _vs_. VQ and the vocabulary size. In Tab. 5, we validate the key designs proposed in Section 3.2. Specifically, Tab. 4(a) compares the architecture illustrated in Fig. 2; Tab. 4(b) and Tab. 4(c) verify the LFQ and other improvements on ImageNet and UCF-101, respectively.
## 5 Related Work
Visual tokenization.Beyond the VQ-VAE models discussed in Section 2, additional models have been proposed. ViT-VQGAN (Yu et al., 2022) introduces transformer blocks as a substitute for CNNs for image tokenization. C-ViViT (Villegas et al., 2022) further extends this idea for video tokenization. Early studies on video tokenization treat frames as independent images with no temporal compression (Wu et al., 2022; Gupta et al., 2022). Later research (Yan et al., 2021; Ge et al., 2022; Yu et al., 2023) integrates 3D CNNs to tokenize spatial-temporal volumes. Despite these advances in vector quantization (VQ), the codebook learned by previous VQ models is relatively small (_e.g._, 8k) due to the difficulty in improving the generation quality with larger vocabularies. In contrast, our tokenizer can induce a large vocabulary (_e.g._, \(262\)k) that can be effectively modeled by an LM, leading to enhanced image and video generation quality.
Text-to-{image, video}.Text-to-image and text-to-video generation has seen significant and rapid advances using both language models (Yu et al., 2023; Chang et al., 2023) and diffusion models (Ho et al., 2022; Blattmann et al., 2023; Singer et al., 2022; Ge et al., 2023; Ramesh et al., 2022). Although diffusion models, such as Midjourney, are considered the top performers in these tasks, it is unclear whether their advantage stems from the model, the data, or some other unidentified factors. Indeed, it is challenging to scientifically compare these text-to-image models as they are trained on varied datasets, some of them proprietary, under inconsistent training conditions. To facilitate a fairer comparison, this paper prioritizes using the ImageNet and Kinetics benchmarks.
Diffusion models.Exhibiting high-quality sampling, pixel-space diffusion models (Sohl-Dickstein et al., 2015; Song and Ermon, 2019; Ho et al., 2020) rose to the top of the generative modeling space for both image (Ho et al., 2020; Dhariwal and Nichol, 2021; Saharia et al., 2022) and video (Ho et al., 2022; Singer et al., 2022) synthesis. The pixel-space denoising diffusion models (DDMs) were later refined by the latent-space DDM (Rombach et al., 2022), which conducts diffusion over the _continuous_ latent embeddings derived from a pre-trained variational autoencoder (VAE). Binary latents for image modeling were used in Wang et al. (2023), where the diffusion process is parameterized with Bernoulli distributions. Recent studies have identified advantages in substituting the U-Net (Ronneberger et al., 2015) denoising backbone with a Transformer (Peebles and Xie, 2022; Jabri et al., 2023) or a hybrid of both (Hoogeboom et al., 2023), making the distinctions between diffusion and language models in visual generation more blurred, with a key difference being their latent format -- continuous for diffusion and discrete for language models.
## 6 Conclusion and Future Work
We introduce MAGVIT-v2, a novel video tokenizer that exploits lookup-free quantization along with architectural advancements to tokenize images and video with a shared vocabulary. The experiments show that our tokenizer outperforms the previously leading video tokenizer across three areas: visual generation, video compression, and action recognition in videos. Our results suggest that a good visual tokenizer is key for enabling language models to excel in image and video generation. These results demonstrate the great capabilities of LMs in visual generation, and advocate for further exploration of advanced visual tokenization methods designed for LLMs.
Table 5: **Ablation study verifying key design choices.**
2308.09347 | Endowments, patience types, and uniqueness in two-good HARA utility
economies | This paper establishes a link between endowments, patience types, and the
parameters of the HARA Bernoulli utility function that ensure equilibrium
uniqueness in an economy with two goods and two impatience types with additive
separable preferences. We provide sufficient conditions that guarantee
uniqueness of equilibrium for any possible value of $\gamma$ in the HARA
utility function
$\frac{\gamma}{1-\gamma}\left(b+\frac{a}{\gamma}x\right)^{1-\gamma}$. The
analysis contributes to the literature on uniqueness in pure exchange economies
with two-goods and two agent types and extends the result in [4]. | Andrea Loi, Stefano Matta | 2023-08-18T07:10:03Z | http://arxiv.org/abs/2308.09347v1 | # Endowments, patience types, and uniqueness in two-good Hara utility economies
###### Abstract.
This paper establishes a link between endowments, patience types, and the parameters of the HARA Bernoulli utility function that ensure equilibrium uniqueness in an economy with two goods and two impatience types with additive separable preferences. We provide sufficient conditions that guarantee uniqueness of equilibrium for any possible value of \(\gamma\) in the HARA utility function \(\frac{\gamma}{1-\gamma}\left(b+\frac{a}{\gamma}x\right)^{1-\gamma}\). The analysis contributes to the literature on uniqueness in pure exchange economies with two-goods and two agent types and extends the result in [4].
The first author was supported by INdAM. GNSAGA - Gruppo Nazionale per le Strutture Algebriche, Geometriche e le loro Applicazioni and by KASBA, funded by Regione Autonoma della Sardegna. Both authors were supported by STAGE, funded by Fondazione di Sardegna.
This paper provides a positive answer to the question above for HARA utilities, an important subclass of the DARA type. More precisely, our main result, Theorem 2, shows the connection between endowments, patience types and the parameters of the HARA utility function that ensure the uniqueness of the equilibrium. To obtain this result, we will follow the approach of [4], where the excess demand function is approximated by a polynomial whose variable, the price, is raised to a power dependent on \(\gamma\). An algebraic result, Lemma 4, which links the existence of a double root of a polynomial to an inequality involving its coefficients, allows us to use a topological argument to prove our main result.
For an overview of the literature on uniqueness, in addition to the well-known contributions by [3] and [5], we also refer the reader to the two recent contributions by [1] and [7] for two-good, two-agent pure exchange economies.
This short note is organised as follows. Section 2 analyses the economic setting using the polynomial approach. Section 3 proves our main result.
## 2. Preliminaries
Consider an economy with two goods and \(I=2\) impatience types, where type \(i\) has preferences represented by the utility function
\[u_{i}(x,y)=u_{H}(x)+\beta_{i}u_{H}(y), \tag{1}\]
where \(u_{H}\) is HARA, i.e.
\[u_{H}(x):=\frac{\gamma}{1-\gamma}\left(b+\frac{a}{\gamma}x\right)^{1-\gamma}, \ \gamma>0,\gamma\neq 1,a>0,b\geqslant 0. \tag{2}\]
Let \(\varepsilon\) be a rational number \(\frac{m}{n}\), \(m,n\in\mathbb{N}\) sufficiently close to \(\frac{1}{\gamma}\). Suppose that \(\gamma>2\) and, hence, \(n>2m\). Denoting by \((e_{i},f_{i})\) consumer i's endowments, the standard maximisation problem over the budget constraint \(px_{i}+y_{i}\leqslant pe_{i}+f_{i}\) gives (see [4, formula (14)]) the aggregate excess demand function for good \(x\):
\[\sum_{i=1}^{2}\frac{b-bp^{\varepsilon}\sigma_{i}+a\varepsilon\left(pe_{i}+f_{ i}\right)}{a\varepsilon\left(p+\sigma_{i}p^{\varepsilon}\right)}-(e_{1}+e_{2}), \tag{3}\]
where
\[\varepsilon\approx\frac{1}{\gamma},\ \sigma_{i}:=\beta_{i}^{\varepsilon},\ i=1,2.\]
Following [4], we combine terms over a common denominator and take the numerator, then we collect terms in \(p\), divide by \(p^{\varepsilon}\), and we get:
\[p(-ae_{1}\sigma_{1}\varepsilon-ae_{2}\sigma_{2}\varepsilon-b \sigma_{1}-b\sigma_{2})+p^{1-\varepsilon}(af_{1}\varepsilon+af_{2}\varepsilon +2b)+\] \[p^{\varepsilon}(-ae_{1}\sigma_{1}\sigma_{2}\varepsilon-ae_{2} \sigma_{1}\sigma_{2}\varepsilon-2b\sigma_{1}\sigma_{2})+af_{1}\sigma_{2} \varepsilon+af_{2}\sigma_{1}\varepsilon+b\sigma_{1}+b\sigma_{2}\]
Recalling that \(\varepsilon=\frac{m}{n}\) and by letting, with a slight abuse of notation, \(x:=p^{1/n}\), we rewrite the previous expression in decreasing order as follows:
\[A(e,\sigma,a,b)x^{n}+B(f,\sigma,a,b)x^{n-m}+C(e,\sigma,a,b)x^{m}+D(f,\sigma,a,b), \tag{4}\]
where
\[\begin{split} A(e,\sigma,a,b):=&-(e_{1}\sigma_{1}+e_{2}\sigma_{2})-\frac{b}{a\varepsilon}(\sigma_{1}+\sigma_{2})<0,\\ B(f,\sigma,a,b):=&(f_{1}+f_{2})+\frac{2b}{a\varepsilon}>0,\\ C(e,\sigma,a,b):=&-(e_{1}+e_{2})\sigma_{1}\sigma_{2}-\frac{2b}{a\varepsilon}\sigma_{1}\sigma_{2}<0,\\ D(f,\sigma,a,b):=&(f_{1}\sigma_{2}+f_{2}\sigma_{1})+\frac{b}{a\varepsilon}(\sigma_{1}+\sigma_{2})>0.\end{split} \tag{5}\]
**Lemma 1**.: If the following conditions hold
\[\beta_{1}<\beta_{2},\,e_{1}\leqslant e_{2},\,f_{1}\geqslant f_{2}, \tag{6}\]
\[b\geqslant\frac{a}{\gamma}\left(\frac{\beta_{2}}{\beta_{1}}\right)^{\frac{2}{ \gamma}}(e_{2}+f_{1}), \tag{7}\]
then the polynomial (4) satisfies the inequality
\[A(e,\sigma,a,b)D(f,\sigma,a,b)-B(f,\sigma,a,b)C(e,\sigma,a,b)<0 \tag{8}\]
Proof.: We will follow, mutatis mutandis, the same line of reasoning as in the proof of [4, Theorem 2]; the only difference here is that we deal with an arbitrary value of \(\gamma\). The quantity \(A(e,\sigma,a,b)D(f,\sigma,a,b)-B(f,\sigma,a,b)C(e,\sigma,a,b)\) can be written as
\[(\sigma_{2}-\sigma_{1})(e_{1}f_{2}\sigma_{1}-e_{2}f_{1}\sigma_{2})+E(e,f, \sigma,a,b),\]
where
\[E(e,f,\sigma,a,b):=-\frac{b^{2}}{a^{2}\varepsilon^{2}}\left(\sigma_{1}-\sigma _{2}\right)^{2}+\frac{b}{a\varepsilon}\left[(e_{1}+e_{2}+f_{1}+f_{2})\,\sigma _{1}\sigma_{2}-(e_{1}+f_{2})\sigma_{1}^{2}-(e_{2}+f_{1})\sigma_{2}^{2}\right].\]
Observe that, by condition (6),
\[(\sigma_{2}-\sigma_{1})(e_{1}f_{2}\sigma_{1}-e_{2}f_{1}\sigma_{2})\leqslant( \sigma_{2}-\sigma_{1})f_{1}(e_{1}\sigma_{1}-e_{2}\sigma_{2})<(\sigma_{2}- \sigma_{1})f_{1}\sigma_{2}(e_{1}-e_{2})\leqslant 0.\]
Moreover, \(E(e,f,\sigma,a,b)\leqslant 0\) if and only if
\[b\geqslant a\varepsilon\frac{[(e_{1}+e_{2}+f_{1}+f_{2})\sigma_{1}\sigma_{2}-( e_{1}+f_{2})\sigma_{1}^{2}-(e_{2}+f_{1})\sigma_{2}^{2}]}{\left(\sigma_{1}- \sigma_{2}\right)^{2}}.\]
Again, by (6), we can write
\[a\varepsilon\frac{[(e_{1}+e_{2}+f_{1}+f_{2})\sigma_{1}\sigma_{2} -(e_{1}+f_{2})\sigma_{1}^{2}-(e_{2}+f_{1})\sigma_{2}^{2}]}{\left(\sigma_{1}- \sigma_{2}\right)^{2}}\] \[<a\varepsilon\frac{[(e_{1}+e_{2}+f_{1}+f_{2})\sigma_{1}\sigma_{2 }]}{\left(\sigma_{1}-\sigma_{2}\right)^{2}}\] \[<a\varepsilon\left(\frac{\sigma_{2}}{\sigma_{1}}\right)^{2}(e_{2} +f_{1}).\]
Thus, since \(\sigma_{i}=\beta_{i}^{\varepsilon},\,i=1,2\) and \(\gamma=\frac{1}{\varepsilon},\) the proof of the lemma follows.
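The algebraic rearrangement of \(AD-BC\) used at the start of this proof is easy to verify symbolically; the snippet below (a sanity check added for illustration, not part of the original argument) confirms the decomposition, writing \(t:=\frac{b}{a\varepsilon}\):

```python
import sympy as sp

e1, e2, f1, f2, s1, s2, t = sp.symbols('e1 e2 f1 f2 sigma1 sigma2 t', positive=True)  # t = b/(a*eps)

A = -(e1*s1 + e2*s2) - t*(s1 + s2)
B = (f1 + f2) + 2*t
C = -(e1 + e2)*s1*s2 - 2*t*s1*s2
D = (f1*s2 + f2*s1) + t*(s1 + s2)

E = -t**2*(s1 - s2)**2 + t*((e1 + e2 + f1 + f2)*s1*s2 - (e1 + f2)*s1**2 - (e2 + f1)*s2**2)
claim = (s2 - s1)*(e1*f2*s1 - e2*f1*s2) + E

print(sp.simplify(A*D - B*C - claim))   # prints 0, confirming the decomposition used above
```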
## 3. Main result
In this section we present our main result, Theorem 2. As far as uniqueness is concerned, we will assume an arbitrary \(\gamma>2\). In fact, the case \(\gamma\in(1,2]\) is a particular case of [4, Theorem 1], while the case \(\gamma\leqslant 1\) is a well known result in the literature [2, 5, 6].
Observe that studying the zero set of the aggregate excess demand function amounts to studying the zeros of the polynomial (4). In fact, according to the approach of [4], it is possible to approximate \(\gamma\) with a rational number, since \(\mathbb{Q}\) is dense in \(\mathbb{R}\), in such a way that the cardinality of the set of regular equilibria does not decrease [4, Lemma 9]. To provide geometric insight, this corresponds to small perturbations of the aggregate demand function that do not allow a decrease in the number of equilibria.
**Theorem 2**.: In an economy with two goods and two impatience types with HARA preferences (1), if the conditions (6) and (7) hold, then the equilibrium price is unique.
_Remark 3_.: For the general type of DARA, [1] observe that there is no closed-form expression that ensures uniqueness, but conditions (6) and (7) represent a closed-form expression for HARA utilities, an important subclass of utilities of DARA type. They are the same as those presented in [4], here suitably generalised.
Proof.: By [4, Theorem 1], the equilibrium is unique if and only if the polynomial (4), \(P(x)\), has a unique positive root. We will prove that the inequality (8), which holds by Lemma 1, implies that \(P(x)\) has a unique positive root. Assume by contradiction that \(P(x)\) has more than one positive root. Since \(P(x)\) belongs to the path-connected space of polynomials \(Ax^{n}+Bx^{n-m}+Cx^{m}+D\) with nonzero coefficients such that \(AD-BC<0\), it follows from the continuous dependence of the roots of a polynomial on its coefficients that \(P(x)\) indeed has a double positive root. Hence the conclusion of Theorem 2 follows from the algebraic lemma below.
**Lemma 4**.: If the polynomial (4), \(P(x)=Ax^{n}+Bx^{n-m}+Cx^{m}+D\), \(ABCD\neq 0\), has a double positive root, then \(AD-BC\geqslant 0\).
Proof.: By contradiction, let \(\alpha>0\) be a double root of \(P(x)\), that is, \((x-\alpha)^{2}\) divides \(P(x)\). The following table shows the pattern of the remainders after the first \(k\) steps of the division.
\begin{tabular}{|c|c|} \hline
**Step** & **Remainder** \\ \hline
1 & \(2\alpha Ax^{n-1}\ -\ \alpha^{2}Ax^{n-2}\ +\ Bx^{n-m}\ +\ Cx^{m}\ +\ D\) \\
2 & \(3\alpha^{2}Ax^{n-2}\ -\ 2\alpha^{3}Ax^{n-3}\ +\ Bx^{n-m}\ +\ Cx^{m}\ +\ D\) \\
3 & \(4\alpha^{3}Ax^{n-3}\ -\ 3\alpha^{4}Ax^{n-4}\ +\ Bx^{n-m}\ +\ Cx^{m}\ +\ D\) \\ \(\vdots\) & \(\vdots\) \\ \(k\) & \((k\,+\,1)\alpha^{k}Ax^{n-k}\,-\,k\alpha^{k+1}Ax^{n-k-1}\,+\,Bx^{n-m}\,+\,Cx^{m}\,+\,D\) \\ \hline \end{tabular} From \(n-k-1=n-m\), we get \(m=k+1\) and then we can rewrite the remainder accordingly:
\[m\alpha^{m-1}Ax^{n-m+1}+[B-(m-1)\alpha^{m}A]x^{n-m}+Cx^{m}+D.\]
Continuing the division with this new remainder, the next table reveals again the following pattern:
\begin{tabular}{|c|l|} \hline
**Step** & **Remainder** \\ \hline
1 & \([B+(m+1)\alpha^{m}A]x^{n-m}-m\alpha^{m+1}Ax^{n-m-1}+Cx^{m}+D\) \\
2 & \([2\alpha B+(m+2)\alpha^{m+1}A]x^{n-m-1}-[\alpha^{2}B+(m+1)\alpha^{m+2}A]x^{n-m- 2}+Cx^{m}+D\) \\
3 & \([3\alpha^{2}B+(m+3)\alpha^{m+2}A]x^{n-m-2}-[2\alpha^{3}B+(m+2)\alpha^{m+3}A]x^{n-m-3}+Cx^{m}+D\) \\ \(\vdots\) & \(\vdots\) \\ \(k\) & \([k\alpha^{k-1}B+(m+k)\alpha^{m+k-1}A]x^{n-m-k+1}-[(k-1)\alpha^{k}B+(m+k-1)\alpha^{m+k}A]x^{n-m-k}+Cx^{m}+D\) \\ \hline \end{tabular}
From \(n-m-k=m\), we get \(k=n-2m\). We can rewrite the remainder as follows:
\[[(n-2m)\alpha^{n-2m-1}B+(n-m)\alpha^{n-m-1}A]x^{m+1}-\] \[-[(n-2m-1)\alpha^{n-2m}B+(n-m-1)\alpha^{n-m}A]x^{m}+Cx^{m}+D,\]
that, reordering terms, becomes
\[[(n-2m)\alpha^{n-2m-1}B+(n-m)\alpha^{n-m-1}A]x^{m+1}+\] \[+[C-(n-2m-1)\alpha^{n-2m}B-(n-m-1)\alpha^{n-m}A]x^{m}+D.\]
Starting with this new remainder, the last pattern is suggested by the following table:
\begin{tabular}{|c|l|} \hline
**Step** & **Remainder** \\ \hline
1 & \([C+(n-2m+1)\alpha^{n-2m}B+(n-m+1)\alpha^{n-m}A]x^{m}-[(n-2m)\alpha^{n-2m+1}B+\) \\ & \((n-m)\alpha^{n-m+1}A]x^{m-1}+D\) \\
2 & \([2\alpha C+(n-2m+2)\alpha^{n-2m+1}B+(n-m+2)\alpha^{n-m+1}A]x^{m-1}-[\alpha^{2}C+(n-2m+1)\alpha^{n-2m+2}B+(n-m+1)\alpha^{n-m+2}A]x^{m-2}+D\) \\
3 & \([3\alpha^{2}C+(n-2m+3)\alpha^{n-2m+2}B+(n-m+3)\alpha^{n-m+2}A]x^{m-2}-[2\alpha^{3}C+(n-2m+2)\alpha^{n-2m+3}B+(n-m+2)\alpha^{n-m+3}A]x^{m-3}+D\) \\ \(\vdots\) & \(\vdots\) \\ \(k\) & \([k\alpha^{k-1}C+(n-2m+k)\alpha^{n-2m+k-1}B+(n-m+k)\alpha^{n-m+k-1}A]x^{m-k+1}-[(k-1)\alpha^{k}C+(n-2m+k-1)\alpha^{n-2m+k}B+(n-m+k-1)\alpha^{n-m+k}A]x^{m-k}+D\) \\ \hline \end{tabular}
After \(k=m\) divisions, the remainder reduces to a first degree polynomial:
\([m\alpha^{m-1}C+(n-m)\alpha^{n-m-1}B+n\alpha^{n-1}A]x-[(m-1)\alpha^{m}C+(n-m-1)\alpha^{n-m}B+(n-1)\alpha^{n}A]+D\).
Under the hypothesis that \((x-\alpha)^{2}\) divides \(P(x)\), the coefficients of this last remainder must vanish, that is:
\[\begin{cases}m\alpha^{m-1}C=-(n-m)\alpha^{n-m-1}B-n\alpha^{n-1}A\\ D=(m-1)\alpha^{m}C+(n-m-1)\alpha^{n-m}B+(n-1)\alpha^{n}A.\end{cases}\]
Multiplying the second equation by \(m\alpha^{m-1}\), we get
\[m\alpha^{m-1}D=(m-1)\alpha^{m}m\alpha^{m-1}C+m(n-m-1)\alpha^{n-1}B+m(n-1) \alpha^{n+m-1}A,\]
where, substituting \(m\alpha^{m-1}C\) with the RHS of the first equation and multiplying by \(A\), we obtain
\[m\alpha^{m-1}AD=(n-2m)\alpha^{n-1}AB+(n-m)\alpha^{n+m-1}A^{2}.\]
Moreover, we observe that
\[m\alpha^{m-1}BC=-(n-m)\alpha^{n-m-1}B^{2}-n\alpha^{n-1}AB.\]
We can then write
\[m\alpha^{m-1}(AD-BC)=(n-m)\alpha^{n-m-1}(\alpha^{2m}A^{2}+B^{2})+2(n-m)\alpha^{n -1}AB,\]
as
\[(n-m)\alpha^{n-m-1}(\alpha^{2m}A^{2}+B^{2}+2\alpha^{m}AB),\]
or, equivalently,
\[(n-m)\alpha^{n-m-1}(\alpha^{m}A+B)^{2}.\]
Hence, we have
\[AD-BC=\frac{n-m}{m}\alpha^{n-2m}(\alpha^{m}A+B)^{2}\geqslant 0,\]
yielding the desired contradiction.
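As a quick numerical illustration of the lemma (not part of the proof), one can force a double positive root \(\alpha\) by solving the two vanishing-remainder conditions for \(C\) and \(D\), and then check the closed form for \(AD-BC\) obtained above; all numerical choices below are arbitrary:

```python
import numpy as np

n, m, alpha = 7, 2, 1.5          # n > 2m, matching the case gamma > 2
A, B = -1.0, 3.0                 # arbitrary nonzero choices; C, D then enforce the double root
C = (-(n - m) * alpha**(n - m - 1) * B - n * alpha**(n - 1) * A) / (m * alpha**(m - 1))
D = (m - 1) * alpha**m * C + (n - m - 1) * alpha**(n - m) * B + (n - 1) * alpha**n * A

P  = lambda x: A * x**n + B * x**(n - m) + C * x**m + D
dP = lambda x: n * A * x**(n - 1) + (n - m) * B * x**(n - m - 1) + m * C * x**(m - 1)
print(P(alpha), dP(alpha))       # both ~0: alpha is a double root of P

print(A * D - B * C, (n - m) / m * alpha**(n - 2 * m) * (alpha**m * A + B)**2)  # equal and >= 0
```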
Remark 5.: It should be possible to give a more elegant proof of Lemma 4 by using an approach based on the discriminant of a polynomial instead of a division algorithm as in our proof. However, this alternative approach seems to lead to very complicated calculations that the authors were unable to handle.
|
2307.08804 | Bjet_MCMC: A new tool to automatically fit the broadband SEDs of blazars | Multiwavelength observations are now the norm for studying blazars' various
states of activity, classifying them, and determining possible underlying
physical processes driving their emission. Broadband emission models became
unavoidable tools for testing emission scenarios and setting values to physical
quantities such as the magnetic field strength, Doppler factor, or shape of the
particle distribution of the emission zone(s). We announce here the first
public release of a new tool, Bjet_MCMC, that can automatically fit broadband
spectral energy distributions (SEDs) of blazars. The complete code is available
on GitHub and allows testing leptonic synchrotron self-Compton models (SSC),
with or without external inverse-Compton processes from the thermal environment
of supermassive black holes (accretion disk and broad line region). The code is
designed to be user-friendly and computationally efficient. It contains a core
written in C++ and a fully parallelized SED fitting method. The original
multi-SSC zones model of Bjet is also available on GitHub but is not included
in the MCMC fitting process at the moment. We present the features,
performance, and results of Bjet_MCMC, as well as user advice. | Olivier Hervet, Caitlin A. Johnson, Adrian Youngquist | 2023-07-17T19:39:38Z | http://arxiv.org/abs/2307.08804v2 | # Bjet_MCMC: A new tool to automatically fit the broadband SEDs of blazars
###### Abstract
Multiwavelength observations are now the norm for studying blazars' various states of activity, classifying them, and determining possible underlying physical processes driving their emission. Broadband emission models have become unavoidable tools for testing emission scenarios and setting values to physical quantities such as the magnetic field strength, Doppler factor, or shape of the particle distribution of the emission zone(s). We announce here the first public release of a new tool, Bjet_MCMC, that can automatically fit broadband spectral energy distributions (SEDs) of blazars. The complete code is available on GitHub and allows testing leptonic synchrotron self-Compton models (SSC), with or without external inverse-Compton processes from the thermal environment of supermassive black holes (accretion disk and broad line region). The code is designed to be user-friendly and computationally efficient. It contains a core written in C++ and a fully parallelized SED fitting method. The original multi-SSC zones model of Bjet is also available on GitHub but is not included in the MCMC fitting process at the moment. We present the features, performance, and results of Bjet_MCMC, as well as user advice.
Blazars(164) -- Gamma-rays(637) -- Astronomy software(1855) -- Markov chain Monte Carlo(1889) -- Astronomy data modeling(1859) +
Footnote †: Now at Starry Sky North ([https://starryskiesnorth.org](https://starryskiesnorth.org))
## 1 Introduction: History, Features and Main Results of Bjet
The model Bjet, which stands for "Blob-in-Jet", has its roots in the rapid development of synchrotron self-Compton (SSC) models in the second half of the 1990s. It corresponds to a period when observational evidence of a compact non-thermal zone flaring in active galactic nuclei (AGN) jets was well established (e.g. Marscher & Gear, 1985), a general consensus had been reached on the AGN unification schemes (e.g. Maraschi & Rovetti, 1994; Urry & Padovani, 1995), and the first generation of gamma-ray space telescopes (CGRO) and ground-based very-high-energy atmospheric Cherenkov telescopes (Whipple, CAT) was reaching maturity. This new generation of telescopes made it possible, for the first time, to build precise multiwavelength spectral energy distributions (SEDs) from radio to gamma-rays for the brightest blazars.
Two main families of synchrotron self-Compton (SSC) models were developed. On one side the one-zone "pure SSC" for high-frequency synchrotron peaked BL Lacs (HBLs).1 On the other side models with thermal external inverse-Compton (EIC) from the interaction of high energy particles of the jet with the thermal ambient radiation field surrounding the nucleus due to the accretion disk emission reprocessed by the broad-lines region (BLR). These SSC+EIC models were primarily used in the modeling of flat spectrum radio quasars (FSRQs) such as 3C 279 (Sikora
et al., 1994; Ghisellini & Madau, 1996; Inoue & Takahara, 1996). In all cases, the high energy emission zone is assumed to be characterized by a compact spherical zone, further referenced as a "blob", relatively close to the nucleus and moving along the jet at relativistic speed. This blob is isotropically filled with high-energy particles (usually simplified as an electron population) and a tangled magnetic field. The blob radiation in its reference frame is also considered isotropic. Most of the SSC models follow the spherical radiation transfer formula set by Gould (1979). The particle distribution within the blob \(\rho(E)\) is characterized by a power-law-like spectrum (with many possible flavors) presenting an average index \(\alpha\sim 2\) (considering \(\rho(E)\propto E^{-\alpha}\)). Such a distribution is mostly justified by a process of diffuse shock acceleration.
Bjet is part of a second generation of models, called "multi-zone" (e.g. Ghisellini et al., 2005; Tavecchio et al., 2011). A known issue of one-zone models is that they usually reproduce broadband SEDs poorly below the infrared energy range. It is understood that most of the radio emission of jetted AGN is produced by large emission zones, which can be observed with radio very-long-baseline interferometry as pc-to-kpc radio cores and radio knots. The very extended emission (\(>100\) kpc) mostly contributes below the cm-wavelength energy range.
The C++ code foundation that was later used to build Bjet was developed in the early 2000s for a study of the blazar Mrk 501 (Katarzynski et al., 2001). In order to explain the low-frequency radiation of Mrk 501, they assumed an inhomogeneous model in a conical geometry with a constant bulk Lorentz factor and a power-law decrease of the magnetic field and particle density along the jet. The SSC emission of an inhomogeneous jet, discretized in homogeneous slices, was first developed by Marscher (1980); Ghisellini et al. (1985). The approach of Katarzynski et al. (2001) consisted of two distinct models, one for the blob "Sblob" and one for the conical jet "Sjet". Bjet's primary goal was to merge these two models into a consistent multi-zone framework for the study of the intermediate-frequency-peaked BL Lac (IBL) AP Librae. AP Librae is a blazar that displays a multiwavelength SED with features inconsistent with one-zone models, such as a very broad and relatively flat inverse-Compton emission component extending from X-rays to very high energies (VHE, \(E>100\) GeV). This issue was tackled with the self-consistent multi-zone model Bjet that includes radiative interactions between the blob and the conical jet, such as synchrotron self-absorption, radiative absorption by pair creation, and the external inverse-Compton emission produced by the interaction of the blob's particles with the jet photons (Hervet et al., 2015), see Figure 1, _right_.
In addition to AP Librae, Bjet has been used for modeling multiple jetted AGN emitting at VHE, such as PKS 0625-354, HESS J1943+213, 1ES 1215+303, PKS 1222+216 and TON 599 (HESS Collaboration et al., 2018; Archer et al., 2018; Valverde et al., 2020; Adams et al., 2022). It has undergone multiple improvements since its first use, both in computation time and in scientific completeness, such as:
* Better radiation transfer for large angles with the line of sight
* External inverse Compton from the blob's particles onto the direct disk radiation
* Radial density profile of the broad line region for the thermal EIC and its associated gamma-ray absorption by pair creation. The density profile is based on Nalewajko et al. (2014).
A geometrical scheme of Bjet (not to scale) is presented in Figure 1, _left_. In this paper, we do not review the details of the radiative processes and formulas used in the code. They are described in Katarzynski et al. (2001); Hervet et al. (2015) and, for complementary details, in the Ph.D. thesis (Hervet, 2015, in French).
## 2 Motivations for Bjet_MCMC
As mentioned above, Bjet has been used in multiple papers since its first development in 2015 and has shown its capability in modeling various types of blazars (HBLs, IBLs, LBLs, FSRQs) and a radiogalaxy candidate (PKS 0625-354). The main purpose of the project Bjet_MCMC is to provide this tool to the scientific community through an open GitHub project.2 Users can have access to the full Bjet code and perform one-zone pure SSC, one-zone SSC with thermal nucleus interactions (EIC + pair absorption), and multi-SSC zones with thermal nucleus interactions (blob + jet + nucleus).
Footnote 2: [https://github.com/Ohervet/Bjet_MCMC](https://github.com/Ohervet/Bjet_MCMC)
The second motivation was to make a tool that automatically fits the multiwavelength SEDs and which is user-friendly and computationally efficient. SSC models are notorious for being challenging for standard \(\chi^{2}\) minimization methods. They have high dimensionalities, parameter degeneracies, local minima, and model-dependent parameter
boundaries. To illustrate this last point, let's consider the particle distribution spectrum within the blob, which is set in our model as a broken power law.
\[N_{e}(\gamma)=\left\{\begin{array}{ll}N_{e}^{(1)}\gamma^{-n_{1}}&\text{for }\gamma_{\min}\leqslant\gamma\leqslant\gamma_{\text{brk}}\\ N_{e}^{(2)}\gamma^{-n_{2}}&\text{for }\gamma_{\text{brk}}\leqslant\gamma\leqslant \gamma_{\max}\end{array}\right., \tag{1}\]
with \(\gamma_{\min}\), \(\gamma_{\text{brk}}\) and \(\gamma_{\max}\) the Lorentz factors of the radiating particles at the minimum, break, and maximum of their distribution. In this equation, \(N_{e}^{(2)}=N_{e}^{(1)}\gamma_{\text{brk}}^{(n_{2}-n_{1})}\), and \(N_{e}^{(1)}\) is the particle density factor set as \(N_{e}^{(1)}=N_{e}(1)\).
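For illustration, a minimal Python sketch of the broken power law of equation (1) is given below; the function name and array-based interface are ours and do not correspond to the actual Bjet C++ implementation.

```python
import numpy as np

def electron_distribution(gamma, N1, n1, n2, gamma_min, gamma_brk, gamma_max):
    """Broken power law of equation (1); N2 = N1 * gamma_brk**(n2 - n1) ensures continuity."""
    gamma = np.atleast_1d(np.asarray(gamma, dtype=float))
    N2 = N1 * gamma_brk ** (n2 - n1)
    N_e = np.where(gamma <= gamma_brk, N1 * gamma ** (-n1), N2 * gamma ** (-n2))
    N_e[(gamma < gamma_min) | (gamma > gamma_max)] = 0.0  # no particles outside the cutoffs
    return N_e
```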
Keeping \(\gamma_{\min}\), \(\gamma_{\text{brk}}\) and \(\gamma_{\max}\) free in a standard \(\chi^{2}\) minimization algorithm will certainly create issues since the condition \(\gamma_{\min}<\gamma_{\text{brk}}<\gamma_{\max}\) has to be respected. These constraints led us to consider Markov-Chain Monte-Carlo (MCMC) methods as the best suited to perform a SED fit and explore the parameter space. Indeed, MCMCs have the advantage of building the best solution from posterior probability distributions, which is by default less impacted by discontinuity or non-linearity of the parameter space. We note here that this MCMC fitting approach of SSC models is relatively well known in the community and has been implemented and used in multiple studies (e.g. Tramacere et al., 2011; Zabalza, 2015; Qin et al., 2018; Jimenez-Fernandez and van Eerten, 2021).
In this paper, we do not intend to describe the general statistical concepts behind MCMC methods. The literature on the subject is quite vast; we can recommend MacKay (2003), for example.
## 3 Implementing the MCMC method
For our project, we used the emcee MCMC package, which is a handy Python tool allowing a relatively simple implementation (Foreman-Mackey et al., 2013).3 emcee requires a user-defined probability function to evaluate the goodness of a fit. It then automatically builds the posterior probability density function following a given number of steps, walkers, and a defined burn-in sample. Multiple flavors are available for the type of move a walker can make in the parameter space; we use the default "StretchMove," developed by Goodman and Weare (2010). A few other moves - or combinations of moves - were tested but did not display significant improvements compared to the proposed default method.
Footnote 3: [https://emcee.readthedocs.io/en/stable/](https://emcee.readthedocs.io/en/stable/)
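The sketch below shows a minimal emcee configuration of the kind described above. It is not the Bjet_MCMC code itself: the log-probability here is a placeholder, whereas in the real tool it wraps the \(\chi^{2}\) computed by the C++ core (see the next subsection), and the variable names are ours.

```python
import numpy as np
import emcee

ndim, nwalkers, nsteps = 9, 100, 5000     # e.g. the pure-SSC setup used later in this paper

def log_prob(theta):
    # placeholder: in Bjet_MCMC the SED chi^2 is computed by the C++ core (ln P = -chi^2 / 2)
    return -0.5 * np.sum(theta ** 2)

p0 = np.random.randn(nwalkers, ndim)      # illustrative starting positions of the walkers
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                moves=emcee.moves.StretchMove())  # default Goodman & Weare (2010) move
sampler.run_mcmc(p0, nsteps, progress=True)
```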
### Probability function and free parameters
Our probability function is based on the \(\chi^{2}\) value of the model on all considered SED spectral points. Asymmetric error bars are fully implemented in our \(\chi^{2}\) calculation. We highlight here that flux upper limits are not considered in the fit, but can still be included in the input SED data file for display purposes only. As general advice, constraining upper limits should be merged together into larger energy bins until a statistically significant data point is obtained before fitting the SED. As the emcee package requires a log probability, our probability function is defined as \(\ln P=-\chi^{2}/2\).
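A minimal sketch of such a log-probability function is given below, using one common convention for asymmetric error bars (lower error when the model lies below the data point, upper error otherwise); the exact convention used in Bjet_MCMC may differ.

```python
import numpy as np

def log_probability(model_flux, obs_flux, err_lo, err_hi):
    """ln P = -chi^2 / 2 with asymmetric error bars (one common convention)."""
    sigma = np.where(model_flux < obs_flux, err_lo, err_hi)
    chi2 = np.sum(((obs_flux - model_flux) / sigma) ** 2)
    return -0.5 * chi2
```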
Figure 1: _Left_: Scheme of the Bjet model, dashed lines show the considered radiative transfers. _Right:_ Example of an application of the Bjet model on the SED of the blazar AP Librae (Hervet et al., 2015).
The MCMC method is implemented for the single-zone SSC + thermal EIC model (blob + accretion disk + BLR). It includes up to 13 free parameters, as detailed in the sections below. The full Bjet model currently has 23 free parameters when including the SSC jet. We quickly realized that the computation time required to fit the full multi-zone model is not reasonable for user-friendly usage.4 In Bjet_MCMC, the user can decide to fix or free any of the 13 parameters. For pure SSC model fit, the user can deactivate the EIC option to save computation time.
Footnote 4: From rough estimations it would require computation times on the order of months with 10 parallelized cores with the current generation of CPUs.
### Defining the \(1\sigma\) parameter space and contour on the SED
The posterior distribution of probability allows us to define the parameter range corresponding to the \(1\sigma\) confidence level (\(\sim 68\%\)) around the best solution. We follow the general solution proposed by Lampton et al. (1976); Avni (1976) based on the \(\chi^{2}\) cumulative distribution function \(\chi^{2}_{\rm cdf}\), or more precisely on the percent point function \(\chi^{2}_{\rm ppf}\) which returns the \(\chi^{2}\) value associated with a probability \(P\) and a number of degrees of freedom \(k\) of the \(\chi^{2}_{\rm cdf}\). In this approach, we consider all models which have \(\chi^{2}<\chi^{2}_{\rm min}+\Delta\chi^{2}\) as within \(1\sigma\) of the best value, where \(\chi^{2}_{\rm min}\) is our best solution and \(\Delta\chi^{2}=\chi^{2}_{\rm ppf}(0.682,k)\). \(k\) is the number of free parameters in our model that can range from 1 to 13.
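In practice, the \(\Delta\chi^{2}\) threshold can be obtained directly from scipy; a minimal sketch is shown below, where the value of \(\chi^{2}_{\rm min}\) is just an example taken from Table 2.

```python
from scipy.stats import chi2

k = 13                                    # number of free parameters (can range from 1 to 13)
delta_chi2 = chi2.ppf(0.682, df=k)        # chi^2_ppf(0.682, k)
chi2_min = 187.1                          # example best-fit chi^2 (pure SSC fit of Section 4)
chi2_threshold = chi2_min + delta_chi2    # models below this value are within 1 sigma
```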
Bjet_MCMC draws the \(1\sigma\) contour in the SED associated with the parameter uncertainties. Solutions to get the exact contours were ruled out as too demanding in computation time and disk space. For example, one could save all SED points for all models tested during the MCMC process, or run through thousands of randomly picked models within the \(1\sigma\) parameter space. In order to save computation time, this contour is built by picking up models with extremum parameter values at the \(1\sigma\) confidence level. We call it the "min-max method". This allows us to build relatively good contours by re-running the code Bjet only twice as many times as the number of free parameters, which typically takes up to a few minutes with all 13 parameters free. Hence we must warn the user that the contours on the SED plots are an approximation and do not extend to the full theoretical space covered by the parameter uncertainties.
### Parameter boundaries
The pure SSC model has 9 free parameters by default, in addition to 3 fixed parameters:
* The redshift \(z\) is set by the user.
* The cosmology is set by default as a flat \(\Lambda\)CDM with \(H_{0}\) = 69.6 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}\) = 0.286, and \(\Omega_{\Lambda}\) = 0.714 (Bennett et al., 2014).
* The angle between the jet direction and the observer line of sight \(\theta\) is fixed at 0.57 degrees to satisfy the Doppler boosted regime \(\delta\sin\theta<1\), with the Doppler factor \(\delta\leqslant 100\).
In the MCMC implementation, we set minimum and maximum values for each of the parameters, as shown in Table 1. We intentionally use wide ranges for parameters to make as few assumptions as possible. Many parameters are in log scale to ease walkers' moves through multiple orders of magnitude.
Within our MCMC method, parameter boundaries mean that the posterior probability is set to zero (\(\ln P=-\infty\)) for any fit with a parameter outside the given range. The MCMC algorithm acts as an acceptance/rejection method based only on the change of the posterior probability value from one walker step to the next. If a walker moves outside the parameter range, the move will be automatically rejected and the walker will try again. According to the emcee package, a good acceptance rate is about 0.2. Given the additional parameter constraints developed below, Bjet_MCMC shows an acceptance rate on the order of \(\sim 0.05-0.1\) from previous tests.
As highlighted in Table 1, multiple parameters have fluctuating boundaries intertwined with other parameter values. The particle spectrum, for example, must follow the conditions \(\gamma_{\rm min}<\gamma_{\rm brk}<\gamma_{\rm max}\) and \(n_{2}>n_{1}\). The fastest observed variability \(\Delta t_{\rm obs,min}\) is also used to constrain the Doppler factor and the size of the emitting region. From the simple argument that, in the jet frame, the fastest variation cannot happen faster than the time the light takes to cross the blob radius, we apply the condition \(R\leq c\delta\Delta t_{\rm obs,min}/(1+z)\). Finally, a last condition is applied, specifying that the blob diameter cannot be larger than the jet cross-section. From a radio study of a large sample of blazars, the intrinsic jet half-opening angle \(\alpha_{\rm jet/2}\) is no more than 5 degrees (e.g. Hervet et al., 2016). Using this conical jet approximation we set the condition \(\alpha_{\rm jet/2}=\arctan(R/D_{\rm BH})180/\pi<5\).
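The sketch below illustrates how such coupled constraints can be folded into a log-prior that rejects any parameter set violating them by returning \(-\infty\). It is our illustration under assumed variable names (the dictionary keys and function signature are not the Bjet_MCMC API).

```python
import numpy as np

C_CGS = 2.998e10  # speed of light [cm/s]

def log_prior(p, z, dt_obs_min):
    """p: dict of linear-scale parameter values (illustrative names only)."""
    if not (p["gamma_min"] < p["gamma_brk"] < p["gamma_max"]):
        return -np.inf
    if p["n2"] <= p["n1"]:
        return -np.inf
    if p["R"] > C_CGS * p["delta"] * dt_obs_min / (1.0 + z):   # causality: R <= c delta dt / (1+z)
        return -np.inf
    if np.degrees(np.arctan(p["R"] / p["D_BH"])) >= 5.0:       # jet half-opening angle < 5 deg
        return -np.inf
    return 0.0
```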
## 4 Pure SSC validation on the HBL 1RXS J101015.9-311909
The blazar 1RXS J101015.9-311909 is a high-frequency peaked BL Lac (HBL) with a redshift of \(z=0.143\) that was discovered emitting up to a few TeV by the H.E.S.S. Collaboration after an observing campaign between 2006 and 2010 (H.E.S.S. Collaboration et al., 2012). The multiwavelength SED of this source has been successfully fitted with a one-zone SSC model by Cerruti et al. (2013). In their study, they developed a fitting algorithm that relies on a strong parametrization of the SED features such as slopes and peaks, and extended the approach of Tavecchio et al. (1998). They also probed the parameter space of their SSC model by discretizing it in a grid, with each free parameter divided into 10 points, with the exception of the particle index \(n_{1}\), sampled in only 3 points. This kind of probing method quickly becomes very computationally heavy for large dimensionality, so only 6 parameters were considered free to mitigate the computation time. Among these parameters, they used a different definition of the blob density, which does not allow a straightforward comparison with our results.
| **Parameter** | **Range** | **Description** | **Scale** |
| --- | --- | --- | --- |
| _SSC blob_ | | | |
| \(\delta\) | [1, 100]\({}^{+}\) | Doppler factor | linear |
| \(N_{e}^{(1)}\) | [0, 8] | particle density [cm\({}^{-3}\)] | log10 |
| \(n_{1}\) | [1, 5]\({}^{+}\) | first index | linear |
| \(n_{2}\) | [1.5, 7.5]\({}^{+}\) | second index | linear |
| \(\gamma_{\rm min}\) | [0, 5]\({}^{+}\) | low-energy cutoff | log10 |
| \(\gamma_{\rm max}\) | [3, 8]\({}^{+}\) | high-energy cutoff | log10 |
| \(\gamma_{\rm break}\) | [2, 7]\({}^{+}\) | energy break | log10 |
| \(B\) | [-4, 0] | magnetic field strength [G] | log10 |
| \(R\) | [14, 19]\({}^{+}\) | blob radius [cm] | log10 |
| _Additional parameters for thermal EIC_ | | | |
| \(T_{\rm disk}\) | [3.5, 6] | disk black body temperature [K] | log10 |
| \(L_{\rm disk}\) | [40, 50] | disk luminosity [erg/s] | log10 |
| \(\epsilon_{\rm BLR}\) | [-5, 0] | covering factor of the BLR | log10 |
| \(D_{\rm BH}\)* | [15, 21]\({}^{+}\) | distance of blob from SMBH [cm] | log10 |

\* Host galaxy frame. \({}^{+}\) Parameters with additional constraints.

Table 1: Parameters and bounds.
| Parameter | Cerruti et al. 2013 best value | Cerruti et al. 2013 1\(\sigma\) range | Bjet_MCMC best value | Bjet_MCMC 1\(\sigma\) range |
| --- | --- | --- | --- | --- |
| \(\delta\) | 96.83 | 32.07 – 99.53 | 83.8 | 35.8 – 100 |
| \(N_{e}^{(1)}\) [cm\({}^{-3}\)] | undefined | undefined | 4.24\(\times 10^{3}\) | 2.18\(\times 10^{2}\) – 1.0\(\times 10^{6}\) |
| \(n_{1}\) | 2.0 | fixed | 2.56 | 2.31 – 2.74 |
| \(n_{2}\) | 4.0 | fixed | 3.75 | 3.25 – 4.24 |
| \(\gamma_{min}\) | 100 | fixed | 9.22 | 1.00 – 1.49\(\times 10^{4}\) |
| \(\gamma_{max}\) | 5\(\times 10^{6}\) | fixed | 1.99\(\times 10^{6}\) | 2.57\(\times 10^{5}\) – 9.99\(\times 10^{7}\) |
| \(\gamma_{break}\) | 5.31\(\times 10^{4}\) | (3.48–13.15)\(\times 10^{4}\) | 2.13\(\times 10^{5}\) | 2.40\(\times 10^{4}\) – 3.93\(\times 10^{5}\) |
| \(B\) [G] | 0.015 | (0.51–4.089)\(\times 10^{-2}\) | 1.71\(\times 10^{-3}\) | 7.92\(\times 10^{-4}\) – 1.42\(\times 10^{-1}\) |
| \(R\) [cm] | 1.3\(\times 10^{16}\) | (0.49–11.57)\(\times 10^{16}\) | 1.40\(\times 10^{17}\) | 1.71\(\times 10^{15}\) – 2.21\(\times 10^{17}\) |
| \(\chi_{\rm red}^{2}\) total | 260.6/27 = 9.65 | | 187.1/24 = 7.80 | |
| \(\chi_{\rm red}^{2}\) X-ray – \(\gamma\)-ray | 18.6/18 = 1.04 | | 14.4/15 = 0.96 | |

Table 2: Best fit and parameter comparison for 1RXS J101015.9-311909.
Nevertheless, this multiwavelength SED is ideal for a validation test of Bjet_MCMC and allows a partial comparison of results with the work of Cerruti et al. (2013). For this model, we considered 100 walkers, 5000 steps, and a burn-in phase of 200 steps. We ran it over 15 parallelized threads for a total time of 5h 45min. First, we can visually compare the two models in Figure 2. We see notable differences but a relatively equally good fit at first glance. We however achieved a better \(\chi^{2}_{\rm red}\) considering the full dataset, or only the X-ray to gamma-ray dataset as used by Cerruti et al. (2013). These values are reported in Table 2. We observe a good agreement within errors on our free parameters. A notable difference is that the best value of \(n_{1}\) found with Bjet_MCMC is outside the probed parameter range of Cerruti et al. (2013). This point highlights the difficulty of finding the best model without deeply probing an extensive range of parameters. Overall this comparison fully validates our approach for single-zone pure SSC models by providing, to date, the best SED fit and parameter characterization of the blazar 1RXS J101015.9-311909.
## 5 Fitting the FSRQ PKS 1222+216 with SSC + Thermal EIC
In order to further display the capabilities of Bjet_MCMC, we performed a test on the FSRQ PKS 1222+216 (z = 0.432), which is known to display bright gamma-ray outbursts (e.g. Tavecchio et al., 2011; Adams et al., 2022). It has been noticed that this source is a good candidate for SSC + thermal EIC models, as EIC was proposed as the main source of VHE gamma-rays. However, without a proper fitting method, it is challenging to fully discard the one-zone SSC scenario. This point may actually be the most critical for the relevance of Bjet_MCMC as it is, to date, likely the first SSC code that can provide a fit with a full SSC+EIC model (13 free parameters). As pure SSC and SSC+EIC models are nested, Bjet_MCMC can provide a statistical test that allows one to reject the pure SSC hypothesis.
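Since the two models are nested, one standard way of quantifying such a rejection (not necessarily the exact procedure implemented in Bjet_MCMC) is a \(\Delta\chi^{2}\) likelihood-ratio test; the values below are purely illustrative.

```python
from scipy.stats import chi2

chi2_ssc, k_ssc = 200.0, 9     # illustrative best-fit chi^2 and number of free parameters (pure SSC)
chi2_eic, k_eic = 150.0, 13    # illustrative values for the SSC+EIC fit
p_value = chi2.sf(chi2_ssc - chi2_eic, df=k_eic - k_ssc)   # Wilks' theorem for nested models
print(f"pure SSC rejected with p-value = {p_value:.2e}")
```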
Figure 2: Results of Bjet_MCMC on the SED fit of the blazar 1RXS J101015.9-311909. The model applied was a one-zone pure SSC, including a gamma-ray EBL absorption following the model of Franceschini & Rodighiero (2017). The SED data points have been shared by the authors of Cerruti et al. (2013), while their model line itself has been manually digitized from the original paper.
| Parameter | Adams et al. 2022 best value | Bjet_MCMC best value | Bjet_MCMC 1\(\sigma\) range |
| --- | --- | --- | --- |
| \(\delta\) | 40 | 31.5 | 26.0 – 42.3 |
| \(N_{e}^{(1)}\) [cm\({}^{-3}\)] | 2.0\(\times 10^{4}\) | 1.12\(\times 10^{7}\) | 3.41\(\times 10^{5}\) – 9.97\(\times 10^{7}\) |
| \(n_{1}\) | 2.1 | 2.98 | 2.51 – 3.12 |
| \(n_{2}\) | 3.9 | 4.45 | 4.14 – 4.83 |
| \(\gamma_{min}\) | 5.5\(\times 10^{2}\) | 7.94\(\times 10^{2}\) | (3.43–9.51)\(\times 10^{2}\) |
| \(\gamma_{max}\) | 3.0\(\times 10^{5}\) | 5.15\(\times 10^{6}\) | 9.42\(\times 10^{4}\) – 9.96\(\times 10^{8}\) |
| \(\gamma_{break}\) | 5.0\(\times 10^{3}\) | 3.09\(\times 10^{4}\) | 4.40\(\times 10^{3}\) – 5.22\(\times 10^{4}\) |
| \(B\) [G] | 3.0\(\times 10^{-2}\) | 3.92\(\times 10^{-2}\) | 1.25\(\times 10^{-2}\) – 8.06\(\times 10^{-1}\) |
| \(R\) [cm] | 5.5\(\times 10^{16}\) | 7.10\(\times 10^{16}\) | 1.94\(\times 10^{5}\) – 1.99\(\times 10^{17}\) |
| \(T_{\rm disk}\) [K] | 2.8\(\times 10^{46}\) | 2.7\(\times 10^{4}\) | (2.23–3.35)\(\times 10^{4}\) |
| \(L_{\rm disk}\) [erg s\({}^{-1}\)] | 2.8\(\times 10^{46}\) | 2.8\(\times 10^{46}\) | fixed |
| \(\epsilon_{\rm BLR}\) | 2.0\(\times 10^{-2}\)* | 2.0\(\times 10^{-2}\) | fixed |
| \(D_{\rm BH}\) [cm] | 1.10\(\times 10^{19}\) | 4.56\(\times 10^{19}\) | 2.96\(\times 10^{18}\) – 1.00\(\times 10^{21}\) |
| \(\chi_{\rm red}^{2}\) | 168.8/65 = 2.60 | 140.5/66 = 2.13 | |

\* The value \(\epsilon_{\rm BLR}\) was considered as fixed in the study of Adams et al. (2022).

Table 3: Best fit and parameter comparison for PKS 1222+216.
Figure 3: Results of Bjet\_MCMC on the SED fit of the blazar PKS 1222+216. The model applied was a one-zone SSC + thermal EIC from the disk and BLR radiative interaction, including gamma-ray EBL absorption by the model of Franceschini & Rodighiero (2017). The SED data points are those of the 2014 flare published by Adams et al. (2022).
It has to be noted that MCMC methods can still struggle in complicated parameter spaces and get stuck in local minima. Eventually, a proper MCMC algorithm should always end up at the true best solution, but the computation time needed can be prohibitive with standard computers.
In this section, we compare the results of Bjet_MCMC on the 2014 flare SED of PKS 1222+216 published by Adams et al. (2022). In their paper, the SED model was manually crafted through a "fit by eye" approach using Bjet. Since the SED is fitted with the same core code (with only minor updates since then), we can perform a proper parameter-by-parameter check of how the results of Bjet_MCMC differ from the previous model.
For this fit, we used the same MCMC setup as for 1RXS J101015.9-311909 (100 walkers, 5000 steps, a 200-step burn-in phase, 15 computing threads). Activating the interaction with the thermal nucleus emission significantly increases the computation time for each step. The full MCMC chain took a total of 12h to run. After noticing multiple \(\chi^{2}\) local minima, we fixed two parameters to the Adams et al. (2022) values, namely the BLR covering factor \(\epsilon_{\rm BLR}=1\times 10^{-2}\) and the accretion disk luminosity \(L_{\rm disk}=2.8\times 10^{46}\) erg s\({}^{-1}\). As seen in Figure 3, the best fit is visually convincing, and Figure 6 shows that the MCMC walkers display a good general \(\chi^{2}\) convergence. However, we still have hints of local minima, such as in Figure 6, _top-left_, where a small fraction of walkers get stuck away from the best fit. They also appear as "islands" in the parameter space corner plot (see Figure 7).
Results of Bjet_MCMC show a better fit \(\chi^{2}\) compared to the model of Adams et al. (2022). It is interesting to notice that the best solution of Bjet_MCMC does not favor any significant EIC contribution in gamma rays. However, it does not rule out a strong EIC emission either. In the study of Adams et al. (2022), it was estimated that the distance to the SMBH should be at least about one parsec to avoid too much gamma-gamma absorption from the BLR. This is relatively consistent with the distance \(D_{\rm BH}\) estimated from our fit, which is found to be above 0.96 pc from the SMBH.
## 6 Computational performance and general usage advice
Bjet_MCMC makes full use of the parallelization capability of the emcee package. By running several tests, we observed a roughly linear improvement in the computation time with the number of parallel threads used for the fitting process. We have not performed extensive testing to check whether this linear relation holds true beyond about a dozen threads. It is expected that I/O processes will diminish the relevance of large parallelization at some point. We recommend using a large number of computing threads if available, likely at least 4 for the pure SSC and at least 15 for SSC+EIC if a user wants to get results overnight. Bjet_MCMC will be the most relevant if used in a computing center with several tens of available computing threads.
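A minimal parallelized run is sketched below, reusing the sampler setup illustrated in Section 3 and assuming that `log_prob`, `p0`, `nwalkers`, `ndim` and `nsteps` are already defined; it is an illustration, not the Bjet_MCMC entry point.

```python
import emcee
from multiprocessing import Pool

with Pool(processes=15) as pool:          # e.g. the 15 threads used for the fits in this paper
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool)
    sampler.run_mcmc(p0, nsteps, progress=True)

samples = sampler.get_chain(discard=200, flat=True)   # drop the 200-step burn-in phase
```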
We propose a few ways to estimate whether a user has enough walkers and steps to be confident in the output of Bjet_MCMC, with some warnings and advice. The favored test to check if the fit is optimal is to look at the "average \(\chi^{2}\) per step" plot. For example, one can see in Figure 4 for 1RXS J101015.9-311909 that the average \(\chi^{2}\) plateaued at about 2500 steps. We can confidently deduce that only 3000 steps would have been enough for this fit as no further improvements are observed afterward. Now looking at the same plot for PKS 1222+216 in Figure 6, we observe a good convergence of the average \(\chi^{2}\) but not a full plateauing yet. This means that the full extent of the 1-sigma parameter space is likely to still change marginally. This is usually not a big issue, but one should avoid drawing too firm a conclusion from the exact values of the parameter errors. A good practice is to add an extra 20% to the parameter errors to get a more conservative parameter range when the average \(\chi^{2}\) curve does not fully flatten out. If the average \(\chi^{2}\) curve does not show any sign of asymptotic behavior, then the number of steps and/or walkers needs to be increased.
Note that the best \(\chi^{2}\) always converges faster than the average \(\chi^{2}\) (see Figures 4, 6). The best \(\chi^{2}\) convergence gives a confidence estimation on the best model, while the average \(\chi^{2}\) convergence gives a confidence estimation of the associated parameter errors. The "\(\chi^{2}\) per step" plot gives a view of the entirety of walkers. It is a relatively efficient way of checking for local \(\chi^{2}\) minima in the parameter space. As long as most walkers converge toward the best solution, this should not have significant consequences on the results. If most of the walkers appear to be stuck in local minima, then the SED dataset is not constraining enough for the complexity of the model. One can run a longer chain and hope for the MCMC method to eventually converge, or reduce the model complexity by freezing parameters.
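Both diagnostics can be reconstructed from the stored log-probabilities; the sketch below assumes the `sampler` object from the earlier examples and is not a Bjet_MCMC built-in.

```python
import numpy as np
import matplotlib.pyplot as plt

log_prob_chain = sampler.get_log_prob()                 # shape (nsteps, nwalkers)
chi2_chain = -2.0 * log_prob_chain                      # since ln P = -chi^2 / 2

plt.plot(chi2_chain.mean(axis=1), label=r"average $\chi^2$ per step")
plt.plot(np.minimum.accumulate(chi2_chain.min(axis=1)), label=r"best $\chi^2$")
plt.xlabel("step"); plt.ylabel(r"$\chi^2$"); plt.yscale("log"); plt.legend()
plt.show()
```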
Some spectral points with very small errors may overconstrain the fit to the detriment of another energy range. For example, a 10% model flux variation in the optical may be as constraining as a factor of 2 in gamma rays. From a statistical point of view, this is exactly how we expect the fit to behave given the error bars of the spectral points. However, the "real" flux errors are often widely underestimated at low energies. Users may overlook that blazars vary at all wavelengths on short timescales. The error on each spectral point should reflect the flux variations during the integration period used to build the SED. We advise users to use the RMS error of the flux variation instead of the statistical error of individual observations, if larger. It appears that both optical SEDs used in this paper likely have overconstraining error bars.
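A minimal helper implementing this advice could look as follows (illustrative only, not part of Bjet_MCMC):

```python
import numpy as np

def conservative_error(fluxes, stat_err):
    """Return the RMS of the flux variations over the integration period if it
    exceeds the statistical error of the averaged measurement."""
    rms = np.std(np.asarray(fluxes, dtype=float), ddof=1)
    return max(rms, stat_err)
```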
emcee considers a number of steps fewer than 50 times the integrated autocorrelation time \(\tau_{\rm corr}\) (in steps) to be unsafe. We find this boundary challenging to achieve in most of our tests, but also not systematically leading to better results when met. The value \(\tau_{\rm corr}\) is given as an output of Bjet_MCMC. We generally advise a safe boundary of \(n_{\rm steps}\geq 10\tau_{\rm corr}\). However, a general check of the \(\chi^{2}\) convergence plots and a visual check of the SED itself should always be performed to assess the fit quality.
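This check can be scripted directly from the emcee outputs, again assuming the `sampler` object from the earlier sketches:

```python
tau = sampler.get_autocorr_time(tol=0)     # integrated autocorrelation time per parameter (in steps)
n_steps = sampler.get_chain().shape[0]
if n_steps < 10 * tau.max():
    print("Chain probably too short: increase the number of steps and/or walkers.")
```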
A full version of the multi-zone Bjet is available within the Bjet_MCMC package (under the name Bjet_core), but it contains the full set of 23 free parameters and is consequently not included in the MCMC method. The multi-zone model is recommended only for users having a deep knowledge of blazar emission models, as there are significant risks of ending up with an inconsistent or unphysical set of parameters. This risk has been mitigated for the pure SSC and SSC+EIC models through parameter constraints, but the final assessment of the quality and relevance of Bjet_MCMC results remains the responsibility of the user.
## 7 Conclusion
Bjet_MCMC is a new tool in the growing family of SSC models of blazars. Its full version includes 2 SSC zones + EIC, based on Hervet et al. (2015). However, only the one-zone pure SSC and one-zone SSC+EIC are fully implemented in the automatic MCMC fitting method. Bjet_MCMC aims to be a user-friendly tool that only requires minimal input from the user, namely a configuration file and a SED data file. It is fully parallelized and can take advantage of computing clusters. The code works as intended and produces consistent results. There are other publicly available SSC models at this time that contain SED fitting algorithms; among the most known are AGNpy (Nigro et al., 2022), JetSet (Tramacere et al., 2011) and Naima (Zabalza, 2015). We do not provide any comparison between Bjet_MCMC and these models in terms of consistency of results, performance, and capabilities. This will be addressed in further studies. However, it has to be noted that Bjet_MCMC appears to be, at the current time, the only public tool with an automatic fitting method that can handle a full SSC+EIC model with up to 13 free parameters. It needs to be noted that an excellent broadband energy coverage of the SED has to be built in order to obtain a good convergence of all parameters for the most extensive model version. This tool requires the user to have some knowledge of blazar emission processes to understand the limits of SSC models and to draw scientific interpretations from the outputs. Finally, we highlight that the model still receives frequent updates, and some information mentioned in this paper may quickly become outdated. The most up-to-date version and information are available publicly on GitHub.5
Footnote 5: [https://github.com/Ohervet/Bjet_MCMC](https://github.com/Ohervet/Bjet_MCMC)
We thank David Williams for his advice through the multiple years of development of Bjet_MCMC. We thank the PHE team members of the LUTH (Paris Observatory) that actively contributed to the early developments of Bjet. We are grateful to the multiple preliminary users of Bjet_MCMC that provided feedback to improve the code. This work was made possible thanks to the NSF support under grant PHY-2011420.
emcee (Foreman-Mackey et al., 2013), Astropy (Astropy Collaboration et al., 2013), SciPy (Jones et al., 2001-), NumPy (Walt et al., 2011), Matplotlib (Hunter, 2007)
|
2302.13344 | Tailoring Language Generation Models under Total Variation Distance | The standard paradigm of neural language generation adopts maximum likelihood
estimation (MLE) as the optimizing method. From a distributional view, MLE in
fact minimizes the Kullback-Leibler divergence (KLD) between the distribution
of the real data and that of the model. However, this approach forces the model
to distribute non-zero (sometimes large) probability mass to all training
samples regardless of their quality. Moreover, in the attempt to cover the
low-probability regions in the data distribution, the model systematically
overestimates the probability of corrupted text sequences, which we conjecture
is one of the main reasons for text degeneration during autoregressive
decoding. To remedy this problem, we leverage the total variation distance
(TVD) with its robustness to outliers, and develop practical bounds to apply it
to language generation. Then, we introduce the TaiLr objective that balances
the tradeoff of estimating TVD. Intuitively, TaiLr downweights real data
samples that have low model probabilities with tunable penalization intensity.
Experimental results show that our method alleviates the overestimation of
degenerated sequences without sacrificing diversity and improves generation
quality on a wide range of text generation tasks. | Haozhe Ji, Pei Ke, Zhipeng Hu, Rongsheng Zhang, Minlie Huang | 2023-02-26T16:32:52Z | http://arxiv.org/abs/2302.13344v1 | # Tailoring Language Generation Models under Total Variation Distance
###### Abstract
The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimizing method. From a distributional view, MLE in fact minimizes the Kullback-Leibler divergence (KLD) between the distribution of the real data and that of the model. However, this approach forces the model to distribute non-zero (sometimes large) probability mass to all training samples regardless of their quality. Moreover, in the attempt to cover the low-probability regions in the data distribution, the model systematically overestimates the probability of corrupted text sequences, which we conjecture is one of the main reasons for text degeneration during autoregressive decoding. To remedy this problem, we leverage the total variation distance (TVD) with its robustness to outliers, and develop practical bounds to apply it to language generation. Then, we introduce the TaiLr1 objective that balances the tradeoff of estimating TVD. Intuitively, TaiLr downweights real data samples that have low model probabilities with tunable penalization intensity. Experimental results show that our method alleviates the overestimation of degenerated sequences without sacrificing diversity and improves generation quality on a wide range of text generation tasks.2
Footnote 1: Pronounced as “tailor”.
Footnote 2: Code is available at [https://github.com/thu-coai/Tailr](https://github.com/thu-coai/Tailr).
## 1 Introduction
The dominant approach to train language generation models is to maximize the likelihood of text samples in training data. With the development of pre-training techniques, the quality of texts generated by current models has been improved by a large margin (Radford et al., 2019; Brown et al., 2020). However, the text degeneration phenomena, e.g., repetitions (Holtzman et al., 2020; Welleck et al., 2020), incoherence (Guan et al., 2021; Ji and Huang, 2021), and other ill-formed generation results sampled from the noisy long tail (Dou et al., 2022; LeBrun et al., 2022), are still widely observed in large pre-trained models. These results indicate that using MLE as the optimizing method has theoretical limitations that are hard to be compensated by increasing the model size.
Given the real data distribution \(p(\mathbf{x})\) and the model distribution \(q(\mathbf{x})\) defined by a learned generation model, we can view MLE as minimizing the KLD between \(p(\mathbf{x})\) and \(q(\mathbf{x})\). However, minimizing \(D_{\text{KL}}(p,q)\) will lead to a _zero-avoiding_ solution of \(q(\mathbf{x})\) that spreads itself to cover all the modes in the real data (Minka, 2005; Malinin and Gales, 2019). As the model is forced to take into account all the modes regardless of their quality and saliency, this behavior could deteriorate the overall generation quality when (i) the data inherently exhibits too many variations, e.g., in open-ended generation, the model often over-presents unrelated words in the unreliable long tail of its distribution (Holtzman et al., 2020). (ii) the data contains flawed or noisy references, e.g., hallucination and missing contents in text summarization (Zhao et al., 2020) degrade the generation quality of the model.
In language generation, the attempt to cover all the non-zero probability regions in the data distribution would lead to a problem directly related to text degeneration, which we term as _data void overestimation_. Concretely, the model assigns considerably more probability mass than it should to the _void_ of the real data distribution, where degenerated text sequences lie. An intuitive illustration is shown in Figure 1 where KLD pushes the model to place large mass on the zero-probability region of the target distribution to cover the minor mass portion on the right. These degenerated texts include random word sequences and partially corrupted texts that have high lexical overlap with the real texts. Therefore, during free-run generation, the model is likely to fall into the void regions and produce "over-generalized" text samples that are unlike the training data (Huszar, 2015).
In this work, we start with a robust alternative to KL divergence, i.e., the total variation distance (TVD). TVD is known to be robust to outliers in the data (Beran, 1977; Knoblauch and Vomfell, 2020), as it measures the absolute difference between two probability distributions averaging at each point. In §2.2, we show that TVD allows the model to place zero probability on low-quality training samples and prevents overestimation of the data void region through gradient analysis. Though appealing, TVD cannot be directly applied to text generation because (i) TVD measures the distance at the sequence level while we desire a token-level criterion for autoregressive generation models, (ii) we only have samples from the data distribution, whereas calculating TVD demands the real data probability \(p(\mathbf{x})\) of the training sample \(\mathbf{x}\). We overcome these two issues by (i) developing an upper bound on the sequence-level TVD with its token-level factorization (§3.1), and (ii) introducing a proxy distribution (§3.2) that handles the bias-variance tradeoff during estimating TVD (§3.3). Finally, we derive the **T**otal **V**ariation **G**uided **L**anguage **G**eneration (TaiLr) objective by leveraging access to the non-zero gradient of TVD to guide the model. Intuitively, TaiLr weights the log-likelihood of a text sequence at each position according to the model probability and uses a tunable hyperparameter to control the penalization intensity.
We first conduct experiments on synthetic data to show that TaiLr achieves better generation quality without sacrificing diversity and reduces the overestimation of degenerated texts compared to MLE. Further experiments on real data demonstrate that the proposed method outperforms existing methods that modify MLE at different aspects on a wide range of language generation tasks, including machine translation, text summarization, and long text generation.
## 2 Background and Motivation
We consider natural language generation tasks where a conditional generation model \(p_{\theta}(\mathbf{y}|\mathbf{x})\) parametrized by \(\theta\) is required to generate the target text sequence \(\mathbf{y}=(y_{1},\cdots,y_{T})\) given the context \(\mathbf{x}\). Let \(p_{o}(\mathbf{y}|\mathbf{x})\) denote the real data distribution, MLE training is equivalent to minimizing the KL divergence between \(p_{o}\) and \(p_{\theta}\):
\[D_{\text{KL}}(p_{o},p_{\theta})=-\mathbb{E}_{\mathbf{y}\sim p_{o}}\Bigg{[}\sum_{t =1}^{T}\log p_{\theta}(y_{t}|\mathbf{y}_{<t},\mathbf{x})\Bigg{]}-H(p_{o}), \tag{1}\]
where the generation probability is factorized into the product of conditional token probabilities given the prefix \(\mathbf{y}_{<t}\) and the context \(\mathbf{x}\): \(p_{\theta}(\mathbf{y}|\mathbf{x})=\prod_{t=1}^{T}p_{\theta}(y_{t}|\mathbf{y}_{<t},\mathbf{x})\). The first term pushes the model to minimize the negative log-likelihood (NLL) of the training data. The second term is a constant with respect to \(\theta\) and therefore is commonly ignored in MLE.
Despite its simplicity and practical benefits for optimization, MLE is known to suffer from a mismatch to the evaluation metric (Pang and He, 2021) and brittleness to noise in the training data (Kang and Hashimoto, 2020). Motivated by the literature in probability metrics, we draw attention to total variation distance (TVD) as a naturally robust alternative to KLD. We present the definitions of TVD (Van Handel, 2014) between the data distribution \(p_{o}\) and the model distribution \(p_{\theta}\) given the context \(\mathbf{x}\):
\[D_{\text{TV}}(p_{o},p_{\theta}) =\frac{1}{2}\sum_{\mathbf{y}\in\mathcal{Y}}\bigg{|}p_{o}(\mathbf{y}|\mathbf{x} )-p_{\theta}(\mathbf{y}|\mathbf{x})\bigg{|} \tag{2a}\] \[=1-\sum_{\mathbf{y}\in\mathcal{Y}}\min\bigg{(}p_{o}(\mathbf{y}|\mathbf{x}),p_ {\theta}(\mathbf{y}|\mathbf{x})\bigg{)}, \tag{2b}\]
where \(\mathcal{Y}\) is the space of all possible text sequences. Intuitively, TVD measures the average of the absolute difference between \(p_{o}(\mathbf{y}|\mathbf{x})\) and \(p_{\theta}(\mathbf{y}|\mathbf{x})\) on all possible text sequence \(\mathbf{y}\in\mathcal{Y}\). Therefore
the model learns to properly allocate its probability mass to best describe the major part of the data distribution and ignore outliers. TVD is also correlated with the _distinguishability_ of samples generated by the model, which is shown to be a balanced criterion that takes both quality and diversity into account (Hashimoto et al., 2019). Existing work proposed to optimize distinguishability in a generative adversarial manner (Goodfellow, 2015; Caccia et al., 2020) while Kang & Hashimoto (2020) argued that minimizing its heuristic surrogate via loss truncation is better in practice. Additional related work is provided in Appendix B. In this work, we first analyze the property of TVD and seek to directly minimize TVD or at least its upper bound in the task of natural language generation.
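For concreteness, the two equivalent forms of equation (2) for discrete distributions can be written as a short helper (our illustration, not the released code):

```python
import numpy as np

def total_variation(p, q):
    """TVD between two distributions on the same discrete support (equations 2a/2b)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    assert np.isclose(p.sum(), 1.0) and np.isclose(q.sum(), 1.0)
    return 0.5 * np.abs(p - q).sum()       # equivalently: 1.0 - np.minimum(p, q).sum()
```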
### A Toy Experiment and Its Implications
We first present a toy experiment to illustrate the behavioral difference of KLD and TVD when countering imperfect data, where a single Gaussian model is required to fit a mixture of two Gaussians. As shown in Figure 1, minimizing KLD forces the model to learn a flat distribution that spans itself to cover all the non-zero probability regions, which causes underfitting of the major part of the target probability mass. Furthermore, the model places considerably high probability mass to the _void region_ in the target distribution which does not correspond to real samples. On the other hand, TVD focuses on the major target mass without overestimating degenerated samples that are unlikely under the target distribution.
In language generation, this scenario is realistic and pervasive. For many language generation tasks, it is hard to circumvent noisy or invalid references during the data collection process, e.g., hallucination in text summarization (Zhao et al., 2020) and image captioning (Xiao & Wang, 2021). For applications like open-ended generation, existing autoregressive models pre-trained on large corpora are still reported to over-present the artifacts in the noisy long tail (Holtzman et al., 2020).
### Gradient Analysis
To better understand the reason behind the behavioral difference of KLD and TVD in optimization, we analyze their gradients with respect to the model parameter \(\theta\). Given a context-target text pair \((\mathbf{x}^{*},\mathbf{y}^{*})\) sampled from the data distribution \(p_{o}\), we approximate the gradient of KLD with respect to \(\theta\) using Monte-Carlo sampling3:
Footnote 3: For clarity, we only use one sample per batch in this analysis, and the result still holds for large batch size.
\[\nabla_{\theta}D_{\text{KL}}(p_{o},p_{\theta})\approx-p_{\theta}(\mathbf{y}^{*}| \mathbf{x}^{*})^{-1}\nabla_{\theta}p_{\theta}(\mathbf{y}^{*}|\mathbf{x}^{*}). \tag{3}\]
The result is the negative gradient of the model probability weighted by the reciprocal of the model probability on this sample. Intuitively, when a low-quality context-target pair is sampled, the model will be affected by this sample and shift the distribution towards it. If \(p_{\theta}(\mathbf{y}^{*}|\mathbf{x}^{*})\approx 0\), the norm of the gradient will become very large, which leads to a huge step of parameter update towards that noisy direction. This explains the phenomena illustrated in §2.1, where KLD pushes the model to cover all the training samples resulting in an unfocused and flat distribution.
For comparison, we calculate the gradient of TVD with respect to \(\theta\) using equation (2b). The derivation details are provided in Appendix A.1.
\[\nabla_{\theta}D_{\text{TV}}(p_{o},p_{\theta})\approx\begin{cases}-p_{o}(\mathbf{y }^{*}|\mathbf{x}^{*})^{-1}\nabla_{\theta}p_{\theta}(\mathbf{y}^{*}|\mathbf{x}^{*}),&p_{ \theta}(\mathbf{y}^{*}|\mathbf{x}^{*})<p_{o}(\mathbf{y}^{*}|\mathbf{x}^{*})\\ 0,&p_{\theta}(\mathbf{y}^{*}|\mathbf{x}^{*})\geq p_{o}(\mathbf{y}^{*}|\mathbf{x}^{*}),\end{cases} \tag{4}\]
where the result switches between a non-zero gradient term and 0 by comparing the model probability and the real data probability. When the model probability exceeds the real data probability (**overestimation**), the gradient becomes 0 to prevent the model from fitting dubious data points. When the model predicts a probability lower than the real probability of the sample (**underestimation**), the weight is the reciprocal of the real probability of the sample, which has a smaller norm than equation (3). This means that the update towards noisy directions is more conservative, and the model is allowed to assign 0 probability to those low-quality training samples.
Figure 1: Results of the toy experiment: KLD is sensitive to outliers while TVD is more robust.
## 3 Methodology
Despite the attractive attribute of TVD, we still face several challenges to apply TVD to natural language generation. First, TVD measures the difference of the sequence-level probability. For autoregressive language generation models, it is typical to use a token-level criterion to supervise the factorized model probability. Although the sequence-level objective can also be adopted as a reward function using policy gradient (Williams, 1992; Sutton et al., 1999), this approach is shown to suffer from a high variance and sparse rewards. Second, calculating TVD requires the real data probability \(p_{o}(\mathbf{y}|\mathbf{x})\) of the sample \(\mathbf{y}\) to be known. One straightforward solution is to train a classifier that estimates the density ratio between \(p_{o}(\mathbf{y}|\mathbf{x})\) and \(p_{\theta}(\mathbf{y}|\mathbf{x})\)(Song et al., 2020). However, the density ratio estimator would introduce undetermined biases due to miscalibration (Grover et al., 2019). In this work, we tackle these challenges by developing practical upper bounds on TVD, and derive a sampling-based learning criterion which can directly substitute for the MLE objective in practice.
### Token-level Factorization
As KLD has the nice property of factorizing the sequence-level loss into summation of the token-level loss conditioned on the prefix as illustrated in equation (1), we wonder if TVD also has this property. We first write the autoregressive factorization of the data probability as \(p_{o}(\mathbf{y}|\mathbf{x})=\prod_{t=1}^{T}p_{o}(y_{t}|\mathbf{y}_{<t},\mathbf{x})\). For simplicity, we use \(p_{o}^{<t}(y_{t})\) and \(p_{\theta}^{<t}(y_{t})\) to denote \(p_{o}(y_{t}|\mathbf{y}_{<t},\mathbf{x})\) and \(p_{\theta}(y_{t}|\mathbf{y}_{<t},\mathbf{x})\) respectively. Then we have the following proposition that manifests the relationship between the sequence-level objective and its token-level factorization.
**Proposition 1**.: _Given \(p_{o}(\mathbf{y}|\mathbf{x})=\prod_{t=1}^{T}p_{o}^{<t}(y_{t})\) and \(p_{\theta}(\mathbf{y}|\mathbf{x})=\prod_{t=1}^{T}p_{\theta}^{<t}(y_{t})\), then the following condition holds:_
\[D_{\text{TV}}(p_{o},p_{\theta})\leq\mathbb{E}_{\mathbf{y}\sim p_{o}}\Bigg{[}\sum_ {t=1}^{T}D_{\text{TV}}(p_{o}^{<t},p_{\theta}^{<t})\Bigg{]}. \tag{5}\]
The condition follows from applying triangle inequality (Hein & Bousquet, 2005) to the right hand side of equation (2a). The complete proof is provided in Appendix A.2. This result indicates that minimizing the expected sum of the TVD on token-level probabilities is equivalent to minimizing the upper bound of the TVD on their products where the bound becomes tight as \(p_{\theta}\) approaches \(p_{o}\). Therefore, we are guaranteed to train the model using the MLE fashion that calculates the loss at each position given the prefix of the target sequence.
### Estimation with Proxy Distribution
Another difficulty of directly applying TVD to train language generation model is the explicit demand of the data probability distribution \(p_{o}\) while we only have a finite number of samples drawn from it. In contrast to using an additional density ratio estimation model that is both hard to train and potentially biased with undetermined deviation, we try to estimate the target using a _proxy probability distribution_ and analyze the estimation error.
We start by considering the one-hot distribution \(e^{(w)}\) where only the \(w\)-th index is \(1\) and others are \(0\). \(w\) is the target token sampled from the conditional oracle probability \(p_{o}^{<t}(\cdot)\). It is easy to see that the expectation of the one-hot distribution is exactly the oracle probability distribution:
\[\mathbb{E}_{w\sim p_{o}^{<t}}\Big{[}e^{(w)}\Big{]}=p_{o}^{<t}. \tag{6}\]
Then we use \(e^{(w)}\) to substitute the oracle probability \(p_{o}^{<t}\) in TVD and present the following proposition which states that the expectation of this estimation serves as an upper bound of the original TVD between the oracle distribution and the model distribution.
**Proposition 2**.: _Given \(w\sim p_{o}^{<t}\) and the one-hot distribution \(e^{(w)}\), then the following condition holds:_
\[D_{\text{TV}}(p_{o}^{<t},p_{\theta}^{<t})\leq\mathbb{E}_{w\sim p_{o}^{<t}} \bigg{[}D_{\text{TV}}(e^{(w)},p_{\theta}^{<t})\bigg{]}. \tag{7}\]
The proof utilizes the Jensen inequality and the convexity of the TVD. The full proof is presented in Appendix A.3. The proposition states that minimizing the TVD between the model distribution and the one-hot distribution _on average_ leads to the minimization of the TVD between the model distribution and the oracle distribution. By introducing an _unbiased_ estimation of the unknown oracle distribution, we sidestep the need of density ratio estimation and derive a practical upper bound using Monte-Carlo sampling from the oracle distribution.
### The Bias-variance Tradeoff
However, using the one-hot distribution as an approximation in practice sometimes leads to high estimation variance. For example, in some applications where the real data distribution has a high entropy, using the one-hot proxy can hardly cover the diverse candidates. Therefore we consider a general form of the proxy distribution \(\hat{p}^{(w)}\) where \(w\) is the target token. We denote the expectation of the general proxy distribution as \(\hat{p}^{<t}=\mathbb{E}_{w\sim p_{o}^{<t}}\left[\hat{p}^{(w)}\right]\). Then we show that the upper bound of the estimation error can be decomposed into a bias term and a variance term in the following:
\[\text{Error}_{\hat{p}^{(w)}}\leq\underbrace{D_{\text{TV}}(\hat{p}^{<t},p_{o}^ {<t})}_{\text{Bias}}+\underbrace{\mathbb{E}_{w\sim p_{o}^{<t}}\left[D_{\text{ TV}}(\hat{p}^{(w)},\hat{p}^{<t})\right]}_{\text{Variance}}, \tag{8}\]
where \(\text{Error}_{\hat{p}^{(w)}}\) is defined as the difference of the practical estimation \(\mathbb{E}_{w\sim p_{o}^{<t}}\left[D_{\text{TV}}(\hat{p}^{(w)},p_{\theta}^{<t})\right]\) and the ideal target \(D_{\text{TV}}(p_{o}^{<t},p_{\theta}^{<t})\). The derivation applies the triangle inequality to bound the error term (detailed derivation can be found in Appendix A.4). Specifically, we consider the one-hot distribution as an example: it has zero estimation bias (equation (6)). However, we show in Appendix A.5 that its variance equals \(2H_{\alpha}(p_{o}^{<t})\) when \(\alpha=2\), where \(H_{\alpha}\) is the Tsallis \(\alpha\)-entropy (Tsallis, 1988). Therefore, the one-hot proxy suffers from a large variance when the entropy of \(p_{o}^{<t}\) is high.
In order to handle the bias-variance tradeoff, we consider a \(\gamma\)-mixture proxy distribution that interpolates the one-hot distribution and the model distribution with \(\gamma\): \(\hat{p}^{(w)}=\gamma e^{(w)}+(1-\gamma)p_{\theta}^{<t}\). Below we show the bias and variance in equation (8) using this mixture proxy distribution:
\[\text{Bias}=(1-\gamma)\cdot D_{\text{TV}}(p_{\theta}^{<t},p_{o}^{<t}),\quad \text{Variance}=\gamma\cdot\mathbb{E}_{w\sim p_{o}^{<t}}\left[D_{\text{TV}}( e^{(w)},p_{o}^{<t})\right]. \tag{9}\]
When we tune \(\gamma\) from 1 to 0, the proxy distribution smoothly transfers from the unbiased one-hot distribution to a soft distribution, which reduces the variance of the one-hot estimation and stabilizes training in the early stage. Although this comes at the cost of an increased estimation bias at the beginning of training, the bias gradually decreases as the model fits the data distribution more accurately as training goes on.
### Total Variation Guided Language Generation (TaiLr)
Finally, we introduce the TaiLr objective by summarizing the above results. Given the target token \(w\), we derive the TVD between the proxy distribution \(\hat{p}^{(w)}=\gamma e^{(w)}+(1-\gamma)p_{\theta}^{<t}\) and the model distribution \(p_{\theta}^{<t}\) following equation (2b):
\[D_{\text{TV}}(\hat{p}^{(w)},p_{\theta}^{<t})=1-\mathbb{E}_{y_{t}\sim\hat{p}^{(w)}}\min\Big{(}1,\frac{p_{\theta}^{<t}(y_{t})}{\hat{p}^{(w)}(y_{t})}\Big{)}, \tag{10}\]
Figure 2: Computational graph of the TaiLr objective where the log-likelihood is weighted position-wisely.
where the expectation is approximated by sampling from the proxy distribution using Monte-Carlo sampling. When sampling \(y_{t}\neq w\), the gradient of \(D_{\text{TV}}(\hat{p}^{(w)},p_{\theta}^{<t})\) is always 0 which is inefficient for optimization. Therefore, we consider the non-zero gradient when \(y_{t}\) is sampled as the target token \(w\) to guide the model, i.e., \(-\nabla_{\theta}p_{\theta}^{<t}(w)/\hat{p}^{(w)}(w)\), and devise the TaiLr objective whose gradient is equivalent to it:
\[\mathcal{L}_{\text{Tailr}}(w;\theta)=-\Bigg{(}\frac{p_{\theta}^{<t}(w)}{\gamma +(1-\gamma)p_{\theta}^{<t}(w)}\Bigg{)}\log p_{\theta}^{<t}(w), \tag{11}\]
where the weighting factor is detached in the back-propagation and only the log term receives gradient. The equivalence of \(\nabla_{\theta}\mathcal{L}_{\text{Tailr}}(w;\theta)\) and the non-zero gradient of \(D_{\text{TV}}(\hat{p}^{(w)},p_{\theta}^{<t})\) can be seen by applying \(f(x)\nabla_{x}\log f(x)=\nabla_{x}f(x)\). In Figure 2, we show the computational graph of the TaiLr objective. As \(\gamma\) switches from 1 to 0, TaiLr transitions from an estimation of TVD to unweighted MLE. Intuitively, TaiLr downweights samples with low probabilities assigned by the model so that the model focuses on modeling the high-quality samples during training and reduces overestimation of degenerated texts during inference. To counter the negative effect of random predictions at the early training stage, we set a threshold as a lower bound of the weighting factor.
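A minimal PyTorch sketch of equation (11) is given below. It is our re-implementation for illustration (the official code is linked in the abstract), and the default values of `gamma` and of the weight lower bound are placeholders, not the settings used in the experiments.

```python
import torch
import torch.nn.functional as F

def tailr_loss(logits, targets, gamma=0.1, weight_floor=0.1, ignore_index=-100):
    """TaiLr objective: token-level NLL weighted by p / (gamma + (1 - gamma) * p),
    with the weight detached from the graph and clipped from below."""
    log_probs = F.log_softmax(logits, dim=-1)                      # (batch, T, vocab)
    mask = (targets != ignore_index)
    safe_targets = targets.clamp(min=0)
    tgt_logp = log_probs.gather(-1, safe_targets.unsqueeze(-1)).squeeze(-1)
    p = tgt_logp.exp().detach()                                    # no gradient through the weight
    weight = (p / (gamma + (1.0 - gamma) * p)).clamp(min=weight_floor)
    loss = -(weight * tgt_logp) * mask.float()
    return loss.sum() / mask.float().sum()
```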
## 4 Experiments
In the previous sections, we show that the proposed method is a practical estimation of TVD with theoretical guarantees. Next, we demonstrate its empirical performance. First, we conduct a synthetic experiment to investigate the behavior of the model trained by TaiLr and MLE in controlled settings where the underlying oracle distribution is known. Second, we compare TaiLr with other baselines in a more realistic setting where we train generation models with standard architectures or finetune pre-trained models on a wide range of language generation benchmarks. More experimental details which are not included in the following sections are provided in Appendix E.1.
### Synthetic Experiments
**The synthetic data.** In this subsection, our goal is to test the behavior of TaiLr in the task of text generation. Since we seek to analyze the distributional properties, we sample training data from an oracle model whose distribution is known. Instead of using random distributions (Yu et al., 2017; Guo et al., 2018), we follow LeBrun et al. (2022) and train an oracle model on real human texts to generate synthetic data so that the results can better generalize to real data. Specifically, we train a 1-layer LSTM on the texts of the COCO image caption dataset (Lin et al., 2014) without any conditional inputs. We sample 10K synthetic data for training and 5K for validation.
**The model setting.** We train two LSTMs with the same architecture as the oracle model using MLE and TaiLr, which we denote as \(p_{\text{MLE}}\) and \(p_{\text{TaiLr}}\), respectively. We train both models for \(100\) epochs and pick the best checkpoint with the lowest perplexity on the development set. We use random sampling to obtain text samples from the learned generation models.
**Performance evaluation.** To thoroughly evaluate the generation performance of the two models, we follow Yu et al. (2017); Caccia et al. (2020) to evaluate the generation quality with \(\text{PPL}_{oracle}\) and the coverage of the oracle distribution with \(\text{PPL}_{test}\). Specifically, \(\text{PPL}_{oracle}\) is the likelihood of the oracle model calculated on the samples generated by the learned model, while \(\text{PPL}_{test}\) is the likelihood of the learned model evaluated on the held-out data. We also include BLEU score (Papineni et al., 2002) to calculate the average \(n\)-gram overlap between the generated sample and the
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & \(\text{PPL}_{oracle}\downarrow\) & \(\text{PPL}_{test}\downarrow\) & BLEU-4 \(\uparrow\) & SelfBLEU-4 \(\downarrow\) \\ \hline Training data & - & - & **35.40** & 30.83 \\ \(p_{\text{MLE}}\) & 31.64 & 22.64 & 27.74 & 33.27 \\ \(p_{\text{TaiLr}}\) & **26.91** & **22.42** & 28.36 & **28.91** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Automatic evaluation results of the models trained by MLE and TaiLr. \(\text{PPL}_{oracle}\) and BLEU assess the generation quality, while \(\text{PPL}_{test}\) and SelfBLEU emphasize on sample diversity. **Boldface** and underline indicate the highest and the second highest performance respectively.
held-out corpus, and SelfBLEU (Zhu et al., 2018) which computes the average overlap of each generated sample to other samples generated by the model4. For evaluation, we use 20K held-out data and sample 20K texts from the two generation models, respectively. As shown in Table 1, TaiLr improves the generation quality by nearly 5 points of \(\text{PPL}_{oracle}\) without sacrificing the coverage of the oracle distribution, as it achieves a similar \(\text{PPL}_{test}\) to MLE. We also observe that TaiLr achieves higher BLEU-4 than MLE, while lower than the training data. Finally, MLE has the highest SelfBLEU-4, which indicates its tendency to over-generalize to unseen samples that may include repeated patterns and degrade diversity, while TaiLr achieves the lowest SelfBLEU-4. We also report the result using GPT2-Medium as the oracle model in Appendix E.1.1.
Footnote 4: The scripts of both BLEU and SelfBLEU are from [https://github.com/geek-ai/Texygen/blob/master/utils/metrics](https://github.com/geek-ai/Texygen/blob/master/utils/metrics)
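For readers who do not wish to rely on the Texygen scripts, the sketch below approximates SelfBLEU-4 with NLTK; it follows the metric's definition (the average BLEU of each generated sample against all other generated samples) and is not the exact implementation used for Table 1.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu_4(samples):
    """samples: list of tokenized generations, e.g. [["a", "man", ...], ...].
    Higher SelfBLEU means the generations are less diverse."""
    smooth = SmoothingFunction().method1
    weights = (0.25, 0.25, 0.25, 0.25)  # up to 4-grams
    scores = []
    for i, hypothesis in enumerate(samples):
        references = samples[:i] + samples[i + 1:]
        scores.append(sentence_bleu(references, hypothesis, weights=weights,
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)
```

Note that this exact computation is quadratic in the number of samples, so for the 20K generations evaluated here one would in practice subsample the reference set.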
**Perturbation evaluation.** Next, we evaluate the models' behavior on perturbed sequences and relate text degeneration to the models' overestimation of the perturbed data. To quantify the deviation of the model's estimation from the real data distribution, we define the model estimation error of a sample \(\mathbf{x}\) as the difference between the sequence-level log probability given by the model and its true log probability, i.e., \(\text{Error}(\mathbf{x})=\log p_{\theta}(\mathbf{x})-\log p_{o}(\mathbf{x})\). We then describe the construction of the perturbed dataset. Given \(\mathbf{x}\) sampled from \(p_{o}\), we iteratively apply small perturbations to \(\mathbf{x}\) so that each lexical change is small. After \(N\) perturbations, \(\mathbf{x}\rightarrow\mathbf{x}^{(1)}\rightarrow\cdots\rightarrow\mathbf{x}^{(N)}\) smoothly transforms a data point into a perturbed sample. We propose the following perturbations that highlight the widely observed text degeneracy patterns in generation: (1) **Repeat** a token in \(\mathbf{x}\) (repetition, Welleck et al. (2020)). (2) **Delete** the last token in \(\mathbf{x}\) (oversmoothing, Kulikov et al. (2021)). (3) **Substitute** a token in \(\mathbf{x}\) with a token from the vocabulary (incoherence, Holtzman et al. (2020)). We sample 20K samples from \(p_{o}\) and apply \(N=30\) perturbations to each sample.
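A schematic implementation of one perturbation step and of \(\text{Error}(\mathbf{x})\) is sketched below; `log_p_model` and `log_p_oracle` stand for sequence-level log-probability functions of the learned and oracle models and are placeholders of this sketch.

```python
import random

def perturb_once(tokens, vocab):
    """Apply one of the three perturbations: repeat a token, delete the
    last token, or substitute a token with one drawn from the vocabulary."""
    x = list(tokens)
    op = random.choice(["repeat", "delete", "substitute"])
    if op == "repeat":
        i = random.randrange(len(x))
        x.insert(i + 1, x[i])
    elif op == "delete" and len(x) > 1:
        x.pop()
    else:  # substitute (also used when the sequence is too short to delete)
        x[random.randrange(len(x))] = random.choice(vocab)
    return x

def estimation_error(x, log_p_model, log_p_oracle):
    """Error(x) = log p_theta(x) - log p_o(x); positive values indicate
    that the model overestimates the sequence relative to the oracle."""
    return log_p_model(x) - log_p_oracle(x)
```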
We first plot the estimation error map of the two models on the perturbed dataset in Figure 3. For each perturbed sample \(\mathbf{x}^{(i)}\), the oracle log probability \(\log p_{o}(\mathbf{x}^{(i)})\) is shown on the x-axis, the number of perturbations \(i\) performed is shown on the y-axis, and the estimation error \(\text{Error}(\mathbf{x}^{(i)})\) is reflected by the shade. From the figures on the left, we first restate LeBrun et al. (2022)'s finding that existing models underestimate real samples while overestimating degenerated samples. Next, by comparing the two figures, we observe that as the number of perturbations increases, \(p_{\text{TaiLr}}\) alleviates the overestimation phenomenon of \(p_{\text{MLE}}\), especially in the long tail. Finally, we draw cases from different regions in the figure to illustrate the implication for text degeneration. We first present a degenerated sample 1 in the top-right corner, which has low oracle probability. We then present two real data samples 2 and 3 that have the same model probability under \(p_{\text{TaiLr}}\) and \(p_{\text{MLE}}\) as the degenerated one 1. Although 3 is actually more probable than the degenerated sample 1 in the oracle distribution, MLE cannot distinguish between them, leading to degeneracy patterns during generation.
To quantify the overestimation caused by perturbation, we further define the maximum overestimation error over \(N\) perturbations as \(\max\limits_{i=1,\cdots,N}\text{Error}(\mathbf{x}^{(i)})\). To relate the overestimation problem to generation, we plot the maximum overestimation error averaged over samples grouped by similar length in Figure 4. Note that the average NLL of these degenerated samples for \(p_{\text{MLE}}\) and \(p_{\text{TaiLr}}\) is 11.09 and 10.82, respectively. For MLE, the overestimation error amplifies as the generation
Figure 3: Estimation error map of \(p_{\text{TaiLr}}\) (left) and \(p_{\text{MLE}}\) (right) on the perturbed dataset. We present examples from different regions to illustrate the estimation behavior of the two models.
length increases, while TaiLr keeps the error nearly constant. This result demonstrates that TaiLr alleviates MLE's tendency to sample degenerated texts as the generation length grows, by weighting the likelihood at each position of the sequence during training.
**Error accumulation analysis.** Finally, we analyze the error accumulation during autoregressive decoding. We follow Arora et al. (2022) and use the metric ExAccErr, which calculates the percentage of _excess_ errors due to the discrepancy between training (conditioning on contexts sampled from \(p_{o}\)) and inference (conditioning on contexts sampled from \(p_{\theta}\)), i.e., exposure bias. Detailed definitions borrowed from Arora et al. (2022) are provided in Appendix E.1.2. We find that the excess error of the MLE model (\(40.1\%\)) is substantially higher than that of the model trained with TaiLr (\(8.6\%\)), which demonstrates that TaiLr effectively reduces error accumulation during autoregressive decoding.
### Real-Data Experiments
In this subsection, we describe the empirical evaluation of TaiLr on a wide range of real-world language generation tasks, including: (1) **Machine Translation**: Given a sentence in the source language, the goal is to translate it into the target language. (2) **Text summarization**: Given a passage, the goal is to generate a short sentence that summarizes the main point of the passage. (3) **Long text generation**: Given a title, the goal is to generate a coherent long passage that conforms to the title. Statistics and sources of all datasets used in the experiments are provided in Appendix D.
Apart from MLE, we also consider the following typical baselines that propose new training objectives beyond MLE: (1) **Unlikelihood training** (Welleck et al., 2020) penalizes unlikely generations, e.g., token repetitions, through an auxiliary unlikelihood loss. (2) **D2GPo** (Li et al., 2020) proposes a data-dependent Gaussian prior objective that smooths the one-hot target distribution based on word embedding distance. (3) **Loss truncation** (Kang and Hashimoto, 2020) abandons a \(c\)-fraction of the training samples with the highest NLL, which heuristically optimizes distinguishability. (4) **GOLD** (Pang and He, 2021) learns from human demonstrations using the off-policy setting of Reinforcement Learning (RL). We choose to compare with GOLD-\(\delta\), which does not use scoring models with additional parameters, for a fair comparison. We use paired bootstrap resampling (Koehn, 2004) in all tasks for significance testing.
**Machine Translation.** We evaluate the proposed method on the widely used machine translation benchmark IWSLT14 De-En using the standard Transformer architecture (Vaswani et al., 2017). Training settings and detailed hyperparameters of different models are provided in Appendix E.2. The best checkpoint is selected based on the highest BLEU (Papineni et al., 2002) score on the development set. We use beam search with a beam size of 5 for decoding. In Table 2, we show the performance of our method and the baseline methods in terms of BLEU score. The results show that TaiLr achieves a higher BLEU score than MLE, which indicates that TVD effectively improves the generation quality over KLD. TaiLr also significantly outperforms other objectives that modify the MLE baseline.
**Text summarization.** We then test the proposed method on abstractive text summarization. We use the Annotated Gigaword corpus (Rush et al., 2015), as it is known to have noisy references due to annotation errors (Klebanov and Beigman, 2010; Kang and Hashimoto, 2020). As pre-trained Transformer models have achieved strong performance, we finetune the BART-base (Lewis et al., 2020) model with different methods and examine whether they still improve upon this strong baseline. More training details and hyperparameter settings are provided in Appendix E.3. We select the best checkpoint based on the highest ROUGE-L (Lin, 2004) score on the development set. During inference, we use beam search with a beam size of 5 and prohibit decoding repeated \(3\)-grams. We report the ROUGE-1/2/L scores on the test set of the Gigaword dataset in Table 3, where TaiLr outperforms all the baseline methods in terms of all evaluation metrics. The result demonstrates the effectiveness of our method in the realistic setting where noisy data pairs exist.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Dev BLEU & Test BLEU \\ \hline MLE & \(35.81^{\ddagger}\) & \(34.27^{\ddagger}\) \\ Unlikelihood & \(33.92^{\ddagger}\) & \(32.82^{\ddagger}\) \\ D2GPo & \(36.09^{\ddagger}\) & \(34.50^{\ddagger}\) \\ Loss truncation & \(35.63^{\dagger}\) & \(34.48^{\ddagger}\) \\ GOLD & \(35.74^{\ddagger}\) & \(34.68^{\dagger}\) \\ Tailr & **36.44** & **35.05** \\ \hline \hline \end{tabular}
\end{table}
Table 2: BLEU score comparison on the dev and test set of IWSLT14 De-En. \(\dagger/\ddagger\) means TaiLr is significantly better with p-value \(<0.05/0.01\).
**Long text generation.** Finally, we evaluate TaiLr on the task of long text generation to show its performance in open-ended generation. We evaluate on the WritingPrompts (Fan et al., 2018) dataset and leverage the generation ability of the pre-trained model by finetuning the BART-base model. More training details are provided in Appendix E.4. For evaluation, we sampled 1,000 titles from the test set following Ji and Huang (2021). We use Nucleus sampling (Holtzman et al., 2020) with \(p=0.95\) and restrict the maximum generation length to 1,024 subwords. For automatic evaluation, we use BLEU-\(n\) (B-\(n\)) to evaluate the \(n\)-gram overlap with the human reference, Distinct-\(n\) (Li et al., 2016) (D-\(n\)) to compute the ratio of unique \(n\)-grams, rep-\(l\) (Welleck et al., 2020) to calculate the repetition rate within a context window of \(l\), and Mauve (Pillutla et al., 2021), which assesses the distributional deviation between model-generated texts and human language by calculating the area under the divergence curve. As shown in Table 4, TaiLr outperforms the MLE baseline in terms of all metrics. For other baselines, Loss truncation abandons long samples with high NLL, leading to overly short generations and low n-gram overlap with the reference. GOLD tends to concentrate on very few modes in the target distribution, as discussed by Pang and He (2021), which causes low diversity and a large discrepancy from the distribution of human language.
**Ablation study and discussion.** We conduct an ablation study of adjusting \(\gamma\) on different tasks to show that its tendency and sensitivity interval vary across tasks. In Figure 5 in Appendix E.5, we present the result of adjusting \(\gamma\) on WritingPrompts on the left and observe that the highest Mauve score is achieved when \(\gamma\) is around \(10^{-5}\), while the performance quickly degrades as \(\gamma\) approaches 1. On the right of Figure 5, we observe that the best performance is achieved when \(\gamma\) is around \(0.1\) on IWSLT14 De-En, while either increasing or decreasing \(\gamma\) leads to a notable performance drop. From an empirical view, the scale of the best-performing \(\gamma\) is related to the intrinsic entropy of the dataset. For stable training, we require the estimation variance in equation (9) to be small, which leads to a small \(\gamma\) when the entropy of the data is high. Since the model generally has higher NLL on long text generation than on machine translation, the scale of the best \(\gamma\) is thereby shifted towards 0 on WritingPrompts. To further determine the sensitivity interval of \(\gamma\), we suggest tuning the scale of \(\gamma\) based on the average NLL on the training data, where the empirical principle is to make the weighting factor in equation (11) relatively large to stabilize training. Although simple, we argue that this parameter is crucial to the generality of the application, and we leave other solutions of dynamically adjusting or annealing this hyperparameter to future work.
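One possible way to encode this rule of thumb (our reading of the principle above, not a procedure prescribed by the method) is to check the weighting factor of equation (11) at the typical per-token probability implied by the average training NLL; the candidate values and threshold below are assumptions.

```python
import math

def tailr_weight(p, gamma):
    """Weighting factor of Eq. (11) at token probability p."""
    return p / (gamma + (1.0 - gamma) * p)

def plausible_gammas(avg_token_nll, candidates=(1e-5, 1e-3, 1e-1, 1.0),
                     min_weight=0.1):
    """Keep only the candidate gammas whose weighting factor at the typical
    token probability exp(-NLL) stays reasonably large, for stable training."""
    p_typical = math.exp(-avg_token_nll)
    return [g for g in candidates if tailr_weight(p_typical, g) >= min_weight]
```

Higher-entropy data (larger per-token NLL, as in open-ended generation) pushes the surviving candidates toward 0, matching the trend observed on WritingPrompts versus IWSLT14 De-En.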
## 5 Conclusion
In this work, we draw attention to the total variation distance (TVD), a robust alternative to KL divergence (KLD). We show that TVD addresses the zero-avoiding problem of KLD and mitigates overestimation of degenerated sequences, which in turn improves the overall generation quality. To apply TVD to the task of language generation, we derive practical upper bounds and introduce our Total Variation Guided Language Generation (TaiLr) objective, which balances the bias-variance tradeoff of estimating TVD with a tunable hyperparameter. Our experiments on synthetic data and real-data benchmarks demonstrate that TaiLr alleviates the overestimation problem and the error accumulation during autoregressive decoding, and improves the generation quality over competitive baselines beyond MLE on a wide range of language generation tasks.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & R-1 & R-2 & R-L \\ \hline MLE & \(38.24^{\ddagger}\) & \(19.12\) & \(35.70^{\dagger}\) \\ Unlikelihood & \(37.80^{\ddagger}\) & \(18.34^{\ddagger}\) & \(34.84^{\ddagger}\) \\ D2GPo & \(38.52^{\dagger}\) & \(18.92^{\ddagger}\) & \(35.64^{\ddagger}\) \\ Loss truncation & \(38.62\) & \(19.29\) & \(35.85^{\dagger}\) \\ GOLD & \(38.57^{\dagger}\) & \(19.27\) & \(35.79^{\dagger}\) \\ Tailr & **38.82** & **19.50** & **36.24** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Generation performance of different methods on the test set of the Gigaword dataset. \(\dagger/\ddagger\) means TaiLr is significantly better with p-value \(<0.05/0.01\).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & B-1 \(\uparrow\) & D-4 \(\uparrow\) & rep-8 \(\downarrow\) & Mauve \(\uparrow\) \\ \hline MLE & \(27.85\) & \(84.28\) & \(10.31^{\dagger}\) & \(56.42^{\ddagger}\) \\ Unlikelihood & \(27.88\) & \(85.46\) & \(10.06\) & \(59.35^{\ddagger}\) \\ D2GPo & \(22.73^{\ddagger}\) & \(84.10\) & \(10.04\) & \(53.35^{\ddagger}\) \\ Loss truncation & \(19.49^{\ddagger}\) & \(76.51^{\ddagger}\) & \(13.41^{\ddagger}\) & \(45.35^{\ddagger}\) \\ GOLD & \(25.25^{\ddagger}\) & \(46.98^{\ddagger}\) & \(28.23^{\ddagger}\) & \(15.44^{\ddagger}\) \\ Tailr & **28.62** & **85.56** & **9.73** & **64.64** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of automatic metrics on the test set of the WritingPrompts dataset. \(\uparrow/\downarrow\) means the higher/lower the better. \(\dagger/\ddagger\) means TaiLr is significantly better with p-value \(<0.05/0.01\).
## Acknowledgements
This work was supported by the Major Project of the New Generation of Artificial Intelligence (No. 2018AAA0102900). This work was also supported by the National Key Research and Development Program of China (No. 2021ZD0113304) and the National Science Foundation for Distinguished Young Scholars (with No. 62125604).
|
2310.03346 | Combining Datasets with Different Label Sets for Improved Nucleus
Segmentation and Classification | Segmentation and classification of cell nuclei in histopathology images using
deep neural networks (DNNs) can save pathologists' time for diagnosing various
diseases, including cancers, by automating cell counting and morphometric
assessments. It is now well-known that the accuracy of DNNs increases with the
sizes of annotated datasets available for training. Although multiple datasets
of histopathology images with nuclear annotations and class labels have been
made publicly available, the set of class labels differ across these datasets.
We propose a method to train DNNs for instance segmentation and classification
on multiple datasets where the set of classes across the datasets are related
but not the same. Specifically, our method is designed to utilize a
coarse-to-fine class hierarchy, where the set of classes labeled and annotated
in a dataset can be at any level of the hierarchy, as long as the classes are
mutually exclusive. Within a dataset, the set of classes need not even be at
the same level of the class hierarchy tree. Our results demonstrate that
segmentation and classification metrics for the class set used by the test
split of a dataset can improve by pre-training on another dataset that may even
have a different set of classes due to the expansion of the training set
enabled by our method. Furthermore, generalization to previously unseen
datasets also improves by combining multiple other datasets with different sets
of classes for training. The improvement is both qualitative and quantitative.
The proposed method can be adapted for various loss functions, DNN
architectures, and application domains. | Amruta Parulekar, Utkarsh Kanwat, Ravi Kant Gupta, Medha Chippa, Thomas Jacob, Tripti Bameta, Swapnil Rane, Amit Sethi | 2023-10-05T06:56:54Z | http://arxiv.org/abs/2310.03346v1 | # Combining Datasets with Different Label Sets for Improved Nucleus Segmentation and Classification
###### Abstract
Segmentation and classification of cell nuclei in histopathology images using deep neural networks (DNNs) can save pathologists' time for diagnosing various diseases, including cancers, by automating cell counting and morphometric assessments. It is now well-known that the accuracy of DNNs increases with the sizes of annotated datasets available for training. Although multiple datasets of histopathology images with nuclear annotations and class labels have been made publicly available, the set of class labels differ across these datasets. We propose a method to train DNNs for instance segmentation and classification on multiple datasets where the set of classes across the datasets are related but not the same. Specifically, our method is designed to utilize a coarse-to-fine class hierarchy, where the set of classes labeled and annotated in a dataset can be at any level of the hierarchy, as long as the classes are mutually exclusive. Within a dataset, the set of classes need not even be at the same level of the class hierarchy tree. Our results demonstrate that segmentation and classification metrics for the class set used by the test split of a dataset can improve by pre-training on another dataset that may even have a different set of classes due to the expansion of the training set enabled by our method. Furthermore, generalization to previously unseen datasets also improves by combining multiple other datasets with different sets of classes for training. The improvement is both qualitative and quantitative. The proposed method can be adapted for various loss functions, DNN architectures, and application domains.
Cell nuclei, classification, histopathology, segmentation.
## I Introduction
Histopathology is the practice of preparing tissue slides and examining them to identify visual signs and grades of various diseases, including cancers. A surgical or biopsied tissue sample is fixed, embedded, sliced, mounted on a glass slide, and stained most commonly with hematoxylin and eosin (H&E) to highlight various tissue components. A slide thus prepared is either observed using a high-powered microscope or scanned as a gigapixel whole slide image (WSI). Tissue abnormalities can be identified using visual features, such as nucleus-to-cytoplasm ratio, nuclear pleomorphism, and counts of various types of cells. Histopathological examination usually relies on nuclear details for estimating these features, as the cell (cytoplasmic) boundaries are not easy to identify in H&E-stained samples. Automating instance segmentation and classification of nuclei using deep neural networks (DNNs), such as HoVerNet [1] and StarDist [2], can bring efficiency and objectivity to several types of histological diagnostic and prognostic tasks.
Challenges to accurate segmentation and classification of nuclei using DNNs include intra-class variation, inter-class feature (e.g. size) overlap, the presence of physically overlapping nuclei in certain disease conditions, and the need for domain generalization. Domain differences are present due to the diversity of nuclear shapes and sizes across organs and diseases, as well as the variance in slide staining protocols and reagents and digital scanner or camera characteristics across pathology labs. Because DNNs can be scaled to generalize better with more diverse and larger datasets, it is necessary to accurately annotate and label multiple large datasets for their training. In the last few years, several annotated histological datasets have been released that differ in the sets of nuclear
class labels, magnification, source hospitals, scanning equipment, organs, and diseases. For instance, while the PanNuke dataset covers 19 organs with semi-automated annotation of five nuclear classes \(-\) neoplastic, non-neoplastic epithelial, inflammatory, connective, dead [3], MoNuSAC covers four organs with the following four nuclear classes \(-\) epithelial, lymphocytes, macrophages, and neutrophils [4]. While most of this input diversity is beneficial for training generalized DNNs, combined training across datasets with different sets of class labels remains a challenge. Existing methods to train DNNs on multiple datasets whose class label sets differ are not satisfactory. For instance, transfer [7] and multi-task learning [8] do not train the last (few) layer(s) of a DNN on more than one dataset.
We propose a method to train DNNs for instance segmentation and classification over multiple related datasets for the same types of objects that have different class label sets. Specifically, we make the following contributions. (1) We propose a method to modify a wide variety of loss functions used for segmentation and classification. (2) The method is applicable whenever the class label sets across the datasets can be expressed as a part of a common coarse-to-fine class hierarchy tree. That is, the method can jointly utilize multiple datasets of the same types of objects wherein some datasets may have labels for finer sub-classes while others may have labels for coarser super-classes, or a mix of these, from the same class hierarchy tree. Apart from this type of relation among datasets, the method has no other constraints. That is, it can be used to train a wide variety of DNNs for instance segmentation and classification for various types of objects of interest, although we used the segmentation of nuclei in histopathology using StarDist [2] as a case study. (3) We demonstrate quantitative and qualitative improvements in nuclear segmentation and classification test accuracy using the proposed method to train on multiple datasets with different class label sets. (4) We also show that using multiple datasets in this way improves domain generalization on a previously unseen dataset.
## II Datasets, Background, and Related Work
In this section, we review nucleus segmentation datasets and methods, and previous attempts to combine knowledge from multiple datasets.
### _Nucleus segmentation datasets_
Over the last few years, several datasets with careful annotations and labeling of cell nuclei have been released to the public to enable research on better instance segmentation and classification models. Some notable datasets are shown in Table I. These datasets meet our goals as they contain images with more nuclear details at 40x magnification and contain labels for nuclei from multiple classes, unlike, for example, MoNuSeg [9] or CryoNuSeg [10].
### _Nucleus instance segmentation and classification methods_
Over the years several DNN architectures have been developed to segment nuclei. These either use state-of-the-art image classification DNNs, such as ResNets [11], VGGnets [12], and EfficientNet [13] as backbones for feature extraction or finetune derivatives of U-Net [14]. But owing to their poor generalizability and adaptability over a task as specialized as nucleus segmentation, their usage as backbone architectures has recently decreased. These have been replaced by the development of combination architectures (fusion of multiple networks) and specialized architectures. For instance, HoVerNet [1] was proposed to predict whether a pixel location is inside a nucleus and its horizontal and vertical distances from the nuclear boundary. This concept has been generalized to predict multi-directional distance using StarDist [2]. These architectures are specifically designed for histological images with overcrowded nuclei and have demonstrated state-of-the-art results compared to previous methods, such as mask R-CNN [15] or nucleus boundary mapping [9].
### _Previous attempts to use multiple datasets_
In order to combine knowledge from multiple datasets, transfer and multi-task learning have been proposed for natural
\begin{table}
\begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Dataset & Classes & Organs & Mag. & Nuclei & Images & Img. Size \\ \hline PanNuke [3] & 5: Inflammatory, Neoplastic, Dead, Connective, Non-neoplastic Epithelial & Bladder, Ovary, Pancreas, Thyroid, Liver, Testis, Prostate, Stomach, Kidney, Adrenal gland, Skin, Head \& Neck, Cervix, Lung, Uterus, Esophagus, Bile-duct, Colon, Breast & 40x & 216,345 & 481 & 224x224 \\ \hline MoNuSAC [4] & 4: Epithelial, lymphocytes, macrophages, neutrophils & 4: Breast, Kidney, Liver, Prostate & & & & \\ \hline CoNSeP [5] & 7: Healthy Epithelial, Inflammatory, Muscle, Fibroblast, Dysplastic/Malignant Epithelial, Endothelial, Other & 1: Colon & 40x & 46,909 & 310 & 82x35 to 142x2162 \\ \hline & 4: Basal-like, Mesenchymal, Endothelial, luminal androgen receptor, immune-enriched & 1: Breast & 40x & 4,022 & 50 & 512x512 \\ \hline \end{tabular}
\end{table} TABLE I: Characteristics of notable nucleus segmentation and classification datasets
and medical images. For instance, [16] proposes a transfer learning technique using the MedCLNet database. DNNs were pre-trained through the proposed method and were used to perform classification on the colorectal histology MNIST dataset. The GSN-HVNET [17] was proposed with an encoder-decoder structure for simultaneous segmentation and classification of nuclei, and was pre-trained on the PanNuke dataset.
Although coarse-to-fine class structure has been exploited for knowledge transfer in other domains [18], it has not been used in medical datasets for increasing the available data for training or for domain generalization. All the methods described so far have only dealt with the scenario of carrying out segmentation and classification by splitting the same dataset into training and testing, or using the same set of classes across training and testing. At best, transfer learning was carried out where only the lower pre-trained layers were retained and new upper layers were randomly initialized and trained on target datasets. There are no loss functions or training methods that can train all layers of a DNN on multiple datasets and also exploit those datasets for cross-domain (dataset) generalization of segmentation and classification.
## III Proposed Method
We propose a method to train DNNs for segmentation and classification on multiple datasets with related but potentially different class label sets. We assume that the class label sets across the datasets are different cuts of the same class hierarchy tree. Within each dataset, the class labels are mutually exclusive, but need not be collectively exhaustive. An example of a class hierarchy tree with different cuts for labels for three different datasets is given in Figure 1, where nuclei can be divided into four super-classes, which in turn can be divided into 11 sub-classes. Deeper and wider hierarchies can also be used. Class label sets that are not a part of a common class hierarchy tree are out of the scope of this work.
Our key idea is to modify a class of loss functions whose computation involves sums over predicted and ground truth class probability terms in conjunction with sums over instances or pixels. This description covers a wide array of loss functions, including cross entropy, Dice loss, focal loss, Tversky loss, focal Tversky loss [19]. As an astute reader might have guessed by now, we propose to sum the predicted probabilities of fine-grained sub-classes when the class label can only be given for their coarser super-class. The set of finer sub-classes to be combined using such a method of loss computation can even dynamically change from dataset-to-dataset, epoch-to-epoch, batch-to-batch, or even instance-to-instance. To keep things simple, we first train the model on one dataset for a few epochs, and then train it on a second dataset for the remaining epochs.
This method is also applicable to any DNN architecture or application domain (e.g., natural images) that can be trained using these losses. As a case study, we use it to modify cross entropy and focal Tversky loss functions [20] to train a UNet-based StarDist DNN [2] for H&E-stained histopathology nucleus segmentation and classification on MoNuSAC, PanNuke, and CoNSeP datasets.
Although this method can be extended to multiple levels, for simplicity of explanation we will assume that a class label can be at one of the two levels - a super-class or a sub-class. We design a neural architecture that makes predictions at the finest level of the hierarchy, which is the set of all sub-classes (plus background) in this case. When the label for a training instance is available at the super-class level, we add the predicted probabilities of its sub-classes, and update their weights with an equal gradient, as should be done backwards of a sum node. This way, the weights leading to the prediction of all sub-classes are trained even when only the super-class label is available. The gradient and output obtained from this approach is at the finest (sub-class) level, but we interpret the
Fig. 1: The hierarchy of cell nucleus classes and their correspondence to the three label sets of the datasets used in this study
results for a dataset only for its corresponding training label set. That is, we do not assess sub-class level performance when only super-class labels are available, even though we train the DNN to predict at the sub-class level. On the other hand, when we come across a training instance where a sub-class label is available, we skip the sum-based merging of probability masses. In this case, class-specific weight update and the interpretation of predictions proceeds in the usual fashion.
Consider the cross entropy loss for a fixed set of class labels:
\[L_{CE}=-\sum_{i=1}^{n}\sum_{j=1}^{c}t_{ij}\log(y_{ij}), \tag{1}\]
where \(n\) is the number of training instances, \(c\) is the number of classes, \(t_{ij}\) are one-hot labels, and \(y_{ij}\) are the predicted class probabilities such that \(\forall i\sum_{j}t_{ij}=1,\sum_{j}y_{ij}=1\). In case a subset of classes belong to a super-class \(k\) denoted by \(j\in S_{k}\), then we modify Equation 1 as follows:
\[L_{MCE}=-\sum_{i=1}^{n}\sum_{k=1}^{m}t_{ik}\log\left(\sum_{j\in S_{k}}y_{ij} \right), \tag{2}\]
where \(t_{ik}\) is a binary indicator label for the super-class \(k\), and \(m\) is the size of the class label set. That is, \(t_{ik}=\sum_{j\in S_{k}}t_{ij}\), but the individual terms \(t_{ij}\) may not be known in the given labels. As is clear from Equation 2, although for notational simplicity the sum over classes runs over the super-classes enumerated by \(k\) at the same level, the modification applies independently to each branch of the class hierarchy tree (see Figure 1 for an example), as was done in our implementation. Additionally, it is also clear that the method can be extended to deeper and wider hierarchy trees with label sets that are arbitrary cuts of the tree.
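As an illustration, a minimal PyTorch sketch of Equation 2 is given below; the mapping from each label to the indices of its sub-classes (one cut of the hierarchy in Figure 1) is assumed to be supplied per dataset, and a label given directly at the sub-class level is simply the special case of a singleton index set.

```python
import torch

def merged_cross_entropy(probs, super_labels, subclass_indices, eps=1e-8):
    """Modified cross entropy (Eq. 2).
    probs: (n, c) softmax outputs over the finest-level classes.
    super_labels: (n,) integer label tensor at whatever level each instance has.
    subclass_indices: dict mapping a label k to the list of sub-class ids S_k."""
    losses = []
    for i, k in enumerate(super_labels.tolist()):
        p_super = probs[i, subclass_indices[k]].sum()  # sum over j in S_k
        losses.append(-torch.log(p_super + eps))
    return torch.stack(losses).mean()
```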
We next consider a slightly more complex loss - the focal Tversky loss [19]:
\[L_{FT}=\sum_{i=1}^{n}\left(1-\frac{\sum_{j=1}^{c}\left(t_{ij}y_{ij}+\epsilon \right)}{\alpha\sum_{j=1}^{c}t_{ij}+(1-\alpha)\sum_{j=1}^{c}y_{ij}+\epsilon} \right)^{\gamma}, \tag{3}\]
where \(\epsilon\) is a small constant to prevent division by \(0\), and \(\alpha>0,\gamma>0\) are hyper-parameters. Following the same principles as used to propose the loss in Equation 2, we now propose a modified focal Tversky loss:
\[L_{MFT}=\sum_{i=1}^{n}\left(1-\frac{\sum_{k=1}^{m}\left(t_{ik}\sum_{j\in S_{k}}y_{ij}+\epsilon\right)}{\alpha\sum_{k=1}^{m}t_{ik}+(1-\alpha)\sum_{k=1}^{m}\sum_{j\in S_{k}}y_{ij}+\epsilon}\right)^{\gamma}. \tag{4}\]
Once again, it is clear that \(L_{MFT}\) can also be modified to be applied independently to each branch and sub-branch of a class hierarchy tree, including labels at different levels of the tree that are in different branches.
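A corresponding sketch of Equation 4 pools sub-class probabilities through a binary grouping matrix; the values \(\alpha=0.7\) and \(\gamma=0.75\) are common focal Tversky defaults used only as placeholders here, and \(\epsilon\) is added once per instance rather than inside the sum over \(k\).

```python
import torch

def merged_focal_tversky(probs, super_onehot, group_matrix,
                         alpha=0.7, gamma=0.75, eps=1e-6):
    """Modified focal Tversky loss (Eq. 4).
    probs: (n, c) predictions over sub-classes.
    super_onehot: (n, m) binary labels t_ik at the super-class level.
    group_matrix: (c, m) float matrix with G[j, k] = 1 iff sub-class j
    belongs to super-class k, so probs @ G sums sub-class probabilities."""
    p_super = probs @ group_matrix                                    # (n, m)
    num = (super_onehot * p_super).sum(dim=1) + eps
    den = (alpha * super_onehot.sum(dim=1)
           + (1.0 - alpha) * p_super.sum(dim=1) + eps)
    return ((1.0 - num / den) ** gamma).sum()
```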
In our implementation of nuclear instance segmentation and classification, we used a positive combination of \(L_{MCE}\) and \(L_{MFT}\) with suitable modifications of Equations 2 and 4 to handle classes at different levels, as shown in Figure 1. These are modified versions of the losses used in the original implementation of StarDist [2] for accurate classification while dealing with class imbalance.
## IV Experiments and Results
We tested two specific hypotheses in our experiments. Firstly, we hypothesized that, using the proposed method, pre-training on a related dataset A with class labels derived from the same class hierarchy tree as that of a target dataset B can improve the instance segmentation and classification metrics on the held-out test cases of dataset B compared to training only on dataset B. Secondly, we hypothesized that, using the proposed method, domain generalization to a previously unseen dataset C can improve when the model is trained on dataset A and dataset B, as compared to training only on dataset B, where the label sets for the three datasets may be different but are derived from the same class hierarchy tree. For experiments to confirm either hypothesis, we did not discard the last (few) layer(s) after training on dataset A, as is done in transfer learning and multi-task learning. We trained, retained, and re-trained the same last layer by using the proposed adaptive loss functions.
Testing these hypotheses required us to select a test bench, which comprised the following datasets, metrics, DNN architectures, pre-processing methods, training methods and loss functions.
### _Datasets used_
Due to their large size, 40x magnification with clear nuclear details, and a minimal overlap in nuclear classes, we selected three datasets for our experiments - the Multi-organ Nuclei Segmentation And Classification (MoNuSAC) [4] dataset, the PanNuke dataset [3], and the Colorectal Nuclear Segmentation and Phenotypes (CoNSeP) dataset [21]. More details about these datasets can be found in Section II-A.
### _Test metric_
Due to its integrated evaluation of instance segmentation and classification, we used panoptic quality (PQ) [22] to assess our results, which is expressed as follows:
\[PQ=\frac{\Sigma_{(p,g)\in TP}IOU(p,g)}{|TP|+0.5|FP|+0.5|FN|}, \tag{5}\]
where \(p\) is a predicted segment, \(g\) is the corresponding ground truth segment, \(FP\) are false positive predictions, \(FN\) are false negative predictions, \(TP\) are true positive predictions, and \(IOU\) refers to the intersection-over-union metric. This metric is now widely used for assessing nucleus segmentation and classification.
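Given the matched (true positive) pairs and the counts of unmatched segments, Equation 5 reduces to the short computation below; the IoU-based matching itself (typically at a 0.5 threshold) is assumed to be performed beforehand.

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Panoptic quality (Eq. 5). matched_ious is the list of IoU values of
    matched predicted/ground-truth segment pairs, i.e. the true positives."""
    tp = len(matched_ious)
    denominator = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denominator if denominator > 0 else 0.0
```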
### _DNN architecture_
We used an instance segmentation and classification architecture used in [23] (which is a modification of the StarDist [2] model) because it has specific training procedures and post-processing steps for H&E-stained histology images. It also gives enhanced object localization, leading to higher precision in segmentation, especially of overlapping or closely
located nuclei. Additionally, its code repository (made publicly available under BSD license) allowed us to customize the training loss, shape prior, and augmentation techniques.
The architecture consists of a UNet-based backbone network, which can be either a standard UNet [24] or a similar or derived backbone. After the backbone, additional convolutional layers predict a probability map that gives instance segmentation and class probabilities. Additionally, for each pixel it predicts distances to the nuclear boundary along multiple directions (hence the name StarDist) to form a polygon.
### _Data preprocessing_
Patches of size 256x256 were extracted from each dataset. Smaller images were appropriately padded. Some patches were overlapping, while others were cut off to fit within 256x256. Images had three channels corresponding to RGB. The ground truth masks had two channels: the first was the instance segmentation map, ranging from 0 to the number of nuclei, and the second was the classification map, ranging from 0 to the number of classes in the dataset's class label set.
Due to environmental conditions and staining time, histopathology images sometimes suffer from staining variability of the different dyes, such as hematoxylin and eosin, that are used to stain the nuclei and the background. This can make it difficult for DNNs to generalize. To combat staining variability, random brightness, hue, and saturation augmentations were performed on the images. To combat class imbalance, geometric augmentations (90-degree flips and rotations) and elastic augmentations were performed more frequently on the less populated classes.
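A minimal NumPy sketch of the patching step (zero-padding smaller images up to a multiple of 256 and tiling non-overlapping patches) is given below; the overlapping-patch handling and the stain, geometric, and elastic augmentations described above are omitted for brevity.

```python
import numpy as np

def extract_patches(image, mask, size=256):
    """image: (H, W, 3) RGB array; mask: (H, W, 2) array holding the
    instance map and the class map. Returns a list of (image, mask) patches."""
    h, w = image.shape[:2]
    pad_h, pad_w = (-h) % size, (-w) % size
    image = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)))
    mask = np.pad(mask, ((0, pad_h), (0, pad_w), (0, 0)))
    patches = []
    for y in range(0, image.shape[0], size):
        for x in range(0, image.shape[1], size):
            patches.append((image[y:y + size, x:x + size],
                            mask[y:y + size, x:x + size]))
    return patches
```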
### _Training and Testing details_
We followed the same training approach as described previously [23]. The loss function used was a combination of the modified cross entropy (Equation 2) and the modified focal Tversky loss (Equation 4). The optimizer used was Adam. We monitored the validation loss for early stopping. Once we finished training the model on one dataset (dataset A) using one instantiation of the modified loss function for a few epochs, we further trained (fine-tuned) the same model - without adding or removing any layers or weights - on the second dataset (dataset B) for a few more epochs by adapting the loss to the second dataset. The method is flexible enough to take training instances from multiple datasets down to the batch level, but we simplified the procedure to keep the training consistent at an episode (group of epochs) level, where only one dataset was used for training per episode.
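Schematically, the episode-level procedure can be written as below; the loader, loss, and epoch-count names are placeholders, and each loss is an instantiation of the combined modified losses adapted to the label set of its dataset (early stopping on the validation loss is omitted).

```python
def train_in_episodes(model, episodes, optimizer):
    """episodes: list of (data_loader, loss_fn, num_epochs), one per dataset;
    the same model, including its last layers, is trained in every episode."""
    for data_loader, loss_fn, num_epochs in episodes:
        for _ in range(num_epochs):
            for images, targets in data_loader:
                optimizer.zero_grad()
                predictions = model(images)
                loss = loss_fn(predictions, targets)
                loss.backward()
                optimizer.step()

# e.g. episodes = [(pannuke_loader, pannuke_loss, 250), (consep_loader, consep_loss, 75)]
```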
### _Results on test subsets_
Table II summarizes the results of testing the first hypothesis that test results can improve by pre-training on another dataset using the proposed method. Pre-training on another dataset and then fine-tuning for a small number of epochs on the target dataset gave better results than training only on the target dataset for all three target datasets, in all but one configuration (pre-training on CoNSeP before fine-tuning on PanNuke). Thus, our model is able to learn from both datasets even if the labels of the pre-training dataset are different from those of the fine-tuning dataset. Additionally, these results compare favorably with the state-of-the-art for training and testing on various splits of a single dataset [2].
A sample of qualitative results shown in Figure 2 also shows better overlap between predicted nuclei and annotations for test images when multiple training datasets are used for training using our method as compared to training on a single dataset.
It is worth noting that the improvement is more pronounced when the pre-training dataset is more generalized and has a super-set of classes and organs compared to the target dataset. For example, the PanNuke dataset contains most of the cell classes. Thus, pre-training on PanNuke and then fine-tuning on other, more specialized datasets gives a significant improvement in the predictions on those datasets. Pre-training on a smaller specialized dataset like CoNSeP will not benefit the model much when it is fine-tuned on a broader dataset like PanNuke. Based on this observation and reasoning, the most general dataset in terms of labels can be chosen for pre-training by surveying the classes of the available open-source datasets.
### _Evolution of loss upon switching the dataset_
Figure 3 shows an example of the evolution of the training and validation losses as the training progressed for the MoNuSAC dataset as the target dataset. When trained only on MoNuSAC (case (a)), the model starts to overfit as it can be seen that the validation loss starts to increase. However, when pretrained on PanNuke (case (b)), the validation loss shows a marked further drop when the dataset is switched to the training subset of MoNuSAC as compared to that of case (a). This shows the utility of pre-training using our method.
### _Results on domain generalization_
To test the second hypothesis that domain generalization can improve by training on multiple datasets, we trained the model on the first dataset while monitoring its validation loss to prevent overfitting. After this, we fine-tuned the model on a second dataset. Then we tested on a third dataset, which did not contribute to the training at all. Table III summarizes the results of this experiment. Pre-training on a dataset and then fine-tuning for a small number of epochs on another dataset gives better results on an unseen dataset as compared
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Pre-Train** & **Epochs** & **Fine-tune** & **Epochs** & **Test** & **PQ** \\ \hline CoNSeP & 100 & - & 0 & CoNSeP & 0.5404 \\ MoNuSAC & 175 & CoNSeP & 75 & CoNSeP & **0.555** \\ PanNuke & 250 & CoNSeP & 75 & CoNSeP & **0.5707** \\ \hline MoNuSAC & 175 & - & 0 & MoNuSAC & 0.5789 \\ CoNSeP & 100 & MoNuSAC & 130 & MoNuSAC & **0.5871** \\ PanNuke & 250 & MoNuSAC & 130 & MoNuSAC & **0.6018** \\ \hline PanNuke & 250 & - & 0 & PanNuke & 0.6095 \\ CoNSeP & 100 & PanNuke & 187 & PanNuke & 0.6056 \\ MoNuSAC & 175 & PanNuke & 187 & PanNuke & 0.6102 \\ \hline \end{tabular}
\end{table} TABLE II: Quantitative results on test splits
to training only on the first dataset. Thus, our model is able to consolidate the knowledge of two datasets and show improvement in a domain generalization task.
A sample of qualitative results shown in Figure 4 also shows better overlap between predicted nuclei and annotations for images from an unseen dataset when multiple training datasets are used for training using our method as compared to training on a single dataset.
We can observe that a more pronounced improvement occurs when the fine-tuning dataset is more generalized and has a super-set of classes and organs compared to the other datasets. We must take care not to use the most generalized dataset (with a superset of classes) for pre-training, because on fine-tuning with a more specialized dataset, the model loses its accuracy on the unseen dataset instead of benefiting from the fine-tuning. For example, CoNSeP and MoNuSAC are more specialized datasets with classes that have less overlap, but their classes are both subsets of the classes present in PanNuke. In this case, using CoNSeP to fine-tune the model that was pre-trained on PanNuke will lead to decreased performance on MoNuSAC. The most general dataset in terms of labels can thus be chosen by surveying the classes of the available open-source datasets.
## V Conclusion and Discussion
In this paper, we have proposed a method to combine multiple datasets with different class labels for segmenting and classifying nuclei. We achieved this by creating a hierarchical
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Pre-Train** & **Epochs** & **Fine-tune** & **Epochs** & **Test** & **PQ** \\ \hline CoNSeP & 100 & - & 0 & MoNuSAC & 0.4333 \\ CoNSeP & 100 & PanNuke & 62 & MoNuSAC & **0.5631** \\ \hline CoNSeP & 100 & - & 0 & PanNuke & 0.4326 \\ CoNSeP & 100 & MoNuSAC & 43 & PanNuke & **0.4342** \\ \hline MoNuSAC & 175 & - & 0 & CoNSeP & 0.3444 \\ MoNuSAC & 175 & PanNuke & 62 & CoNSeP & **0.4485** \\ \hline MoNuSAC & 175 & - & 0 & PanNuke & 0.3955 \\ MoNuSAC & 175 & CoNSeP & 25 & PanNuke & **0.4048** \\ \hline \end{tabular}
\end{table} TABLE III: Quantitative results for domain generalization
Fig. 2: A qualitative sample of test split results.
class label tree to relate the class label sets of different datasets to each other as various cuts of the same tree. We devised a way to combine the losses of the sub-classes, allowing us to train models sequentially on multiple datasets even when the labels are available at a coarser super-class level for some classes and datasets. We demonstrated improved results on test splits and unseen domains (datasets). Our technique can be adapted to other loss functions that involve sums of class probabilities and binary labels, such as focal loss. The principle can also be applied to other application domains (data types), DNN architectures, and tasks such as object detection, in settings where different datasets have different, partially overlapping label sets. The method thus has scope for many further applications and merits continued exploration.
|
2301.02719 | SDSS DR17: The Cosmic Slime Value Added Catalog | The "cosmic web", the filamentary large-scale structure in a cold dark matter
Universe, is readily apparent via galaxy tracers in spectroscopic surveys.
However, the underlying dark matter structure is as of yet unobservable and
mapping the diffuse gas permeating it lies beyond practical observational
capabilities. A recently developed technique, inspired by the growth and
movement of Physarum polycephalum "slime mold", has been used to map the cosmic
web of a low redshift sub-sample of the SDSS spectroscopic galaxy catalog. This
model, the Monte Carlo Physarum Machine (MCPM) was shown to promisingly
reconstruct the cosmic web. Here, we improve the formalism used in calibrating
the MCPM to better recreate the Bolshoi-Planck cosmological simulation's
density distributions and apply them to a significantly larger cosmological
volume than previous works using the Sloan Digital Sky Survey (SDSS, $z < 0.1$)
and the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) Luminous Red
Galaxy (LRG, $z \lesssim 0.5$) spectroscopic catalogs. We present the "Cosmic
Slime Value Added Catalog" which provides estimates for the cosmic overdensity
for the sample of galaxies probed spectroscopically by the above SDSS surveys.
In addition, we provide the fully reconstructed 3D density cubes of these
volumes. These data products were released as part of Sloan Digital Sky Survey
Data Release 17 and are publicly available. We present the input catalogs and
the methodology for constructing these data products. We also highlight
exciting potential applications to galaxy evolution, cosmology, the
intergalactic and circumgalactic medium, and transient phenomenon localization. | Matthew C. Wilde, Oskar Elek, Joseph N. Burchett, Daisuke Nagai, J. Xavier Prochaska, Jessica Werk, Sarah Tuttle, Angus G. Forbes | 2023-01-06T21:22:50Z | http://arxiv.org/abs/2301.02719v1 | # SDSS DR17: The Cosmic Slime Value Added Catalog
###### Abstract
The "cosmic web", the filamentary large-scale structure in a cold dark matter Universe, is readily apparent via galaxy tracers in spectroscopic surveys. However, the underlying dark matter structure is as of yet unobservable and mapping the diffuse gas permeating it lies beyond practical observational capabilities. A recently developed technique, inspired by the growth and movement of _Physarum polycephalum_'slime mold', has been used to map the cosmic web of a low redshift sub-sample of the SDSS spectroscopic galaxy catalog. This model, the _Monte Carlo Physarum Machine_ (MCPM) was shown to promisingly reconstruct the cosmic web. Here, we improve the formalism used in calibrating the MCPM to better recreate the Bolshoi-Planck cosmological simulation's density distributions and apply them to a significantly larger cosmological volume than previous works using the Sloan Digital Sky Survey (SDSS, \(z<0.1\)) and the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) Luminous Red Galaxy (LRG, \(z\lesssim 0.5\)) spectroscopic catalogs. We present the 'Cosmic Slime Value Added Catalog' which provides estimates for the cosmic overdensity for the sample of galaxies probed spectroscopically by the above SDSS surveys. In addition, we provide the fully reconstructed 3D density cubes of these volumes. These data products were released as part of Sloan Digital Sky Survey Data Release 17 and are publicly available. We present the input catalogs and the methodology for constructing these data products. We also highlight exciting potential applications to galaxy evolution, cosmology, the intergalactic and circumgalactic medium, and transient phenomenon localization.
## 1 Introduction
The cosmic web is an emergent prediction of \(\Lambda\)CDM cosmology and is ubiquitously reproduced and readily identifiable in cosmological simulations, where the underlying density distribution is known (e.g., Springel et al., 2005; Vogelsberger et al., 2014). However, unveiling the large-scale structure in the observational realm using galaxies and absorption lines as tracers of the intergalactic medium (IGM) is much less straightforward. The underlying dark matter distribution remains unobservable. The two most accessible tracers, galaxies and quasar absorption lines, are limited by the practical observational constraints of galaxy redshift surveys and the scarcity of quasars in the universe, respectively. Even when observational tracers are available at relatively high sampling densities, the problem of reconstructing the cosmic web is highly complex.
We highlight two of the myriad scientific motivations for cosmic web reconstruction. First, of paramount concern in galaxy astrophysics is the impact of a galaxy's environment on its evolution. Correlations between environmental metrics and galaxy properties, such as morphology (e.g., Dressler, 1980), color (e.g., Abell, 1965), and star formation (e.g., Balogh et al., 1999; Peng et al., 2010), have been known about for many decades, but the physical mechanisms and their relative importance remain heavily pursued problems. Galaxy-environment analyses typically fall along one of two paths: local environment-centric or large-scale environment-centric. In the former, one employs an environmental density metric, such as a nearest-neighbor distance or density within some aperture (Kauffmann et al., 2004; Peng et al.,
2010), or galaxies are associated with a local group or cluster environment (Yang et al., 2007; Berlind et al., 2006) and galaxy properties are studied with respect to the properties of the group or cluster (Carollo et al., 2013; Catinella et al., 2013; Poggianti et al., 2009).
The latter path is less straightforward, as one must infer the large-scale structure from tracers, typically the galaxies themselves, and correlate galaxies back to that structure in some way. Various methods have been devised to reconstruct the cosmic web from discrete tracers. Libeskind et al. (2018) reviewed a number of these, and we refer the reader to this valuable resource for an overview of the techniques employed and comparisons between them. Once the underlying density field is inferred, one can correlate galaxy properties with this density field (an approach one can directly employ with the catalog described here) or attempt to geometrically relate a galaxy's position to the structure identified, e.g., the distance to a filament. One should appreciate that filament identification (e.g., DisPerSE; Luber et al., 2019; Tempel et al., 2014), whether from a density field or some other methodology, is a separate problem from the inference of the field itself.
Studies of galaxy properties and their dependence on the cosmic environment report mixed results. Kuutma et al. (2017) find a higher elliptical-to-spiral ratio and decreasing star formation rate (SFR) towards filament spines. Similarly, Crone Odekon et al. (2018) report that, at fixed stellar mass, galaxies closer to filaments or in higher density environments are more deficient in HI. These large-scale environmental correlations with galaxies have also been investigated using modern hydrodynamical cosmological simulations. Codis et al. (2018) measure the spin-filament alignment in IllustrisTNG (Vogelsberger et al., 2014) and find a strong dependence on spin alignment with galaxy mass. Pasha et al. (2022) find that the collapse of large-scale structure into sheets at higher redshifts (\(z\sim 3\)) can create shocks that explain quenching in dwarf galaxies similar to the effects seen in the presence of clusters and groups.
Second, in addition to the galaxies themselves, the IGM studied in context with the cosmic web environment can yield important insight. Wakker et al. (2015) measured the Ly\(\alpha\) absorption in quasar spectra probing a foreground visually identified filament, finding increasing absorber equivalent width and linewidth with decreasing projected distance to the center of the filament. With a larger archival sample of QSOs and filaments, Bouma et al. (2021) find similar results, with Ly\(\alpha\) absorbers showing both greater incidence and column density at a small projected distance and velocity offsets from filaments first identified by Courtois et al. (2013). In the first application of the reconstruction framework we use here, Burchett et al. (2020) analyzed the Ly\(\alpha\) optical depth as a function of cosmic web density probed by QSO sightlines. They found three distinct regimes: (1) a void regime at low matter overdensity with no detected absorption, (2) an onset of absorption in the outer skins of filaments with monotonically increasing optical depth, and (3) the highest-density regime where the absorption no longer increases with local density but rather turns over and declines at the highest densities. Associating the IGM to the cosmic web provides important constraints on hydrodynamical processes modeled in cosmological simulations that may be used to interpret the environmental quenching conundrums.
In this manuscript, we employ the novel method first introduced in Burchett et al. (2020) described in detail by Elek et al. (2022), which is based on the morphology of the _Physarum polycephalum_ slime mold organism to map the cosmic density field. This model implicitly traces the cosmic web structure by efficiently finding optimal pathways between the galaxies that trace filaments. We apply our model to two large galaxy catalogs, the NASA Sloan Atlas (NSA) (Blanton et al., 2011) and the catalogs of Luminous Red Galaxies (LRGs) from the SDSS-IV Extended Baryon Oscillation Spectroscopic Survey (Bautista et al., 2018). Our method faithfully reconstructs the cosmic matter density of the cosmic web throughout the observed volume, allowing the study of the dark matter distribution with respect to any objects of interest in the survey footprints, not just at the input galaxy locations. We have released this data as part of the SDSS Data Release 17 (DR17) as a Value Added Catalog (VAC) publicly available for the community's use.
Unless stated otherwise, we adopt the Planck15 (Planck Collaboration et al., 2016) cosmology as encoded in the ASTROPY package (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018).
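For reference, the adopted cosmology and the comoving depths of the two galaxy samples used below (the \(z<0.1\) NSA/SDSS sample and the \(z\lesssim 0.5\) LOWZ LRG sample) can be reproduced with ASTROPY; this snippet is purely illustrative.

```python
from astropy.cosmology import Planck15
import astropy.units as u

# Comoving distances to the redshift limits of the two input galaxy samples
d_nsa = Planck15.comoving_distance(0.1)   # NSA/SDSS sample, z < 0.1
d_lrg = Planck15.comoving_distance(0.5)   # BOSS LOWZ LRG sample, z <~ 0.5
print(d_nsa.to(u.Mpc), d_lrg.to(u.Mpc))
```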
## 2 Data
We first describe the required inputs for reconstructing the map of cosmic densities produced by MCPM. MCPM takes as input a 3D catalog of galaxy positions with known masses and produces a data cube reconstructing the filamentary structure connecting the galaxy halos. To optimize the parameters of MCPM, we also require a known density field from a cosmological simulation against which to compare our reconstruction. We then apply the tuned model to observational catalogs of galaxies with known masses to reconstruct the physical cosmic web.
We employ the dark matter-only Bolshoi-Planck \(\Lambda\)CDM (BP) simulation (described below) as our training density field. We then apply our model to spectroscopic surveys that provide large samples of precise redshifts combined with value-added catalogs that estimate the galaxy masses. We use two primary catalogs for our galaxy positions, the NASA-Sloan Atlas (NSA, or NSA/SDSS) for galaxies with \(z<0.1\) and the Large Scale Structure catalogs from the Sloan Digital Sky Survey (SDSS) for galaxies at higher redshifts (\(z\lesssim 0.5\)). These two catalogs each offer advantages and disadvantages and are described below. Note that no new DR17 data were used in this VAC. We now describe the galaxy catalogs and the simulations used as inputs to MCPM.
### NASA Sloan Atlas
The NASA-Sloan Atlas (NSA) is a value-added catalog constructed from reprocessed SDSS \(ugriz\) photometry combined with Galaxy Evolution Explorer (GALEX) photometry in the ultraviolet. It was designed to improve the standard SDSS sky subtraction pipeline (Blanton et al., 2011). We use the most recent version of this catalog, **nsa_v1_0_1.fits**, which contains galaxies out to \(z=0.15\). In order to prioritize completeness in this data set, we imposed an upper redshift cut of \(z=0.1\), resulting in a catalog of 325,321 galaxies. We will often refer to this catalog in this paper as simply "NSA/SDSS" to distinguish it from the other catalogs from BOSS.
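A minimal sketch of this selection is shown below; the column names follow the public NSA data model (an assumption that should be verified against the FITS header), and the snippet is illustrative rather than the exact code used to build the input catalog.

```python
# Hedged sketch of the NSA selection; column names (Z, RA, DEC, ELPETRO_MASS)
# are assumed from the public NSA data model and should be checked.
import numpy as np
from astropy.io import fits

with fits.open("nsa_v1_0_1.fits") as hdul:
    nsa = hdul[1].data

keep = nsa["Z"] < 0.1                      # completeness-motivated redshift cut
positions = np.column_stack([nsa["RA"][keep], nsa["DEC"][keep], nsa["Z"][keep]])
masses = nsa["ELPETRO_MASS"][keep]         # elliptical Petrosian stellar masses
print(f"{keep.sum()} galaxies retained at z < 0.1")
```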
### LRG catalogs
For the higher redshift portion of our catalog, we use a sample of Luminous Red Galaxies (LRGs) from the Baryon Oscillation Spectroscopic Survey (BOSS). BOSS was part of the SDSS III project, which, at the time of its release, provided the largest survey of galaxy redshifts available in terms of the number of redshifts measured by a single survey and the effective cosmological volume covered. We chose to use the LRG catalogs as tracers of the dark matter (DM) density as these catalogs are more complete at these redshifts with respect to the selection function than the more general SDSS galaxy sample. The BOSS LRG sample derives from the large scale structure catalogs provided by the team and is broken into Northern and Southern Galactic Cap regions (LRG-NGC and LRG-SGC, respectively) (Ross et al., 2011; Ho et al., 2012; Ross et al., 2012). We use the LOWZ catalogs, which provide a sample of LRGs to \(z\lesssim 0.5\) and are found in the files1 galaxy_DR12v5_LOWZ_North.fits.gz and galaxy_DR12v5_LOWZ_South.fits.gz. The procedure to create this catalog is mostly based on Reid et al. (2016) with modifications to the redshift failure and systematic corrections described in Bautista et al. (2018).
Footnote 1: [https://www.sdss.org/dr14/spectro/lss/](https://www.sdss.org/dr14/spectro/lss/)
### Mass Determination
We used the LRG galaxy stellar masses from the Firefly VAC. The Firefly VAC2(Comparat et al., 2017) provides galaxy properties of all SDSS, BOSS, and eBOSS spectra using the FIREFLY fitting routine (Wilkinson et al., 2017) (v1_0_4 for DR14 and v1_1 for DR16), which incorporates the stellar population models of Maraston & Stromback (2011). The Firefly catalog includes light- and mass-weighted stellar population properties (age and metallicity), E(B-V) values, and most crucially to this work, stellar mass for all galaxies in the catalog. We used the DR14 catalog to determine masses for the galaxies in the file sdss_eboss_firefly-dr14.fits.
Footnote 2: [https://www.sdss.org/dr16/spectro/eboss-firefly-value-added-catalog/](https://www.sdss.org/dr16/spectro/eboss-firefly-value-added-catalog/)
The lower redshift NSA/SDSS catalog contains many galaxies that are spatially resolved and require more careful photometric analysis (e.g., Blanton et al., 2011). The most recent version of this catalog provides elliptical Petrosian aperture photometry, which is more accurate than the standard SDSS pipeline. We adopt the Petrosian aperture-derived mass to estimate the galaxy's stellar mass for this sample.
### Bolshoi-Planck Simulations
To calibrate our MCPM density estimates to the cosmic matter density, we use the dark matter-only Bolshoi-Planck \(\Lambda\)CDM (BP) simulation (Klypin et al., 2016; Rodriguez-Puebla et al., 2016). The BP simulation uses \(2048^{3}\) particles in a volume of \((250\,h^{-1}\,\mathrm{Mpc})^{3}\) and is based on the 2013 Planck (Planck Collaboration et al., 2014) cosmological parameters and compatible with the Planck 2015 parameters (Planck Collaboration et al., 2016). We utilize the density field from the simulation, smoothed with a Gaussian kernel over scales of 0.25 \(h^{-1}\) Mpc (Lee et al., 2017; Goh et al., 2019). We also employ the BP halo catalog produced using the Rockstar algorithm (Behroozi et al., 2012).
## 3 Methodology
### The MCPM algorithm
We produced the VAC data with the Monte Carlo Physarum Machine (MCPM) algorithm implemented in the _Polyphorm_ software3. MCPM was first used in Burchett et al. (2020) to reconstruct a 3D density
field estimate of the large-scale structure spanning 37.6k SDSS galaxies within the \(0.018<z<0.038\) range. The methodology and analyses are described in detail in Elek et al. (2022). We provide a brief summary of the model here.
MCPM is a massively parallel agent-based model inspired by the growth patterns of _Physarum polycephalum_ slime mold. Its main modalities are visualized in Figure 2. Using a swarm of millions of particle-like agents, MCPM iteratively traces the network structures implicit in the input data: dark matter halos or galaxies represented as a weighted 3D point cloud. In linear proportion to their halo mass, the data points emit a virtual marker which the agents navigate toward at every iteration.
The key innovation of this model is the probabilistic navigation of the agents: the sampling of their trajectories according to PDFs derived from the data-emitted marker field. For reference, the deterministic baseline model, where the agents always follow the maximum marker concentration, leads to the collapse of some filamentary configurations and the omission of a significant portion of data points, approximately a third as measured in Burchett et al. (2020). In contrast, MCPM fits over 99% of all input data points and can reconstruct configurations where multiple filaments branch out from a single origin, e.g., in massive galaxy clusters.
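The following toy sketch illustrates the flavor of this probabilistic navigation for a single agent in a synthetic marker field; it is a deliberately simplified stand-in for the Polyphorm implementation (the mutation and sampling rules here are only schematic), with parameter values echoing those quoted in Section 3.2.

```python
# Toy, schematic version of one probabilistic MCPM-style agent step: the agent
# probes the marker field straight ahead and along a randomly mutated heading,
# then chooses between them with probability proportional to marker^exponent
# rather than greedily following the maximum. Not the Polyphorm implementation.
import numpy as np

rng = np.random.default_rng(0)

def marker(p):
    # synthetic marker field: a single Gaussian blob stands in for the data-emitted marker
    return np.exp(-np.sum((p - 5.0) ** 2) / 4.0) + 1e-6

def step(pos, heading, sense_dist=2.37, move_dist=0.1,
         sense_angle=np.deg2rad(20.0), sampling_exponent=2.5):
    mutated = heading + sense_angle * rng.standard_normal(3)
    mutated /= np.linalg.norm(mutated)
    c_ahead = marker(pos + sense_dist * heading) ** sampling_exponent
    c_mut = marker(pos + sense_dist * mutated) ** sampling_exponent
    if rng.random() < c_mut / (c_ahead + c_mut):   # probabilistic, not greedy, choice
        heading = mutated
    return pos + move_dist * heading, heading

pos, heading = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    pos, heading = step(pos, heading)
print("agent position after 1000 steps:", pos)
```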
MCPM produces two main quantities: the _trace_ field and the _orientation_ field. The trace field \(f_{\mathrm{T}}:\mathbb{R}^{3}\rightarrow\mathbb{R}_{+}\) accumulates the superimposed trajectories of all active agents and represents the reconstructed LSS density field (after statistical standardization, Section 3.5). The orientation field \(f_{\mathrm{O}}:\mathbb{R}^{3}\rightarrow\mathbb{R}_{+}^{3}\) records the averaged unsigned directions of the agents and serves as a clustering criterion in our FoG compensation step (Section 3.6). Both are robust (i.e., stable in time) Monte-Carlo estimates of the equilibrium agent distributions.
Compared to our earlier applications of the MCPM model (Burchett et al., 2020; Simha et al., 2020), we introduce a few methodological and implementation changes aimed at improving the quality of the fits (more on this in Section 3.2):
1. Linear accumulation of \(f_{\mathrm{T}}\) and \(f_{\mathrm{O}}\) values instead of the original exponential floating-window averaging.
Figure 1: Distribution of the galaxy redshifts for the NSA/SDSS (blue, solid) and LRG-NGC (multi-colored, solid) and LRG-SGC (multi-colored, dashed) data sets that were used to reconstruct the cosmic density map. The NSA/SDSS catalog includes all galaxies out to \(z=0.1\) and is denoted as RUN0 in the MCPM VAC. The LRG catalogs extend to higher redshifts but only include the rarer LRGs, hence the lower galaxy count. This figure also shows the slicing scheme used to self-consistently fit the MCPM model in subsets of redshift as the density of galaxies decreases with luminosity distance in comoving Mpc.
The latter is used for the supervised part of the fitting when exploring different MCPM configurations. After finding the optimal data-specific set of model parameters, we switch to linear averaging, which dramatically reduces the solution variance.
2. To avoid numerical errors, we increase the numerical precision from fp16 to fp32 for both \(f_{\rm T}\) and \(f_{\rm O}\). This slows the implementation by 10-20%, which is acceptable for maintaining interactivity during fitting.
3. We redesigned the agent rerouting step. Rerouting is invoked when an agent encounters no data for too many subsequent steps, indicating either a boundary of the dataset or a large void. Our original rerouting assigned such an agent to a random location in space; we now reposition it to the location of a random data point. This change leads to a significant decrease in background noise and effectively increases the dynamic range of the obtained solutions for both \(f_{\rm T}\) and \(f_{\rm O}\).
### MCPM fit to Bolshoi-Planck
This section describes how we calibrate the MCPM algorithm using the Bolshoi-Planck data. We refer readers to Elek et al. (2022) for more details of the fitting procedure and the impact of the model hyperparameters on the resulting reconstruction geometry. Readers interested in the catalog data can skip to Section 3.3.
Fitting MCPM to input data (either galaxies or halos) is a semi-supervised procedure, where the operator focuses on maximizing the fitness function \(E\) while maintaining the connectedness and continuity of the reconstructed geometry. We define the fitness \(E\) of a given reconstructed trace field \(f_{\rm T}\) over a dataset \(D\) as
\[E(f_{\rm T},D)=\frac{1}{|D|}\sum_{d\in D}\frac{f_{\rm T}(d_{\rm position})}{d_ {\rm mass}}.\]
This results in a maximum likelihood estimator normalized by each data point's mass to avoid overfitting to the most massive objects (given the large dispersion of typical galaxy and halo masses). Since we do not yet have a precise mathematical description of the fit's connectedness, we rely on the interactive visualization in _Polyphorm_ to ensure that the fit does not collapse into a disconnected set of 'islands'. Defining this property
Figure 2: Overview of MCPM’s operating modalities, demonstrated on the \(0.018<z<0.038\) sample of SDSS galaxies. Clockwise from top left: input data points and the marker concentration emitted by the data (yellow), reconstructed trace field \(f_{\rm T}\) (purple), corresponding orientation field \(f_{\rm O}\) (XYZ directions mapped to RGB colors).
rigorously and developing a fully automated fitting procedure remains a future work for us.
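In practice, evaluating the fitness defined above amounts to the following computation; the sketch is minimal and `trace_at` merely stands in for interpolation into the fitted \(f_{\rm T}\) grid.

```python
# Minimal sketch of E(f_T, D): the average of trace / mass over all data points.
import numpy as np

def fitness(trace_at, positions, masses):
    """E = (1/|D|) * sum_d f_T(d_position) / d_mass."""
    values = np.array([trace_at(p) for p in positions])
    return np.mean(values / masses)

# toy usage with a fake trace field and three mock data points
trace_at = lambda p: 1.0 + 0.1 * np.sum(p)
positions = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [0.5, 0.5, 0.5]])
masses = np.array([1e10, 5e10, 2e9])
print(fitness(trace_at, positions, masses))
```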
To calibrate MCPM's hyperparameters, we fit the model to two snapshots of the Bolshoi-Planck simulation dataset (at \(z=0\) and \(z=0.5\), both containing roughly 16M halos extracted with the Rockstar algorithm). We adopted some of the parameter estimates from our previous work (Burchett et al., 2020), including the sensing angle at 20 deg, moving angle at 10 deg, moving distance at 0.1 Mpc and persistence of 0.9 (now adjusted to 0.92 due to the finer granularity of halos used here). We focused on constraining the remaining critical parameters: sampling exponent (which controls the acuity of obtained structures, especially filaments) and sensing distance (which determines the scale of the structures, such as the mean segment length and, by extension, the diameter of loops, voids, etc.). We also impose the _monotonicity_ of the obtained overdensity mapping, as shown in Figure 5, as an additional constraint when determining the sampling exponent.
Using this fitting procedure, we matched the MCPM fits to the ground truth densities in Bolshoi-Planck. We determined the optimal sampling exponent to be 2.5 at \(z=0\) and 2.2 at \(z=0.5\), which is consistent with the observation that the LSS at higher redshifts is less condensed. For the sensing distance, the optimal value was 2.37 Mpc. It is worth noting that these sampling exponent and sensing distance values pose lower limits for the values used to fit the observational data, because of the significantly lower spatial density of data points in the galaxy catalogs relative to BP simulations, which was compensated for by proportionally increasing the two parameters.
In Figure 4, we demonstrate that MCPM reconstructs not just the halos that we feed into it but the cosmic structure, including filaments and voids. More quantitative assessments are available in Elek et al. (2022).
### Fit to NASA-Sloan Atlas
The first component of the VAC is based on the MCPM fit to the NASA-Sloan Atlas catalog for \(0<z<0.1\), which contains roughly 325k galaxies in luminosity distances between 44 and 476 Mpc. Similar to
Figure 3: Our reconstructed cosmic web data products and their spatial relation to another. The green bands highlight regions of overlapping LRG slices. The SDSS portion of the data is magnified to visualize the higher amount of recovered structure owing to the denser observations.
the BP dark matter halos, we treat the galaxies as 3D point attractors, in this case, weighted by their stellar masses.
The fits are based on the hyperparameters calibrated on the BP simulations. Furthermore, to reflect the lower spatial density of the galaxies in comparison to the halos, we adjust the two critical parameters of MCPM: sampling exponent to 3.5 and sensing distance to 5.2. To make these adjustments, we again used the semi-supervised fitting procedure described in Section 3.2.
To verify the consistency of the fit across different \(z\) values, we have split the SDSS catalog into 3 overlapping slices (44-270 Mpc / 250-370 Mpc / 350-476 Mpc, each containing about 120k galaxies) and fitted them separately by only adjusting the sensing distance parameter. The resulting optimal values (Figure 6) follow a linear trend, implying that the spatial density of galaxies decreases in corresponding proportion. However, the obtained variation of sensing distance (3.8-5.6) is well within the ability of the model to perform a consistent fit using a single parameter value. Therefore, we opt for a single fit to the entire catalog using the aforementioned sensing distance value of 5.2.
### Fit to LRG Catalogs
The procedure of fitting to the LRG NGC and SGC catalogs is identical to that used for the SDSS data: using a sampling exponent of 3.5 and the BP-calibrated values for the other hyperparameters, we increased the sensing distance until reaching an optimal fit.
Due to the much lower spatial density of LRG observations compared to SDSS, the optimal values of sensing distance end up being considerably higher (Figure 3). Also, unlike SDSS, the LRG galaxies span a significantly more extended range of redshifts. The consequence is nearly a two-fold growth of the optimal
Figure 4: Comparison of the Bolshoi-Planck simulations (top row; where the density field is known) at redshifts of \(z=0.0\) (left) and \(z=0.5\) (right) to the MCPM trace of the simulations (bottom row; density recovered from halos alone). MCPM faithfully reconstructs the cosmic structure from the galaxy halo population.
sensing distance value across the catalog's redshift range (Figure 6). Therefore, to construct the VAC, we split the LRG galaxies into 4 overlapping 'slices' of approximately equal numbers of galaxies (about 70k per slice for NGC, about 25k per slice for SGC) and fit each separately. The resulting distance intervals are 0-1000 Mpc (\(z\approx 0-0.2\)), 900-1600 Mpc (\(z\approx 0.18-0.3\)), 1500-2100 Mpc (\(z\approx 0.28-0.38\)), and 2000-3000 Mpc (\(z\approx 0.36-0.51\)).
Figure 3 shows the visualization of all obtained density slices and their spatial relations. An added benefit of this approach is the higher resolution of each slice we can afford. This is desirable again due to the massive redshift range of the LRG data.
### Statistical Standardization & Mapping
The MCPM densities fit to each survey slice, although related to the true physical density, are really densities of agents in the fit. To translate the MCPM density to cosmic overdensity, we standardize each distribution to the MCPM fit of the simulation so that a mapping between MCPM and cosmic overdensity can be applied. The MCPM fits to the galaxy surveys differ from the fits to the BP simulations because they suffer from luminosity selection functions and are thus much sparser. This particularly affects the lowest density regime of the density distribution. To account for this effect, we used the Wasserstein distance4 or "Earth Mover's Distance" to calculate the _stretch_ and _shift_ values such that the distribution of MCPM densities of the surveys could be linearly transformed to best fit the BP-MCPM fit. That is, TargetDist = _stretch_\(\times\)SurveyDist + _shift_, where TargetDist is the BP-MCPM density distribution and SurveyDist is the density distribution of each survey slice. The benefit of this method is that we can impose a lower limit on the density distributions so as to take into account only the higher-density wing of the distribution, corresponding to densities that contain structure, and avoid the empty space in the survey fits.
Footnote 4: [https://docs.scipy.org/doc/scipy/reference/generated/](https://docs.scipy.org/doc/scipy/reference/generated/) scipy.stats.wasserstein_distance.html
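A hedged sketch of this standardization step is given below; the optimizer choice and the toy input distributions are assumptions, while the quantity being minimized is the Wasserstein distance between the linearly transformed survey distribution and the BP-MCPM target.

```python
# Sketch of the (stretch, shift) search: minimize the Wasserstein distance
# between stretch * SurveyDist + shift and the BP-MCPM TargetDist, using only
# densities above a lower limit so that the empty-space wing is ignored.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wasserstein_distance

def best_linear_map(survey_dens, target_dens, lower_limit):
    survey = survey_dens[survey_dens > lower_limit]
    target = target_dens[target_dens > lower_limit]

    def cost(params):
        stretch, shift = params
        return wasserstein_distance(stretch * survey + shift, target)

    return minimize(cost, x0=[1.0, 0.0], method="Nelder-Mead").x

# toy usage with illustrative log-density samples
rng = np.random.default_rng(1)
survey = rng.normal(0.0, 1.0, 5000)
target = 1.3 * rng.normal(0.0, 1.0, 5000) + 0.4
print(best_linear_map(survey, target, lower_limit=-1.0))   # fitted (stretch, shift)
```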
In order to retrieve the cosmic matter density, \(\rho_{m}/\langle\rho_{m}\rangle\), we must map the MCPM trace density to that of the BP simulations at each redshift. We fit the BP simulations using the MCPM algorithm and then apply a mapping from MCPM density to cosmic matter density. This mapping was achieved by sampling the MCPM fits in bins of equal density and then determining the density from the BP simulations at the same location. This is shown by the multi-colored stripes in Figure 7. We then determine the median (and 1\(\sigma\) limits) of each MCPM density bin. The median densities in each bin were then used to create a mapping function. We based our mapping function on the rectified linear activation function (ReLU), where the maximum change of the median of the bins determines the inflection point.
Figure 5: Comparison of different sampling exponents in increasing order from top to bottom. We find that a sampling exponent of 2.5 produces the most linear mapping between the MCPM densities and the cosmic matter densities from the BP simulations, especially at lower densities where previous versions of the MCPM have generally failed to recover the lowest density structures. (see Figure 10 in Burchett et al., 2020).
Figure 6: Plot of MCPM agents’ sensing distance (the main feature scaling parameter) as read out from the best fits for the LRG data, radially sliced into 4 runs at overlapping luminosity distance intervals. For comparison, we also show the best-fit sensing distances for 3 SDSS slices, which manifest a similar linear growth as we observe in the LRG data.
On the right-hand side of the flat part of the function, we fit a cubic polynomial to the data, creating a piece-wise continuous mapping function. This method was chosen over alternatives, such as interpolating the bins or using a spline function, because evaluating those at densities above or below the range found in the MCPM fits is not well defined. Our method is illustrated in Figure 7, where the thicker black line shows the mapping function applied to the \(z=0\) simulation. The thinner black lines show the \(1\sigma\) limits of our mapping, which correspond to \(\pm 0.5\) dex in log cosmic matter overdensity, \(\rho_{m}/\langle\rho_{m}\rangle\).
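Schematically, the resulting mapping has the following form; the inflection point and cubic coefficients below are placeholders (in the released catalog they are fit to the binned BP medians of Figure 7, with continuity enforced at the inflection point).

```python
# Schematic ReLU-style map from log MCPM density to log cosmic overdensity:
# flat below the inflection point, cubic polynomial above it. Coefficients are
# placeholders, not the values used in the released catalog.
import numpy as np

def relu_cubic_map(log_mcpm, x_inflect, floor_value, cubic_coeffs):
    log_mcpm = np.asarray(log_mcpm, dtype=float)
    out = np.full_like(log_mcpm, floor_value)              # flat branch
    high = log_mcpm >= x_inflect
    out[high] = np.polyval(cubic_coeffs, log_mcpm[high])   # cubic branch
    return out

print(relu_cubic_map([-2.0, 0.5, 1.5], x_inflect=0.0,
                     floor_value=-0.8, cubic_coeffs=[0.05, -0.1, 1.2, -0.8]))
```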
### Correction for Redshift Space Distortions
As MCPM operates in 3D space, applying the algorithm necessitates attaching physical distances to the input dataset. Although distance measurements via more direct methods (e.g., tip of the red giant branch or Type Ia supernovae) (Tully et al., 2016) may be available for a small subset of the galaxies (and therefore tracers of the underlying density field), we must primarily assume distances concordant with the Hubble flow. Thus, we initially attach to each galaxy the luminosity distance given the adopted cosmology and galaxy redshift. Denser environments such as galaxy groups and clusters will include galaxies with large peculiar velocities. These peculiar velocities will result in redshift space distortions (RSDs), or 'fingers of god' (FoG), if adopted directly. For example, a typical velocity dispersion for a \(>10^{14}\)\(M_{\odot}\) galaxy cluster (\(\sim 1000\) km/s) would, under the assumption of pure Hubble flow, propagate to a systematic distance error of \(>10\) Mpc. This issue plagues our low-redshift SDSS sample significantly more than the LRG samples for two reasons: (1) Low-mass galaxies are much more abundant and likely to be observed at low \(z\) in the magnitude-limited SDSS, which results in many objects composing apparent false structures along the direction pointing away from (and towards) the observer; (2) High-mass galaxies, which will dominate the samples at progressively higher redshifts, preferentially reside as central galaxies in their local environments (Lan et al., 2016). Therefore, these galaxy samples will be less subject to systematic error in cosmological distance than our lowest redshift sample. Thus, we employ an RSD correction for the \(z<0.1\) SDSS galaxy sample that we detail here.
A key feature of MCPM is that the cosmic web reconstruction converges to an equilibrium state but is a dynamical system nonetheless. The adopted 'densities' are aggregated trajectories of the millions of agents seeking efficient pathways between galaxy tracers. MCPM also outputs the components of an aggregated three-dimensional agent velocity vector for each cell in the volume. We use these velocities to identify RSDs, as the agent velocities producing them will be preferentially oriented perpendicular to the plane of the sky along the line of sight and will be clustered in their celestial coordinates. We select points in the MCPM cube by orientation as follows: we (1) convert each input galaxy's location in the MCPM-output cube to its equivalent celestial coordinates, (2) find the three components of a unit radial vector parallel to the line of sight in Cartesian space to match the coordinate system of the MCPM velocity vectors, and (3) calculate the dot product between the aggregated velocity vector at each galaxy's position in the cube with the unit radial vector and assign the result to that galaxy. Galaxies within an RSD structure (FoG), having either parallel or antiparallel velocity vectors to the unit radial vector, should not have dot product absolute values close
Figure 7: Mapping of the MCPM derived density to the cosmic matter density from the BP simulation. The MCPM densities were binned evenly in MCPM space in bins of 0.1 dex as demarcated by the colored stripes. The custom ReLU mapping function fit to the medians of the bins (thick black line) and \(1\sigma\) limits (thinner black lines) are plotted on top of the data. This mapping function provides a translation from the MCPM density to the cosmic overdensity.
to zero. Therefore, we filter out galaxies with dot product absolute values less than 10, a conservative cut chosen upon inspecting the distribution of galaxy dot product values. To identify galaxy positions with similar velocity orientation _and_ projected location on the sky, we then employ the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm as implemented in the scikit-learn5 python package, feeding it the sky coordinates and redshift. For this step, we further filter the galaxy catalog by mass to those with \(M_{*}>10^{10}\)\(M_{\odot}\), as the completeness of SDSS declines for less massive galaxies at the upper end of our redshift range (\(z\sim 0.1\)). DBSCAN operates by locating high-density cores in the data, which are the beginnings of the clusters. The algorithm searches out from these cores, adding points until no more points are found within some distance tolerance (in whatever space the data occupy). This algorithm offers several advantages over other possible choices, including scalability, compatibility with non-flat geometries, and the feature that certain points may not be included in any cluster (they are deemed 'noise'). Two critical parameters for DBSCAN are the distance tolerance (eps) and the minimum number of points to be considered a core in the data (min_samples). We chose min_samples=3 as the minimum number of galaxies (e.g., in a group or cluster) that might form a false RSD structure (FoG) in the MCPM model. We chose a value of eps=2 upon experimenting with several values through visual inspection, balancing the inclusion of FoGs (which are readily identified by eye) containing relatively small numbers of galaxies against the false identification of genuine filaments oriented along the line of sight as RSD structures. Figure 8 shows the resulting clusters identified by DBSCAN in a slice in declination of our galaxy catalog, with galaxies belonging to the same cluster having the same color.
Footnote 5: [https://scikit-learn.org/stable/](https://scikit-learn.org/stable/)
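The sketch below summarizes this selection and clustering step using mock inputs; the velocity field and coordinates are random stand-ins, while the DBSCAN parameters (eps=2, min_samples=3) and the dot-product cut of 10 follow the text.

```python
# Hedged sketch of the FoG candidate selection: keep galaxies whose aggregated
# MCPM agent velocity is nearly (anti)parallel to the line of sight, then
# cluster them in (RA, Dec, z) with DBSCAN. Mock inputs are generated below.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
n = 500
velocities = rng.normal(0.0, 20.0, size=(n, 3))           # aggregated agent velocities
unit_radial = rng.normal(0.0, 1.0, size=(n, 3))
unit_radial /= np.linalg.norm(unit_radial, axis=1, keepdims=True)
ra, dec, z = rng.uniform(0, 60, n), rng.uniform(-5, 5, n), rng.uniform(0.01, 0.1, n)

dot = np.abs(np.einsum("ij,ij->i", velocities, unit_radial))
radial = dot >= 10.0                                       # conservative cut from the text

labels = DBSCAN(eps=2, min_samples=3).fit_predict(
    np.column_stack([ra[radial], dec[radial], z[radial]]))
print(f"{labels.max() + 1} candidate RSD structures (label -1 marks noise)")
```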
From the output clusters identified by DBSCAN, we find the velocity range spanned by galaxy redshifts within each cluster using the full-width-half-maximum (FWHM) of the velocity distribution (\(v_{\rm FWHM}\)). For clusters with \(v_{\rm FWHM}>300\) km/s, we adopt new redshifts for the associated galaxies to be commensurate with more realistic physical distance separations inferred from a simple luminosity distance based on the redshift; this procedure is as follows. Assuming the cluster members are bound to the same virialized structure, we convert the velocity FWHM to a velocity dispersion by the relation:
\[\sigma_{v}=\frac{v_{\rm FWHM}}{2\ \sqrt{\ln\ 2}}. \tag{1}\]
We then use this velocity dispersion to infer a virial radius, \(R_{200}\), of the cluster:
\[R_{200}^{\rm infer}=\frac{\sigma_{v}}{\sqrt{(4/3)\pi G\Delta_{200}\rho_{\rm crit}}}, \tag{2}\]
where \(\Delta_{200}\) and \(\rho_{\rm crit}\) are the overdensity and critical density, respectively. We then adopt new redshifts (solely for the purpose of feeding MCPM) about the median redshift of the cluster members by sampling from a normal distribution with standard deviation corresponding to the change in redshift that would result in a luminosity distance difference equal to the inferred \(R_{200}\). Finally, we convert these galaxy coordinates and adopted redshifts to 3D Cartesian space via luminosity distances based on the new redshifts; these then serve as inputs to MCPM.
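The redshift reassignment can be sketched as follows; the choice \(\Delta_{200}=200\), the example FWHM value, and the use of astropy's z_at_value for the distance-to-redshift inversion are assumptions made purely for illustration.

```python
# Hedged sketch of the cluster redshift reassignment: Eq. (1) for sigma_v, the
# virial scaling of Eq. (2) for R200, then redraw member redshifts about the
# cluster median with a scatter whose luminosity-distance equivalent is R200.
import numpy as np
from astropy import units as u
from astropy.constants import G
from astropy.cosmology import Planck15, z_at_value

def reassign(z_members, v_fwhm_kms, delta=200.0, rng=np.random.default_rng(3)):
    sigma_v = (v_fwhm_kms / (2.0 * np.sqrt(np.log(2.0)))) * u.km / u.s
    rho_crit = Planck15.critical_density0
    r200 = (sigma_v / np.sqrt(4.0 / 3.0 * np.pi * G * delta * rho_crit)).to(u.Mpc)

    z_med = np.median(z_members)
    d_med = Planck15.luminosity_distance(z_med)
    dz = float(z_at_value(Planck15.luminosity_distance, d_med + r200)) - z_med
    return z_med + dz * rng.standard_normal(len(z_members))

print(reassign(np.array([0.052, 0.054, 0.049, 0.056]), v_fwhm_kms=1200.0))
```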
## 4 Data Products
### The Catalog
The final value-added catalog contains the positions, redshifts, and stellar masses of the galaxies in the NASA-Sloan Atlas and the eBOSS Firefly Value-Added Catalog. We include a column, MASS_SOURCE, to indicate which catalog was used to estimate the mass. The MCPM algorithm uses the galaxy mass to build the matter density field. The primary field of interest here is MATTERDENS, the matter density field at the location of a given galaxy, which was derived from fits of MCPM models in 3D volumes and mapped to the cosmological matter density (relative to the mean matter density) using MCPM fits to the Bolshoi-Planck simulations. The catalogID is a combination of plate-mjd-fiberid. A unique identifier is the combination of catalogID and mcpmRun. Objects with the same value of mcpmRun were fitted with the MCPM model simultaneously. The data were sliced in redshift to yield samples producing self-consistent large-scale structures over the volume in each slice. mcpmRun = 0 corresponds to \(0.01<z<0.1\) SDSS galaxies with masses from the NASA/Sloan Atlas. Samples of LOWZ LRGs are marked 1-2 (\(z<0.20\)), 3-4 (\(0.18<z<0.30\)), 5-6 (\(0.28<z<0.38\)), and 7-8 (\(0.36<z<0.51\)); each pair (e.g., 3-4) corresponds to the NGC/SGC samples in some redshift slice, with odd and even numbers for NGC and SGC, respectively. The data model for the catalog is described in Table 1.
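Reading the catalog and selecting one slice is straightforward; the file name below follows the data model note in Table 1 (and should be checked against the released file), and the column access assumes astropy's case-insensitive FITS interface.

```python
# Sketch of querying the VAC: select the low-redshift NSA/SDSS sample
# (mcpmRun == 0) and inspect its matter densities. File name assumed from Table 1.
import numpy as np
from astropy.io import fits

with fits.open("slimeMold_galaxy_catalog_v1_0_0.fits") as hdul:
    cat = hdul[1].data

nsa_run = cat[cat["mcpmRun"] == 0]
dens = nsa_run["matterDens"]     # log10(rho_m / <rho_m>) at each galaxy position
print(f"{len(nsa_run)} galaxies, median log overdensity {np.median(dens):.2f}")
```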
### 3D Density Cube
In addition to the VAC, which contains the density at the location of each galaxy, we offer the full 3D density
field of the relevant volumes, available at [https://data.sdss.org/sas/dr17/eboss/lss/mcpm/v1_0_0/datacube/](https://data.sdss.org/sas/dr17/eboss/lss/mcpm/v1_0_0/datacube/). These may be queried using our custom package, pyslime6. The data will unzip to a directory which may be opened by pyslime. This will enable the user to query the overdensity at arbitrary points in the cube, allowing the study of voids and filamentary structures outside the local environment of the input galaxy field.
Footnote 6: [https://github.com/jhburchett/pyslime](https://github.com/jhburchett/pyslime)
## 5 Discussion
### Comparison to Peng et al. (2010)
We can additionally validate our model by comparing our findings to those of other studies. Although Burchett et al. (2020) demonstrated the efficacy of our model, we present comparisons to other studies, leveraging our deeper and larger surveys.
Peng et al. (2010) used a method based on the 5th nearest galaxy neighbors to estimate the environmental density and studied the SFR and the quenched fraction of galaxies as functions of this density metric and galaxy mass. Burchett et al. (2020) illustrate that the MCPM method of computing cosmic density qualitatively matches their results (see Figures 5 & 6 in Peng et al., 2010). In Figure 9, we demonstrate the improvement in signal gained with the significantly larger NSA/SDSS sample and reproduce their density-stellar mass-sSFR relations.
### Potential Applications
Our primary aim in this manuscript is to showcase the dataset and describe its construction. There are, however, many exciting applications for this dataset that are well beyond the scope of this publication. Here, we list four general areas of application:
* **Galaxy evolution in the cosmic web:** A vast number of galaxy properties measured and inferred from both multiwavelength photometry and spectroscopy have been cataloged for SDSS galaxies (many also released as VACs; e.g., Salim et al., 2016); via straightforward crossmatching with our catalog, myriad galaxy-environment analyses may be readily conducted. Figure 9 highlights one direct application of the galaxy-density catalog: studying the possible impacts of a galaxy's location within the cosmic web on its evolution. In particular, Figure 9 shows the dependence of star formation activity on large-scale structure density. Our dataset is ideal for comparing effects induced by the more local environment (groups/clusters) to those induced by the cosmic web.
* **Void finding:** In the linear regime, the sizes of voids and their correlation statistics are sensitive to cosmology, particularly dark energy (Pisani et al., 2015). Although most of the analyses we have alluded to thus far focus on the denser regions of the cosmic web, namely filaments and nodes, our density cubes naturally include the underdense regions.
| Name | Type | Unit | Description |
| --- | --- | --- | --- |
| catalogID | char[13] | | Combination of PLATE-MJD-FIBERID |
| plate | int32 | | Plate number |
| mjd | int32 | | MJD of observation |
| fiberid | int32 | | Fiber identification number |
| ra | float64 | deg | Right ascension of fiber, J2000 |
| dec | float64 | deg | Declination of fiber, J2000 |
| z | float32 | | Best redshift |
| massSource | char[7] | | Source of the mass determination (nsa or firefly) |
| mcpmRun | int8 | | Index of galaxy sample fitted simultaneously with MCPM |
| mstars | float64 | \(M_{\odot}\) | Stellar mass |
| matterDens | float32 | | log10 of the ratio of the matter density relative to the mean matter density |

Table 1: Data Model. Schema for the MCPM Value-Added Catalog, v1.0.0, as found in slimeMold_galaxy_catalog_v1_0_0.fits.
Figure 8: A slice in declination of our input galaxy catalog (grey points, top). RSD structures identified by DBSCAN are shown in various colors overlayed on the original points (bottom).
Simple centroiding and clustering algorithms may be readily applied to these density fields to directly identify and characterize the voids, which in turn may be used as inputs for cosmological parameter estimations using, e.g., the Alcock-Paczynski effect (Alcock and Paczynski, 1979).
* **The intergalactic medium:** Hydrodynamical cosmological simulations predict a rich multiphase structure in the intergalactic gas permeating throughout the cosmic web (e.g., Cen and Ostriker, 1999; Dave and Tripp, 2001; Tepper-Garcia et al., 2012). In addition to the physical states of gas resulting from large-scale structure formation (Bertschinger, 1985; Molnar et al., 2009), energetic feedback from the galaxies themselves might extend well beyond the virial radius, which is often adopted as a fiducial extent of a galaxy's halo (Finlator and Dave, 2008; Schaye et al., 2015; Nelson et al., 2019). Burchett et al. (2020) used HST-observed background quasar sightlines through the MCPM reconstructed volume to find a relationship between cosmic web density and Ly\(\alpha\) optical depth. A similar analysis could and should be done leveraging our higher redshift LRG reconstruction with other absorption tracers, such as Mg II.
* **Multimessenger transient followup:** Transient phenomena, such as gravitational waves and fast radio bursts, are typically detected with imprecise localization, with scales of minutes or degrees on the sky (Chen and Holz, 2016; CHIME/FRB Collaboration et al., 2019). Space-based and ground-based facilities around the world then follow up these detections to identify and characterize the sources (e.g., Coulter et al., 2017). As extragalactic sources are statistically more likely to be found within the large-scale structure, transient observers could employ our reconstructed density field of the cosmic web in follow-up imaging campaigns to prioritize pointings toward regions of the sky most likely to contain the source counterparts.
### Known Limitations
The VAC volumes have the usual luminosity function systematics that are present in the underlying SDSS and LRG catalogs. Specifically, the sampling density of galaxies is higher at lower redshifts. This is reflected in the trace and can be seen in the SDSS data as well as each slice of the LRG catalogs, as shown in Figure 3. This presents itself as an increased density at the lower redshift end of the volume. However, the mean matter density at the low and high redshift ends of each volume is consistent.
Some sub-optimality of the model fit arises from the fact that the optimal sensing distance grows linearly according to the data in Figure 6, whereas the catalog is a piece-wise constant approximation of this.
Due to the differing sensing distance in each slice, there is a slight discontinuity of the MCPM densities extracted from the overlaps between the LRG slices. Thus, we recommend comparing densities on a slice-by-slice basis and avoiding comparing quantities based on the density at different redshift slices.
## 6 Conclusion
Herein we leverage the _Monte Carlo Physarum Machine_ (MCPM) methodology, inspired by the growth and movement of Physarum polycephalum slime mold, to map the cosmic web within several sub-samples of the SDSS spectroscopic galaxy catalogs. The MCPM model inputs a galaxy field with known masses and outputs the large-scale structure density field. We train our model using the Bolshoi-Planck cosmological simulation, producing a reconstruction of the simulated cosmic web where the underlying density is known. Using
Figure 9: The dependence of star formation activity on galaxy environment and stellar mass for the galaxies within the NSA/SDSS volume (\(z<0.1\)). The color coding denotes sSFR in the population within each mass/environment bin, where the environmental density is determined from our MCPM cosmic web reconstruction algorithm. A comparison with Figure 6 of Peng et al. (2010) shows a similarly increasing red fraction as a function of both mass at fixed density and density at fixed mass.
the simulation as ground truth, we describe the supervised tuning of MCPM parameters to produce an optimal fit. We apply this tuned model to the NASA-Sloan Atlas and the eBOSS LRG Firefly Value-Added Catalogs to create both a 3D density cube and a catalog of cosmic densities at the location of the galaxies. The SDSS NASA-Sloan Atlas catalogs include a more complete galaxy sample at \(z<0.1\). We describe and employ a novel method on this dataset to reduce the effect of peculiar motions on the spectroscopic distances. The MCPM fits to the eBOSS LRG North and South Galactic Cap catalogs capture the larger-scale cosmic web out to \(z\lesssim 0.5\). This paper describes the release of the _Cosmic Slime Value Added Catalog_, part of SDSS DR17, which is the combination of the two galaxy catalogs with density estimates as well as the resultant 3D density cubes of the two galaxy samples. Finally, we highlight some exciting potential applications of this data set, which include galaxy evolution in the context of the cosmic web, void finding, studies of the intergalactic medium, and multimessenger transient followup.
## 7 Acknowledgements
The authors would like to especially acknowledge Joel Primack and Doug Hellinger for sharing the outputs of the Bolshoi-Planck simulations. We also gratefully acknowledge the hospitality and support of the 2019 Kavli Summer Program in Astrophysics at UC Santa Cruz.
JB would like to acknowledge funding support from the National Science Foundation LEAPS-MPS award #2137452.
MCW and JKW acknowledge support for this work from NSF-AST 1812521, NSF-CAREER 2044303, the Research Corporation for Science Advancement, grant ID number 26842.
|
2309.01432 | On the Pólya conjecture for the Neumann problem in planar convex
domains | Denote by $N_{\cal N} (\Omega,\lambda)$ the counting function of the spectrum
of the Neumann problem in the domain $\Omega$ on the plane. G. P\'olya
conjectured that $N_{\cal N} (\Omega,\lambda) \ge (4\pi)^{-1} |\Omega|
\lambda$. We prove that for convex domains $N_{\cal N} (\Omega,\lambda) \ge (2
\sqrt 3 \,j_0^2)^{-1} |\Omega| \lambda$. Here $j_0$ is the first zero of the
Bessel function $J_0$. | N. Filonov | 2023-09-04T08:28:55Z | http://arxiv.org/abs/2309.01432v1 | # On the Polya conjecture for the Neumann problem in planar convex domains
###### Abstract
Denote by \(N_{\mathcal{N}}(\Omega,\lambda)\) the counting function of the spectrum of the Neumann problem in the domain \(\Omega\) on the plane. G. Polya conjectured that \(N_{\mathcal{N}}(\Omega,\lambda)\geqslant(4\pi)^{-1}|\Omega|\lambda\). We prove that for convex domains \(N_{\mathcal{N}}(\Omega,\lambda)\geqslant(2\sqrt{3}\,j_{0}^{2})^{-1}|\Omega|\lambda\). Here \(j_{0}\) is the first zero of the Bessel function \(J_{0}\). 1
Footnote 1: Keywords: Polya conjecture, Neumann problem, convex domains.
## 1 Formulation of the result
Let \(\Omega\subset\mathbb{R}^{d}\) be a bounded domain with Lipschitz boundary. We consider the Dirichlet and Neumann problems for the Laplace operator in \(\Omega\),
\[\begin{cases}-\Delta u=\lambda u&\text{in}\ \ \Omega,\\ u=0&\text{on}\ \ \partial\Omega,\end{cases}\qquad\begin{cases}-\Delta v=\mu v& \text{in}\ \ \Omega,\\ \frac{\partial v}{\partial\nu}=0&\text{on}\ \ \partial\Omega.\end{cases}\]
It is well known that the spectra of the both problems are discrete. Denote by \(\lambda_{k}\) and \(\mu_{k}\) the corresponding eigenvalues taking multiplicity into account,
\[0<\lambda_{1}<\lambda_{2}\leqslant\lambda_{3}\leqslant\dots,\qquad\lambda_{ k}\to+\infty,\]
\[0=\mu_{1}<\mu_{2}\leqslant\mu_{3}\leqslant\dots,\qquad\mu_{k}\to+\infty,\]
Introduce also the counting functions
\[N_{\mathcal{D}}(\Omega,\lambda):=\#\{k:\lambda_{k}\leqslant\lambda\},\qquad N _{\mathcal{N}}(\Omega,\lambda):=\#\{k:\mu_{k}\leqslant\lambda\}.\]
G. Polya in his book [7] conjectured that the estimates
\[N_{\mathcal{D}}(\Omega,\lambda)\leqslant\frac{|B_{1}||\Omega|}{(2\pi)^{d}}\ \lambda^{d/2},\]
\[N_{\mathcal{N}}(\Omega,\lambda)\geqslant\frac{|B_{1}||\Omega|}{(2\pi)^{d}}\ \lambda^{d/2} \tag{1.1}\]
hold true for all domains \(\Omega\) and for all \(\lambda\geqslant 0\). Here \(|\Omega|\) denotes the volume of the set \(\Omega\), and \(B_{1}\) is the unit ball in \(\mathbb{R}^{d}\). Note that the coefficient in front of \(\lambda^{d/2}\) coincides with the coefficient in the Weyl asymptotics.
We list the known results on the Polya conjecture for the Neumann case:
* in 1961, Polya himself proved [8] the estimate (1.1) for regular tiling domains. The domain \(\Omega\) is called tiling if the whole space \(\mathbb{R}^{d}\) can be covered by non-intersecting copies of \(\Omega\) up to a set of measure zero; the domain is regular tiling if the corresponding covering is periodic;
* in 1966, Kellner proved [4] the estimate (1.1) for all tiling domains;
* in 1992, Kroger proved [5] the inequality \[N_{\mathcal{N}}(\Omega,\lambda)\geqslant\frac{2}{d+2}\cdot\frac{|B_{1}|| \Omega|}{(2\pi)^{d}}\;\lambda^{d/2}\] (1.2) for all domains \(\Omega\) and all \(\lambda\geqslant 0\).
* In 2022, the estimate (1.1) was proved [2] for the disk and for circular sectors of arbitrary aperture in the plane.
In this paper we prove the following
**Theorem 1.1**.: _Let \(\Omega\subset\mathbb{R}^{2}\) be a convex bounded domain. Then_
\[N_{\mathcal{N}}(\Omega,\lambda)\geqslant\frac{|\Omega|\lambda}{2\sqrt{3}\,j_ {0}^{2}}. \tag{1.3}\]
_Here and everywhere below we denote by \(j_{\nu}\) the first positive root of the Bessel function \(J_{\nu}\). In particular, \(j_{0}\approx 2.4048\)._
Note that in 2D case the Polya conjecture (1.1) and the Kroger estimate (1.2) take the form
\[N_{\mathcal{N}}(\Omega,\lambda)\geqslant\frac{|\Omega|\lambda}{4\pi}\qquad \text{and}\qquad N_{\mathcal{N}}(\Omega,\lambda)\geqslant\frac{|\Omega| \lambda}{8\pi}\]
respectively. We have
\[\frac{1}{8\pi}\approx 0.0398,\quad\frac{1}{2\sqrt{3}\,j_{0}^{2}}\approx 0.0499,\quad\frac{1}{4\pi}\approx 0.0796.\]
Thus, the coefficient in (1.3) is better than the coefficient in (1.2), but we prove (1.3) only for convex domains.
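For the reader's convenience, the constants above are easily checked numerically; the following short Python script (an illustration only, playing no role in the proofs) evaluates them from the first zero of \(J_{0}\).

```python
# Convenience check of the constants: Kroger's 1/(8 pi), Theorem 1.1's
# 1/(2 sqrt(3) j_0^2), and Polya's 1/(4 pi).
import numpy as np
from scipy.special import jn_zeros

j0 = jn_zeros(0, 1)[0]                       # first positive zero of J_0, ~2.4048
print(f"j_0 = {j0:.4f}")
print(f"1/(8 pi)            = {1.0 / (8.0 * np.pi):.4f}")
print(f"1/(2 sqrt(3) j_0^2) = {1.0 / (2.0 * np.sqrt(3.0) * j0**2):.4f}")
print(f"1/(4 pi)            = {1.0 / (4.0 * np.pi):.4f}")
```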
**Remark 1.2**.: In terms of the eigenvalues themselves in the two-dimensional case the inequalities (1.1), (1.2) and (1.3) read as follows:
\[\mu_{l+1}\leqslant\frac{4\pi l}{|\Omega|},\qquad\mu_{l+1}\leqslant\frac{8\pi l }{|\Omega|},\qquad\text{and}\qquad\mu_{l+1}\leqslant\frac{2\sqrt{3}\,j_{0}^{2} \,l}{|\Omega|}\]
respectively.
## 2 Lemmas
**Lemma 2.1**.: _Let \(J_{\nu}\) be the Bessel function of order \(\nu\geqslant 0\). Then_
\[\int_{0}^{s}\left(\left(t^{-\nu}J_{\nu}(t)\right)^{\prime}\right)^{2}t^{2\nu+1} dt\leqslant\int_{0}^{s}J_{\nu}(t)^{2}t\,dt\qquad\text{for all}\quad s\in[0,j_{\nu}].\]
Proof.: Integrating by parts we obtain
\[\int_{0}^{s}\left(\left(t^{-\nu}J_{\nu}(t)\right)^{\prime}\right)^{2}t^{2\nu+1 }dt=\left.\left(t^{-\nu}J_{\nu}(t)\right)^{\prime}t^{\nu+1}J_{\nu}(t)\right|_{ 0}^{s}-\int_{0}^{s}t^{-\nu}J_{\nu}(t)\left(\left(t^{-\nu}J_{\nu}(t)\right)^{ \prime}t^{2\nu+1}\right)^{\prime}dt. \tag{2.1}\]
Further,
\[\left(t^{-\nu}J_{\nu}(t)\right)^{\prime}t^{2\nu+1}=t^{\nu+1}J_{\nu}^{\prime}( t)-\nu t^{\nu}J_{\nu}(t),\]
\[\left(\left(t^{-\nu}J_{\nu}(t)\right)^{\prime}t^{2\nu+1}\right)^{\prime}=t^{ \nu+1}J_{\nu}^{\prime\prime}(t)+t^{\nu}J_{\nu}^{\prime}(t)-\nu^{2}t^{\nu-1}J_ {\nu}(t)=-t^{\nu+1}J_{\nu}(t) \tag{2.2}\]
due to the Bessel equation. We have \(J_{\nu}(t)\geqslant 0\) on \([0,j_{\nu}]\), therefore the right hand side of (2.2) is non-positive, and the function \(\left(t^{-\nu}J_{\nu}\right)^{\prime}t^{2\nu+1}\) decreases on \([0,j_{\nu}]\). On the other hand,
\[\left(t^{-\nu}J_{\nu}\right)^{\prime}t^{2\nu+1}\Big{|}_{t=0}=0,\]
so
\[\left(t^{-\nu}J_{\nu}\right)^{\prime}t^{2\nu+1}<0\quad\text{for}\quad 0<t \leqslant j_{\nu}.\]
This means that the first term in the right hand side of (2.1) is non-positive. Now, (2.1) and (2.2) imply
\[\int_{0}^{s}\left(\left(t^{-\nu}J_{\nu}(t)\right)^{\prime}\right)^{2}t^{2\nu+ 1}dt\leqslant-\int_{0}^{s}t^{-\nu}J_{\nu}(t)\left(\left(t^{-\nu}J_{\nu}(t) \right)^{\prime}t^{2\nu+1}\right)^{\prime}dt=\int_{0}^{s}J_{\nu}(t)^{2}t\,dt.\qed\]
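As an informal numerical check of Lemma 2.1 (not used in the proof), one can compare the two integrals at \(s=j_{\nu}\) by quadrature for a few integer orders:

```python
# Numerical spot check of Lemma 2.1 at s = j_nu for integer nu (illustration only).
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jvp, jn_zeros

def lhs_integrand(t, nu):
    # ((t^{-nu} J_nu(t))')^2 * t^{2 nu + 1}, with the derivative expanded by the product rule
    deriv = t ** (-nu) * jvp(nu, t) - nu * t ** (-nu - 1) * jv(nu, t)
    return deriv ** 2 * t ** (2 * nu + 1)

for nu in (0, 1, 2, 3):
    s = jn_zeros(nu, 1)[0]                   # first zero j_nu
    lhs, _ = quad(lhs_integrand, 0, s, args=(nu,))
    rhs, _ = quad(lambda t: jv(nu, t) ** 2 * t, 0, s)
    print(f"nu = {nu}: {lhs:.6f} <= {rhs:.6f} -> {lhs <= rhs}")
```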
**Lemma 2.2**.: _Let \(r>0\). Introduce notations_
\[\gamma_{1}=(2r;0),\quad\gamma_{2}=(r;\sqrt{3}\,r),\qquad\Gamma=\left\{\gamma =n_{1}\gamma_{1}+n_{2}\gamma_{2}\right\}_{n_{1},n_{2}\in\mathbb{Z}}, \tag{2.3}\]
\(\Gamma\) _is a regular triangular lattice in the plane. If \(\gamma,\tilde{\gamma}\in\Gamma\), \(\gamma\neq\tilde{\gamma}\), then \(\left|\gamma-\tilde{\gamma}\right|\geqslant 2r\)._
This Lemma is obvious.
**Lemma 2.3**.: _Let \(\Omega\subset\mathbb{R}^{2}\) be a measurable set of finite measure, \(\left|\Omega\right|<\infty\). Let \(r>0\). Then there is a vector \(b\in\mathbb{R}^{2}\) such that_
\[\#\left(\Omega\cap(\Gamma+b)\right)\geqslant\frac{\left|\Omega\right|}{2\sqrt {3}\,r^{2}},\]
_where the lattice \(\Gamma\) is defined in (2.3), and \(\Gamma+b=\left\{\gamma+b\right\}_{\gamma\in\Gamma}\) is the shifted lattice._
Proof.: Denote by \(\mathcal{O}\) a cell of \(\Gamma\),
\[\mathcal{O}=\left\{t_{1}\gamma_{1}+t_{2}\gamma_{2}\right\}_{t_{1},t_{2}\in[0, 1)}.\]
Clearly, \(\left|\mathcal{O}\right|=2\sqrt{3}\,r^{2}\). We have
\[\#\left(\Omega\cap(\Gamma+b)\right)=\sum_{\gamma\in\Gamma}\chi_{\Omega}(\gamma +b),\]
where \(\chi_{\Omega}\) is the characteristic function of \(\Omega\). So,
\[\int_{\mathcal{O}}\#\left(\Omega\cap(\Gamma+b)\right)db=\int_{\mathcal{O}}\sum_{ \gamma\in\Gamma}\chi_{\Omega}(\gamma+b)\,db=\int_{\mathbb{R}^{2}}\chi_{\Omega}( y)\,dy=|\Omega|.\]
Therefore, there is a vector \(b\in\mathcal{O}\) such that
\[\#\left(\Omega\cap(\Gamma+b)\right)\geqslant\frac{|\Omega|}{|\mathcal{O}|}. \qquad\blacksquare\]
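The averaging argument can also be illustrated numerically; in the toy script below \(\Omega\) is a unit disk, and the best random shift indeed captures at least \(|\Omega|/(2\sqrt{3}\,r^{2})\) lattice points (an illustration only, not part of the proof).

```python
# Toy illustration of Lemma 2.3: random shifts of the triangular lattice with
# spacing 2r; the best shift captures at least |Omega| / (2 sqrt(3) r^2) points.
import numpy as np

rng = np.random.default_rng(4)
r = 0.2
g1, g2 = np.array([2.0 * r, 0.0]), np.array([r, np.sqrt(3.0) * r])

def count_points(shift, radius=1.0, n_max=20):
    count = 0
    for n1 in range(-n_max, n_max + 1):
        for n2 in range(-n_max, n_max + 1):
            p = n1 * g1 + n2 * g2 + shift
            if p @ p < radius ** 2:          # point lies inside the disk Omega
                count += 1
    return count

bound = np.pi / (2.0 * np.sqrt(3.0) * r ** 2)      # |Omega| / (2 sqrt(3) r^2)
counts = [count_points(rng.uniform(0, 1, 2) @ np.array([g1, g2])) for _ in range(200)]
print(f"bound {bound:.1f}, best shift {max(counts)}, mean over shifts {np.mean(counts):.1f}")
```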
## 3 Proof of Theorem 1.1
In the recent paper K. Funano proved the following inequality.
**Theorem 3.1** ([3], Lemma 3.1).: _Let \(\Omega\subset\mathbb{R}^{d}\) be a convex bounded domain. Let \(r>0\), \(l\in\mathbb{N}\). Assume that there are points \(x_{1},\ldots,x_{l}\in\Omega\) such that_
\[|x_{j}-x_{k}|\geqslant 2r\quad\text{if}\quad j\neq k.\]
_Then the \(l\)-th eigenvalue of the Neumann problem in \(\Omega\) satisfies the estimate_
\[\mu_{l}\leqslant c_{0}\,d^{2}\,r^{-2},\]
_where \(c_{0}\) is an absolute constant._
We refine this inequality.
**Theorem 3.2**.: _Under the assumptions of Theorem 3.1 we have_
\[\mu_{l}\leqslant j_{\frac{d}{2}-1}^{2}\,r^{-2}.\]
Proof.: Introduce the function
\[F(\rho)=\rho^{1-d/2}J_{\frac{d}{2}-1}\left(\frac{\rho\,j_{\frac{d}{2}-1}}{r} \right),\qquad\rho>0,\]
and define
\[f_{k}(x)=\begin{cases}F\left(|x-x_{k}|\right),&\text{if }|x-x_{k}|<r,\\ 0,&\text{if }|x-x_{k}|\geqslant r,\end{cases}\qquad k=1,\ldots,l.\]
Clearly, \(f_{k}\in W^{1}_{2}(\Omega)\) and
\[\nabla f_{k}(x)=\begin{cases}F^{\prime}\left(|x-x_{k}|\right)\cdot\frac{x-x_{ k}}{|x-x_{k}|},&\text{if }|x-x_{k}|<r,\\ 0,&\text{if }|x-x_{k}|\geqslant r.\end{cases}\]
The intersection of the convex domain \(\Omega\) with a ball is also convex. It can be described in spherical coordinates as
\[B_{r}(x_{k})\cap\Omega=\left\{x=x_{k}+(\rho;\omega):\omega\in S^{d-1},0 \leqslant\rho<R_{k}(\omega)\right\},\]
where \(S^{d-1}\) denotes the unit sphere in \(\mathbb{R}^{d}\), and \(R_{k}\) is a continuous function on \(S^{d-1}\),
\[0<R_{k}(\omega)\leqslant r\qquad\forall\ \omega.\]
We have
\[\begin{split}&\int_{\Omega}|f_{k}(x)|^{2}dx=\int_{S^{d-1}}dS( \omega)\int_{0}^{R_{k}(\omega)}F(\rho)^{2}\rho^{d-1}d\rho\\ &\qquad\qquad=\int_{S^{d-1}}dS(\omega)\int_{0}^{R_{k}(\omega)}J_{ \frac{d}{2}-1}\left(\frac{\rho j_{\frac{d}{2}-1}}{r}\right)^{2}\rho\,d\rho\\ &\qquad=\left(\frac{r}{j_{\frac{d}{2}-1}}\right)^{2}\int_{S^{d-1} }dS(\omega)\int_{0}^{R_{k}(\omega)\,j_{\frac{d}{2}-1}r^{-1}}J_{\frac{d}{2}-1}( t)^{2}\,t\,dt,\end{split} \tag{3.1}\]
where we made the change of variables
\[\rho=\frac{r\,t}{j_{\frac{d}{2}-1}}. \tag{3.2}\]
On the other hand,
\[\begin{split}&\int_{\Omega}|\nabla f_{k}(x)|^{2}dx=\int_{S^{d-1}} dS(\omega)\int_{0}^{R_{k}(\omega)}F^{\prime}(\rho)^{2}\rho^{d-1}d\rho\\ &=\int_{S^{d-1}}dS(\omega)\int_{0}^{R_{k}(\omega)}\left(\frac{d} {d\rho}\left(\rho^{1-d/2}J_{\frac{d}{2}-1}\left(\frac{\rho j_{\frac{d}{2}-1}} {r}\right)\right)\right)^{2}\rho^{d-1}\,d\rho\\ &\qquad=\int_{S^{d-1}}dS(\omega)\int_{0}^{R_{k}(\omega)\,j_{ \frac{d}{2}-1}r^{-1}}\left(\frac{d}{dt}\left(t^{1-d/2}J_{\frac{d}{2}-1}(t) \right)\right)^{2}t^{d-1}\,dt,\end{split} \tag{3.3}\]
where we made the same change (3.2). Lemma 2.1 with \(\nu=\frac{d}{2}-1\) yields
\[\int_{0}^{R_{k}(\omega)\,j_{\frac{d}{2}-1}r^{-1}}\left(\frac{d}{dt}\left(t^{1 -d/2}J_{\frac{d}{2}-1}(t)\right)\right)^{2}t^{d-1}\,dt\leqslant\int_{0}^{R_{ k}(\omega)\,j_{\frac{d}{2}-1}r^{-1}}J_{\frac{d}{2}-1}(t)^{2}\,t\,dt, \tag{3.4}\]
as \(R_{k}(\omega)\leqslant r\). Now, (3.1), (3.3) and (3.4) imply the inequality
\[\int_{\Omega}|\nabla f_{k}(x)|^{2}dx\leqslant\left(\frac{j_{\frac{d}{2}-1}}{r }\right)^{2}\int_{\Omega}|f_{k}(x)|^{2}dx.\]
Introduce the space of linear combinations of \(f_{k}\)
\[\mathcal{L}:=\left\{f(x)=\sum_{k=1}^{l}c_{k}f_{k}(x)\right\}_{c_{k}\in\mathbb{ C}}.\]
By construction,
\[\operatorname{mes}\left(\operatorname{supp}f_{j}\cap\operatorname{supp}f_{k} \right)=0\qquad\text{if}\quad j\neq k.\]
Therefore,
\[\int_{\Omega}|\nabla f(x)|^{2}dx\leqslant\left(\frac{j_{\frac{d}{2}-1}}{r} \right)^{2}\int_{\Omega}|f(x)|^{2}dx\qquad\forall f\in\mathcal{L}.\]
As \(\dim\mathcal{L}=l\) the claim follows.
_Proof of Theorem 1.1._ Let \(\lambda>0\). Put \(r=\frac{j_{0}}{\sqrt{\lambda}}\). By virtue of Lemma 2.2 and Lemma 2.3 one can pick out points \(x_{1},\ldots,x_{l}\) in \(\Omega\) such that
\[|x_{j}-x_{k}|\geqslant 2r\quad\text{if}\quad j\neq k,\qquad\text{and}\quad l \geqslant\frac{|\Omega|}{2\sqrt{3}\,r^{2}}.\]
Theorem 3.2 implies now \(\mu_{l}\leqslant j_{0}^{2}\,r^{-2}\), and therefore,
\[N_{\mathcal{N}}(\Omega,\lambda)\geqslant l\geqslant\frac{|\Omega|}{2\sqrt{3} \,r^{2}}=\frac{|\Omega|\lambda}{2\sqrt{3}\,j_{0}^{2}}.\qquad\blacksquare\]
**Remark 3.3**.: Let \(\Gamma_{d}\) be a lattice in \(\mathbb{R}^{d}\) such that \(|\gamma-\tilde{\gamma}|\geqslant 2\) for all \(\gamma,\tilde{\gamma}\in\Gamma_{d}\), \(\gamma\neq\tilde{\gamma}\). Denote by \(\mathcal{O}_{d}\) a cell of \(\Gamma_{d}\). In the same manner as above we obtain that for any convex bounded domain \(\Omega\subset\mathbb{R}^{d}\)
\[N_{\mathcal{N}}(\Omega,\lambda)\geqslant\frac{|\Omega|\,\lambda^{d/2}}{| \mathcal{O}_{d}|(j_{\frac{d}{2}-1})^{d}}. \tag{3.5}\]
On the other hand it is easy to see that
\[\frac{|B_{1}|}{|\mathcal{O}_{d}|}\leqslant\delta_{d} \tag{3.6}\]
where \(\delta_{d}\) is the optimal sphere packing density in \(\mathbb{R}^{d}\). The exact value of \(\delta_{d}\) is known today for \(d=1,2,3,8\) and \(24\) only. The value of \(\delta_{d}\) in other dimensions is a famous open question, see for example [1] and references therein. The estimate (3.6) and the known estimate [6]
\[\delta_{d}\leqslant\frac{(j_{\frac{d}{2}})^{d}}{2^{2d}\,\Gamma\left(\frac{d+2 }{2}\right)^{2}}\]
imply that the coefficient in the right hand side of (3.5) satisfies
\[\frac{1}{|\mathcal{O}_{d}|(j_{\frac{d}{2}-1})^{d}}\leqslant\frac{\delta_{d}}{ |B_{1}|(j_{\frac{d}{2}-1})^{d}}<\frac{2|B_{1}|}{(d+2)(2\pi)^{d}}\quad\text{if} \quad d\geqslant 3.\]
So, the bound (3.5) does not improve the Kroger bound (1.2) for \(d\geqslant 3\).
|
2310.17333 | Arabic Fine-Grained Entity Recognition | Traditional NER systems are typically trained to recognize coarse-grained
entities, and less attention is given to classifying entities into a hierarchy
of fine-grained lower-level subtypes. This article aims to advance Arabic NER
with fine-grained entities. We chose to extend Wojood (an open-source Nested
Arabic Named Entity Corpus) with subtypes. In particular, four main entity
types in Wojood, geopolitical entity (GPE), location (LOC), organization (ORG),
and facility (FAC), are extended with 31 subtypes. To do this, we first revised
Wojood's annotations of GPE, LOC, ORG, and FAC to be compatible with the LDC's
ACE guidelines, which yielded 5,614 changes. Second, all mentions of GPE, LOC,
ORG, and FAC (~44K) in Wojood are manually annotated with the LDC's ACE
sub-types. We refer to this extended version of Wojood as WojoodFine. To
evaluate our annotations, we measured the inter-annotator agreement (IAA) using
both Cohen's Kappa and F1 score, resulting in 0.9861 and 0.9889, respectively.
To compute the baselines of WojoodFine, we fine-tune three pre-trained Arabic
BERT encoders in three settings: flat NER, nested NER and nested NER with
subtypes and achieved F1 score of 0.920, 0.866, and 0.885, respectively. Our
corpus and models are open-source and available at
https://sina.birzeit.edu/wojood/. | Haneen Liqreina, Mustafa Jarrar, Mohammed Khalilia, Ahmed Oumar El-Shangiti, Muhammad Abdul-Mageed | 2023-10-26T11:59:45Z | http://arxiv.org/abs/2310.17333v2 | # Arabic Fine-Grained Entity Recognition
###### Abstract
Traditional NER systems are typically trained to recognize coarse-grained entities, and less attention is given to classifying entities into a hierarchy of fine-grained lower-level subtypes. This article aims to advance Arabic NER with fine-grained entities. We chose to extend Wojood (an open-source Nested Arabic Named Entity Corpus) with subtypes. In particular, four main entity types in Wojood, geopolitical entity (GPE), location (LOC), organization (ORG), and facility (FAC), are extended with \(31\) subtypes. To do this, we first revised Wojood's annotations of GPE, LOC, ORG, and FAC to be compatible with the LDC's ACE guidelines, which yielded \(5,614\) changes. Second, all mentions of GPE, LOC, ORG, and FAC (\(\sim 44K\)) in Wojood are manually annotated with the LDC's ACE subtypes. We refer to this extended version of Wojood as \(\textit{Wojood}_{Fine}\). To evaluate our annotations, we measured the inter-annotator agreement (IAA) using both Cohen's Kappa and \(F_{1}\) score, resulting in \(0.9861\) and \(0.9889\), respectively. To compute the baselines of \(\textit{Wojood}_{Fine}\), we fine-tune three pre-trained Arabic BERT encoders in three settings: flat NER, nested NER and nested NER with subtypes and achieved \(F_{1}\) score of \(0.920\), \(0.866\), and \(0.885\), respectively. Our corpus and models are open-source and available at [https://sina.birzeit.edu/wojood/](https://sina.birzeit.edu/wojood/).
## 1 Introduction
Named Entity Recognition (NER) is the task of identifying and classifying named entities in unstructured text into predefined categories such as people, organizations, locations, disease names, drug mentions, among others (Li et al., 2020). NER is widely used in various applications such as information extraction and retrieval (Jiang et al., 2016), question answering (Liu et al., 2020), word sense disambiguation (Jarrar et al., 2023; Al-Haji and Jarrar, 2021), machine translation (Jain et al., 2019; Khurana et al., 2022), automatic summarization (Summerscales et al., 2011; Khurana et al., 2022), interoperability (Jarrar et al., 2011) and cybersecurity (Tikhomirov et al., 2020).
Traditional NER systems are typically trained to recognize coarse and high-level categories of entities, such as person (pers), location (loc), geopolitical entity (gpe), or organization (org). However, less attention is given to classifying entities into a hierarchy of fine-grained lower-level subtypes (Zhu et al., 2020; Desmet and Hoste, 2013). For example, locations (loc) like Asia and Red Sea could be further classified into Continent and Water-Body, respectively. Similarly, organizations like Amazon, Cairo University, and Sphinx Cure can be classified into commercial, educational, and health entities, respectively. Belgium, Beirut, and Brooklyn can be classified into Country, Town, and Neighborhood instead of classifying them all as gpe. The importance of classifying named entities into subtypes is increasing in many application areas, especially in question answering, relation extraction, and ontology learning (Lee et al., 2006).
As will be discussed in the following sub-section, the number of NER datasets that support subtypes is limited, particularly for the Arabic language. The only available Arabic NER corpus with subtypes is the LDC's ACE2005 (Walker et al., 2005). However, this corpus is expensive. In addition, ACE2005 was collected two decades ago and hence may not be representative of the current state of Arabic language use. This is especially the case since language models are known to be sensitive to temporal and domain shifts (see section 5).
To avoid starting from scratch, we chose to extend upon a previously published and open-source Arabic NER corpus known as 'Wojood' (Jarrar et al., 2022). Wojood consists of \(550K\) tokens manually annotated with \(21\) entity types. In particular, we manually classify four main entity types in Wojood (gpe, loc, org, and FAC) with \(31\) new fine-grained subtypes. This extension is not straightforward, as we had to make \(5,614\) changes to the original annotations of these four entity types to align them with the LDC guidelines before extending them with subtypes. The total number of tokens that are annotated with the \(31\) subtypes is \(47.6\)K.
Our extended version of Wojood is hereafter called \(\mathit{Wojood}_{Fine}\). We measure inter-annotator agreement (IAA) using both Cohen's Kappa and \(F_{1}\), resulting in \(0.9861\) and \(0.9889\), respectively.
To compute the baselines for \(\mathit{Wojood}_{Fine}\), we fine-tune three pre-trained Arabic BERT encoders across three settings: (i) flat, (ii) nested without subtypes, and (iii) nested with subtypes, using multi-task learning. Our models achieve \(0.920\), \(0.866\), and \(0.885\) in \(F_{1}\), respectively.
The remainder of the paper is organized as follows: Section 2 overviews related work, and Section 3 presents the \(\mathit{Wojood}_{Fine}\) corpus, the annotation process, and the inter-annotator-agreement measures. In Section 4, we present the experiments and the fine-tuned NER models. In Section 5, we present error analysis and out-of-domain performance, and we conclude in Section 6.
## 2 Related Work
Most of the NER research is focused on coarse-grained named entities and typically targets a limited number of categories. For example, Chinchor and Robinson (1997) proposed three classes: person, location and organization. The Miscellaneous class was added to CoNLL-2003 Sang and De Meulder (2003). Four additional classes (geo-political entities, weapons, vehicles, and facilities) were also introduced in the ACE project Walker et al. (2005). The OntoNotes corpus is more expressive as it covers \(18\) types of entities Weischedel et al. (2013).
Coarse-grained NER is a good starting point for named entity recognition, but it is not sufficient for tasks that require a more detailed understanding of named entities Ling and Weld (2012); Hamdi et al. (2021).
Substantial research has been undertaken to identify historical entities. For instance, the HIPE shared task Ehrmann et al. (2020) focused on extracting named entities from historical newspapers written in French, German, and English. One of its subtasks was the recognition and classification of mentions according to finer-grained entity types. The corpus used in the shared task consists of tokens annotated with five main entity types and \(12\) subtypes, following the IMPRESSO guidelines Ehrmann et al. (2020). A similar corpus, called NewsEye, was collected from historical newspapers in four languages: French, German, Finnish, and Swedish Hamdi et al. (2021). The corpus is annotated with four main types: per, loc, org, and prod. The loc entities were further classified into five subtypes, and the org entities into two subtypes. Desmet and Hoste (2013) proposed a one million fine-grained NER corpus for Dutch, which was annotated using six main entity types and \(27\) subtypes (\(10\) subtypes for pers, three for org, nine for loc, three for prod, and two for events).
Zhu et al. (2020) noted that NER models cannot effectively process fine-grained labels with more than \(100\) types. Thus, instead of having many fine-grained entities at the top level, they propose a tagging strategy in which they use \(15\) main entity types and \(131\) subtypes. Additionally, Ling and Weld (2012) proposed a fine-grained set of \(112\) tags and formulated the tagging problem as multi-class multi-label classification.
A recent shared task was organized by Fetahu et al. (2023) at SemEval-2023 Task 2, called MultiCoNER 2 (Fine-grained Multilingual Named Entity Recognition). A multilingual corpus (MULTICONER V2) was extracted from localized versions of Wikipedia covering \(12\) languages - Arabic is not included. The corpus was annotated with a NER taxonomy consisting of \(6\) coarse-grained types and \(33\) fine-grained subtypes (seven subtypes for Person, seven for Group, five for PROD, five for Creative Work, and five for Medical). Most participating systems outperformed the baselines by about \(35\%\)\(F_{1}\).
There are a few Arabic NER corpora Darwish et al. (2021), but all of them are coarse-grained. The ANERCorp corpus covers four entity types Benajiba et al. (2007), CANERCorpus covers \(14\) religion-specific types Salah and Zakaria (2018), and Ontonotes covers \(18\) entities Weischedel et al. (2013). The multilingual ACE2005 corpus Walker et al. (2005), which includes Arabic, covers five coarse-grained entities and \(35\) fine-grained subtypes (3 subtypes for pers, \(11\) for gpe, seven for loc, nine for org, and five for fac). Nevertheless, the ACE2005 corpus is costly and covers only one domain (media articles) that was collected \(20\) years ago. The most recent Arabic NER corpus is Wojood Jarrar et al. (2022), which covers \(21\) nested entity types covering multiple domains. However, Wojood is a coarse-grained corpus and does not support entity subtypes.
To build on previous research on Arabic NER, we chose to extend the Wojood corpus with finer-grained subtypes. To ensure that our Wojood extension is compatible with other corpora, we chose to follow the ACE annotation guidelines.
## 3 _Wojood\({}_{Fine}\)_ Corpus
_Wojood\({}_{Fine}\)_ expands the annotation of the Wojood corpus [14] by adding fine-grained annotations for named-entity subtypes. Wojood is a NER corpus with \(550\)K tokens annotated manually using \(21\) entity types. About \(80\)% of Wojood was collected from MSA articles, while \(12\)% was collected from social media in Palestinian and Lebanese dialects [14, 15, 16]. One novelty of Wojood is its nested named entities, but some entity types can be ambiguous, which can affect downstream tasks such as information retrieval. For instance, the entity type "Organization" may refer to a government body, an educational institution, or a hospital, to name a few. That is why _Wojood\({}_{Fine}\)_ adds subtypes to four entity types: Geopolitical Entity (GPE), Organization (ORG), Location (LOC), and Facility (FAC). Table 1 shows the overall counts of these four main entity types in Wojood and _Wojood\({}_{Fine}\)_. Note that creating _Wojood\({}_{Fine}\)_ was not a straightforward process, as it required revision of the Wojood annotation guidelines, which we discuss later in this section. As discussed in [14], Wojood is available as a RESTful web service; the data and the source code are also made publicly available [14, 15, 16].
### subtypes
All GPE, ORG, LOC and FAC tagged tokens in the _Wojood\({}_{Fine}\)_ corpus were annotated with the appropriate subtype based on the context, adding \(31\) entity subtypes to _Wojood\({}_{Fine}\)_. Throughout our annotation process, the LDC's ACE 2008 annotation guidelines for Arabic Entities V7.4.2 served as the basis for defining our annotation guidelines. Nevertheless, we added new tags (NEIGHBORHOOD, CAMP, SPORT, and ORG_FAC) to cover additional cases. Table 2 lists the frequency of each subtype in _Wojood\({}_{Fine}\)_. Tables 7 and 8 in Appendix A present a brief explanation and examples of each subtype.
### _Wojood\({}_{Fine}\)_ Annotation Guideline
We followed ACE annotation guidelines to annotate the subtypes in _Wojood\({}_{Fine}\)_. However, since _Wojood\({}_{Fine}\)_ is based on Wojood, we found a discrepancy between Wojood and ACE guidelines. To address this issue in _Wojood\({}_{Fine}\)_, we reviewed the annotations related to GPE, ORG, LOC and FAC to ensure compatibility with ACE guidelines. In this section, we highlight a number of the challenging annotation decisions we made in _Wojood\({}_{Fine}\)_.
**Country's governing body**: in Wojood, country mentions were annotated as GPE, but if the intended meaning of the country is its governing body, then the mention was annotated as ORG. However, in _Wojood\({}_{Fine}\)_, all mentions that refer to the country's governing body are annotated as GPE with the subtype GPE_ORG. Figure 1 provides two examples illustrating the difference between the Wojood and _Wojood\({}_{Fine}\)_ guidelines. According to Wojood, Nigeria is tagged once as GPE and once as ORG, while in _Wojood\({}_{Fine}\)_ both mentions are GPE at the first level, and at the second level one is tagged as COUNTRY and the other as GPE_ORG.
| **Tag** | **Wojood** | _Wojood\({}_{Fine}\)_ |
| --- | --- | --- |
| GPE | 21,780 | 23,085 |
| ORG | 18,785 | 18,747 |
| LOC | 917 | 1,441 |
| FAC | 1,215 | 1,121 |
| **Total** | **42,697** | **44,394** |

Table 1: Frequency of the four entity types in Wojood and _Wojood\({}_{Fine}\)_.
| **Tag** | **Sub-type Tag** | **Count** |
| --- | --- | --- |
| GPE | COUNTRY | 8,205 |
| GPE | STATE-OR-PROVINCE | 1,890 |
| GPE | TOWN | 12,014 |
| GPE | NEIGHBORHOOD | 119 |
| GPE | CAMP | 838 |
| GPE | GPE_ORG | 1,530 |
| GPE | SPORT | 8 |
| LOC | CONTINENT | 214 |
| LOC | CLUSTER | 303 |
| LOC | ADDRESS | 0 |
| LOC | BOUNDARY | 22 |
| LOC | CELESTIAL | 4 |
| LOC | WATER-BODY | 123 |
| LOC | LAND-REGION-NATURAL | 259 |
| LOC | REGION-GENERAL | 383 |
| LOC | REGION-INTERNATIONAL | 110 |
| ORG | GOV | 8,325 |
| ORG | COM | 611 |
| ORG | EDU | 1,159 |
| ORG | ENT | 3 |
| ORG | NONGOV | 5,779 |
| ORG | MED | 4,111 |
| ORG | REL | 96 |
| ORG | SCI | 146 |
| ORG | SPO | 21 |
| ORG | ORG_FAC | 114 |
| FAC | PLANT | 1 |
| FAC | AIRPORT | 6 |
| FAC | BUILDING-OR-GROUNDS | 1,017 |
| FAC | SUBAREA-FACILITY | 134 |
| FAC | PATH | 76 |
| **Total** | | **47,621** |

Table 2: Counts of each subtype entity in the corpus.
**Facility vs. organization**: Wojood annotates buildings as fac, but if the intended meaning in the context is an organization, then the mention is annotated as org. In _Wojood\({}_{Fine}\)_, all mentions that refer to the facility's organization or social entity are annotated as org with the subtype org_fac. Figure 2 illustrates an example of this case.
Refer to Table 3 for the IAA for each subtype.
One can clearly observe that \(\kappa\) is high, for multiple reasons. First, we revised the annotations of the main four entity types (GPE, ORG, LOC and FAC) to better match the ACE guidelines. Second, once we verified the top-level entity types, we started annotating the subtypes. Since the types and subtypes are hierarchically organized, this constrains the number of possible subtypes per token, leading to high IAA. Third, the NER expert gave continuous feedback to the annotator, and challenging entity mentions were discussed with the wider team.
As mentioned above, we calculated the IAA using both Cohen's Kappa and \(F_{1}\) for the subtypes of the GPE, ORG, LOC and FAC tags. In what follows, we explain Cohen's Kappa and \(F_{1}\). Note that \(F_{1}\) is not normally used for IAA, but it provides an additional validation of the annotation quality.
#### 3.4.1 Cohen's Kappa
To calculate Kappa for a given tag, we count the number of agreements and disagreements between annotators for a given subtype (such as GPE_COUNTRY). At the token level, agreements are counted as pairwise matches; thus, a disagreement happens when a token is annotated with one subtype by one annotator (e.g., GPE_COUNTRY) and with a different subtype (e.g., GPE_STATE-OR-PROVINCE) by the other annotator. As such, Kappa is calculated by equation 1 (Eugenio and Glass, 2004).
\[\kappa=\frac{P_{o}-P_{e}}{1-P_{e}} \tag{1}\]
where \(P_{o}\) represents the observed agreement between annotators and \(P_{e}\) represents the expected agreement, which is given by equation 2.
\[P_{e}=\frac{1}{N^{2}}\sum_{T}n_{T1}\times n_{T2} \tag{2}\]
where \(n_{Ti}\) is the number of tokens labeled with tag \(T\) by the \(i\)th annotator and \(N\) is the total number of annotated tokens.
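As a concrete illustration of equations 1 and 2, the following sketch computes token-level Cohen's Kappa from two annotators' subtype labels. This is an illustrative implementation, not the authors' code, and the example label lists are invented.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Token-level Cohen's Kappa between two annotators (equations 1 and 2)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement P_o: fraction of tokens given identical subtype labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement P_e: sum over tags T of n_T1 * n_T2, divided by N^2.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[t] * counts_b[t] for t in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example with invented GPE subtype annotations from two annotators.
ann_1 = ["COUNTRY", "TOWN", "TOWN", "GPE_ORG", "COUNTRY"]
ann_2 = ["COUNTRY", "TOWN", "STATE-OR-PROVINCE", "GPE_ORG", "COUNTRY"]
print(cohens_kappa(ann_1, ann_2))  # ~0.72 for this toy example
```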
#### 3.4.2 F-Measure
For a given tag \(T\), the \(F_{1}\) is calculated according to equation 3. We only counted the tokens that at least one of the annotators had labeled with \(T\), and then conducted a pair-wise comparison. \(TP\) represents the true positives, which is the number of agreements between annotators (i.e., the number of tokens labeled GPE_TOWN by both annotators). If the first annotator labels a token with \(T\) and the second does not, it is counted as a false negative (\(FN\)); if the second labels it with \(T\) and the first does not, it is counted as a false positive (\(FP\)), so the total disagreement is \(FN+FP\).
\[F_{1}=\frac{2TP}{2TP+FN+FP} \tag{3}\]
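The per-tag agreement \(F_{1}\) of equation 3 can be sketched in the same style. Again, this is only an illustration of the computation described above, with invented example labels, and it treats the first annotator as the reference.

```python
def pairwise_f1(labels_a, labels_b, tag):
    """Agreement F1 for a single tag T (equation 3), with annotator A as reference."""
    pairs = list(zip(labels_a, labels_b))
    tp = sum(a == tag and b == tag for a, b in pairs)
    fn = sum(a == tag and b != tag for a, b in pairs)  # A labeled T, B did not
    fp = sum(a != tag and b == tag for a, b in pairs)  # B labeled T, A did not
    return 2 * tp / (2 * tp + fn + fp) if (tp + fn + fp) else None

ann_1 = ["COUNTRY", "TOWN", "TOWN", "GPE_ORG", "COUNTRY"]
ann_2 = ["COUNTRY", "TOWN", "STATE-OR-PROVINCE", "GPE_ORG", "COUNTRY"]
print(pairwise_f1(ann_1, ann_2, "TOWN"))  # 2/3 for this toy example
```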
## 4 Fine-Grained NER Modeling
### Approach
For modeling, we have three tasks, all performed on _Wojood\({}_{Fine}\)_: **(1)**_Flat NER_, where for each token we predict a single label from a set of \(21\) labels, **(2)**_Nested NER_, where we predict multiple labels picked from the \(21\) tags (i.e., multi-label classification) for each token, and **(3)**_Nested with Subtypes NER_, which is also a multi-label task, where we ask the model to predict the main entity types and subtypes for each token from \(52\) total labels. We frame this as a multi-task approach
| **Sub-Type Tag** | **Kappa** | **F1-Score** |
| --- | --- | --- |
| COUNTRY | 0.9907 | 0.99 |
| STATE-OR-PROVINCE | 0.9846 | 0.98 |
| TOWN | 0.9983 | 1.00 |
| NEIGHBORHOOD | 1.00 | 1.00 |
| CAMP | 1.00 | 1.00 |
| GPE_ORG | 0.9810 | 0.98 |
| SPORT | 1.00 | 1.00 |
| CONTINENT | 1.00 | 1.00 |
| CLUSTER | 0.9589 | 0.96 |
| ADDRESS | - | - |
| BOUNDARY | 1.00 | 1.00 |
| CELESTIAL | - | - |
| WATER-BODY | 1.00 | 1.00 |
| LAND-REGION-NATURAL | 0.9333 | 0.93 |
| REGION-GENERAL | 0.9589 | 0.96 |
| REGION-INTERNATIONAL | 0.9231 | 0.92 |
| GOV | 0.9760 | 0.98 |
| COM | 1.00 | 1.00 |
| EDU | 0.9807 | 0.98 |
| ENT | - | - |
| NONGOV | 0.9892 | 0.99 |
| MED | 1.00 | 1.00 |
| REL | 0.9630 | 0.96 |
| SCI | 1.00 | 0.10 |
| SPO | 1.00 | 1.00 |
| ORG_FAC | 1.00 | 1.00 |
| PLANT | - | - |
| AIRPORT | - | - |
| BUILDING-OR-GROUNDS | 1.00 | 1.00 |
| SUBAREA-FACILITY | 1.00 | 1.00 |
| PATH | 1.00 | 0.00 |
| **Overall** | **0.9861** | **0.9889** |

Table 3: Overall Kappa and F1-score for each sub-type.
since we are learning both the nested labels _and_ their subtypes jointly. In the multi-task case, each entity type/subtype has its own classification layer; for nested NER and nested NER with subtypes, the model consists of \(21\) and \(52\) classification layers, respectively. Since we use the IOB2 [20] tagging scheme, each linear layer is a multi-class classifier that outputs, through a softmax activation function, a probability distribution over three classes, \(C\in\{I,O,B\}\)[17]. The model is trained with a cross-entropy loss computed for each linear layer separately; the per-layer losses are summed to obtain the final loss. All models are flat in the sense that we do not use any hierarchical architectures. However, future work can consider employing a hierarchical architecture in which nested tokens are learnt first and _then_ their subtypes within the model. For all tasks, we fine-tune three encoder-based models for Arabic language understanding. Namely, we use ARBERTv2 and MARBERTv2 [14], which are improved versions of ARBERT and MARBERT [1], respectively, trained on bigger datasets. The third model is ARABERTv2, an improved version of ARABERT [1]; it is also trained on a bigger dataset, with improved preprocessing. Figure 4 offers a simple visualization of our models' architecture.
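The multi-head design described above can be sketched in PyTorch as follows. This is a simplified illustration rather than the released Wojood code: the encoder checkpoint name is a placeholder, the number of heads corresponds to the nested-with-subtypes setting, and padding/sub-token handling is omitted.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiHeadNER(nn.Module):
    """BERT encoder with one 3-way (I, O, B) classification head per entity type/subtype."""
    def __init__(self, encoder_name="aubmindlab/bert-base-arabertv2", num_heads=52):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.dropout = nn.Dropout(0.1)
        # One linear layer per label; each predicts IOB2 tags {I, O, B} for every token.
        self.heads = nn.ModuleList([nn.Linear(hidden, 3) for _ in range(num_heads)])

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(input_ids=input_ids,
                                     attention_mask=attention_mask).last_hidden_state
        hidden_states = self.dropout(hidden_states)
        # Stacked logits: (num_heads, batch, seq_len, 3)
        return torch.stack([head(hidden_states) for head in self.heads])

def multitask_loss(all_logits, all_labels):
    """Summed per-head cross-entropy; all_labels has shape (num_heads, batch, seq_len)."""
    ce = nn.CrossEntropyLoss()
    return sum(ce(logits.reshape(-1, 3), labels.reshape(-1))
               for logits, labels in zip(all_logits, all_labels))
```

In this sketch, switching between the nested and nested-with-subtypes settings only changes `num_heads` (21 vs. 52).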
### Training Configuration
We split our dataset into three distinct parts: training (Train, \(70\)%), validation (Dev, \(10\)%), and blind testing (Test, \(20\)%). We fine-tune all three models for \(50\) epochs each, with an early-stopping patience of \(5\), as identified on Dev. We use the AdamW optimizer [15], an exponential learning-rate scheduler, and a dropout of \(0.1\). The maximum sequence length is \(512\), the batch size is \(B=8\), and the learning rate is \(\eta=1e^{-5}\). For each model, we report the average of three runs (each with a different seed). We report \(F_{1}\) along with the standard deviation over the three runs, on both Dev and Test, for each model. All models are implemented using PyTorch, Huggingface Transformers, and a custom version of the Wojood open-source code1.
Footnote 1: [https://github.com/SinaLab/ArabicNER](https://github.com/SinaLab/ArabicNER)
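For illustration, a minimal training-loop sketch consistent with the configuration above (AdamW, exponential learning-rate decay, early stopping on Dev) is given below. The decay factor, the `evaluate_f1` helper, the data loaders, and the `multitask_loss` function from the previous sketch are assumptions for this sketch, not the released implementation.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import ExponentialLR

def train(model, train_loader, dev_loader, epochs=50, patience=5, lr=1e-5):
    optimizer = AdamW(model.parameters(), lr=lr)
    scheduler = ExponentialLR(optimizer, gamma=0.9)  # decay factor is an assumption
    best_dev_f1, epochs_without_improvement = 0.0, 0
    for epoch in range(epochs):
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            logits = model(batch["input_ids"], batch["attention_mask"])
            loss = multitask_loss(logits, batch["labels"])  # summed per-head cross-entropy
            loss.backward()
            optimizer.step()
        scheduler.step()
        dev_f1 = evaluate_f1(model, dev_loader)  # assumed helper returning micro-F1 on Dev
        if dev_f1 > best_dev_f1:
            best_dev_f1, epochs_without_improvement = dev_f1, 0
            torch.save(model.state_dict(), "best_model.pt")
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # early stopping on Dev
```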
### Results
We show the results of our three fine-tuned models across each of the three tasks in Table 4. We briefly highlight these results in the following:
**Flat NER.** The three fine-tuned models achieve comparable results on the Flat NER task, with ARBERTv2 scoring slightly better on both the Dev and Test sets. ARBERTv2 achieves an \(F_{1}\) of \(92\%\) on the Test set, while MARBERTv2 and ARABERTv2 achieve \(91.3\%\) and \(90.3\%\), respectively.
**Nested NER.** ARABERTv2 outperforms the other pre-trained models by a small margin on Test, where it scores \(86.6\%\).
**Nested NER with Subtypes.** Here, ARABERTv2 achieves the highest score (\(88.5\%F_{1}\)).
## 5 Analysis
For all tasks, all models almost always converge in the first \(10\) epochs. For all models, there is a positive correlation between performance and the number of training samples. For example, for classes represented well in the training set (e.g.,
| Task | Model | Dev | Test |
| --- | --- | --- | --- |
| Flat | M1 | 0.917 ± 0.00 | **0.920 ± 0.00** |
| Flat | M2 | 0.910 ± 0.00 | 0.913 ± 0.01 |
| Flat | M3 | 0.902 ± 0.00 | 0.907 ± 0.01 |
| Nested | M1 | 0.844 ± 0.02 | 0.845 ± 0.01 |
| Nested | M2 | 0.868 ± 0.02 | 0.861 ± 0.02 |
| Nested | M3 | 0.858 ± 0.02 | **0.866 ± 0.02** |
| Nested+Subtypes | M1 | 0.836 ± 0.01 | 0.837 ± 0.01 |
| Nested+Subtypes | M2 | 0.880 ± 0.01 | 0.883 ± 0.01 |
| Nested+Subtypes | M3 | 0.883 ± 0.00 | **0.885 ± 0.00** |

Table 4: Results of fine-tuned models on the three different tasks. **M1**: ARBERTv2, **M2**: MARBERTv2 and **M3**: ARABERTv2. The results are represented as F1 averaged over 3 runs.
Figure 4: BERT refers to one of the three pre-trained models we use. For the flat task, each softmax produces a single class for each token; for the other tasks, a set of softmax layers produces multiple labels for each token.
country, town and gov), models perform at 0.90 \(F_{1}\) or above.
The inverse is also true, with poor performance on classes such as sport, boundary and celestial. There are also some nuances. For example, the best model struggles with the com subtype even though it scores well on classes with fewer samples, such as cluster. The main reason is that subtypes such as cluster form a closed set of mentions (e.g., "European Union", "African Union") that the model can easily memorize, while com refers to an open-ended set of commercial entities that cannot be enumerated. Figure 5 plots the number of samples in the training data (X-axis) against performance (Y-axis) and clearly shows the general pattern of good performance positively correlating with the number of training samples.
### Out-of-Domain Performance
To assess the generalization capability of our models, we conducted an evaluation on three unseen domains and different time periods. Three corpora were collected, each covering a distinct domain: finance, science, and politics. These corpora were compiled from Aljazeera news articles published in 2023. Manual annotation of the three corpora was performed in accordance with the same annotation guidelines established for \(\mathit{Wojood}_{Fine}\). We apply the three versions of each of our three models trained on \(\mathit{Wojood}_{Fine}\) original training data (described in Section 4.2) on the new domains, for each of the three NER tasks. We present results for this out-of-domain set of experiments in Table 5. We observe that performance drastically drops on all three new domains, for all models on all tasks. This is not surprising, as challenges related to domain generalization are well-known in the literature. Our results here, however, allow us to quantify the extent to which model performance degrades on each of these three new domains. In particular, models do much better on the politics domain than they perform on finance or science. This is the case since our training data are collected from online articles involving news and much less content from financial or scientific sources. Figure 6 shows some examples for new mentions from those domains that have not been seen in \(\mathit{Wojood}_{Fine}\).
### Error Analysis
In order to understand the errors made by the model, we conduct a human error analysis on the errors generated by ARABERTv2 (i.e., the best model on this task) on the first \(2\)K tokens of the Dev set of the Nested NER with Subtypes task. We find that the model's errors can be categorized into six major error classes: **(1)**_wrong tag_, where the model predicts a different tag, **(2)**_no prediction_, where the model does not produce any tag (i.e., predicts O), **(3)**_missing subtype_, where the model succeeds in predicting the parent tag but fails to predict the subtype,
Figure 5: Number of samples vs. \(F_{1}\) in each subtype class on Subtype classification task.
Figure 6: Some mentions from the three new domains that have not previously appeared in _Wojood\({}_{Fine}\)_: (a) a mention from the Politics domain, (b) a mention from the Finance domain, and (c) a mention from the Science domain.
| Task | Model | Finance | Science | Politics |
| --- | --- | --- | --- | --- |
| Flat | M1 | 0.637 | 0.670 | **0.747** |
| Flat | M2 | 0.573 | **0.671** | 0.717 |
| Flat | M3 | **0.643** | 0.670 | 0.723 |
| Nested | M1 | 0.458 | 0.494 | 0.557 |
| Nested | M2 | 0.499 | 0.554 | 0.612 |
| Nested | M3 | **0.563** | **0.583** | **0.629** |
| Nested+Subtypes | M1 | 0.449 | 0.493 | 0.497 |
| Nested+Subtypes | M2 | 0.504 | 0.544 | 0.575 |
| Nested+Subtypes | M3 | **0.553** | **0.545** | **0.593** |

Table 5: Results of fine-tuned models on the three new domains, Finance, Science, and Politics. **M1**: MARBERTv2, **M2**: ARBERTv2 and **M3**: ARABERTv2. The results are represented as F1 averaged over 3 runs.
**(4)** _missing parent tag_, where the model succeeds in predicting the subtype tag but fails to predict the parent tag, **(5)** _MSA vs. DIA confusion_, where the model makes a wrong prediction due to confusion between MSA and dialect, and **(6)** _ordinal vs. cardinal_, where the model assigns cardinal to an ordinal mention. Figure 7 shows the distribution of the different errors present in the Dev set, with _wrong tag_ being the major source of errors, followed by the _no prediction_ error. A further breakdown of the _wrong tag_ error class shows that \(14.3\)% of these errors are due to the usage of dialect words, and a similar proportion are due to nested entities. Table 6 shows an example of each error class.
## 6 Conclusion and Future Work
We presented _Wojood\({}_{Fine}\)_, an extension of the Wojood NER corpus with subtypes for the GPE, LOC, ORG, and FAC entity types. The _Wojood\({}_{Fine}\)_ corpus is the first fine-grained corpus for MSA and dialectal Arabic with nested and subtyped NER. The GPE, ORG, FAC and LOC tags account for more than \(44\)K tokens of the corpus, which were manually annotated with subtype entities. Our inter-annotator agreement (IAA) evaluation of the _Wojood\({}_{Fine}\)_ annotations achieved high levels of agreement among the annotators: 0.9861 Kappa and 0.9889 \(F_{1}\).
We also fine-tuned three pre-trained models, ARBERTv2, MARBERTv2 and ARABERTv2, and tested their performance on different settings of _Wojood\({}_{Fine}\)_. We find that ARABERTv2 achieved the best performance on the Nested and Nested with Subtypes tasks. In the future, we plan to test pre-trained models on nested subtypes with a hierarchical architecture. We also plan to link named entities with concepts in the Arabic Ontology (Jarrar, 2021, 2011) to enable a richer semantic understanding of text. Additionally, we will extend the _Wojood\({}_{Fine}\)_ corpus to include more dialects, especially the Syrian Nabra dialects (Nayouf et al., 2023) as well as the four dialects in the Lisan (Jarrar et al., 2023) corpus.
## Acknowledgment
We would like to thank Sana Ghanem for her invaluable assistance in reviewing and improving the annotations, as well as for her support in the IAA calculations. The authors would also like to thank Tymaa Hammouda for her technical support and
Table 6: An example of each error class, with columns Example, Gold, Predicted, and Error Type.
expertise in the data engineering of the corpus.
### Limitations
A number of considerations related to limitations and ethics are relevant to our work, as follows:
* **Intended Use.** Our models perform named entity recognition at a fine-grained level and can be used for a wide range of information extraction tasks. As we have shown, however, even though the models are trained with data acquired from several domains, their performance drops on data with a distribution different from our training data, such as the finance or science domains. We suggest this be taken into account in any application of the models.
* **Annotation Guidelines and Process.** Some of the entities are difficult to tag. Even though the annotators have done their best and we report high inter-annotator reliability, our guidelines may need to be adapted before being applied to new domains.
## Ethics Statement
We trained our models on publicly available data, thus we do not have any particular concerns about privacy.
|
2303.00777 | Fast and reliable entanglement distribution with quantum repeaters:
principles for improving protocols using reinforcement learning | Future quantum technologies such as quantum communication, quantum sensing,
and distributed quantum computation, will rely on networks of shared
entanglement between spatially separated nodes. In this work, we provide
improved protocols/policies for entanglement distribution along a linear chain
of nodes, both homogeneous and inhomogeneous, that take practical limitations
such as photon losses, non-ideal measurements, and quantum memories with short
coherence times into account. For a wide range of parameters, our policies
improve upon previously known policies, such as the "swap-as-soon-as-possible"
policy, with respect to both the waiting time and the fidelity of the
end-to-end entanglement. This improvement is greatest for the most practically
relevant cases, namely, for short coherence times, high link losses, and highly
asymmetric links. To obtain our results, we model entanglement distribution
using a Markov decision process, and then we use the Q-learning reinforcement
learning (RL) algorithm to discover new policies. These new policies are
characterized by dynamic, state-dependent memory cutoffs and collaboration
between the nodes. In particular, we quantify this collaboration between the
nodes. Our quantifiers tell us how much "global" knowledge of the network every
node has. Finally, our understanding of the performance of large quantum
networks is currently limited by the computational inefficiency of simulating
them using RL or other optimization methods. Thus, in this work, we present a
method for nesting policies in order to obtain policies for large repeater
chains. By nesting our RL-based policies for small repeater chains, we obtain
policies for large repeater chains that improve upon the
swap-as-soon-as-possible policy, and thus we pave the way for a scalable method
for obtaining policies for long-distance entanglement distribution. | Stav Haldar, Pratik J. Barge, Sumeet Khatri, Hwang Lee | 2023-03-01T19:05:32Z | http://arxiv.org/abs/2303.00777v4 | Fast and reliable entanglement distribution with quantum repeaters: principles for improving protocols using reinforcement learning
###### Abstract
Future quantum technologies such as quantum communication, quantum sensing, and distributed quantum computation, will rely on networks of shared entanglement between spatially separated nodes. Distributing entanglement between these nodes, especially over long distances, currently remains a challenge, due to limitations resulting from the fragility of quantum systems, such as photon losses, non-ideal measurements, and quantum memories with short coherence times. In the absence of full-scale fault-tolerant quantum error correction, which can in principle overcome these limitations, we should understand the extent to which we can circumvent these limitations. In this work, we provide improved protocols/policies for entanglement distribution along a linear chain of nodes, both homogeneous and inhomogeneous, that take practical limitations into account. For a wide range of parameters, our policies improve upon previously known policies, such as the "swap-as-soon-as-possible" policy, with respect to both the waiting time and the fidelity of the end-to-end entanglement. This improvement is greatest for the most practically relevant cases, namely, for short coherence times, high link losses, and highly asymmetric links. To obtain our results, we model entanglement distribution using a Markov decision process, and then we use the Q-learning reinforcement learning (RL) algorithm to discover new policies. These new policies are characterized by dynamic, state-dependent memory cutoffs and collaboration between the nodes. In particular, we quantify this collaboration between the nodes. Our quantifiers tell us how much "global" knowledge of the network every node has, specifically, how much knowledge two distant nodes have of each other's states. In addition to the usual figures of merit, these quantifiers add an extra important dimension to the performance analysis and practical implementation of quantum repeaters. Finally, our understanding of the performance of large quantum networks is currently limited by the computational inefficiency of simulating them using RL or other optimization methods. The other main contribution of our work is to address this limitation. We present a method for nesting policies in order to obtain policies for large repeater chains. By nesting our RL-based policies for small repeater chains, we obtain policies for large repeater chains that improve upon the swap-as-soon-as-possible policy, and thus we pave the way for a scalable method for obtaining policies for long-distance entanglement distribution under practical constraints.
## I Introduction
The development of advanced and practical quantum technologies is a hallmark of the _second quantum revolution_[1]. One of the frontiers of this revolution is a global-scale quantum internet [2; 3; 4; 5; 6; 7; 8; 9], which promises the realization of a plethora of quantum technologies, such as distributed quantum computing [10; 11], distributed quantum sensing [12; 13; 14; 15; 5], and quantum key distribution [16; 17; 18; 19]. A critical milestone on the road to a quantum internet is the ability to distribute quantum entanglement over long distances.
One of the main obstacles to achieving long-distance entanglement distribution, and consequently many of the above promises of quantum technologies, is noise, i.e., errors caused by the difficulty of maintaining good control over qubits and their environment. Noise arises in entanglement distribution due to loss in the quantum channels used to send qubits between spatially-separated nodes, and at every node noise arises due to short-lived quantum memories and imperfect entanglement swapping [20; 21]. These sources of noise ultimately limit the rate, the distance, and the quality of entanglement distribution. Quantum error correction [22; 23; 24; 25], which includes entanglement distillation [26; 27; 28], has been understood for almost two decades to be the primary method to combat noise in order to achieve long-distance entanglement distribution, as well as full-scale, fault-tolerant quantum computation more generally. However, building devices with several thousands of qubits and then implementing error correction is currently a major technological and engineering challenge.
Motivated by this current state of affairs, our work is inspired by the following ensuing idea: instead of viewing noise as something that should be fought, let us take noise as a _given_ and then see what protocols we can design, and what performance and potential advantages we can achieve. This idea lies at the intersection of theory and experiment, and our goal is to prove theoretical statements that provide a guide to researchers in the lab on how to design their devices in order to achieve the best performance. We note that this type of question, on making the best use of noisy quantum resources, has already been the focus of recent theoretical and experimental research on noisy intermediate-scale quantum computing [29; 30; 31], particularly in the context of noise resilience [32; 33; 34], quantum error mitigation [35], and quantum advantage [36].
Long-distance entanglement distribution typically proceeds by breaking up a quantum communication channel between a |
2304.05870 | Complexity and simplicity of self-gravitating fluids | We review a recently proposed definition of complexity of the structure of
self--gravitating fluids \cite{ch1}, and the criterion to define the simplest
mode of their evolution. We analyze the origin of these concepts and their
possible applications in the study of gravitation collapse. We start by
considering the static spherically symmetric case, extending next the study to
static axially symmetric case. Afterward we consider the non--static
spherically symmetric case. Two possible modes of evolution are proposed to be
the simplest one. One is the homologous condition; however, as was shown later
on, it may be useful to relax this last condition to enlarge the set of
possible solutions, by adopting the so-called quasi-homologous condition. As
another example of symmetry, we consider fluids endowed with hyperbolical
symmetry. Exact solutions for static fluid distributions satisfying the
condition of minimal complexity are presented. An extension of the complexity
factor to the vacuum solutions of the Einstein equations represented by the
Bondi metric is discussed. A complexity hierarchy is established in this case,
ranging from the Minkowski spacetime (the simplest one) to gravitationally
radiating systems (the most complex). Finally we propose a list of questions
which, we believe, deserve to be treated in the future | L. Herrera | 2023-04-12T13:59:39Z | http://arxiv.org/abs/2304.05870v1 | # Complexity and Simplicity of Self-Gravitating Fluids
###### Abstract
We review a recently proposed definition of complexity of the structure of self-gravitating fluids [55], and the criterion to define the simplest mode of their evolution [56; 61]. We analyze the origin of these concepts and their possible applications in the study of gravitational collapse. We start by considering the static spherically symmetric case, extending next the study to the static axially symmetric case [59]. Afterward we consider the non-static spherically symmetric case. Two possible modes of evolution are proposed to be the simplest one. One is the homologous condition as defined in [56]; however, as was shown later on in [61], it may be useful to relax this last condition to enlarge the set of possible solutions, by adopting the so-called quasi-homologous condition. As another example of symmetry, we consider fluids endowed with hyperbolical symmetry. Exact solutions for static fluid distributions satisfying the condition of minimal complexity are presented [62]. An extension of the complexity factor as defined in [55], to the vacuum solutions of the Einstein equations represented by the Bondi metric is discussed [60]. A complexity hierarchy is established in this case, ranging from the Minkowski spacetime (the simplest one) to gravitationally radiating systems (the most complex). Finally, we propose a list of questions which, we believe, deserve to be treated in the future.
General relativity; Relativistic fluids; Dissipative systems.
## I Introduction
In recent decades many efforts have been made towards a rigorous definition of complexity in different branches of science (see [6; 20; 22; 27; 31; 36; 82; 86; 88; 97; 98; 103] and references therein). However, despite all the work done so far, there is not yet a consensus on a precise definition.
The reason behind such interest stems from the fact that at least at an intuitive level, complexity, no matter how we define it, is a physical concept deeply intertwined with fundamental aspects of the system. In other words, we expect that a suitable definition of complexity of the system could allow us to infer relevant conclusions about its behavior.
Therefore, it is of utmost relevance to provide a precise definition of an observable quantity which allows measurement of such an important property of the system. Thus, when dealing with a situation that intuitively is judged as "complex", we have to be able to quantify this complexity by defining an observable measuring it.
Among the many definitions that have been proposed so far, most of them resort to concepts such as information and entropy (see for example [88; 22]), and are based on the intuitive idea that complexity should, somehow, measure a basic property related to the structural characteristics of the system.
This Chapter is devoted to the review of the concept of complexity introduced in [55] for self-gravitating systems, and its applications under variety of circumstances.
An extension of the definition of complexity based on the work developed by Lopez-Ruiz and collaborators [22; 88] has already been proposed for self-gravitating systems in [23; 28; 29; 104].
However, such a proposal suffers from two drawbacks, the most important of which is the fact that it only involves the energy density, ignoring the role of stresses. This motivated the introduction of a quite different definition which was proposed in [55] for the static spherically symmetric case, and extended further in [56] to the general full dynamic case.
The definition given in [55], although intuitively associated with the very concept of "structure" within the fluid distribution, is not related (at least directly) to information or disequilibrium; rather it stems from the basic assumption that the simplest system (or at least one of them) is represented by the homogeneous fluid with isotropic pressure. Having assumed this conjecture for a vanishing complexity system, the very definition of complexity emerges in the development of the fundamental theory of self-gravitating compact objects, in the context of general relativity.
The variable responsible for measuring complexity, which we call the complexity factor, appears in the orthogonal splitting of the Riemann tensor, and the justification for such a proposition, roughly speaking, is as follows (details are given in the next section).
Once we adopt the point of view that, for a static fluid distribution, the simplest system is represented by a homogeneous (in the energy density), locally isotropic fluid (principal stresses equal), it is reasonable to assign a zero value of the complexity factor to such a distribution.
Next, let us recall the concept of Tolman mass [120], which may be interpreted as the "active" gravitational mass, and may be expressed, for an arbitrary distribution, through its value for the zero-complexity case plus two terms depending on the energy density inhomogeneity and pressure anisotropy, respectively. These latter terms in turn may be expressed through a single scalar function that we call the complexity factor. It obviously vanishes when the fluid is homogeneous in the energy density, and isotropic in pressure, but also may vanish when the two terms containing density inhomogeneity and anisotropic pressure cancel each other out. Thus, as in [88], vanishing complexity may correspond to very different systems.
When dealing with time-dependent systems, we face two different problems; on the one hand, we have to generalize the concept of complexity of the structure of the fluid distribution to time-dependent dissipative fluids, and on the other hand we also have to evaluate the complexity of the patterns of evolution and propose what we consider to be the simplest of them.
In [56] it was shown that it is reasonable to assume that the complexity factor for the structure of the fluid distribution is the same scalar function as for the static case, which now includes the dissipative variables. As for the simplest pattern of evolution, it was shown that the homologous condition characterizes the simplest possible mode. However, as was shown later on, it may be useful to relax this last condition to enlarge the set of possible solutions, by adopting the so-called quasi-homologous condition [61].
The axially symmetric static case has been considered in [59], while some particular cases of cylindrically symmetric fluid distributions have been studied in [106; 107]. Further applications of the concept of complexity as defined in [55] may be found in [21; 62; 77; 108; 112]. Always within the context of general relativity, exact solutions for static fluid distributions endowed with hyperbolical symmetry and satisfying the condition of minimal complexity were presented in [25; 62; 140], while dynamic solutions endowed with hyperbolical symmetry and satisfying the condition of minimal complexity were obtained in [63]; such solutions evolve in the so-called quasi-homologous regime.
The concept of complexity as defined in [55] has also been extended to other theories of gravity in [1; 2; 3; 4; 95; 109; 110; 111; 113; 123; 130; 131].
where \(\mu,P_{r},P_{\perp},\Pi_{\mu\nu},u^{\mu}\) denote the energy density, the radial pressure, the tangential pressure, the anisotropic tensor and the four-velocity respectively.
The vector \(s^{\mu}\) is defined by
\[s^{\mu}=(0,e^{-\frac{\lambda}{2}},0,0), \tag{4}\]
whereas the four-velocity vector is given by:
\[u^{\mu}=(e^{-\frac{\nu}{2}},0,0,0), \tag{5}\]
with the properties \(s^{\mu}u_{\mu}=0\), \(s^{\mu}s_{\mu}=-1\).
From (5) we can calculate the four acceleration, \(a^{\alpha}=u^{\alpha}_{;\beta}u^{\beta}\), whose only non-vanishing component is:
\[a_{1}=-\frac{\nu^{\prime}}{2}, \tag{6}\]
where prime denotes derivative with respect to \(r\).
At the exterior of the fluid distribution the spacetime is described by the Schwarzschild metric
\[ds^{2}=\left(1-\frac{2M}{r}\right)dt^{2}-\frac{dr^{2}}{\left(1-\frac{2M}{r} \right)}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right). \tag{7}\]
In order to match smoothly the two metrics above on the boundary surface \(r=r_{\Sigma}=constant\), we require the continuity of the first and the second fundamental forms across that surface, producing
\[e^{\nu_{\Sigma}}=1-\frac{2M}{r_{\Sigma}}, \tag{8}\]
\[e^{-\lambda_{\Sigma}}=1-\frac{2M}{r_{\Sigma}}, \tag{9}\]
\[[P_{r}]_{\Sigma}=0, \tag{10}\]
where, from now on, subscript \(\Sigma\) indicates that the quantity is evaluated on the boundary surface \(\Sigma\).
Eqs. (8), (9), and (10) are the necessary and sufficient conditions for a smooth matching of the two metrics (1) and (7) on \(\Sigma\).
Next, let us recall that the Tolman mass for a spherically symmetric static distribution of matter is given by [120]
\[m_{T}=4\pi\int_{0}^{r_{\Sigma}}r^{2}e^{(\nu+\lambda)/2}\left(T_{0}^{0}-T_{1}^ {1}-2T_{2}^{2}\right)dr, \tag{11}\]
which we shall extend for any sphere of radius \(r\), completely inside \(\Sigma\), as
\[m_{T}=4\pi\int_{0}^{r}\tilde{r}^{2}e^{(\nu+\lambda)/2}\left(T_{0}^{0}-T_{1}^{1}-2T_{2}^{2}\right)d\tilde{r}.\]
This extension of the global concept of energy to a local level is suggested by the conspicuous role played by \(m_{T}\) as the "active gravitational mass".
Indeed, as it follows from (6), the gravitational acceleration (\(a=-s^{\nu}a_{\nu}\)) of a test particle, instantaneously at rest in a static gravitational field, is given by
\[a=\frac{e^{-\lambda/2}\,\nu^{\prime}}{2}=\frac{e^{-\nu/2}m_{T}}{r^{2}}. \tag{12}\]
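For a concrete feel of equations (11) and (12), the sketch below evaluates the Tolman mass by direct numerical quadrature, using \(T_{0}^{0}=\mu\), \(T_{1}^{1}=-P_{r}\), \(T_{2}^{2}=-P_{\perp}\) for the signature adopted above (so that the integrand is \(\mu+P_{r}+2P_{\perp}\)), and then the acceleration of a static test particle. The radial profiles, the grid, and the geometrized units (\(G=c=1\)) are illustrative assumptions and do not correspond to a solution of the field equations.

```python
import numpy as np

def tolman_mass(r, mu, P_r, P_t, nu, lam):
    """Cumulative Tolman mass m_T(r), eq. (11): 4*pi * int r^2 e^{(nu+lam)/2} (mu + P_r + 2*P_t) dr."""
    integrand = 4 * np.pi * r**2 * np.exp((nu + lam) / 2) * (mu + P_r + 2 * P_t)
    # Cumulative trapezoidal integration from the centre outwards.
    dm = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    return np.concatenate(([0.0], np.cumsum(dm)))

def acceleration(r, m_T, nu):
    """Gravitational acceleration of a static test particle, eq. (12): a = e^{-nu/2} m_T / r^2."""
    return np.exp(-nu / 2) * m_T / r**2

# Placeholder profiles on a radial grid (illustrative only, geometrized units).
r = np.linspace(1e-3, 1.0, 500)
mu, P_r, P_t = np.full_like(r, 0.01), np.full_like(r, 1e-3), np.full_like(r, 1e-3)
nu, lam = 0.1 * r**2, 0.1 * r**2
m_T = tolman_mass(r, mu, P_r, P_t, nu, lam)
print(m_T[-1], acceleration(r, m_T, nu)[-1])
```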
Another expression for \(m_{T}\), which appears to be more suitable for the discussion is (see [40; 41] for details, but notice slight changes in notation)
\[m_{T}=(m_{T})_{\Sigma}\left(\frac{r}{r_{\Sigma}}\right)^{3}-r^{3}\int_{r}^{r_{ \Sigma}}e^{(\nu+\lambda)/2}\left[\frac{8\pi}{\tilde{r}}\left(P_{\perp}-P_{r} \right)+\frac{1}{\tilde{r}^{4}}\int_{0}^{\tilde{r}}4\pi\tilde{r}^{3}\mu^{ \prime}d\tilde{r}\right]d\tilde{r}, \tag{13}\]
The important point to keep in mind here is that the second integral in (13) describes the contribution of density inhomogeneity and local anisotropy of pressure to the Tolman mass.
Next, using the orthogonal splitting of the Riemann tensor first considered by Bel [12] (see also [32]), let us introduce the tensor
\[Y_{\alpha\beta}=R_{\alpha\gamma\beta\delta}u^{\gamma}u^{\delta}, \tag{14}\]
which can be split in terms of its trace and the corresponding trace-free tensor, as (see [46] for details)
\[TrY\equiv Y_{T}=4\pi(\mu+3P_{r}-2\Pi), \tag{15}\]
and
\[Y_{<\alpha\beta>}=Y_{TF}(s_{\alpha}s_{\beta}+\frac{h_{\alpha\beta}}{3}), \tag{16}\]
with
\[Y_{TF}\equiv(4\pi\Pi+E)=8\pi\Pi-\frac{4\pi}{r^{3}}\int_{0}^{r}\tilde{r}^{3} \mu^{\prime}d\tilde{r}, \tag{17}\]
where \(E\) is the scalar defining the electric part of the Weyl tensor \(E_{\alpha\beta}=C_{\alpha\gamma\beta\delta}u^{\gamma}u^{\delta}\), (the magnetic part vanishes identically), which may be written as
\[E_{\alpha\beta}=E(s_{\alpha}s_{\beta}+\frac{1}{3}h_{\alpha\beta}), \tag{18}\]
with
\[E=-\frac{e^{-\lambda}}{4}\left[\nu^{\prime\prime}+\frac{{\nu^{\prime}}^{2}- \lambda^{\prime}\nu^{\prime}}{2}-\frac{\nu^{\prime}-\lambda^{\prime}}{r}+\frac {2(1-e^{\lambda})}{r^{2}}\right], \tag{19}\]
satisfying the following properties:
\[E^{\alpha}_{\ \alpha}=0,\quad E_{\alpha\gamma}=E_{(\alpha\gamma)},\quad E_{ \alpha\gamma}u^{\gamma}=0. \tag{20}\]
Using (17) in (13) we get
\[m_{T}=(m_{T})_{\Sigma}\left(\frac{r}{r_{\Sigma}}\right)^{3}+r^{3}\int_{r}^{r_{ \Sigma}}\frac{e^{(\nu+\lambda)/2}}{\tilde{r}}Y_{TF}d\tilde{r}. \tag{21}\]
Thus we see that this single scalar function, \(Y_{TF}\), encompasses all the modifications produced by the energy density inhomogeneity and the anisotropy of the pressure, on the active gravitational (Tolman) mass. More specifically, it describes how these two factors modify the value of the Tolman mass, with respect to its value for the homogeneous isotropic fluid.
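Because \(Y_{TF}\) in equation (17) only involves the local anisotropy \(\Pi=P_{r}-P_{\perp}\) and a weighted radial integral of the energy-density gradient, it is straightforward to evaluate numerically once the profiles are specified. The sketch below is an illustration of that evaluation (geometrized units, invented profiles), not code from the works reviewed here; as a sanity check, a homogeneous isotropic sphere returns a vanishing complexity factor.

```python
import numpy as np

def complexity_factor(r, mu, P_r, P_t):
    """Y_TF from eq. (17): 8*pi*Pi - (4*pi/r^3) * int_0^r s^3 mu'(s) ds, with Pi = P_r - P_t."""
    Pi = P_r - P_t
    mu_prime = np.gradient(mu, r)
    integrand = r**3 * mu_prime
    dI = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    integral = np.concatenate(([0.0], np.cumsum(dI)))
    return 8 * np.pi * Pi - 4 * np.pi * integral / r**3

# Sanity check: a homogeneous sphere with isotropic pressure has Y_TF = 0.
r = np.linspace(1e-3, 1.0, 400)
mu = np.full_like(r, 0.05)      # constant energy density
P = 1e-3 * (1 - r**2)           # isotropic pressure profile (placeholder)
print(np.max(np.abs(complexity_factor(r, mu, P, P))))  # ~0 up to numerical error
```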
Then, following our starting conjecture stating that at least one of the simplest configurations corresponds to an incompressible fluid (a constant energy density) with isotropic pressure, and noticing that for such a system, the scalar \(Y_{TF}\) vanishes, it appears well justified to identify the complexity factor with \(Y_{TF}\).
The following remarks are in order at this point:
* The scalar \(Y_{TF}\) belongs to a family of functions known as structure scalars, defined in [46] from a detailed analysis of the orthogonal splitting of the Riemann tensor.
* As is apparent from (12), the Tolman mass is a measure of the strength of the gravitational interaction.
* The complexity factor defined above not only vanishes for the homogeneous, isotropic fluid, where the two terms in (17) vanish identically, but also for all configurations where the two terms in (17) cancel each other, implying that there is a wealth of configurations satisfying the vanishing complexity condition.
* It is worth noticing that whereas the contribution of the pressure anisotropy to \(Y_{TF}\) is local, the contribution of the density energy inhomogeneity is not.
* For a charged fluid \(Y_{TF}\) includes contributions from the electric charge as well (see eq.(25) in [49]).
In the next section, we shall present two examples of inhomogeneous and anisotropic fluid configurations, satisfying the vanishing complexity factor condition.
### Fluid distributions with vanishing complexity factor
The vanishing complexity factor condition, according to (17) reads
\[\Pi=\frac{1}{2r^{3}}\int_{0}^{r}\tilde{r}^{3}\mu^{\prime}d\tilde{r}. \tag{22}\]
However, since the Einstein equations for a spherically symmetric, static, anisotropic fluid form a system of three ordinary differential equations for five unknown functions \((\nu,\lambda,\mu,P_{r},P_{\perp})\), the condition \(Y_{TF}=0\) is not enough to close the system and we still need one more condition in order to solve it. Just for the sake of illustration, we shall propose two examples in the next subsections.
#### ii.1.1 The Gokhroo and Mehra ansatz
A family of anisotropic spheres has been found in [34], which leads to physically satisfactory models for compact objects.
These models are obtained from an ansatz on the form of the metric function \(\lambda\) which reads
\[e^{-\lambda}=1-\alpha r^{2}+\frac{3K\alpha r^{4}}{5r_{\Sigma}^{2}}, \tag{23}\]
producing,
\[\mu=\mu_{0}\left(1-\frac{Kr^{2}}{r_{\Sigma}^{2}}\right), \tag{24}\]
and
\[m(r)=\frac{4\pi\mu_{0}r^{3}}{3}\left(1-\frac{3Kr^{2}}{5r_{\Sigma}^{2}}\right), \tag{25}\]
where \(K\) is a constant in the range \((0,1)\), \(\alpha=\frac{8\pi\mu_{0}}{3}\), and \(m(r)\) is the mass function, defined by
\[1-e^{-\lambda}=\frac{2m}{r}, \tag{26}\]
or, equivalently
\[m=4\pi\int_{0}^{r}\tilde{r}^{2}\mu d\tilde{r}. \tag{27}\]
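The consistency of the ansatz can be checked mechanically; the short symbolic sketch below (an illustration, not taken from the original references) verifies that the mass function obtained from (23) through the definition (26) coincides with (25), and that its derivative reproduces the density (24) as required by (27).

```python
# Symbolic consistency check (a sketch, not in the original text): the Gokhroo-Mehra
# ansatz (23) reproduces the density (24) and the mass (25) via (26)-(27).
import sympy as sp

r, rS, K, mu0 = sp.symbols('r r_Sigma K mu_0', positive=True)
alpha = 8 * sp.pi * mu0 / 3

e_minus_lambda = 1 - alpha * r**2 + 3 * K * alpha * r**4 / (5 * rS**2)      # eq. (23)
mu = mu0 * (1 - K * r**2 / rS**2)                                           # eq. (24)
m_expected = 4 * sp.pi * mu0 * r**3 / 3 * (1 - 3 * K * r**2 / (5 * rS**2))  # eq. (25)

m = r * (1 - e_minus_lambda) / 2                                            # eq. (26)
print(sp.simplify(m - m_expected))                          # 0: (23) and (25) agree
print(sp.simplify(sp.diff(m, r) - 4 * sp.pi * r**2 * mu))   # 0: eq. (27) holds
```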
Then using the field equations it can be shown that the line element becomes (see [55] for details)
\[ds^{2}=-e^{\int(2z(r)-2/r)dr}dt^{2}+\frac{z^{2}(r)e^{\int(\frac{4}{r^{2}z(r)}+2z(r))dr}}{r^{6}(-2\int\frac{z(r)(1+\frac{\Omega(r)r^{2}}{2r})e^{\int(\frac{4}{r^{2}z(r)}+2z(r))dr}}{r^{5}}dr+C)}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}, \tag{28}\]
where \(C\) is a constant of integration, and
\[e^{\nu(r)}=e^{\int(2z(r)-2/r)dr}. \tag{29}\]
For the physical variables we have
\[4\pi P_{r}=\frac{z(r-2m)+m/r-1}{r^{2}}, \tag{30}\]
\[4\pi\mu=\frac{m^{\prime}}{r^{2}}, \tag{31}\]
and
\[4\pi P_{\perp}=(1-\frac{2m}{r})(z^{\prime}+z^{2}-\frac{z}{r}+\frac{1}{r^{2}})+ z(\frac{m}{r^{2}}-\frac{m^{\prime}}{r}). \tag{32}\]
The solution so obtained is regular at the origin and satisfies the conditions \(\mu>0\), \(\mu>P_{r},P_{\perp}\).
Also, to avoid singular behaviour of physical variables on the boundary of the source (\(\Sigma\)), the solution should satisfy the Darmois conditions on the boundary (8), (9), (10).
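Since the model is closed by the vanishing complexity factor condition, the anisotropy is fixed once the density (24) is prescribed; the following symbolic sketch (illustrative only) carries out the integral in (22) for the Gokhroo-Mehra density.

```python
# Sketch (an illustration, not from the original text): impose the vanishing
# complexity condition (22) on the Gokhroo-Mehra density (24) and obtain the
# anisotropy Pi = P_r - P_perp in closed form.
import sympy as sp

r, s, rS, K, mu0 = sp.symbols('r s r_Sigma K mu_0', positive=True)
mu = mu0 * (1 - K * s**2 / rS**2)                                  # eq. (24), dummy variable s
Pi = sp.integrate(s**3 * sp.diff(mu, s), (s, 0, r)) / (2 * r**3)   # eq. (22)
print(sp.simplify(Pi))      # -K*mu_0*r**2/(5*r_Sigma**2)
```

In this sketch the resulting anisotropy, \(\Pi=-K\mu_{0}r^{2}/(5r_{\Sigma}^{2})\), is negative, so the tangential pressure exceeds the radial one throughout the distribution.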
#### ii.2.2 The polytrope with vanishing complexity factor
The polytropic equation of state plays an important role in the study of self-gravitating systems, both in Newtonian and in general relativistic astrophysics. In the isotropic pressure case such an equation of state is sufficient to integrate the field equations. However, in the case of an anisotropic fluid, additional information is required. The study of polytropes for anisotropic matter has been considered in detail in [51; 52; 54].
Once the polytropic equation of state is assumed, in the case of anisotropic matter we still need an additional condition in order to solve the corresponding system of equations. We propose to adopt the vanishing complexity factor condition as this complementary information.
Thus our model is characterized by
\[P_{r}=K\mu^{\gamma}=K\mu^{1+1/n};\qquad Y_{TF}=0, \tag{33}\]
where constants \(K\), \(\gamma\), and \(n\) are usually called polytropic constant, polytropic exponent, and polytropic index, respectively.
It is worth mentioning that the generalization of the Newtonian polytrope to the general relativistic case admits two possibilities. One is the equation (33), the other is
\[P_{r}=K\mu_{b}^{\gamma}=K\mu_{b}^{1+1/n} \tag{34}\]
where \(\mu_{b}\) denotes the baryonic (rest) mass density. The treatment of this latter case has been described in detail in [52].
From the polytropic equation of state (33) we obtain two equations which read:
\[\xi^{2}\frac{d\Psi}{d\xi}\left[\frac{1-2(n+1)\alpha v/\xi}{1+\alpha\Psi} \right]+v+\alpha\xi^{3}\Psi^{n+1}+\frac{2\Pi\Psi^{-n}\xi}{P_{rc}(n+1)}\left[ \frac{1-2\alpha(n+1)v/\xi}{1+\alpha\Psi}\right]=0, \tag{35}\]
and
\[\frac{dv}{d\xi}=\xi^{2}\Psi^{n}, \tag{36}\]
where
\[\alpha=P_{rc}/\mu_{c},\quad r=\xi/A,\quad A^{2}=4\pi\mu_{c}/\alpha(n+1), \tag{37}\]
\[\Psi^{n}=\mu/\mu_{c},\quad v(\xi)=m(r)A^{3}/(4\pi\mu_{c}), \tag{38}\]
where subscript \(c\) indicates that the quantity is evaluated at the center. At the boundary surface \(r=r_{\Sigma}\) (\(\xi=\xi_{\Sigma}\)) we have \(\Psi(\xi_{\Sigma})=0\) (see [54] for details).
Equations (35) and (36) form a system of two first-order ordinary differential equations for the three unknown functions \(\Psi,v,\Pi\), depending on the duplet of parameters \(n,\alpha\). In order to proceed further with the modeling of a compact object, we shall assume the vanishing complexity factor condition, which, with the notation above, reads
\[\frac{6\Pi}{n\mu_{c}}+\frac{2\xi}{n\mu_{c}}\frac{d\Pi}{d\xi}=\Psi^{n-1}\xi\frac{d \Psi}{d\xi}. \tag{39}\]
Now we have a system of three ordinary differential equations (35), (36), (39) for the three unknown functions \(\Psi,v,\Pi\), which may be integrated for any arbitrary duplet of values of the parameters \(n,\alpha\), only constrained by the physical conditions (see [52] for details)
\[\mu>0,\qquad\alpha\Psi\leq 1,\qquad\frac{3v}{\xi^{3}\Psi^{n}}+\alpha\Psi-1\leq 1. \tag{40}\]
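To turn this into a concrete model one integrates the system numerically. The sketch below (with the assumed values \(n=1\), \(\alpha=0.1\) and \(\mu_{c}=1\) in geometrized units) casts (35), (36) and (39) as three coupled first-order equations for \(\Psi\), \(v\) and \(\Pi\) and integrates them outwards from a regular centre over a modest range of \(\xi\).

```python
# Numerical sketch (illustration only; assumed values n = 1, alpha = 0.1, mu_c = 1):
# the system (35), (36) closed by the vanishing complexity condition (39) becomes
# three coupled first-order ODEs for (Psi, v, Pi), integrated from a regular centre.
import numpy as np
from scipy.integrate import solve_ivp

n, alpha, mu_c = 1.0, 0.1, 1.0
P_rc = alpha * mu_c                      # P_rc = alpha * mu_c, cf. (37)

def rhs(xi, y):
    Psi, v, Pi = y
    f = (1.0 - 2.0 * (n + 1.0) * alpha * v / xi) / (1.0 + alpha * Psi)
    # eq. (35) solved for dPsi/dxi
    dPsi = -(v + alpha * xi**3 * Psi**(n + 1)
             + 2.0 * Pi * Psi**(-n) * xi * f / (P_rc * (n + 1.0))) / (xi**2 * f)
    dv = xi**2 * Psi**n                                            # eq. (36)
    dPi = n * mu_c * Psi**(n - 1) * dPsi / 2.0 - 3.0 * Pi / xi     # eq. (39)
    return [dPsi, dv, dPi]

xi0 = 1e-3                               # start just outside the (regular) centre
y0 = [1.0, xi0**3 / 3.0, 0.0]            # Psi(0) = 1, v ~ xi^3/3, Pi(0) = 0
sol = solve_ivp(rhs, (xi0, 2.0), y0, rtol=1e-8, atol=1e-10)

Psi, v, Pi = sol.y[:, -1]
print(f"at xi = 2: Psi = {Psi:.4f}, v = {v:.4f}, Pi/mu_c = {Pi:.5f}")
```

The integration can be continued outwards, monitoring the constraints (40), until \(\Psi\) (and hence the radial pressure) vanishes for the chosen \(n\) and \(\alpha\), which would locate the boundary \(\xi_{\Sigma}\).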
The equations equivalent to (35) and (39), for the equation of state (34) read
\[\xi^{2}\frac{d\Psi_{b}}{d\xi}\left[\frac{1-2(n+1)\alpha v/\xi}{1+\alpha\Psi_{b }}\right]+v+\alpha\xi^{3}\Psi_{b}^{n+1}+\frac{2\Pi\Psi_{b}^{-n}\xi}{P_{rc}(n+1 )}\left[\frac{1-2\alpha(n+1)v/\xi}{1+\alpha\Psi_{b}}\right]=0, \tag{41}\]
\[\frac{6\Pi}{n\mu_{bc}}+\frac{2\xi}{n\mu_{bc}}\frac{d\Pi}{d\xi}=\Psi_{b}^{n-1} \xi\frac{d\Psi_{b}}{d\xi}\left[1+K(n+1)\mu_{bc}^{1/n}\Psi_{b}\right], \tag{42}\]
with \(\Psi_{b}^{n}=\mu_{b}/\mu_{bc}\).
Further research on the application of the concept of complexity in the study of polytropes may be found in [76; 77; 78; 79; 80; 141].
So far we have been concerned with spherically symmetric systems; we shall next consider the axially symmetric (static) case.
### The static axially-symmetric case
Let us now extend the concept of complexity as defined for the spherically symmetric case to the most general axially symmetric static fluid distributions.
The reason to undertake such a task is based on the well known fact that (close to the horizon) there is a bifurcation between any finite perturbation of Schwarzschild spacetime and any Weyl solution, even when the latter is characterized by parameters arbitrarily close to those corresponding to spherical symmetry (see [12; 26; 44; 45; 73; 13] and references therein for a discussion on this point). This fact in turn is related to the well known result, usually referred to as the Israel theorem [66], stating that the only regular static and asymptotically flat vacuum spacetime possessing a regular horizon is the Schwarzschild solution, while all the other Weyl exterior solutions exhibit singularities in the curvature invariants (as the boundary of the source approaches the horizon).
Thus, even though observational evidence suggests that deviations from spherical symmetry in compact self-gravitating objects (white dwarfs, neutron stars) are likely to be incidental rather than basic features of these systems, it is clear that for very compact objects, deviations from spherical symmetry (no matter how small) should be studied by resorting to exact solutions of the Einstein equations, rather than as perturbations of spherically symmetric systems.
As shown in the previous sections, the scalar intended to measure the degree of complexity for a spherically symmetric fluid distribution (the complexity factor), is identified as one of the scalar functions (structure scalars) which appears in the orthogonal splitting of the Riemann tensor. More specifically, it is related to one of the scalar functions appearing in the splitting of the electric part of the Riemann tensor.
In spite of the fact that in the axially symmetric case the situation is much more complicated, and the number of structure scalars is larger than in the spherically symmetric case, we shall proceed in a similar way to fix the general criterium to define the variable(s) measuring the complexity of the fluid distribution.
Thus, we start by asking ourselves the same question as in the previous case, namely: which is the simplest fluid configuration? As in the spherically symmetric case we shall assume that such a configuration corresponds to the incompressible (constant energy density), isotropic (in the pressure) spheroid. From this simple assumption, we shall see that as the obvious candidates to measure the degree of complexity of the fluid distribution, appear three of the eight structure scalars corresponding to the axially symmetric static fluid distribution. Explicit forms of these structure scalars as well as some useful differential equations relating the inhomogeneities of the energy density to some of the structure scalars were already found in [53].
As in the spherically symmetric case, the vanishing of the three complexity factors corresponds not only to the incompressible, isotropic spheroid, but also to a large family of solutions where the density inhomogeneity terms cancel the pressure anisotropic terms in the equations relating these variables to the complexity factors. Some of these solutions will be exhibited.
Thus, let us consider static and axially symmetric sources. For such a system the line element may be written in "Weyl spherical coordinates" (please notice that in this section we are using signature \(+2\) instead of \(-2\) as in the previous case, which leads to some changes in the sign of some variables), as
\[ds^{2}=-A^{2}dt^{2}+B^{2}\left(dr^{2}+r^{2}d\theta^{2}\right)+D^{2}d\phi^{2}, \tag{43}\]
where the coordinates \(t\) and \(\phi\) are adapted to the two Killing vectors admitted by our line element, and therefore the metric functions depend only on \(r\) and \(\theta\).
We recall that, unlike the vacuum case, the assumption of the Weyl gauge ( \(R_{3}^{3}+R_{0}^{0}=0\), where \(R_{\beta}^{\alpha}\) denotes the Ricci tensor), reducing the number of independent metric functions to two, cannot be used without loss of generality in the interior, which explains why our line element is described in terms of three independent functions.
In a purely locally Minkowski frame (hereafter referred to as l.M.f.) where the first derivatives of the metric vanish (locally) [18], the most general energy-momentum tensor is given by:
\[\widehat{T}_{\alpha\beta}=\left(\begin{array}{cccc}\mu&0&0&0\\ 0&P_{xx}&P_{xy}&0\\ 0&P_{yx}&P_{yy}&0\\ 0&0&0&P_{zz}\end{array}\right), \tag{44}\]
where \(\mu,P_{xy},P_{xx},P_{yy},P_{zz}\) denote the energy density and different stresses, respectively, as measured by our locally defined Minkowskian observer.
Also observe that \(P_{xy}=P_{yx}\) and, in general \(P_{xx}\neq P_{yy}\neq P_{zz}\).
Then transforming back to our coordinates, we obtain the components of the energy momentum tensor in terms of the physical variables as defined in the l.M.f.
\[T_{\alpha\beta} = (\mu+P_{zz})V_{\alpha}V_{\beta}+P_{zz}g_{\alpha\beta}+(P_{xx}-P_ {zz})K_{\alpha}K_{\beta} \tag{45}\] \[+ (P_{yy}-P_{zz})L_{\alpha}L_{\beta}+2P_{xy}K_{(\alpha}L_{\beta)},\]
with
\[V_{\alpha}=(-A,0,0,0);\quad K_{\alpha}=(0,B,0,0);\] \[L_{\alpha}=(0,0,Br,0);\quad S_{\alpha}=(0,0,0,D), \tag{46}\]
where we are considering observers at rest with respect to the fluid distribution.
Alternatively we may write the energy momentum tensor in the "canonical" form
\[T_{\alpha\beta} = (\mu+P)V_{\alpha}V_{\beta}+Pg_{\alpha\beta}+\Pi_{\alpha\beta}, \tag{47}\]
with
\[\Pi_{\alpha\beta} = (P_{xx}-P_{zz})\left(K_{\alpha}K_{\beta}-\frac{h_{\alpha\beta}}{ 3}\right) \tag{48}\] \[+ (P_{yy}-P_{zz})\left(L_{\alpha}L_{\beta}-\frac{h_{\alpha\beta}}{ 3}\right)+2P_{xy}K_{(\alpha}L_{\beta)},\]
\[P=\frac{P_{xx}+P_{yy}+P_{zz}}{3},\quad h_{\mu\nu}=g_{\mu\nu}+V_{\nu}V_{\mu}. \tag{49}\]
The anisotropic tensor may also be written as
\[\Pi_{\alpha\beta}=\frac{1}{3}(2\Pi_{I}+\Pi_{II})\left(K_{\alpha}K_{\beta}- \frac{h_{\alpha\beta}}{3}\right)+\frac{1}{3}(2\Pi_{II}+\Pi_{I})\left(L_{\alpha }L_{\beta}-\frac{h_{\alpha\beta}}{3}\right)+\Pi_{KL}\left(K_{\alpha}L_{\beta}+ K_{\beta}L_{\alpha}\right), \tag{50}\]
with
\[\Pi_{KL}=K^{\alpha}L^{\beta}T_{\alpha\beta}, \tag{51}\]
\[\Pi_{I}=\left(2K^{\alpha}K^{\beta}-L^{\alpha}L^{\beta}-S^{\alpha}S^{\beta} \right)T_{\alpha\beta}, \tag{52}\]
\[\Pi_{II}=\left(2L^{\alpha}L^{\beta}-K^{\alpha}K^{\beta}-S^{\alpha}S^{\beta} \right)T_{\alpha\beta}. \tag{53}\]
The relationships between the above scalars and the variables \(P_{xy},P_{xx},P_{yy},P_{zz}\) are (besides (49)),
\[\Pi_{2}\equiv\frac{1}{3}(2\Pi_{I}+\Pi_{II})=P_{xx}-P_{zz}, \tag{54}\]
\[\Pi_{3}\equiv\frac{1}{3}(2\Pi_{II}+\Pi_{I})=P_{yy}-P_{zz}, \tag{55}\]
\[\Pi_{KL}=P_{xy}. \tag{56}\]
or, inversely:
\[P_{zz}=P-\frac{1}{3}(\Pi_{2}+\Pi_{3}), \tag{57}\]
\[P_{xx}=P+\frac{1}{3}(2\Pi_{2}-\Pi_{3}), \tag{58}\]
\[P_{yy}=P+\frac{1}{3}(2\Pi_{3}-\Pi_{2}). \tag{59}\]
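These algebraic relations are easy to check; the following quick symbolic sketch (illustration only) confirms that (54)-(56), together with (49), invert to (57)-(59).

```python
# Quick symbolic check (illustration only): the relations (54)-(56) together with
# P = (P_xx + P_yy + P_zz)/3 invert to (57)-(59).
import sympy as sp

Pxx, Pyy, Pzz = sp.symbols('P_xx P_yy P_zz')
P = (Pxx + Pyy + Pzz) / 3                       # eq. (49)
Pi2, Pi3 = Pxx - Pzz, Pyy - Pzz                 # eqs. (54), (55)

print(sp.simplify(P - (Pi2 + Pi3) / 3 - Pzz))           # 0, eq. (57)
print(sp.simplify(P + (2 * Pi2 - Pi3) / 3 - Pxx))       # 0, eq. (58)
print(sp.simplify(P + (2 * Pi3 - Pi2) / 3 - Pyy))       # 0, eq. (59)
```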
The explicit forms of the Einstein equations and of the conservation equations, for the line element (43) and the energy-momentum tensor (47), are given in Appendix VI.
### The structure scalars
The structure scalars for our problem were calculated in [53]. For their definition we need first to obtain the electric part of the Weyl tensor (the magnetic part vanishes identically), whose components can be obtained directly from its definition,
\[E_{\mu\nu}=C_{\mu\alpha\nu\beta}\,V^{\alpha}\,V^{\beta}, \tag{60}\]
where \(C_{\mu\alpha\nu\beta}\) denotes the Weyl tensor.
Equivalently, the electric part of the Weyl tensor may also be written as
\[E_{\alpha\beta} = {\cal E}_{1}\left(K_{\alpha}L_{\beta}+L_{\alpha}K_{\beta}\right) +{\cal E}_{2}\left(K_{\alpha}K_{\beta}-\frac{1}{3}h_{\alpha\beta}\right) \tag{61}\] \[+ {\cal E}_{3}\left(L_{\alpha}L_{\beta}-\frac{1}{3}h_{\alpha\beta} \right),\]
where explicit expressions for the three scalars \({\cal E}_{1}\), \({\cal E}_{2}\), \({\cal E}_{3}\) are given in the Appendix VII.
Next, let us calculate the electric part of the Riemann tensor (the magnetic part vanishes identically), which is defined by
\[Y^{\rho}_{\beta}=V^{\alpha}V^{\mu}R^{\rho}_{\alpha\beta\mu}. \tag{62}\]
After some lengthy calculations we find:
\[Y_{\alpha\beta} = Y_{TF_{1}}\left(K_{\alpha}L_{\beta}+K_{\beta}L_{\alpha}\right)+ Y_{TF_{2}}\left(K_{\alpha}K_{\beta}-\frac{1}{3}h_{\alpha\beta}\right) \tag{63}\] \[+ Y_{TF_{3}}\left(L_{\alpha}L_{\beta}-\frac{1}{3}h_{\alpha\beta} \right)+\frac{1}{3}Y_{T}h_{\alpha\beta},\]
where
\[Y_{T}=4\pi(\mu+3P), \tag{64}\]
\[Y_{TF_{1}}={\cal E}_{1}-4\pi\Pi_{KL}, \tag{65}\] \[Y_{TF_{2}}={\cal E}_{2}-4\pi\Pi_{2}, \tag{66}\]
\[Y_{TF_{3}}={\cal E}_{3}-4\pi\Pi_{3}. \tag{67}\]
Finally, we shall find the tensor associated with the double dual of Riemann tensor, defined as
\[X_{\alpha\beta}={}^{*}R^{*}_{\alpha\gamma\beta\delta}V^{\gamma}V^{\delta}=\frac{1}{2}\eta_{\alpha\gamma}{}^{\epsilon\rho}R^{*}_{\epsilon\rho\beta\delta}V^{\gamma}V^{\delta}, \tag{68}\]
with \(R^{*}_{\alpha\beta\gamma\delta}=\frac{1}{2}\eta_{\epsilon\rho\gamma\delta}R_{ \alpha\beta}{}^{\epsilon\rho}\), where \(\eta_{\epsilon\rho\gamma\delta}\) denotes the permutation symbol.
Thus, we find
\[X_{\alpha\beta} = X_{TF_{1}}\left(K_{\alpha}L_{\beta}+K_{\beta}L_{\alpha}\right) +X_{TF_{2}}\left(K_{\alpha}K_{\beta}-\frac{1}{3}h_{\alpha\beta}\right) \tag{69}\] \[+ X_{TF_{3}}\left(L_{\alpha}L_{\beta}-\frac{1}{3}h_{\alpha\beta} \right)+\frac{1}{3}X_{T}h_{\alpha\beta},\]
where
\[X_{T}=8\pi\mu, \tag{70}\]
\[X_{TF_{1}}=-({\cal E}_{1}+4\pi\Pi_{KL}), \tag{71}\]
\[X_{TF_{2}}=-\left({\cal E}_{2}+4\pi\Pi_{2}\right), \tag{72}\]
\[X_{TF_{3}}=-\left({\cal E}_{3}+4\pi\Pi_{3}\right). \tag{73}\]
The scalars \(Y_{T}\), \(Y_{TF1}\), \(Y_{TF2}\),\(Y_{TF3}\), \(X_{T}\), \(X_{TF1}\), \(X_{TF2}\), \(X_{TF3}\), are the structure scalars for our system.
Next, we shall need two differential equations which relate the spatial derivatives of the physical variables and the Weyl tensor, obtained from the Bianchi identities; they have been found before for the spherically symmetric and the cylindrically symmetric cases (see [46], [50] and references therein). For our case they have been calculated in [53], and read
\[\frac{{\cal E}_{1\theta}}{r}+\frac{1}{3}(2{\cal E}_{2}-{\cal E}_{3 })^{\prime}+\frac{{\cal E}_{1}}{r}\left(\frac{2B_{\theta}}{B}+\frac{D_{\theta }}{D}\right) \tag{74}\] \[+ {\cal E}_{2}\left(\frac{B^{\prime}}{B}+\frac{D^{\prime}}{D}+\frac {1}{r}\right)-{\cal E}_{3}\left(\frac{B^{\prime}}{B}+\frac{1}{r}\right)\] \[= \frac{4\pi}{3}\left(2\mu+3P\right)^{\prime}+4\pi\left[\mu+P+\frac {1}{3}(2\Pi_{2}-\Pi_{3})\right]\frac{A^{\prime}}{A}\] \[+ 4\pi\Pi_{KL}\frac{A_{\theta}}{Ar},\]
\[{\cal E}^{\prime}_{1}+\frac{1}{3r}(2{\cal E}_{3}-{\cal E}_{2})_{ \theta}+{\cal E}_{1}\left(\frac{2B^{\prime}}{B}+\frac{D^{\prime}}{D}+\frac{2} {r}\right)-\frac{{\cal E}_{2}B_{\theta}}{Br} \tag{75}\] \[+ \frac{{\cal E}_{3}}{r}\left(\frac{B_{\theta}}{B}+\frac{D_{\theta }}{D}\right)=\frac{4\pi}{3r}\left(2\mu+3P\right)_{\theta}\] \[+ 4\pi\left[\mu+P+\frac{1}{3}(2\Pi_{3}-\Pi_{2})\frac{A_{\theta}}{ Ar}\right]+4\pi\Pi_{KL}\frac{A^{\prime}}{A},\]
which, using (64)-(67) and (70)-(73), may be written in terms of structure scalars, producing
\[\frac{8\pi\mu^{\prime}}{3} = \frac{1}{r}\left[Y_{TF1\theta}+8\pi\Pi_{KL\theta}+\left(Y_{TF1}+8 \pi\Pi_{KL}\right)(\ln B^{2}D)_{\theta}\right] \tag{76}\] \[+ \left[\frac{2}{3}(Y^{\prime}_{TF2}+8\pi\Pi^{\prime}_{2})+\left(Y_ {TF2}+8\pi\Pi_{2}\right)(\ln BDr)^{\prime}\right]\] \[- \left[\frac{1}{3}(Y^{\prime}_{TF3}+8\pi\Pi^{\prime}_{3})+\left(Y_ {TF3}+8\pi\Pi_{3}\right)(\ln Br)^{\prime}\right],\]
\[\frac{8\pi\mu_{\theta}}{3r} = -\frac{1}{r}\left[\frac{1}{3}(Y_{TF2\theta}+8\pi\Pi_{2\theta})+(Y_{ TF2}+8\pi\Pi_{2})(\ln B)_{\theta}\right]\] \[+ \frac{1}{r}\left[\frac{2}{3}(Y_{TF3\theta}+8\pi\Pi_{3\theta})+(Y_ {TF3}+8\pi\Pi_{3})(\ln BD)_{\theta}\right]\] \[+ \left[Y_{TF1}^{\prime}+8\pi\Pi_{KL}^{\prime}+(Y_{TF1}+8\pi\Pi_{ KL})(\ln B^{2}Dr^{2})^{\prime}\right],\]
where prime and subscript \(\theta\) denote derivatives with respect to \(r\) and \(\theta\) respectively.
We have now available all the elements necessary to identify the complexity factors for the fluid distribution under consideration. For doing so we recall our basic ansatz consisting in assuming that the simplest possible fluid (or at least one of them) is the incompressible (constant energy density) fluid with isotropic pressure.
Now, in [53] it has been shown that the necessary and sufficient conditions for the vanishing of the (invariantly defined) spatial derivatives of the energy density are \(X_{TF1}=X_{TF2}=X_{TF3}=0\). In other words
\[X_{TF1}=X_{TF2}=X_{TF3}=0\Leftrightarrow\mu^{\prime}=\mu_{\theta}=0. \tag{77}\]
Therefore the homogeneous energy-density condition implies \(X_{TF1}=X_{TF2}=X_{TF3}=0\), which in turn produces
\[Y_{TF1}=-8\pi\Pi_{KL};\quad Y_{TF2}=-8\pi\Pi_{2};\quad Y_{TF3}=-8\pi\Pi_{3}. \tag{78}\]
From the above it follows that the isotropic pressure condition would imply \(Y_{TF1}=Y_{TF2}=Y_{TF3}=0\).
In other words, following the rationale exposed in the spherically symmetric case, it is reasonable to identify the three structure scalars \(Y_{TF}\) (more precisely, their absolute values) with the complexity factors. As in the previous case, we notice that they vanish for the incompressible (constant energy density) fluid with isotropic pressure, but may also vanish for inhomogeneous, anisotropic fluids, provided these two factors combine in such a way that they cancel the three complexity factors.
In the next subsections, just to illustrate the way by means of which such models may be obtained, we shall present two solutions with vanishing complexity factors.
### The incompressible, isotropic spheroid
The first solution we shall present corresponds to the case where the complexity factors vanish because the energy density is homogeneous and the pressure is isotropic. This solution was previously obtained and analyzed in [53]. Here we just present it without details.
Let us first notice that, from (77), (71)-(73) and \(P_{xx}=P_{yy}=P_{zz}=P\), \(P_{xy}=0\), \(\mu=\mu_{0}=constant\), it follows that such a solution is also conformally flat.
Next, for simplicity we shall assume the boundary surface \(\Sigma\) to be defined by the equation:
\[r=r_{1}=constant, \tag{79}\]
which is not the most general form of a possible boundary surface.
Then, from the above and (228) and (231) it follows that
\[P\stackrel{{\Sigma}}{{=}}0, \tag{80}\]
where \(\stackrel{{\Sigma}}{{=}}\) means that both sides of the equation are evaluated on \(\Sigma\).
Under the conditions above, (233) and (234) can be integrated to obtain:
\[P+\mu_{0}=\frac{\zeta}{A}, \tag{81}\]
and
\[P+\mu_{0}=\frac{\xi(r)}{A}, \tag{82}\]
where \(\xi\) is an arbitrary function of its argument. Using the boundary condition (80) in (81) and (82), it follows that:
\[A(r_{1},\theta)=const.=\frac{\alpha}{\mu_{0}},\qquad\zeta=constant. \tag{83}\]
Finally, the metric for this model can be written as follows
\[ds^{2}=\frac{1}{(\gamma r^{2}+\delta+br\cos\theta)^{2}}\left[-(\alpha r^{2}+ \beta+ar\cos\theta)^{2}dt^{2}+dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d \phi^{2}\right], \tag{84}\]
from which, the physical variables can be easily calculated, producing
\[8\pi\mu=12\gamma\delta-3b^{2}, \tag{85}\]
\[8\pi P=(3b^{2}-12\gamma\delta)\left[1-\frac{\alpha r_{1}^{2}+\beta}{\gamma r_ {1}^{2}+\delta}\frac{\gamma r^{2}+\delta+br\cos\theta}{\alpha r^{2}+\beta+ ar\cos\theta}\right], \tag{86}\]
where \(b,\gamma,\delta\) are constants, and
\[\zeta=\mu_{0}\frac{\alpha r_{1}^{2}+\beta}{\gamma r_{1}^{2}+\delta},\quad a=\frac {\alpha r_{1}^{2}+\beta}{\gamma r_{1}^{2}+\delta}b, \tag{87}\]
in order to satisfy the junction condition (80).
It is important to stress the fact that this solution cannot be matched to any Weyl exterior, except in the spherically symmetric case, even though it has a surface of vanishing pressure (see [53] for details). As shown in [53] this result is a consequence of the energy density homogeneity and the pressure isotropy. So, to find matchable solutions we should relax these two conditions. Also, it is worth mentioning that the above result is in agreement with previous works indicating that static, perfect fluid (isotropic in pressure) sources are spherical (see [93] and references therein).
### Anisotropic inhomogeneous spheroids
In order to obtain a metric smoothly matchable to any Weyl space-time we shall consider a solution with vanishing complexity factors but inhomogeneous energy density and anisotropic pressure (see details in [59])
The metric variables of the solution are
\[A(r,\theta) = \frac{a_{1}r\sin\theta}{b_{1}r^{2}+b_{2}}, \tag{88}\] \[B(r,\theta) = \frac{1}{b_{1}r^{2}+b_{2}},\] (89) \[D(r,\theta) = \frac{b_{1}r^{2}-b_{2}}{b_{1}r^{2}+b_{2}}F\left(\frac{r\cos \theta}{b_{1}r^{2}-b_{2}}\right). \tag{90}\]
It is a simple matter to check that the vanishing complexity factors conditions (246)-(248) in Appendix VIII, are satisfied for (88)-(90).
From the above, using the Einstein equations (227)-(231), one obtains for the physical variables.
\[8\pi\mu=12b_{1}b_{2}\] \[-\frac{(b_{1}r^{2}+b_{2})^{2}}{(b_{1}r^{2}-b_{2})^{2}}\left[\frac {4b_{1}b_{2}r^{2}cos^{2}\theta}{(b_{1}r^{2}-b_{2})^{2}}+1\right]\frac{F_{zz}}{F}, \tag{91}\] \[8\pi P=-12b_{1}b_{2}\] \[+\frac{(b_{1}r^{2}+b_{2})^{2}}{3(b_{1}r^{2}-b_{2})^{2}}\left[ \frac{4b_{1}b_{2}r^{2}cos^{2}\theta}{(b_{1}r^{2}-b_{2})^{2}}+1\right]\frac{F_{ zz}}{F},\] (92) \[8\pi\Pi_{2}\equiv 8\pi(P_{xx}-P_{zz})=\frac{F_{zz}}{4F}\frac{(b_{1} r^{2}+b_{2})^{2}}{(b_{1}r^{2}-b_{2})^{2}}\sin^{2}\theta,\] (93) \[8\pi\Pi_{3}\equiv 8\pi(P_{yy}-P_{zz})=\frac{F_{zz}}{4F}\frac{(b_{1} r^{2}+b_{2})^{4}cos^{2}\theta}{(b_{1}r^{2}-b_{2})^{4}},\] (94) \[8\pi\Pi_{KL}\equiv 8\pi P_{xy}=-\frac{F_{zz}}{2F}\frac{(b_{1} r^{2}+b_{2})^{3}}{(b_{1}r^{2}-b_{2})^{3}}\sin 2\theta, \tag{95}\]
where \(a_{1},b_{1},b_{2}\) are constants, and
\[F(z)\equiv F\left(\frac{r\cos\theta}{b_{1}r^{2}-b_{2}}\right). \tag{96}\]
It is not difficult to find a range of values of the parameters for which the behavior of the physical variables is acceptable and the metric may be matched smoothly on the boundary surface to a Weyl solution.
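Such a scan is straightforward to automate. The sketch below (with the hypothetical choice \(F(z)=\cosh z\), so that \(F_{zz}/F=1\), and assumed values of \(b_{1}\) and \(b_{2}\)) simply evaluates (91)-(95) on a grid; the physically acceptable parameter ranges themselves are discussed in [59].

```python
# Exploratory sketch (illustrative parameter values only): evaluate the physical
# variables (91)-(95) of the anisotropic spheroid on a grid, for the hypothetical
# choice F(z) = cosh(z), so that F_zz / F = 1.
import numpy as np

b1, b2 = 1.0, 0.1                      # assumed constants
r = np.linspace(2.5, 4.0, 60)          # keep b1*r**2 - b2 well away from zero
th = np.linspace(0.05, np.pi - 0.05, 60)
R, TH = np.meshgrid(r, th)

Fzz_over_F = 1.0                       # for F(z) = cosh(z)
plus, minus = b1 * R**2 + b2, b1 * R**2 - b2
bracket = 4 * b1 * b2 * R**2 * np.cos(TH)**2 / minus**2 + 1.0

mu   = (12 * b1 * b2 - (plus / minus)**2 * bracket * Fzz_over_F) / (8 * np.pi)      # (91)
P    = (-12 * b1 * b2 + (plus / minus)**2 * bracket * Fzz_over_F / 3) / (8 * np.pi) # (92)
Pi2  = Fzz_over_F / 4 * (plus / minus)**2 * np.sin(TH)**2 / (8 * np.pi)             # (93)
Pi3  = Fzz_over_F / 4 * plus**4 * np.cos(TH)**2 / minus**4 / (8 * np.pi)            # (94)
PiKL = -Fzz_over_F / 2 * (plus / minus)**3 * np.sin(2 * TH) / (8 * np.pi)           # (95)

for name, field in [("mu", mu), ("P", P), ("Pi_2", Pi2), ("Pi_3", Pi3), ("Pi_KL", PiKL)]:
    print(f"{name}: min = {field.min():+.4f}, max = {field.max():+.4f}")
```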
### The static hyperbolically symmetric case
Motivated by a new version of the Schwarzschild black hole proposed in [57; 58], there has been a renewed interest in self-gravitating systems admitting hyperbolical symmetry.
In this picture, the space-time outside the horizon is represented by the usual Schwarzschild metric, whereas the region inner to the horizon is described by the line element
\[ds^{2} = \left(\frac{2M}{R}-1\right)dt^{2}-\frac{dR^{2}}{\left(\frac{2M}{R}-1\right)}-R^{2}d\Omega^{2},\] \[d\Omega^{2} = d\theta^{2}+\sinh^{2}\theta d\phi^{2}, \tag{97}\]
which is a static solution whose \((\theta,\phi)\) surfaces have negative Gaussian curvature, admitting the four Killing vectors
\[\chi_{(\mathbf{0})}=\partial_{\mathbf{t}},\quad\chi_{(\mathbf{2 })}=-\cos\phi\partial_{\theta}+\coth\theta\sin\phi\partial_{\phi}\] \[\chi_{(\mathbf{1})}=\partial_{\phi}\quad\chi_{(\mathbf{3})}=\sin \phi\partial_{\theta}+\coth\theta\cos\phi\partial_{\phi}. \tag{98}\]
A solution to the Einstein equations of the form given by (97), defined by the hyperbolic symmetry (98), was first considered by Harrison [39], and has been more recently the subject of research in different contexts (see [33; 75; 87; 89; 90; 101] and references therein).
Our purpose here is to present some exact solutions to Einstein equations endowed with the symmetry given by (98), satisfying the vanishing complexity factor condition and which might serve as the source of (97), see [62] for details.
Thus, let us consider hyperbolically symmetric distributions of static fluid, which for the sake of completeness we assume to be locally anisotropic and which may (or may not) be bounded from the exterior by a surface \(\Sigma^{e}\) whose equation is \(r=r_{\Sigma^{e}}=\text{constant}\). On the other hand, as it appears from the study of such fluids (see [58] for details), the fluid distribution cannot fill the central region, in which case we may assume that such a region is represented by an empty vacuole, implying that the fluid distribution is also bounded from the inside by a surface \(\Sigma^{i}\) whose equation is \(r=r_{\Sigma^{i}}=\text{constant}\).
The line element is given in polar coordinates by
\[ds^{2}=e^{\nu}dt^{2}-e^{\lambda}dr^{2}-r^{2}\left(d\theta^{2}+\sinh^{2}\theta d \phi^{2}\right), \tag{99}\]
where, due to the imposed symmetry, \(\nu(r)\) and \(\lambda(r)\) are exclusively functions of \(r\). We number the coordinates: \(x^{0}=t\); \(x^{1}=r\); \(x^{2}=\theta\); \(x^{3}=\phi\).
The metric (99) has to satisfy Einstein field equations
\[G_{\mu}^{\nu}=8\pi T_{\mu}^{\nu}. \tag{100}\]
We may write for the energy momentum tensor
\[T_{\alpha\beta}\ =\ (\mu+P)V_{\alpha}V_{\beta}-Pg_{\alpha\beta}+\Pi_{\alpha\beta}, \tag{101}\]
or
\[T_{\alpha\beta}=(\mu+P_{zz})V_{\alpha}V_{\beta}-P_{zz}g_{\alpha\beta}+(P_{xx}-P _{zz})K_{\alpha}K_{\beta}. \tag{102}\]
Since we choose the fluid to be comoving in our coordinates, then
\[V^{\alpha}=(e^{-\nu/2},0,0,0);\quad V_{\alpha}=(e^{\nu/2},0,0,0), \tag{103}\]
and
\[K_{\alpha}=(0,-e^{\lambda/2},0,0), \tag{104}\]
It would be useful to express the anisotropic tensor in the form
\[\Pi_{\alpha\beta}=\Pi\left(K_{\alpha}K_{\beta}+\frac{h_{\alpha\beta}}{3} \right), \tag{105}\]
with \(h_{\mu\nu}=g_{\mu\nu}-V_{\nu}V_{\mu}\),
\[\Pi=P_{xx}-P_{zz}, \tag{106}\]
and
\[P=\frac{P_{xx}+2P_{zz}}{3}. \tag{107}\]
Since the Lie derivative and the partial derivative commute, then
\[{\cal L}_{\chi}G_{\alpha\beta}=8\pi{\cal L}_{\chi}T_{\alpha\beta}=0, \tag{108}\]
for any \(\chi\) defined by (98), implying that all physical variables only depend on \(r\).
If the fluid is bounded from the exterior by a hypersurface \(\Sigma^{e}\) described by the equation \(r=r_{\Sigma^{e}}=constant\), then the smooth matching of (97) and (99) on \(\Sigma^{e}\) requires the fulfillment of the Darmois conditions, imposing the continuity of the first and the second fundamental forms, which imply
\[e^{\nu_{\Sigma^{e}}}=\frac{2M}{r_{\Sigma^{e}}}-1,\qquad e^{\lambda_{\Sigma^{ e}}}=\frac{1}{\frac{2M}{r_{\Sigma^{e}}}-1},\qquad P_{xx}(r_{\Sigma^{e}})=0, \tag{109}\]
and the continuity of the mass function \(m(r)\) defined below. If we assume that the central region is surrounded by an empty cavity whose delimiting surface is \(r=r_{\Sigma^{i}}=constant\), then the fulfillment of Darmois conditions on \(\Sigma^{i}\) implies
\[e^{\nu_{\Sigma^{i}}}=1,\qquad e^{\lambda_{\Sigma^{i}}}=1,\qquad P_{xx}(r_{ \Sigma^{i}})=0, \tag{110}\]
and \(m(r_{\Sigma^{i}})=0\).
The non-vanishing components of the Einstein equations for the metric (99) and the energy momentum tensor (102) are
\[8\pi\mu = -\frac{(e^{-\lambda}+1)}{r^{2}}+\frac{\lambda^{\prime}}{r}e^{- \lambda}, \tag{111}\] \[8\pi P_{r} = \frac{(e^{-\lambda}+1)}{r^{2}}+\frac{\nu^{\prime}}{r}e^{-\lambda},\] (112) \[8\pi P_{\perp} = \frac{e^{-\lambda}}{2}\left(\nu^{\prime\prime}+\frac{\nu^{\prime 2 }}{2}-\frac{\lambda^{\prime}\nu^{\prime}}{2}+\frac{\nu^{\prime}}{r}-\frac{ \lambda^{\prime}}{r}\right), \tag{113}\]
where we have used the standard notation \(P_{xx}\equiv P_{r}\) and \(P_{zz}=P_{yy}\equiv P_{\perp}\), and primes denote derivatives with respect to \(r\).
It is worth stressing the differences between these equations and the corresponding to the spherically symmetric case.
From the equations above or using the conservation laws \(T_{\beta;\alpha}^{\alpha}=0\) we obtain, besides the identity \(\dot{\mu}=0\) (where dot denotes derivative with respect to \(t\)), the corresponding hydrostatic equilibrium equation (the generalized Tolman-Oppenheimer-Volkoff equation)
\[P_{r}^{\prime}+(\mu+P_{r})\frac{\nu^{\prime}}{2}+\frac{2}{r}\Pi\ =\ 0. \tag{114}\]
Let us now define the mass function \(m=m(r)\). For doing so, let us notice that using (97) we have that outside the fluid distribution (but inside the horizon)
\[M=-\left(\frac{R}{2}\right)R_{232}^{3}, \tag{115}\]
where the Riemann tensor component \(R_{232}^{3}\), has been calculated with (97).
Then generalizing the above definition of mass for the interior of the fluid distribution we may write
\[m(r)=-\left(\frac{r}{2}\right)R_{232}^{3}=\frac{r(1+e^{-\lambda})}{2} \tag{116}\]
where now the Riemann tensor component is calculated with (99).
Feeding back (116) into (111) we obtain
\[m^{\prime}(r)=-4\pi r^{2}\mu\Rightarrow m=-4\pi\int_{0}^{r}\mu r^{2}dr. \tag{117}\]
Since \(m\) as defined by (116) is a positive quantity, \(\mu\) should be negative and therefore the weak energy condition is violated, a result already obtained in [89]. However, it is important to stress that our definition of the mass function differs from the one introduced in [89]. In particular, our \(m\) is positive definite, whereas the expression used in [89] is negative (for the hyperbolically symmetric fluid).
The following comments are in order at this point.
* If the energy density is regular everywhere, then the mass function must vanish at the center as \(m\sim r^{3}\); this implies (as follows from (116)) that the fluid cannot fill the space in the neighborhood of the center, i.e. there is a cavity around the center which may be either empty or filled with a fluid distribution not endowed with hyperbolical symmetry. Thus the hyperbolically symmetric fluid spans from a minimal value of the coordinate \(r\) up to its external boundary. For the extreme case \(\mu=constant\), this minimal value \(r_{min.}\) is defined by \(-\frac{8\pi}{3}\mu r_{min.}^{2}>1\). Obviously, if the energy density is singular in the neighborhood of the center, then this region must also be excluded for physical reasons.
From the above it follows that, strictly speaking, we should write instead of (117)
\[m=4\pi\int_{r_{min}}^{r}|\mu|r^{2}dr, \tag{118}\]
where due to the fact that \(\mu\) is negative, we have replaced it by \(-|\mu|\) (as we shall do from now on).
The situation described above is fully consistent with the results obtained in [58] where it was shown that test particles cannot reach the center for any finite value of its energy.
Next, using (112) and (116) we obtain
\[\nu^{\prime}=2\frac{4\pi r^{3}P_{r}-m}{r(2m-r)}, \tag{119}\]
from which we may write (114) as
\[P_{r}^{\prime}+(P_{r}-|\mu|)\frac{4\pi r^{3}P_{r}-m}{r(2m-r)}+\frac{2}{r}\Pi = 0. \tag{120}\]
This is the hydrostatic equilibrium equation for our fluid. Let us analyze in some detail the physical meaning of its different terms.
The first term is just the gradient of pressure, which is usually negative and opposing gravity. The second term describes the gravitational "force" and contains two different contributions: on the one hand the term \(P_{r}-|\mu|\) which we expect to be negative (or zero for the stiff equation of state) and is usually interpreted as the "passive gravitational mass density" (p.g.m.d.), and on the other hand the term \(4\pi r^{3}P_{r}-m\) that is proportional to the "active gravitational mass" (a.g.m.), and which is negative if \(4\pi r^{3}P_{r}<m\). Finally the third term describes the effect of the pressure anisotropy, whose sign depends on the difference between principal stresses. Two important remarks are in order at this point:
* It is worth stressing that while the self-regenerative pressure effect (described by the \(4\pi r^{3}P_{r}\) term in (120)) has the same sign as in the spherically symmetric case, the mass function contribution in the second term has the opposite sign with respect to the latter case. This of course is due to the fact that the energy density is negative.
* If, both, the p.g.m.d. and the a.g.m. are negative, the final effect of the gravitational interaction would be as usual, to oppose the negative pressure gradient. However, because of the equivalence principle, a negative p.g.m.d. implies a negative inertial mass, which in turn implies that the hydrostatic force term (the pressure gradient and the anisotropic term), and the gravitational force term, switch their roles with respect to the positive energy density case.
In this case the complexity factor \(Y_{TF}\) is given by
\[Y_{TF}=4\pi\Pi+\mathcal{E}, \tag{121}\]
or
\[Y_{TF}=8\pi\Pi+\frac{4\pi}{r^{3}}\int_{0}^{r}\tilde{r}^{3}|\mu|^{\prime}d \tilde{r}, \tag{122}\]
where \(\mathcal{E}\) is the scalar defining the electric part of the Weyl tensor (the magnetic part vanishes identically as in the spherically symmetric case).
Also, as in the spherically symmetric case \(Y_{TF}\) encompasses the influence of the local anisotropy of pressure and density inhomogeneity on the Tolman mass. Or, in other words, \(Y_{TF}\) describes how these two factors modify the value of the Tolman mass, with respect to its value for the homogeneous isotropic fluid.
Indeed, in this case it can be shown that the following expressions for the Tolman mass can be obtained (see [62] for details)
\[m_{T}=\frac{(cosh\pi-1)}{4}e^{(\nu-\lambda)/2}r^{2}\nu^{\prime}, \tag{123}\]
and
\[m_{T} = (m_{T})_{\Sigma^{e}}\left(\frac{r}{r_{\Sigma^{e}}}\right)^{3} \tag{124}\] \[+ \frac{(cosh\pi-1)}{2}r^{3}\int_{r}^{r_{\Sigma^{e}}}\frac{e^{(\nu+ \lambda)/2}}{\tilde{r}}Y_{TF}d\tilde{r}.\]
We shall next present two models endowed with hyperbolical symmetry and satisfying the vanishing complexity factor condition.
#### iv.2.1 A model with vanishing complexity factor and vanishing radial pressure
Since the vanishing complexity factor is not enough to close the full system of Einstein equations we have to impose an additional restriction in order to obtain a specific model. Here we shall assume (besides the vanishing complexity factor), the condition \(P_{r}=0\).
Thus, assuming \(P_{r}=0\), we obtain from (112)
\[\nu^{\prime}=-\frac{2g}{(2g-1)r}, \tag{125}\]
where \(g\) is defined by
\[e^{-\lambda}=2g-1. \tag{126}\]
Next, imposing \(Y_{TF}=0\) in (124) it follows that
\[m_{T}=(m_{T})_{\Sigma^{e}}\frac{r^{3}}{r_{\Sigma}^{3}}. \tag{127}\]
The combination of (123), (125), (126) and (127) produces
\[e^{\nu}=\frac{4(m_{T}^{2})_{\Sigma^{e}}r^{4}}{r_{\Sigma}^{6}(cosh\pi-1)^{2}} \frac{(2g-1)}{g^{2}}. \tag{128}\]
On the other hand the condition \(Y_{TF}=0\) may be written as
\[g^{\prime}r(1-g)+g(5g-2)=0, \tag{129}\]
whose solution reads
\[C_{2}r^{10}=\frac{g^{5}}{(5g-2)^{3}}, \tag{130}\]
where \(C_{2}\) is a constant of integration.
Then for the physical variables we obtain
\[|\mu| = \frac{3}{4\pi r^{2}}\frac{g(2g-1)}{(g-1)}, \tag{131}\] \[P_{\perp} = \frac{3}{8\pi r^{2}}\frac{g^{2}}{(g-1)}. \tag{132}\]
In this case, the fluid distribution is restricted by a minimal value of the \(r\) coordinate, satisfying \(g(r_{min})>1\). The specific value of \(r_{min}\) is obtained from (130). For \(0<r<r_{min}\) we may assume, as in the preceding models, an empty cavity surrounding the center. Also, as in the preceding models, the discontinuity of the mass function across \(\Sigma^{i}\) implies that a thin shell appears on it. Finally, since the radial pressure is assumed to be zero, both the a.g.m. and the p.g.m.d. are negative.
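The model is fully explicit up to the inversion of (130), which has to be done numerically. The sketch below (with the assumed value \(C_{2}=1\)) obtains \(g(r)\) by root finding on a grid of radii above \(r_{min}\) and then evaluates (131) and (132).

```python
# Numerical sketch (assumed value C2 = 1): build the hyperbolically symmetric,
# P_r = 0, Y_TF = 0 model by inverting (130) for g(r) and evaluating (131)-(132).
import numpy as np
from scipy.optimize import brentq

C2 = 1.0
rhs_of_130 = lambda g: g**5 / (5 * g - 2)**3     # right-hand side of (130) over C2 r^10

# g = 1 marks the smallest admissible radius (the denominators in (131)-(132) vanish there)
r_min = (rhs_of_130(1.0) / C2)**0.1
r_vals = np.linspace(1.05 * r_min, 3.0 * r_min, 50)

g_vals = []
for r in r_vals:
    target = C2 * r**10
    g_hi = max(2.0, np.sqrt(130.0 * target))     # rhs_of_130(g) ~ g**2/125 for large g
    g_vals.append(brentq(lambda g: rhs_of_130(g) - target, 1.0 + 1e-9, g_hi))
g_vals = np.array(g_vals)

abs_mu = 3 * g_vals * (2 * g_vals - 1) / (4 * np.pi * r_vals**2 * (g_vals - 1))   # (131)
P_perp = 3 * g_vals**2 / (8 * np.pi * r_vals**2 * (g_vals - 1))                   # (132)
print(f"g ranges over [{g_vals.min():.3f}, {g_vals.max():.3f}]; "
      f"|mu| and P_perp positive on the grid: {bool((abs_mu > 0).all() and (P_perp > 0).all())}")
```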
#### ii.2.2 A model with the stiff equation of state and vanishing complexity factor
Finally, we shall consider a solution satisfying both the vanishing complexity factor condition and the so-called stiff equation of state proposed by Zeldovich [137], which is thought to be suitable to describe ultradense matter.
In its original form the stiff equation of state assumes that energy density equals pressure (in relativistic units). In our case we shall assume
\[|\mu|=P_{r}. \tag{133}\]
Then (120) becomes
\[P_{r}^{\prime}+\frac{2}{r}\Pi = 0. \tag{134}\]
Using this latter condition in (122) with \(Y_{TF}=0\), and feeding back the resulting expression into (134) one obtains
\[P_{r}^{\prime\prime}+\frac{3}{r}P_{r}^{\prime}=0, \tag{135}\]
whose solution reads
\[P_{r}=\frac{b}{r^{2}}-a, \tag{136}\]
where \(a\) and \(b\) are two positive constants of integration.
Then from (116) and (117) it follows at once
\[m=4\pi r\left(b-\frac{ar^{2}}{3}\right), \tag{137}\]
from which we easily obtain \(\lambda\). Finally, feeding back these expressions into (119) we may obtain \(\nu\).
Assuming the fluid distribution to be bounded from the exterior by the surface \(\Sigma^{e}\) described by \(r=r_{\Sigma^{e}}=constant\), then we may write
\[P_{r}=b\left(\frac{1}{r^{2}}-\frac{1}{r_{\Sigma^{e}}^{2}}\right). \tag{138}\]
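The remaining quantities of this model follow by direct substitution; the short symbolic sketch below (illustrative only) recovers the anisotropy implied by (134), the corresponding tangential pressure, and \(e^{-\lambda}\) from (116) and (137).

```python
# Symbolic sketch (not part of the original text): for the stiff fluid with Y_TF = 0,
# recover the quantities implied by (133)-(138): the anisotropy from (134), the mass
# from (137), and e^{-lambda} from (116).
import sympy as sp

r, b, rS = sp.symbols('r b r_Sigma', positive=True)
a = b / rS**2                                   # fixes P_r(r_Sigma) = 0, cf. (138)
P_r = b / r**2 - a                              # eq. (136)
Pi = sp.simplify(-r * sp.diff(P_r, r) / 2)      # from (134): P_r' + 2*Pi/r = 0
P_perp = sp.simplify(P_r - Pi)                  # since Pi = P_r - P_perp
m = 4 * sp.pi * r * (b - a * r**2 / 3)          # eq. (137)
e_minus_lambda = sp.simplify(2 * m / r - 1)     # from (116): m = r(1 + e^{-lambda})/2
# closed forms: Pi = b/r^2, P_perp = -b/r_Sigma^2, e^{-lambda} = 8*pi*b*(1 - r^2/(3*r_Sigma^2)) - 1
print(Pi, P_perp, e_minus_lambda, sep='\n')
```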
So far we have only considered static configurations. In the next section we shall tackle the problem of defining complexity for dynamic dissipative fluids.
## III Dynamic spherically symmetric fluids
As mentioned before, when dealing with non-static dissipative fluids we encounter two additional problems to define complexity. On the one hand we have to include the dissipative flux in the variable measuring the complexity of the structure. On the other hand we have also to describe the complexity of the pattern of evolution.
In other words, in the dynamic case we have to ask ourselves two questions instead of one, namely, what is the simplest fluid? and, what is the simplest mode of evolution of the fluid? These two questions, although related in some way, are completely different. A "simple" fluid may evolve exhibiting a very complex pattern of evolution, while a fluid with a high degree of complexity may evolve through a simple pattern of evolution.
This problem was studied in detail in [56]; here we shall sketch the main steps leading to the appropriate definition of complexity for dynamic fluids and discuss the answer to the question: what is the simplest mode of evolution of the fluid? Afterwards we shall present some solutions.
We shall restrict the discussion to the spherically symmetric case.
### The dynamic spherically symmetric case
So, let us consider a spherically symmetric distribution of collapsing fluid, which may be bounded by a spherical surface \(\Sigma\), or not. The fluid is assumed to be locally anisotropic (principal stresses unequal) and undergoing dissipation in the form of heat flow (diffusion approximation).
Choosing comoving coordinates, the general interior metric can be written
\[ds^{2}=-A^{2}dt^{2}+B^{2}dr^{2}+R^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{139}\]
where \(A\), \(B\) and \(R\) are functions of \(t\) and \(r\) and are assumed positive. We number the coordinates \(x^{0}=t\), \(x^{1}=r\), \(x^{2}=\theta\) and \(x^{3}=\phi\). Observe that \(A\) and \(B\) are dimensionless, whereas \(R\) has the same dimension as \(r\).
The energy-momentum tensor \(T_{\alpha\beta}\) of the fluid distribution has the form
\[T_{\alpha\beta} = (\mu+P_{\perp})V_{\alpha}V_{\beta}+P_{\perp}g_{\alpha\beta}+(P_{ r}-P_{\perp})\chi_{\alpha}\chi_{\beta} \tag{140}\] \[+ q_{\alpha}V_{\beta}+V_{\alpha}q_{\beta},\]
where \(\mu\) is the energy density, \(P_{r}\) the radial pressure, \(P_{\perp}\) the tangential pressure, \(q^{\alpha}\) the heat flux, \(V^{\alpha}\) the four velocity of the fluid, and \(\chi^{\alpha}\) a unit four vector along the radial direction. These quantities satisfy
\[V^{\alpha}V_{\alpha}=-1,\ \ V^{\alpha}q_{\alpha}=0,\ \ \chi^{\alpha}\chi_{ \alpha}=1,\ \ \chi^{\alpha}V_{\alpha}=0. \tag{141}\]
Or in the equivalent (canonical) form
\[T_{\alpha\beta}=\mu V_{\alpha}V_{\beta}+Ph_{\alpha\beta}+\Pi_{\alpha\beta}+q \left(V_{\alpha}\chi_{\beta}+\chi_{\alpha}V_{\beta}\right) \tag{142}\]
with
\[P=\frac{P_{r}+2P_{\perp}}{3},\qquad h_{\alpha\beta}=g_{\alpha\beta}+V_{\alpha} V_{\beta},\]
\[\Pi_{\alpha\beta}=\Pi\left(\chi_{\alpha}\chi_{\beta}-\frac{1}{3}h_{\alpha \beta}\right),\qquad\Pi=P_{r}-P_{\perp},\]
where \(q\) is a function of \(t\) and \(r\).
Since we are considering comoving observers, we have
\[V^{\alpha} = A^{-1}\delta^{\alpha}_{0},\ \ q^{\alpha}=qB^{-1}\delta^{\alpha}_{1},\ \ \chi^{\alpha}=B^{-1}\delta^{\alpha}_{1}, \tag{143}\]
It is worth noticing that, both, bulk and shear viscosity could be easily introduced to the system through a redefinition of the radial and tangential pressures, \(P_{r}\) and \(P_{\perp}\). Also, dissipation in the free streaming approximation could be introduced by redefining \(\mu,P_{r}\) and \(q\).
The Einstein equations for (139) and (142), are explicitly written in Appendix IX.
The acceleration \(a_{\alpha}\) and the expansion \(\Theta\) of the fluid are given by
\[a_{\alpha}=V_{\alpha;\beta}V^{\beta},\ \ \Theta=V^{\alpha}_{\ ;\ \alpha}. \tag{144}\]
and its shear \(\sigma_{\alpha\beta}\) by
\[\sigma_{\alpha\beta}=V_{(\alpha;\beta)}+a_{(\alpha}V_{\beta)}-\frac{1}{3} \Theta h_{\alpha\beta}, \tag{145}\]
from which we easily obtain
\[a_{1}=\frac{A^{\prime}}{A},\ \ a=\sqrt{a^{\alpha}a_{\alpha}}=\frac{A^{\prime}}{AB}, \tag{146}\]
\[\Theta=\frac{1}{A}\left(\frac{\dot{B}}{B}+2\frac{\dot{R}}{R}\right), \tag{147}\]
\[\sigma_{11}=\frac{2}{3}B^{2}\sigma,\ \ \sigma_{22}=\frac{\sigma_{33}}{\sin^{2} \theta}=-\frac{1}{3}R^{2}\sigma, \tag{148}\]
where
\[\sigma^{\alpha\beta}\sigma_{\alpha\beta}=\frac{2}{3}\sigma^{2}, \tag{149}\]
with
\[\sigma=\frac{1}{A}\left(\frac{\dot{B}}{B}-\frac{\dot{R}}{R}\right), \tag{150}\]
where the prime stands for \(r\) differentiation and the dot stands for differentiation with respect to \(t\).
Next, the mass function \(m(t,r)\) introduced by Misner and Sharp [94] reads
\[m=\frac{R^{3}}{2}R_{23}{}^{23}=\frac{R}{2}\left[\left(\frac{\dot{R}}{A}\right)^{2}-\left(\frac{R^{\prime}}{B}\right)^{2}+1\right] \tag{151}\]
and introducing the proper time derivative \(D_{T}\) given by
\[D_{T}=\frac{1}{A}\frac{\partial}{\partial t}, \tag{152}\]
we can define the velocity \(U\) of the collapsing fluid as the variation of the areal radius with respect to proper time, i.e.
\[U=D_{T}R, \tag{153}\]
where \(R\) defines the areal radius of a spherical surface inside the fluid distribution (as measured from its area).
Then (151) can be rewritten as
\[E\equiv\frac{R^{\prime}}{B}=\left(1+U^{2}-\frac{2m}{R}\right)^{1/2}. \tag{154}\]
Using (154) we can express (254) as
\[4\pi q=E\left[\frac{1}{3}D_{R}(\Theta-\sigma)-\frac{\sigma}{R}\right], \tag{155}\]
where \(D_{R}\) denotes the proper radial derivative,
\[D_{R}=\frac{1}{R^{\prime}}\frac{\partial}{\partial r}. \tag{156}\]
Using (250)-(252) with (156) we obtain from (151)
\[D_{R}m=4\pi\left(\mu+q\frac{U}{E}\right)R^{2}, \tag{157}\]
which implies
\[m=4\pi\int_{0}^{r}\left(\mu+q\frac{U}{E}\right)R^{2}R^{\prime}dr, \tag{158}\]
satisfying the regular condition \(m(t,0)=0\).
Integrating (158) we find
\[\frac{3m}{R^{3}}=4\pi\mu-\frac{4\pi}{R^{3}}\int_{0}^{r}R^{3}\left(D_{R}\mu-3q \frac{U}{RE}\right)R^{\prime}dr. \tag{159}\]
### Defining complexity for the dynamic fluid
As we have already mentioned, in the dynamic case the definition of a quantity measuring the complexity of the system poses two additional problems with respect to the static case.
On the one hand, the definition of the complexity of the structure of the fluid, which in this case also involves dissipative variables, and on the other hand the problem of defining the complexity of the pattern of evolution of the system.
For the static fluid distribution it was assumed that the scalar function \(Y_{TF}\) is an appropriate measure of the complexity of the fluid, and therefore was identified as the complexity factor.
We shall assume in the dynamic case that \(Y_{TF}\) still measures the complexity of the system, in what corresponds to the structure of the object, and we shall adopt initially an assumption about the simplest possible pattern of evolution. Specifically, we shall assume that the simplest evolution pattern (one of them at least) is described by the homologous evolution. However, as we shall see below, this last condition might be too stringent, ruling out many interesting scenarios from the astrophysical point of view and therefore we shall consider also other possible (less restrictive) mode of evolution which also could be used to describe the simplest mode of evolution, and which we call quasi-homologous.
In order to provide the necessary mathematical expressions for carrying out our task, let us start by finding the expression for the Weyl tensor.
As is well known, in the spherically symmetric case the Weyl tensor (\(C^{\rho}_{\alpha\beta\mu}\)) is defined by its "electric" part \(E_{\gamma\nu}\) alone, since its "magnetic" part vanishes, with
\[E_{\alpha\beta}=C_{\alpha\mu\beta\nu}V^{\mu}V^{\nu}, \tag{160}\]
where the electric part of Weyl tensor may also be written as
\[E_{\alpha\beta}=\mathcal{E}(\chi_{\alpha}\chi_{\beta}-\frac{1}{3}h_{\alpha \beta}). \tag{161}\]
with
\[\mathcal{E} = \frac{1}{2A^{2}}\left[\frac{\ddot{R}}{R}-\frac{\ddot{B}}{B}- \left(\frac{\dot{R}}{R}-\frac{\dot{B}}{B}\right)\left(\frac{\dot{A}}{A}+\frac {\dot{R}}{R}\right)\right]+\frac{1}{2B^{2}}\left[\frac{A^{\prime\prime}}{A}- \frac{R^{\prime\prime}}{R}+\left(\frac{B^{\prime}}{B}+\frac{R^{\prime}}{R} \right)\left(\frac{R^{\prime}}{R}-\frac{A^{\prime}}{A}\right)\right]-\frac{1} {2R^{2}}. \tag{162}\]
Then, proceeding as in the previous cases (see [56] for details) we obtain
\[Y_{T}=4\pi(\mu+3P_{r}-2\Pi),\qquad Y_{TF}=\mathcal{E}-4\pi\Pi. \tag{163}\]
Next, using (250), (252), (253) with (151) and (162) we obtain
\[\frac{3m}{R^{3}}=4\pi\left(\mu-\Pi\right)-\mathcal{E}, \tag{164}\]
which combined with (159) and (163) produces
\[Y_{TF}=-8\pi\Pi+\frac{4\pi}{R^{3}}\int_{0}^{r}R^{3}\left(D_{R}\mu-3q\frac{U}{ RE}\right)R^{\prime}dr. \tag{165}\]
Again, we notice that due to a different signature, the sign of \(Y_{TF}\) in the above equation differs from the sign of the \(Y_{TF}\) used in [55] for the static case.
Thus the scalar \(Y_{TF}\) may be expressed through the Weyl tensor and the anisotropy of pressure or in terms of the anisotropy of pressure, the density inhomogeneity and the dissipative variables.
Once the complexity factor for the structure of the fluid distribution has been established, it remains to elucidate what is the simplest pattern of evolution. Based on purely intuitive thoughts we shall first identify the homologous evolution as the simplest mode of evolution.
In order to obtain a mathematical description of the homologous evolution, we shall proceed as follows
First of all observe that we can write (155) as
\[D_{R}\left(\frac{U}{R}\right)=\frac{4\pi}{E}q+\frac{\sigma}{R}, \tag{166}\]
which after integration becomes
\[U=\tilde{a}(t)R+R\int_{0}^{r}\left(\frac{4\pi}{E}q+\frac{\sigma}{R}\right)R^{ \prime}dr, \tag{167}\]
where \(\tilde{a}\) is an integration function, or,
\[U=\frac{U_{\Sigma}}{R_{\Sigma}}R-R\int_{r}^{r_{\Sigma}}\left(\frac{4\pi}{E}q+ \frac{\sigma}{R}\right)R^{\prime}dr. \tag{168}\]
If the integrand in the above equations vanishes we have from (167) or (168)
\[U=\tilde{a}(t)R. \tag{169}\]
This relationship is characteristic of the homologous evolution in Newtonian hydrodynamics [38; 81; 105]. In our case this may occur if the fluid is shear-free and non dissipative, or if the two terms in the integral cancel each other.
In [56], the term "homologous evolution" was used to characterize relativistic systems satisfying, besides (169), the condition
\[\frac{R_{I}}{R_{II}}=\text{constant}, \tag{170}\]
where \(R_{I}\) and \(R_{II}\) denote the areal radii of two concentric shells \((I,II)\) described by \(r=r_{I}=\text{constant}\), and \(r=r_{II}=\text{constant}\), respectively.
Now, it is very important to be aware of the fact that conditions (169) and (170), in the general relativistic case, are different and, more specifically, that (169) does not imply (170).
Indeed, (169) implies that for the two shells of fluids \(I,II\) we have
\[\frac{U_{I}}{U_{II}}=\frac{A_{II}\dot{R}_{I}}{A_{I}\dot{R}_{II}}=\frac{R_{I}}{ R_{II}}, \tag{171}\]
that implies (170) only if \(A=A(t)\), which by a simple coordinate transformation becomes \(A=\text{constant}\). Thus in the non-relativistic regime, (170) always follows from the condition that the radial velocity is proportional to the radial distance, whereas in the relativistic regime the condition (169) implies (170), only if the fluid is geodesic.
We shall define quasi-homologous evolution as that restricted only by condition (169), implying
\[\frac{4\pi}{R^{\prime}}Bq+\frac{\sigma}{R}=0. \tag{172}\]
Let us first consider the homologous evolution (both (169) and (170) are satisfied).
From the equation (170) it follows that \(R\) is a separable function, i.e. we can write
\[R=R_{1}(t)R_{2}(r). \tag{173}\]
To summarize, the homologous condition implies (173), and
\[\frac{4\pi}{R^{\prime}}Bq+\frac{\sigma}{R}=0. \tag{174}\]
Feeding back this last expression into (254), we obtain
\[(\Theta-\sigma)^{\prime}=0, \tag{175}\]
whereas, using (147) and (150) we get
\[(\Theta-\sigma)^{\prime}=\left(\frac{3}{A}\frac{\dot{R}}{R}\right)^{\prime}=0. \tag{176}\]
Then using (173) it follows at once that
\[A^{\prime}=0, \tag{177}\]
implying that the fluid is geodesic, as it follows from (146). Also, by reparametrizing the coordinate \(t\), we may put, without loss of generality, \(A=1\).
The converse is in general not true (e.g. the Lemaitre-Tolman-Bondi case), unless we assume that \((\Theta-\sigma)^{\prime}\) is of class \(C^{\omega}\).
In the non-dissipative case, the homologous condition not only implies that the fluid is geodesic, but also that it is shear-free, as it follows at once from (174).
An important point to mention here is that as it has been shown in [47], an initially shear-free geodesic fluid remains shear-free during the evolution iff \(Y_{TF}=0\). This implies that a system which starts its evolution from the rest (\(\sigma=0\)), will remain shear-free if the fluid is geodesic (or equivalently, homologous) and \(Y_{TF}=0\). This is an additional argument supporting our choice of \(Y_{TF}\) as the complexity factor.
If we impose the homologous condition, the equation (259) in Appendix X becomes
\[D_{T}U=-\frac{m}{R^{2}}-4\pi P_{r}R, \tag{178}\]
or in terms of \(Y_{TF}\)
\[\frac{3D_{T}U}{R}=-4\pi\left(\mu+3P_{r}-2\Pi\right)+Y_{TF}, \tag{179}\]
where (163) has been used.
Next from the field equations we obtain
\[4\pi\left(\mu+3P_{r}-2\Pi\right)=-\frac{2\ddot{R}}{R}-\frac{\ddot{B}}{B}, \tag{180}\]
and from the definition of \(U\)
\[\frac{3D_{T}U}{R}=\frac{3\ddot{R}}{R}, \tag{181}\]
feeding back the two equations above into (179), it follows that
\[\frac{\ddot{R}}{R}-\frac{\ddot{B}}{B}=Y_{TF}. \tag{182}\]
Now, the vanishing complexity factor condition (\(Y_{TF}=0\)), produces after the integration of (182)
\[B=R_{1}(t)\left(b_{1}(r)\int\frac{dt}{R_{1}(t)^{2}}+b_{2}(r)\right), \tag{183}\]
where \(b_{1}(r)\) and \(b_{2}(r)\) are two functions of integration, or
\[B=R_{1}(t)R_{2}^{\prime}(r)\left(\tilde{b}_{1}(r)\int\frac{dt}{R_{1}(t)^{2}}+\tilde{b}_{2}(r)\right), \tag{184}\]
with \(b_{1}(r)=\tilde{b}_{1}(r)R_{2}^{\prime}\) and \(b_{2}(r)=\tilde{b}_{2}(r)R_{2}^{\prime}\).
Then introducing the variable
\[Z=\tilde{b}_{1}(r)\int\frac{dt}{R_{1}(t)^{2}}+\tilde{b}_{2}(r), \tag{185}\]
we may write
\[B=ZR^{\prime}. \tag{186}\]
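That (183) indeed solves (182) with \(Y_{TF}=0\) when \(R=R_{1}(t)R_{2}(r)\) can be verified symbolically; a minimal sketch (illustration only):

```python
# Symbolic check (illustration only): with R = R1(t) R2(r), the profile (183)
# satisfies d^2R/dt^2 / R - d^2B/dt^2 / B = 0, i.e. eq. (182) with Y_TF = 0.
import sympy as sp

t, r = sp.symbols('t r')
R1 = sp.Function('R1')(t)
R2, b1, b2 = (sp.Function(name)(r) for name in ('R2', 'b1', 'b2'))

R = R1 * R2
B = R1 * (b1 * sp.Integral(1 / R1**2, t) + b2)      # eq. (183)

expr = sp.diff(R, t, 2) / R - sp.diff(B, t, 2) / B
print(sp.simplify(expr))                            # 0
```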
Let us now use the expressions above to analyze first the non-dissipative case.
Thus, if we further assume the fluid to be non-dissipative, recalling that in this case the homologous condition implies the vanishing of the shear, we obtain because of (150)
\[\frac{\ddot{R}}{R}-\frac{\ddot{B}}{B}=0\quad\Rightarrow Y_{TF}=0. \tag{187}\]
In other words, in this particular case, the homologous condition already implies the vanishing complexity factor condition.
More so, since the fluid is shear-free, we have because of (150) and (183)
\[b_{1}(r)=0\Rightarrow B=b_{2}(r)R_{1}(t)=\tilde{b}_{2}(r)R_{1}(t)R_{2}^{\prime}. \tag{188}\]
Then, reparametrizing \(r\) as \(\tilde{b}_{2}(r)dr\Rightarrow dr\), we may put without loss of generality \(B=R_{1}(t)R_{2}^{\prime}(r)\), or equivalently \(Z=1\), implying that all non-dissipative configurations evolving homologously (and thereby satisfying \(Y_{TF}=0\)) belong to what are known as "Euclidean stars" [48], characterized by the condition \(Z=1\Rightarrow B=R^{\prime}\). However, among all possible solutions satisfying the "Euclidean condition", only one evolves homologously and satisfies the condition \(Y_{TF}=0\).
Indeed, from the field equations (252) and (253) we may write
\[8\pi(P_{r}-P_{\perp})=\frac{\dot{Z}\dot{R}}{ZR}+\frac{1}{Z^{2}R^{2}}\left( \frac{Z^{\prime}R}{ZR^{\prime}}+1-Z^{2}\right). \tag{189}\]
Since in this case we have \(Z=1\) then \(\Pi=P_{r}-P_{\perp}=0\) which implies because of the \(Y_{TF}=0\) condition, that \(\mu^{\prime}=0\).
However, it is known that a shear-free, geodesic (non-dissipative) fluid with isotropic pressure is necessarily dust with homogeneous energy density and vanishing Weyl tensor (see [42; 46]). It goes without saying that this kind of system represents the simplest conceivable configuration (Friedmann-Robertson-Walker).
Thus for the non-dissipative case, the homologous condition implies \(Y_{TF}=0\) and produces the simplest configuration. This configuration is the only one evolving homologously and satisfying \(Y_{TF}=0\).
Of course, solutions satisfying \(Y_{TF}=0\) but not evolving homologously do exist. They only require \(8\pi\Pi=\frac{4\pi}{R^{3}}\int_{0}^{r}R^{3}\mu^{\prime}dr\). In such a case the solutions are shearing, and neither conformally flat nor geodesic.
Based on all the precedent comments, it seems reasonable to consider the homologous condition as a good candidate to describe the simplest mode of evolution.
### The dissipative case
In the dissipative case, we may obtain from (150) and (187),
\[\dot{\sigma}=-Y_{TF}+\left(\frac{\dot{R}}{R}\right)^{2}-\left(\frac{\dot{B}}{ B}\right)^{2}. \tag{190}\]
Then, taking the \(t\)-derivative of (174) and using (190) we obtain
\[Y_{TF}\frac{R^{\prime}}{R}=4\pi Bq\left(\frac{\dot{q}}{q}+2\frac{\dot{B}}{B} +\frac{\dot{R}}{R}\right). \tag{191}\]
If we assume \(Y_{TF}=0\), then we obtain
\[q=\frac{f(r)}{B^{2}R}, \tag{192}\]
implying
\[\dot{q}=-q(\Theta+\sigma), \tag{193}\]
where \(f\) is an arbitrary integration function. Solutions of this kind might be found by using the general methods presented in [70; 71; 72; 118; 119].
In the dissipative case we need to provide a transport equation to describe the evolution and distribution of temperature. Assuming a causal dissipative theory (e.g. the Israel- Stewart theory [67; 68; 69] ), the transport equation for the heat flux reads
\[\tau h^{\alpha\beta}V^{\gamma}q_{\beta;\gamma}+q^{\alpha}=-\kappa h^{\alpha \beta}\left(T_{,\beta}+Ta_{\beta}\right)-\frac{1}{2}\kappa T^{2}\left(\frac{ \tau V^{\beta}}{\kappa T^{2}}\right)_{;\beta}q^{\alpha}, \tag{194}\]
where \(\kappa\) denotes the thermal conductivity, and \(T\) and \(\tau\) denote temperature and relaxation time, respectively.
In the spherically symmetric case under consideration, the transport equation has only one independent component, which may be obtained from (194) by contracting with the unit spacelike vector \(K^{\alpha}\), producing
\[\tau V^{\alpha}q_{,\alpha}+q=-\kappa\left(K^{\alpha}T_{,\alpha}+Ta\right)- \frac{1}{2}\kappa T^{2}\left(\frac{\tau V^{\alpha}}{\kappa T^{2}}\right)_{; \alpha}q. \tag{195}\]
Sometimes it is possible to simplify the equation above by assuming the so-called truncated transport equation, in which the last term in (194) is neglected [121], producing
\[\tau V^{\alpha}q_{,\alpha}+q=-\kappa\left(K^{\alpha}T_{,\alpha}+Ta\right). \tag{196}\]
Now, in a dissipative process, it appears reasonable to consider the stationary state, prevailing once the system has relaxed and transient phenomena have vanished, as an example of the simplest dissipative regime. Thus, if we assume the stationary state (neglecting the relaxation time), then the transport equation (194) reads
\[q=-\frac{\kappa T^{\prime}}{B}. \tag{197}\]
Combining the above equation with (192) we obtain
\[T^{\prime}=-\frac{f(r)}{\kappa BR}. \tag{198}\]
At this point, however, neither can we provide solid arguments to support further the assumption about the vanishing of the relaxation time as an indicator of minimum complexity of the dissipative regime, nor can we prove that exact solutions of this kind exist.
So far we have assumed the homologous condition in order to describe the simplest mode of evolution; however, as indicated above, the resulting models are perhaps too restrictive, and it could be wise to consider less stringent conditions. One possible example could be the quasi-homologous condition (174). In [61; 64; 65] the reader may find a long list of exact solutions satisfying the vanishing complexity factor and the quasi-homologous condition for spherically symmetric and hyperbolically symmetric dissipative fluids.
Finally we shall tackle the problem of defining complexity of vacuum solutions to Einstein equations. We shall restrict our discussion to the Bondi metric.
## IV Complexity of the Bondi space-time
Let us now consider the extension of our concept of complexity to vacuum spacetimes. More specifically, we shall consider the Bondi metric [17], which encompasses a vast number of spacetimes, including the Minkowski spacetime, the static Weyl metrics, non-radiative non-static metrics and gravitationally radiating metrics. Furthermore, the Bondi approach has the virtue of providing a clear and precise criterion for the existence of gravitational radiation. Namely, if the news function is zero over a time interval, then there is no radiation over that interval.
As we have seen, in the case of fluid distributions, the variable(s) measuring the complexity of the fluid (the complexity factor(s)) appear in the trace free part of the orthogonal splitting of the electric Riemann tensor. In vacuum the Riemann tensor and the Weyl tensor are the same, so if we extrapolate to the vacuum case the same definition of complexity as for the fluid self-gravitating system, we shall need the scalar functions defining the electric part of the Weyl tensor for the Bondi metric.
The general form of an axially and reflection symmetric asymptotically flat metric given by Bondi [17] is
\[ds^{2} = \left(\frac{V}{r}e^{2\beta}-U^{2}r^{2}e^{2\gamma}\right)du^{2}+2e ^{2\beta}dudr+2Ur^{2}e^{2\gamma}dud\theta \tag{199}\] \[- r^{2}\left(e^{2\gamma}d\theta^{2}+e^{-2\gamma}\sin^{2}\theta d \phi^{2}\right),\]
where \(V,\beta,U\) and \(\gamma\) are functions of \(u,r\) and \(\theta\).
We number the coordinates \(x^{0,1,2,3}=u,r,\theta,\phi\) respectively. \(u\) is a timelike coordinate (\(g_{uu}>0\)), which tends to the retarded time as \(r\rightarrow\infty\). The hypersurfaces \(u=constant\) define null surfaces (their normal vectors are null vectors), which at null infinity (\(r\rightarrow\infty\)) coincide with the Minkowski null light cone open to the future. \(r\) is a null coordinate (\(g_{rr}=0\)) and \(\theta\) and \(\phi\) are two angle coordinates (see [17] for details).
Regularity conditions in the neighborhood of the polar axis (\(\sin\theta=0\)), imply that as \(\sin\theta\to 0\)
\[V,\beta,U/\sin\theta,\gamma/\sin^{2}\theta, \tag{200}\]
each equals a function of \(\cos\theta\) regular on the polar axis.
The four metric functions are assumed to be expanded in powers of \(1/r\); then, using the field equations, Bondi obtains
\[\gamma=cr^{-1}+\left(C-\frac{1}{6}c^{3}\right)r^{-3}+..., \tag{201}\]
\[U=-\left(c_{\theta}+2c\cot\theta\right)r^{-2}+\left[2N+3cc_{ \theta}+4c^{2}\cot\theta\right]r^{-3}..., \tag{202}\]
\[V=r-2M-\left(N_{\theta}+N\cot\theta-c_{\theta}^{2}-4cc_{\theta} \cot\theta-\right. \tag{203}\] \[\left.\frac{1}{2}c^{2}(1+8\cot^{2}\theta)\right)r^{-1}..., \tag{204}\]
\[\beta=-\frac{1}{4}c^{2}r^{-2}+..., \tag{205}\]
where \(c\), \(C\), \(N\) and \(M\) are functions of \(u\) and \(\theta\) satisfying the constraint
\[4C_{u}=2c^{2}c_{u}+2cM+N\cot\theta-N_{\theta}, \tag{206}\]
and letters as subscripts denote derivatives. The three functions \(c,M\) and \(N\) are further related by the supplementary conditions
\[M_{u}=-c_{u}^{2}+\frac{1}{2}\left(c_{\theta\theta}+3c_{\theta}\cot\theta-2c \right)_{u}, \tag{207}\]
\[-3N_{u}=M_{\theta}+3cc_{u\theta}+4cc_{u}\cot\theta+c_{u}c_{\theta}. \tag{208}\]
In the static case \(M\) equals the mass of the system and is called by Bondi the "mass aspect", whereas \(N\) and \(C\) are closely related to the dipole and quadrupole moments respectively.
Next, Bondi defines the mass \(m(u)\) of the system as
\[m(u)=\frac{1}{2}\int_{0}^{\pi}M\sin\theta d\theta, \tag{209}\]
which by virtue of (207) and (200) yields
\[m_{u}=-\frac{1}{2}\int_{0}^{\pi}c_{u}^{2}\sin\theta d\theta. \tag{210}\]
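As a simple illustration of (210), the following minimal numerical sketch evaluates the mass-loss rate for a hypothetical news function \(c_{u}=a\sin^{2}\theta\) (a form chosen only so that the regularity conditions (200) are respected); the amplitude \(a\) and the quadrature scheme are illustrative assumptions, not part of Bondi's formalism.

```python
import numpy as np

# Minimal numerical evaluation of the mass-loss formula (210),
# m_u = -(1/2) * int_0^pi c_u^2 sin(theta) dtheta,
# for a hypothetical news function c_u(theta) = a*sin(theta)**2.
def mass_loss_rate(news, n_theta=2001):
    theta = np.linspace(0.0, np.pi, n_theta)
    f = news(theta) ** 2 * np.sin(theta)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))  # trapezoidal rule
    return -0.5 * integral

a = 0.1                                            # illustrative amplitude
numeric = mass_loss_rate(lambda th: a * np.sin(th) ** 2)
exact = -0.5 * a**2 * 16.0 / 15.0                  # analytic value for this profile
print(numeric, exact)                              # both negative: the mass decreases
```

Since the integrand in (210) is non-negative, any non-zero news function gives \(m_{u}<0\), in line with the conclusions summarized below.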
Arriving at this point let us summarize the main conclusions emerging from Bondi's approach.
1. If \(\gamma,M\) and \(N\) are known for some \(u=a\) (constant) and \(c_{u}\) (the news function) is known for all \(u\) in the interval \(a\leq u\leq b\), then the system is fully determined in that interval. In other words, whatever happens at the source that leads to changes in the field can only do so by affecting \(c_{u}\), and vice versa. In the light of this comment, the relationship between the news function and the occurrence of radiation becomes clear.
2. As it follows from (210), the mass of a system is constant if and only if there is no news.
Now, for an observer at rest in the frame of (199), the four-velocity vector has components
\[V^{\alpha}=\left(\frac{1}{A},0,0,0\right), \tag{211}\]
with
\[A\equiv\left(\frac{V}{r}e^{2\beta}-U^{2}r^{2}e^{2\gamma}\right)^{1/2}. \tag{212}\]
Next, let us introduce the unit, spacelike vectors \(\mathbf{K}\), \(\mathbf{L}\), \(\mathbf{S}\), with components
\[K^{\alpha}=\left(\frac{1}{A},-e^{-2\beta}A,0,0\right)\quad L^{\alpha}=\left(0,Ure^{\gamma}e^{-2\beta},-\frac{e^{-\gamma}}{r},0\right) \tag{213}\]
\[S^{\alpha}=\left(0,0,0,-\frac{e^{\gamma}}{r\sin\theta}\right), \tag{214}\]
For the observer defined by (211) the vorticity vector may be written as (see [43] for details)
\[\omega^{\alpha}=\left(0,0,0,\omega^{\phi}\right). \tag{215}\]
The explicit expressions for \(\omega^{\phi}\) and its absolute value \(\Omega\equiv\left(-\omega_{\alpha}\omega^{\alpha}\right)^{1/2}\) are given in the Appendix XIII.
The electric and magnetic parts of Weyl tensor, \(E_{\alpha\beta}\) and \(H_{\alpha\beta}\), respectively, are formed from the Weyl tensor \(C_{\alpha\beta\gamma\delta}\) and its dual \(\tilde{C}_{\alpha\beta\gamma\delta}\) by contraction with the four velocity vector given by (211)
\[E_{\alpha\beta}=C_{\alpha\gamma\beta\delta}V^{\gamma}V^{\delta}, \tag{216}\]
\[H_{\alpha\beta} = \tilde{C}_{\alpha\gamma\beta\delta}V^{\gamma}V^{\delta}=\frac{1} {2}\epsilon_{\alpha\gamma\epsilon\delta}{C^{\epsilon\delta}}_{\beta\rho}V^{ \gamma}V^{\rho}, \tag{217}\] \[\epsilon_{\alpha\beta\gamma\delta}\equiv\sqrt{-g}\ \eta_{\alpha\beta\gamma\delta},\]
where \(\eta_{\alpha\beta\gamma\delta}\) is the permutation symbol.
The electric part of the Weyl tensor has only three independent non-vanishing components, whereas only two components define the magnetic part. Thus we may write
\[E_{\alpha\beta} = \mathcal{E}_{1}\left(K_{\alpha}L_{\beta}+L_{\alpha}K_{\beta} \right)+\mathcal{E}_{2}\left(K_{\alpha}K_{\beta}+\frac{1}{3}h_{\alpha\beta}\right) \tag{218}\] \[+ \mathcal{E}_{3}\left(L_{\alpha}L_{\beta}+\frac{1}{3}h_{\alpha\beta }\right),\]
and
\[H_{\alpha\beta}=H_{1}(S_{\alpha}K_{\beta}+S_{\beta}K_{\alpha})+H_{2}(S_{\alpha }L_{\beta}+S_{\beta}L_{\alpha}). \tag{219}\]
with \(h_{\mu\nu}=g_{\mu\nu}-V_{\nu}V_{\mu}\), and
\[\mathcal{E}_{1}=L^{\alpha}K^{\beta}E_{\alpha\beta}, \tag{220}\]
\[\mathcal{E}_{2}=(2K^{\alpha}K^{\beta}+L^{\alpha}L^{\beta})E_{\alpha\beta}, \tag{221}\]
\[\mathcal{E}_{3}=(2L^{\alpha}L^{\beta}+K^{\alpha}K^{\beta})E_{\alpha\beta}. \tag{222}\]
These three scalars represent the complexity factors of our solutions.
For the magnetic part we have
\[H_{2}=S^{\alpha}L^{\beta}H_{\alpha\beta}, \tag{223}\]
\[H_{1}=S^{\alpha}K^{\beta}H_{\alpha\beta}. \tag{224}\]
Explicit expressions for these scalars are given in the Appendixes XI and XII.
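For concreteness, the sketch below shows how the scalars (220)-(224) are obtained as plain contractions of \(E_{\alpha\beta}\) and \(H_{\alpha\beta}\) with the tetrad vectors; the numerical arrays are random placeholders, not the actual Bondi-metric expressions listed in the Appendixes.

```python
import numpy as np

# The scalars (220)-(224) are plain contractions of E_{ab}, H_{ab} with the tetrad
# vectors K^a, L^a, S^a.  The arrays below are random placeholders only.
rng = np.random.default_rng(0)
E = rng.normal(size=(4, 4)); E = 0.5 * (E + E.T)    # stand-in for E_{ab}
H = rng.normal(size=(4, 4)); H = 0.5 * (H + H.T)    # stand-in for H_{ab}
K, L, S = rng.normal(size=(3, 4))                   # stand-ins for K^a, L^a, S^a

E1 = np.einsum('a,b,ab->', L, K, E)                                   # (220)
E2 = np.einsum('ab,ab->', 2.0 * np.outer(K, K) + np.outer(L, L), E)   # (221)
E3 = np.einsum('ab,ab->', 2.0 * np.outer(L, L) + np.outer(K, K), E)   # (222)
H2 = np.einsum('a,b,ab->', S, L, H)                                   # (223)
H1 = np.einsum('a,b,ab->', S, K, H)                                   # (224)
print(E1, E2, E3, H1, H2)
```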
In [43] it was shown that if we put \(H_{\beta}^{\alpha}=0\), then the field is non-radiative and, up to order \(1/r^{3}\) in \(\gamma\), the metric is static, and the mass, the "dipole" (\(N\)) and the "quadrupole" (\(C\)) moments correspond to a static situation. However, the time dependence might enter through coefficients of higher order in \(\gamma\), giving rise to what Bondi calls a "non-natural non-radiative moving system" (NNNRS). In this latter case, the system keeps the first three moments independent of time, but allows for time dependence of higher moments. This class of solutions is characterized by \(M_{\theta}=0\).
A second family of time-dependent non-radiative solutions exists, for which \(M_{\theta}\neq 0\). These are called "natural non-radiative moving systems" (NNRS), and their magnetic Weyl tensor is non-vanishing.
Let us now discuss the hierarchy of different spacetimes belonging to the Bondi family, according to their complexity.
The simplest spacetime corresponds to the vanishing of the three complexity factors, and this is just Minkowski.
Indeed, as it was shown in [43], if we assume \(E_{\beta}^{\alpha}=0\) and use regularity conditions, we find that the spacetime must be Minkowski, giving further support to the conjecture that there are no purely magnetic vacuum spacetimes [19].
At the other end (maximal complexity) we have a gravitationally radiating system which requires all three complexity factors to be different from zero.
Indeed, if we assume that \({\cal E}_{1}=0\), then it follows at once from (261) that \(c_{u}=0\) (otherwise \(c_{u}\) would be a non-regular function of \(\theta\) on the symmetry axis). Thus \({\cal E}_{1}=0\) implies that the system is non-radiative.
If instead we assume that \({\cal E}_{2}=0\), then from the first order in (262) we obtain \(c_{uu}=0\), which implies that either \(c_{u}=0\) or \(c_{u}\sim u\). Bondi refers to this latter case as "mass loss without radiative Riemann tensor" and dismisses it as being of little physical significance. As a matter of fact, in this latter case the system would be radiating "forever", which according to (210) requires an unbounded source, incompatible with an asymptotically flat spacetime. Thus in this case too, we have \(c_{u}=0\), and the system is non-radiative.
Finally, if we assume \({\cal E}_{3}=0\) it follows at once from the first order in (263), that \(c_{uu}=0\), leading to \(c_{u}=0\), according to the argument above.
Thus, a radiative system requires all three complexity factors to be nonvanishing, implying a maximal complexity.
In the middle of the two extreme cases we have, on the one hand the spherically symmetric spacetime (Schwarzschild), characterized by a single complexity factor (the same applies for any static metric), \({\cal E}_{1}={\cal E}_{3}=0\), and \({\cal E}_{2}=\frac{3M}{r^{3}}\). On the other hand, we have the non-static non-radiative case.
Let us now analyze in detail this latter case. This group of solutions encompasses two subclasses, which, using Bondi's notation, are:
1. Natural-non-radiative systems (NNRS) characterized by \(M_{\theta}\neq 0\).
2. Non-natural-non-radiative systems (NNNRS) characterized by \(M_{\theta}=0\).
Let us first consider the NNNRS subcase. Using (261) we obtain \({\cal E}_{1}=0\) (up to order \(1/r^{3}\)), while the leading terms (of order \(1/r^{3}\)) in \({\cal E}_{2}\) and \({\cal E}_{3}\) are, respectively, \(3M\) and \(0\), where (VIII), (207), (208), (262) and (263) have been used.
Thus, the NNNRS are characterized by only one non-vanishing complexity factor (\({\cal E}_{2}\)). Furthermore, as it follows from (267) the vorticity of the congruence of observers at rest with respect to the frame of (199) vanishes, and the field is purely electric. However as mentioned before we cannot conclude that the field is static, since the \(u\) dependence might appear through coefficients of higher order in \(\gamma\).
Let us now consider the "natural-non-radiative system" (NNRS). In this subcase, using (261) we obtain \({\cal E}_{1}=0\), (up to order \(1/r^{3}\)) as for the NNNRS subcase, while the first non-vanishing term in \({\cal E}_{2}\) and \({\cal E}_{3}\) (up to order \(1/r^{3}\)) are respectively \(3M+\frac{M_{\theta}}{4}-\frac{M_{\theta}\cot\theta}{4}\) and \(\frac{M_{\theta}}{2}-\frac{M_{\theta}\cot\theta}{2}\).
Also, up to the same order, it follows from (264) and (265) that \(H_{1}=0\) for both subcases, while the corresponding term in \(H_{2}\) is (for NNRS)
\[-\frac{1}{4}(M_{\theta\theta}-M_{\theta}\cot\theta), \tag{225}\]
which of course vanishes for the NNNRS subcase.
It should be observed that if we assume \({\cal E}_{3}=0\) or \(H_{2}=0\) then it follows at once from the above that
\[M_{\theta\theta}-M_{\theta}\cot\theta=0\Rightarrow M=a\cos\theta,\quad a= constant. \tag{226}\]
But, because of (209), this implies that the Bondi mass function of the system vanishes. Therefore, the only physically meaningful NNRS requires \({\cal E}_{3}\neq 0\), \(\Omega\neq 0\) and \(H_{2}\neq 0\), implying that the complexity is characterized by two complexity factors (\({\cal E}_{2}\), \({\cal E}_{3}\)).
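A quick symbolic check of (226), and of the ensuing vanishing of the Bondi mass (209), can be carried out as follows; the particular solution \(M=a\cos\theta\) quoted above is simply taken as given.

```python
import sympy as sp

# Verify that M = a*cos(theta) solves M_{theta theta} - M_theta*cot(theta) = 0, eq. (226),
# and that this mass aspect yields a vanishing Bondi mass in (209).
theta, a = sp.symbols('theta a', real=True)
M = a * sp.cos(theta)

ode_lhs = sp.diff(M, theta, 2) - sp.diff(M, theta) * sp.cot(theta)
print(sp.simplify(ode_lhs))      # -> 0

m = sp.Rational(1, 2) * sp.integrate(M * sp.sin(theta), (theta, 0, sp.pi))
print(sp.simplify(m))            # -> 0: the Bondi mass vanishes for this mass aspect
```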
Thus a hierarchy of spacetimes according to their complexity has been established. This allows us to discriminate between two classes of spacetimes that depend on time but are not radiative (vanishing of the news function). These two classes were called by Bondi [17] natural and non-natural non-radiative moving systems, and are characterized by different forms of the mass aspect. They exhibit different degrees of complexity.
Unfortunately, though, up to the leading order of the complexity factors analyzed here, it is impossible to discriminate between different radiative systems according to their complexity. Higher order terms would be necessary for that purpose, although it is not clear at this point if it is possible to establish such a hierarchy of radiative systems after all.
The simplest system (Minkowski) is characterized by the vanishing of all the complexity factors. Next, the static case (including Schwarzschild) is described by a single complexity factor.
The time-dependent non-radiative solutions split into two subgroups depending on the form of the mass aspect \(M\). If \(M_{\theta}=0\), which corresponds to the NNNRS, the complexity is similar to that of the static case. Also, in this case, as in the static situation, the vorticity vanishes and the field is purely electric. This result could suggest that in fact NNNRS are just static, and no time dependence appears in the coefficients of higher order in \(\gamma\). On the contrary, for the NNRS there are two complexity factors, the vorticity is non-vanishing and the field is not purely electric.
All these results are summarized in Tables 1 and 2. Thus, NNNRS and NNRS are clearly differentiated through their degrees of complexity, as measured by the complexity factors considered here.
The fact that radiative systems necessarily decay into NNRS, NNNRS or static systems, since the Bondi mass function must be finite, suggests that higher degrees of complexity might be associated with stronger stability. Of course, a proof of this conjecture requires a much more detailed analysis.
It is also worth mentioning the conspicuous link between vorticity and complexity factors. Indeed, vorticity appears only in NNRS and radiative systems, which are the most complex systems, while it is absent in the simplest systems (Minkowski, static, NNNRS). In the radiative case there are contributions at order \(\mathcal{O}(r^{-1})\), related to the news function, and at order \(\mathcal{O}(r^{-2})\), while for the NNRS there are only contributions at order \(\mathcal{O}(r^{-2})\); the latter describe the effect of the tail of the wave.
## V Conclusions
We have discussed the concept of complexity of self-gravitating relativistic fluid distributions. For the static case we were concerned exclusively with the notion of complexity of the structure of the fluid. However, in the dynamic case we have also tackled the question of the complexity of the pattern of evolution. These two questions, although related, refer to different aspects of the definition of complexity. It is remarkable that in the non-dissipative case the homologous condition implies the vanishing complexity factor.
As a measure of complexity of the structure of the fluid (the complexity factor) we have chosen the scalar function(s) \(Y_{TF}\) defining the trace-free part of the electric Riemann tensor.
| Magnetic parts and vorticity | Minkowski | Static | NNNRS | NNRS | Radiative |
| --- | --- | --- | --- | --- | --- |
| \(H_{1}\) | 0 | 0 | 0 | 0 | \(H_{1}^{(n)}\neq 0\), \(n\geq 1\) |
| \(H_{2}\) | 0 | 0 | 0 | \(H_{2}^{(3)}=-\frac{1}{4}(M_{\theta\theta}-M_{\theta}\cot\theta)\) | \(H_{2}^{(n)}\neq 0\), \(n\geq 1\) |
| \(\Omega\) | 0 | 0 | 0 | \(\Omega^{(2)}=M_{\theta}\) | \(\Omega^{(n)}\neq 0\), \(n\geq 1\) |

Table 2: The magnetic parts of the Weyl tensor and the vorticity for different spacetimes of the Bondi metric
| Complexity hierarchy | Minkowski | Static | NNNRS | NNRS | Radiative |
| --- | --- | --- | --- | --- | --- |
| \(\mathcal{E}_{1}\) | 0 | 0 | 0 | 0 | \(\mathcal{E}_{1}^{(n)}\neq 0\), \(n\geq 1\) |
| \(\mathcal{E}_{2}\) | 0 | \(\mathcal{E}_{2}^{(3)}=3M\) | \(\mathcal{E}_{2}^{(3)}=3M\) | \(\mathcal{E}_{2}^{(3)}=3M+\frac{M_{\theta\theta}}{4}-\frac{M_{\theta}\cot\theta}{4}\) | \(\mathcal{E}_{2}^{(n)}\neq 0\), \(n\geq 1\) |
| \(\mathcal{E}_{3}\) | 0 | 0 | 0 | \(\mathcal{E}_{3}^{(3)}=\frac{1}{4}(M_{\theta\theta}-M_{\theta}\cot\theta)\) | \(\mathcal{E}_{3}^{(n)}\neq 0\), \(n\geq 1\) |

Table 1: Complexity factors for different spacetimes of the Bondi metric
Next, we discussed the complexity of the pattern of evolution. Two possibilities appear as the most obvious candidates: the homologous condition and the quasi-homologous condition. The latter, being less restrictive than the former, allows one to consider a larger number of models.
All this having been said, we are well aware that the above-mentioned candidate for measuring the complexity of a self-gravitating system is by no means unique and that many different alternatives may be proposed. Along the same line of argument, the simplest patterns of evolution assumed so far are the homologous and the quasi-homologous regimes. But it is not clear whether or not other patterns of evolution could also fit the role of the simplest pattern of evolution.
Additionally, we believe that alternative definitions of complexity for vacuum space-times are worth considering. Finally, new exact solutions to the field equations, in the context of the Einstein theory or any alternative one, would serve as a test-bed for the definition of complexity.
These remarks suggest a list of questions and open issues which we believe deserve further consideration.
* Are there alternative definitions of complexity, different from the one proposed in [55]?
* Are there other ways to extend the definition of complexity for vacuum space-time?
* Besides the homologous and the quasi-homologous regime, could we define another pattern of evolution that could qualify as the simplest one?
* Do physically meaningful dissipative models satisfying (192) exist?
* If the answer to the above question is positive then, is there a unique solution or are there a large number of them?
* What is the physical meaning of such solution(s)?
* Is it physically reasonable to neglect transient effects when considering the simplest dissipative system, and assume that the relaxation time vanishes?
* To summarize the four points above: is there a specific dissipative regime that could be considered as the simplest one?
* Can we relate the complexity factor(s) in the nonspherically symmetric case to the active gravitational mass, as in the spherically symmetric case?
* Can we single out a specific family of exact axially symmetric static solutions satisfying the vanishing complexity factor(s) condition?
* Can any of the above solutions be matched smoothly to any vacuum Weyl solution?
* The definition of complexity proposed in [55] is not directly related to entropy or disequilibrium, although it is possible that such a link might exist after all. If so, how could such relationship be brought out?
* What is the simplest mode of evolution of axially-symmetric fluids? How is such a mode related to the emission (or not) of gravitational radiation?
* Could it be possible to provide a definition of the arrow of time in terms of the complexity factor?
* How is the complexity factor related to physical relevant properties of the source, in terms of stability or maximal degree of compactness?
* How does the complexity factor evolve? Do physically meaningful systems prefer vanishing complexity factors?
* As we have seen the FRW model satisfies the vanishing complexity factor and evolves homologously. Should any other, physically sound, cosmological model have a vanishing complexity factor? Should it evolve in the homologous or quasi-homologous regime?
* The complexity factor for a charged fluid is known, but what is the complexity factor for a different type of field (e.g., a scalar field)?
* How should we define the complexity factor in the context of other alternative theories of gravity that have not been considered so far?
* How can we find new solutions satisfying the vanishing complexity factor? Could we use the general methods described in [70; 71; 72; 118; 119] to obtain such solutions?
* What relevant physical features are shared by solutions satisfying the vanishing complexity factor condition?
* Is there a link between the concept of complexity and some kind of symmetry (e.g. motions, conformal motions, affine collineations, curvature collineations, matter collineations, etc.)?
* We have extended the concept of complexity, adopted for fluids, to the vacuum case for the Bondi metric, the three complexity factors corresponding to the three scalars defining the electric part of the Weyl tensor. Could it be possible to further refine the scheme proposed here so as to discriminate between different radiative systems according to their complexity? Or, in other words, among radiative systems is there a simplest one?
* Also in the vacuum case, could it be possible to discriminate between different static spacetimes of the Weyl family?
###### Acknowledgements.
This work was partially supported by Grant PID2021-122938NB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by ERDF A way of making Europe. I also wish to thank Universitat de les Illes Balears for financial support and hospitality.
## VI The Einstein and the conservation equations for the axially symmetric static case
For the line element (43) and the energy momentum (47), the Einstein equations read:
\[8\pi\mu=-\frac{1}{B^{2}}\left\{\frac{B^{\prime\prime}}{B}+\frac{D^{\prime \prime}}{D}+\frac{1}{r}\left(\frac{B^{\prime}}{B}+\frac{D^{\prime}}{D}\right)- \left(\frac{B^{\prime}}{B}\right)^{2}+\frac{1}{r^{2}}\left[\frac{B_{\theta \theta}}{B}+\frac{D_{\theta\theta}}{D}-\left(\frac{B_{\theta}}{B}\right)^{2} \right]\right\}, \tag{227}\]
\[8\pi P_{xx}=\frac{1}{B^{2}}\left[\frac{A^{\prime}B^{\prime}}{AB}+\frac{A^{ \prime}D^{\prime}}{AD}+\frac{B^{\prime}D^{\prime}}{BD}+\frac{1}{r}\left( \frac{A^{\prime}}{A}+\frac{D^{\prime}}{D}\right)+\frac{1}{r^{2}}\left(\frac{ A_{\theta\theta}}{A}+\frac{D_{\theta\theta}}{D}-\frac{A_{\theta}B_{\theta}}{AB}+ \frac{A_{\theta}D_{\theta}}{AD}-\frac{B_{\theta}D_{\theta}}{BD}\right)\right], \tag{228}\]
\[8\pi P_{yy}=\frac{1}{B^{2}}\left[\frac{A^{\prime\prime}}{A}+\frac{D^{\prime\prime}}{D}-\frac{A^{\prime}B^{\prime}}{AB}+\frac{A^{\prime}D^{\prime}}{AD}-\frac{B^{\prime}D^{\prime}}{BD}+\frac{1}{r^{2}}\left(\frac{A_{\theta}B_{\theta}}{AB}+\frac{A_{\theta}D_{\theta}}{AD}+\frac{B_{\theta}D_{\theta}}{BD}\right)\right], \tag{229}\]
\[8\pi P_{zz}=\frac{1}{B^{2}}\left\{\frac{A^{\prime\prime}}{A}+\frac{B^{\prime \prime}}{B}-\left(\frac{B^{\prime}}{B}\right)^{2}+\frac{1}{r}\left(\frac{A^{ \prime}}{A}+\frac{B^{\prime}}{B}\right)+\frac{1}{r^{2}}\left[\frac{A_{\theta \theta}}{A}+\frac{B_{\theta\theta}}{B}-\left(\frac{B_{\theta}}{B}\right)^{2} \right]\right\}, \tag{230}\]
\[8\pi P_{xy}=\frac{1}{B^{2}}\left\{\frac{1}{r}\left[-\frac{A^{\prime}_{\theta} }{A}-\frac{D^{\prime}_{\theta}}{D}+\frac{B_{\theta}}{B}\left(\frac{A^{\prime} }{A}+\frac{D^{\prime}}{D}\right)+\frac{B^{\prime}}{B}\frac{A_{\theta}}{A}+ \frac{B^{\prime}}{B}\frac{D_{\theta}}{D}\right]+\frac{1}{r^{2}}\left(\frac{A_ {\theta}}{A}+\frac{D_{\theta}}{D}\right)\right\}. \tag{231}\]
The nonvanishing components of the conservation equations \(T^{\alpha\beta}_{;\beta}=0\) yield: the trivial equation
\[\dot{\mu}=0, \tag{232}\]
where the overdot denotes derivative with respect to \(t\), and the two hydrostatic equilibrium equations
\[\left[P+\frac{1}{3}(2\Pi_{2}-\Pi_{3})\right]^{\prime}+\frac{A^{\prime}}{A}\left[\mu+P+\frac{1}{3}(2\Pi_{2}-\Pi_{3})\right]+\frac{B^{\prime}}{B}(\Pi_{2}-\Pi_{3})+\frac{D^{\prime}}{D}\Pi_{2}\] \[+\frac{1}{r}\left[\left(\frac{A_{\theta}}{A}+2\frac{B_{\theta}}{B}+\frac{D_{\theta}}{D}\right)\Pi_{KL}+\Pi_{KL\theta}+\Pi_{2}-\Pi_{3}\right]=0, \tag{233}\]
\[\left[P+\frac{1}{3}(2\Pi_{3}-\Pi_{2})\right]_{\theta}+\frac{A_{ \theta}}{A}\left[\mu+P+\frac{1}{3}(2\Pi_{3}-\Pi_{2})\right]+\frac{B_{\theta}}{B} (\Pi_{3}-\Pi_{2})\] \[+\frac{D_{\theta}}{D}\Pi_{3}+r\left[\left(\frac{A^{\prime}}{A}+2 \frac{B^{\prime}}{B}+\frac{D^{\prime}}{D}\right)\Pi_{KL}+\Pi^{\prime}_{KL} \right]+2\Pi_{KL}=0. \tag{234}\]
## VII Expression for the components of the electric Weyl tensor
There are four nonvanishing components, as calculated from (60); however, they are not independent since they satisfy the relationship:
\[E_{11}+\frac{1}{r^{2}}E_{22}+\frac{B^{2}}{D^{2}}E_{33}=0, \tag{235}\]
implying that the Weyl tensor may be expressed through three independent scalar functions \({\cal E}_{1},{\cal E}_{2},{\cal E}_{3}\).
These four components are
\[E_{11} = \frac{1}{6}\left[\frac{2A^{\prime\prime}}{A}-\frac{B^{\prime \prime}}{B}-\frac{D^{\prime\prime}}{D}-\frac{3A^{\prime}B^{\prime}}{AB}-\frac {A^{\prime}D^{\prime}}{AD}+\left(\frac{B^{\prime}}{B}\right)^{2}+\frac{3B^{ \prime}D^{\prime}}{BD}+\frac{1}{r}\left(2\frac{D^{\prime}}{D}-\frac{B^{ \prime}}{B}-\frac{A^{\prime}}{A}\right)\right] \tag{236}\] \[+ \frac{1}{6r^{2}}\left[-\frac{A_{\theta\theta}}{A}-\frac{B_{ \theta\theta}}{B}+\frac{2D_{\theta\theta}}{D}+\frac{3A_{\theta}B_{\theta}}{AB} -\frac{A_{\theta}D_{\theta}}{AD}+\left(\frac{B_{\theta}}{B}\right)^{2}-\frac{3 B_{\theta}D_{\theta}}{BD}\right],\]
\[E_{22} = -\frac{r^{2}}{6}\left[\frac{A^{\prime\prime}}{A}+\frac{B^{\prime \prime}}{B}-\frac{2D^{\prime\prime}}{D}-\frac{3A^{\prime}B^{\prime}}{AB}+\frac {A^{\prime}D^{\prime}}{AD}-\left(\frac{B^{\prime}}{B}\right)^{2}+\frac{3B^{ \prime}D^{\prime}}{BD}+\frac{1}{r}\left(\frac{D^{\prime}}{D}+\frac{B^{\prime}} {B}-\frac{2A^{\prime}}{A}\right)\right] \tag{237}\] \[- \frac{1}{6}\left[-\frac{2A_{\theta\theta}}{A}+\frac{B_{\theta \theta}}{B}+\frac{D_{\theta\theta}}{D}+\frac{3A_{\theta}B_{\theta}}{AB}+\frac {A_{\theta}D_{\theta}}{AD}-\left(\frac{B_{\theta}}{B}\right)^{2}-\frac{3B_{ \theta}D_{\theta}}{BD}\right],\]
\[E_{33} = -\frac{D^{2}}{6B^{2}}\left[\frac{A^{\prime\prime}}{A}-\frac{2B^{ \prime\prime}}{B}+\frac{D^{\prime\prime}}{D}-\frac{2A^{\prime}D^{\prime}}{AD}+ 2\left(\frac{B^{\prime}}{B}\right)^{2}+\frac{1}{r}\left(\frac{D^{\prime}}{D}- \frac{2B^{\prime}}{B}+\frac{A^{\prime}}{A}\right)\right] \tag{238}\] \[- \frac{D^{2}}{6B^{2}r^{2}}\left[\frac{A_{\theta\theta}}{A}-\frac{2 B_{\theta\theta}}{B}+\frac{D_{\theta\theta}}{D}-\frac{2A_{\theta}D_{\theta}}{ AD}+2\left(\frac{B_{\theta}}{B}\right)^{2}\right],\]
\[E_{12}=\frac{1}{2}\left[\frac{A^{\prime}_{\theta}}{A}-\frac{D^{\prime}_{\theta} }{D}+\frac{B_{\theta}}{B}\frac{D^{\prime}}{D}-\frac{A^{\prime}B_{\theta}}{AB} -\frac{B^{\prime}A_{\theta}}{AB}+\frac{D_{\theta}}{D}\frac{B^{\prime}}{B}- \frac{1}{r}\left(\frac{A_{\theta}}{A}-\frac{D_{\theta}}{D}\right)\right]. \tag{239}\]
For the three scalars \({\cal E}_{1}\), \({\cal E}_{2}\), \({\cal E}_{3}\) we obtain
\[{\cal E}_{1}=\frac{1}{2B^{2}}\left[\frac{1}{r}\left(\frac{A^{ \prime}_{\theta}}{A}-\frac{D^{\prime}_{\theta}}{D}-\frac{B_{\theta}}{B}\frac {A^{\prime}}{A}+\frac{D^{\prime}}{D}\frac{B_{\theta}}{B}-\frac{B^{\prime}}{B} \frac{A_{\theta}}{A}+\frac{D_{\theta}}{D}\frac{B^{\prime}}{B}\right)+\frac{1} {r^{2}}\left(\frac{D_{\theta}}{D}-\frac{A_{\theta}}{A}\right)\right], \tag{240}\] \[{\cal E}_{2} = -\frac{1}{2B^{2}}\left[-\frac{A^{\prime\prime}}{A}+\frac{B^{ \prime\prime}}{B}+\frac{A^{\prime}B^{\prime}}{AB}+\frac{A^{\prime}D^{\prime}}{ AD}-\left(\frac{B^{\prime}}{B}\right)^{2}-\frac{B^{\prime}D^{\prime}}{BD}+\frac{1}{r} \left(\frac{B^{\prime}}{B}-\frac{D^{\prime}}{D}\right)\right]\] (241) \[- \frac{1}{2B^{2}r^{2}}\left[\frac{B_{\theta\theta}}{B}-\frac{D_{ \theta\theta}}{D}-\frac{A_{\theta}B_{\theta}}{AB}+\frac{A_{\theta}D_{\theta}}{ AD}-\left(\frac{B_{\theta}}{B}\right)^{2}+\frac{B_{\theta}D_{\theta}}{BD}\right],\]
\[{\cal E}_{3} = -\frac{1}{2B^{2}}\left[\frac{B^{\prime\prime}}{B}-\frac{D^{\prime \prime}}{D}-\frac{A^{\prime}B^{\prime}}{AB}+\frac{A^{\prime}D^{\prime}}{AD}- \left(\frac{B^{\prime}}{B}\right)^{2}+\frac{B^{\prime}D^{\prime}}{BD}+\frac{1}{ r}\left(\frac{B^{\prime}}{B}-\frac{A^{\prime}}{A}\right)\right] \tag{242}\] \[- \frac{1}{2B^{2}r^{2}}\left[\frac{B_{\theta\theta}}{B}-\frac{A_{ \theta\theta}}{A}+\frac{A_{\theta}B_{\theta}}{AB}+\frac{A_{\theta}D_{\theta}}{ AD}-\left(\frac{B_{\theta}}{B}\right)^{2}-\frac{B_{\theta}D_{\theta}}{BD}\right].\]
or, using the Einstein equations,
\[{\cal E}_{1}=\frac{E_{12}}{B^{2}r}=4\pi\Pi_{KL}+\frac{1}{B^{2}r}\left[\frac{A _{\theta}^{\prime}}{A}-\frac{A^{\prime}B_{\theta}}{AB}-\frac{A_{\theta}}{A} \left(\frac{B^{\prime}}{B}+\frac{1}{r}\right)\right], \tag{243}\]
\[{\cal E}_{2} = -\frac{2E_{33}}{D^{2}}-\frac{E_{22}}{B^{2}r^{2}}=4\pi(\mu+3P+\Pi _{2})-\frac{A^{\prime}}{B^{2}A}\left(\frac{2D^{\prime}}{D}+\frac{B^{\prime}}{ B}+\frac{1}{r}\right) \tag{244}\] \[+ \frac{A_{\theta}}{AB^{2}r^{2}}\left(\frac{B_{\theta}}{B}-\frac{2D _{\theta}}{D}\right)-\frac{1}{B^{2}r^{2}}\frac{A_{\theta\theta}}{A},\]
\[{\cal E}_{3}=-\frac{E_{33}}{D^{2}}+\frac{E_{22}}{B^{2}r^{2}}=4\pi\Pi_{3}-\frac {A^{\prime}}{B^{2}A}\left(\frac{D^{\prime}}{D}-\frac{B^{\prime}}{B}-\frac{1}{ r}\right)-\frac{A_{\theta}}{AB^{2}r^{2}}\left(\frac{D_{\theta}}{D}+\frac{B_{ \theta}}{B}\right)+\frac{1}{B^{2}r^{2}}\frac{A_{\theta\theta}}{A}. \tag{245}\]
## VIII Vanishing complexity factor conditions
\[Y_{TF_{1}} = \frac{1}{B^{2}r}\left[\frac{A_{\theta}^{\prime}}{A}-\frac{A^{ \prime}B_{\theta}}{AB}-\frac{A_{\theta}}{A}\left(\frac{B^{\prime}}{B}+\frac{1 }{r}\right)\right]=0, \tag{246}\]
\[Y_{TF_{2}} = \frac{A^{\prime\prime}}{B^{2}A}-\frac{A^{\prime}}{B^{2}A}\left( \frac{D^{\prime}}{D}+\frac{B^{\prime}}{B}\right)+\frac{A_{\theta}}{AB^{2}r^{2} }\left(\frac{B_{\theta}}{B}-\frac{D_{\theta}}{D}\right)=0, \tag{247}\]
\[Y_{TF_{3}} = -\frac{A^{\prime}}{B^{2}A}\left(\frac{D^{\prime}}{D}-\frac{B^{ \prime}}{B}-\frac{1}{r}\right)-\frac{A_{\theta}}{AB^{2}r^{2}}\left(\frac{D_{ \theta}}{D}+\frac{B_{\theta}}{B}\right)+\frac{1}{B^{2}r^{2}}\frac{A_{\theta \theta}}{A} \tag{248}\] \[= 0.\]
## IX Einstein equations for the dynamical spherically symmetric case
Einstein's field equations
\[G_{\alpha\beta}=8\pi T_{\alpha\beta}, \tag{249}\]
for the interior spacetime (139) read
\[8\pi T_{00}=8\pi\mu A^{2}=\left(2\frac{\dot{B}}{B}+\frac{\dot{R}}{R}\right)\frac{ \dot{R}}{R}-\left(\frac{A}{B}\right)^{2}\left[2\frac{R^{\prime\prime}}{R}+ \left(\frac{R^{\prime}}{R}\right)^{2}-2\frac{B^{\prime}}{B}\frac{R^{\prime}}{R} -\left(\frac{B}{R}\right)^{2}\right], \tag{250}\]
\[8\pi T_{01}=-8\pi qAB=-2\left(\frac{\dot{R}^{\prime}}{R}-\frac{\dot{B}}{B} \frac{R^{\prime}}{R}-\frac{\dot{R}}{R}\frac{A^{\prime}}{A}\right), \tag{251}\]
\[8\pi T_{11}=8\pi P_{r}B^{2}=-\left(\frac{B}{A}\right)^{2}\left[2\frac{\ddot{R }}{R}-\left(2\frac{\dot{A}}{A}-\frac{\dot{R}}{R}\right)\frac{\dot{R}}{R} \right]+\left(2\frac{A^{\prime}}{A}+\frac{R^{\prime}}{R}\right)\frac{R^{ \prime}}{R}-\left(\frac{B}{R}\right)^{2}, \tag{252}\]
\[8\pi T_{22} = \frac{8\pi}{\sin^{2}\theta}T_{33}=8\pi P_{\perp}R^{2}=-\left( \frac{R}{A}\right)^{2}\left[\frac{\ddot{B}}{B}+\frac{\ddot{R}}{R}-\frac{\dot{ A}}{A}\left(\frac{\dot{B}}{B}+\frac{\dot{R}}{R}\right)+\frac{\dot{B}}{B}\frac{ \dot{R}}{R}\right] \tag{253}\] \[+ \left(\frac{R}{B}\right)^{2}\left[\frac{A^{\prime\prime}}{A}+ \frac{R^{\prime\prime}}{R}-\frac{A^{\prime}}{A}\frac{B^{\prime}}{B}+\left( \frac{A^{\prime}}{A}-\frac{B^{\prime}}{B}\right)\frac{R^{\prime}}{R}\right].\]
The component (251) can be rewritten with (147) and (150) as
\[4\pi qB=\frac{1}{3}(\Theta-\sigma)^{\prime}-\sigma\frac{R^{\prime}}{R}. \tag{254}\]
## X Dynamical equations
The non trivial components of the Bianchi identities, \(T_{;\beta}^{\alpha\beta}=0\), from (249) yield
\[T_{;\beta}^{\alpha\beta}V_{\alpha} = -\frac{1}{A}\left[\dot{\mu}+\left(\mu+P_{r}\right)\frac{\dot{B}} {B}+2\left(\mu+P_{\perp}\right)\frac{\dot{R}}{R}\right] \tag{255}\] \[- \frac{1}{B}\left[q^{\prime}+2q\frac{(AR)^{\prime}}{AR}\right]=0,\]
\[T_{;\beta}^{\alpha\beta}\chi_{\alpha}=\frac{1}{A}\left[\dot{q}+2q \left(\frac{\dot{B}}{B}+\frac{\dot{R}}{R}\right)\right] \tag{256}\] \[+\frac{1}{B}\left[P_{r}^{\prime}+\left(\mu+P_{r}\right)\frac{A^ {\prime}}{A}+2(P_{r}-P_{\perp})\frac{R^{\prime}}{R}\right]=0,\]
or, by using (146), (147), (156) and (154), they become, respectively,
\[D_{T}\mu+\frac{1}{3}\left(3\mu+P_{r}+2P_{\perp}\right)\Theta+ \frac{2}{3}(P_{r}-P_{\perp})\sigma+ED_{R}q+2q\left(a+\frac{E}{R}\right)=0, \tag{257}\] \[D_{T}q+\frac{2}{3}q(2\Theta+\sigma)+ED_{R}P_{r}+\left(\mu+P_{r} \right)a+2(P_{r}-P_{\perp})\frac{E}{R}=0. \tag{258}\]
This last equation may be further transformed as follows: the acceleration \(D_{T}U\) of an infalling particle can be obtained by using (146), (252), (151) and (154), producing
\[D_{T}U=-\frac{m}{R^{2}}-4\pi P_{r}R+Ea, \tag{259}\]
and then, substituting \(a\) from (259) into (258), we obtain
\[\left(\mu+P_{r}\right)D_{T}U = -\left(\mu+P_{r}\right)\left[\frac{m}{R^{2}}+4\pi P_{r}R\right]-E^{ 2}\left[D_{R}P_{r}+2(P_{r}-P_{\perp})\frac{1}{R}\right] \tag{260}\] \[- E\left[D_{T}q+2q\left(2\frac{U}{R}+\sigma\right)\right].\]
## XI The complexity factors for the Bondi metric
\[\mathcal{E}_{1}=\frac{1}{r^{2}}\left(2c_{u}\cot\theta+c_{\theta u}\right)+ \mathcal{O}(r^{-n}),\quad n\geq 4, \tag{261}\]
\[\mathcal{E}_{2}=\frac{1}{r}c_{uu}-\frac{1}{2r^{2}}\left(c_{\theta \theta u}-4Mc_{uu}+2c_{u}+c_{\theta u}\cot\theta-\frac{4c_{u}}{\sin^{2}\theta}\right)\] \[+\frac{1}{r^{3}}\left[cc_{u}+2c_{\theta}c_{\theta u}+3M+\frac{ \cot\theta}{2}\left(3c_{u}c_{\theta}+5cc_{\theta u}\right)-M_{u}c+\frac{1}{2} M_{\theta\theta}+N_{\theta u}+P_{uu}\right.\] \[-\cot\theta\left(Mc_{\theta u}+\frac{1}{2}M_{\theta}+N_{u}-Nc_{ uu}\right)-Mc_{u}\left(1-\frac{4}{\sin^{2}\theta}\right)+c_{u}\left(cc_{u}+ \frac{1}{2}c_{\theta\theta}\right)\] \[\left.+c_{uu}\left(4M^{2}+N_{\theta}\right)-c_{\theta\theta u} \left(M-\frac{3}{2}c\right)\right]\] \[+\mathcal{O}(r^{-n}),\]
\[\mathcal{E}_{3}=\frac{2}{r}c_{uu}-\frac{1}{r^{2}}\left(c_{\theta \theta u}-4Mc_{uu}+2c_{u}+c_{\theta u}\cot\theta-\frac{4c_{u}}{\sin^{2}\theta}\right)\] \[+\frac{1}{r^{3}}\left[-4cc_{u}+4c_{\theta}c_{\theta u}+\cot\theta \left(3c_{u}c_{\theta}+5cc_{\theta u}\right)-2M_{u}c+M_{\theta\theta}+2N_{ \theta u}+2P_{uu}\right.\] \[-\cot\theta\left(2Mc_{\theta u}+M_{\theta}+2N_{u}-2Nc_{uu}\right) -2Mc_{u}\left(1-\frac{4}{\sin^{2}\theta}\right)+c_{u}\left(2cc_{u}+c_{\theta \theta}\right)\] \[\left.+2c_{uu}\left(4M^{2}+N_{\theta}\right)-c_{\theta\theta u} \left(2M-3c\right)\right]\] \[+\mathcal{O}(r^{-n}).\]
\[H_{2}=-\frac{1}{r}c_{uu}-\frac{1}{r^{2}}\left[-c_{u}\left(1-\frac{2}{ \sin^{2}\theta}\right)-\frac{\cot\theta}{2}c_{\theta u}+2c_{uu}(M-c)-\frac{1}{2} c_{\theta\theta u}\right]\] \[-\frac{1}{r^{3}}\left\{-Mc_{u}\left(1-\frac{4}{\sin^{2}\theta} \right)-\frac{4cc_{u}}{\sin^{2}\theta}+\cot\theta\left[\frac{3}{2}c_{u}c_{ \theta}-N_{u}-\frac{1}{2}M_{\theta}+Nc_{uu}+\left(\frac{7}{2}c-M\right)c_{ \theta u}\right]+\left(\frac{5}{2}c-M\right)c_{\theta\theta u}\right.\] \[\left.+\frac{1}{2}c_{\theta\theta}c_{u}+2c_{\theta\theta}c_{ \theta u}+cc_{u}^{2}+\frac{1}{2}M_{\theta\theta}-cM_{u}+N_{\theta u}+P_{uu}+ c_{uu}\left(4c^{2}+4M^{2}-4Mc+N_{\theta}\right)\right\}+\mathcal{O}(r^{-n}), \quad n\geq 4 \tag{265}\]
where \(P=C-\frac{c^{3}}{6}\).
## XIII The vorticity
\[\omega^{\phi}=-\frac{e^{-2\beta}}{2r^{2}\sin\theta}\left[2\beta_{ \theta}e^{2\beta}-\frac{2e^{2\beta}A_{\theta}}{A}-\left(Ur^{2}e^{2\gamma} \right)_{r}+\frac{2Ur^{2}e^{2\gamma}}{A}A_{r}+\frac{e^{2\beta}\left(Ur^{2}e^{ 2\gamma}\right)_{u}}{A^{2}}-\frac{Ur^{2}e^{2\gamma}}{A^{2}}2\beta_{u}e^{2\beta }\right], \tag{266}\]
and for the absolute value of \(\omega^{\alpha}\) we get
\[\Omega\equiv\left(-\omega_{\alpha}\omega^{\alpha}\right)^{1/2}=\frac{e^{-2 \beta-\gamma}}{2r}\left[2\beta_{\theta}e^{2\beta}-2e^{2\beta}\frac{A_{\theta}} {A}-\left(Ur^{2}e^{2\gamma}\right)_{r}+2Ur^{2}e^{2\gamma}\frac{A_{r}}{A}+\frac {e^{2\beta}}{A^{2}}\left(Ur^{2}e^{2\gamma}\right)_{u}-2\beta_{u}\frac{e^{2\beta }}{A^{2}}Ur^{2}e^{2\gamma}\right]. \tag{267}\]
Feeding back (201-205) into (267) and keeping only the two leading terms, we obtain
\[\Omega=-\frac{1}{2r}(c_{u\theta}+2c_{u}\cot\theta)+\frac{1}{r^{2}}\left[M_{ \theta}-M(c_{u\theta}+2c_{u}\cot\theta)-cc_{u\theta}+6cc_{u}\cot\theta+2c_{u} c_{\theta}\right]. \tag{268}\]
|
2302.08002 | Deep Learning Enhanced Realized GARCH | We propose a new approach to volatility modeling by combining deep learning
(LSTM) and realized volatility measures. This LSTM-enhanced realized GARCH
framework incorporates and distills modeling advances from financial
econometrics, high frequency trading data and deep learning. Bayesian inference
via the Sequential Monte Carlo method is employed for statistical inference and
forecasting. The new framework can jointly model the returns and realized
volatility measures, has an excellent in-sample fit and superior predictive
performance compared to several benchmark models, while being able to adapt
well to the stylized facts in volatility. The performance of the new framework
is tested using a wide range of metrics, from marginal likelihood, volatility
forecasting, to tail risk forecasting and option pricing. We report on a
comprehensive empirical study using 31 widely traded stock indices over a time
period that includes COVID-19 pandemic. | Chen Liu, Chao Wang, Minh-Ngoc Tran, Robert Kohn | 2023-02-16T00:20:43Z | http://arxiv.org/abs/2302.08002v2 | # Realized recurrent conditional heteroskedasticity model for volatility modelling
###### Abstract
We propose a new approach to volatility modelling by combining deep learning (LSTM) and realized volatility measures. This LSTM-enhanced realized GARCH framework incorporates and distills modeling advances from financial econometrics, high frequency trading data and deep learning. Bayesian inference via the Sequential Monte Carlo method is employed for statistical inference and forecasting. The new framework can jointly model the returns and realized volatility measures, has an excellent in-sample fit and superior predictive performance compared to several benchmark models, while being able to adapt well to the stylized facts in volatility. The performance of the new framework is tested using a wide range of metrics, from marginal likelihood, volatility forecasting, to tail risk forecasting and option pricing. We report on a comprehensive empirical study using 31 widely traded stock indices over a time period that includes COVID-19 pandemic.
_Keywords--_ conditional heteroskedasticity, deep learning, volatility modelling, realized volatility measure.
## 1 Introduction
Volatility modeling is an active area of research in financial econometrics with implications for risk management, portfolio allocation, and option pricing. The GARCH models of Engle, 1982 and Bollerslev, 1986, and their variants, such as the exponential GARCH of Nelson, 1991 and the GJR model of Glosten et al., 1993, are widely used in traditional financial econometric literature. The GARCH framework models the current conditional variance of the return as a linear function of past conditional variances and squared returns. However, traditional GARCH-type models based on daily returns respond slowly to rapid changes in volatility, which can take several periods to reach a new level. This problem can be mitigated by incorporating more effective volatility proxies based on high-frequency return data.
In the past two decades, many ex-post estimators of asset return volatility using high-frequency data have been introduced to the literature. Examples include the realized variance of Andersen et al., 1998, realized kernel variance of Barndorff-Nielsen et al., 2008 and bipower variation of Barndorff-Nielsen et al., 2004. These estimators, collectively referred to as realized volatility measures, are more informative than daily squared returns about representing the underlying volatility level, thus providing a better tool for modeling and forecasting volatility. Engle, 2002 explored the idea of including realized volatility measures in the GARCH model and found that it significantly improves its fit to return data. Engle's model only uses a realized volatility measure as a deterministic input to the GARCH equation and pays no attention to explaining the variation in realized volatility measures which should be viewed as noisy proxies of the underlying volatility. Later, Engle et al., 2006 introduced the multiplicative error model (MEM) for jointly modeling both the return and realized volatility measure dynamics, in which each dynamic is modeled by a separate latent process. Hansen et al., 2012 proposed the realized GARCH (RealGARCH) model, which adequately models and links the volatility dynamics and realized volatility measure dynamics, and it outperforms other models empirically (Jiang et al., 2018; Li et al., 2021; Xie et al., 2020). Hansen et al., 2016 further extended the RealGARCH model to realized EGARCH which incorporates multiple realized volatility measures of volatility.
With the advent of deep learning models and advancements in computational power, neural
networks (NN) have recently been introduced in volatility modeling in the mainstream econometric literature. NNs are capable of learning complex non-linear functions and capturing long-range dependence in time series data. Liu, 2019 and Bucci, 2020 compared the predictive performance of feed-forward neural networks (FNN) and recurrent neural networks (RNN) with traditional econometric approaches, and found that deep learning models generally outperform econometric models on several stock markets. Some hybrid models, proposed by Hyup Roh, 2007 and Kim et al., 2018, add a neural network as another layer on top of an econometric model, using the volatility estimates produced by an econometric model as the input to a neural network, which then outputs the final estimate of the volatility. Although these models perform well on specific stock datasets, they are often engineering-oriented and lack interpretability in a financially or economically meaningful way, ignoring important stylized facts such as the leverage and clustering effects commonly observed in financial time series data. An exception is the FNN-GJR hybrid model of Donaldson et al., 1997, which adds a one-layer FNN component into the GJR equation in a way that retains much of the interpretable characteristics from the GJR model, while enjoying the modelling flexibility from the FNN. However, as FNNs are typically designed for cross-sectional data analysis, the FNN component in the FNN-GJR model might be inefficient at capturing serial dependence in financial time series.
The recurrent conditional heteroscedastic (RECH) model of Nguyen et al., 2022, which can be viewed as a significant extension of the FNN-GJR hybrid model, provides a flexible framework for combining deep learning with GARCH-type models. The RECH model represents the volatility as a sum of two components. The first component is governed by a GARCH-type model that retains the characteristics and interpretability of econometric models. The second component is governed by a RNN that can capture non-linear and long-term serial dependence structure in financial time series. An attractive feature of the RECH framework is that it is easy to add advances from both the deep learning and econometric volatility modelling literatures as we show in this article.
This paper combines and extends the RealGARCH and RECH models in several important ways, and introduces a new framework for volatility modelling and forecasting, called the Realized Recurrent Conditional Heteroscedasticity (RealRECH). First, we incorporate realized volatility measures into RECH. As intraday realized volatility measures are more accurate volatility proxies
than the squared daily returns (Andersen et al., 1998; Barndorff-Nielsen et al., 2008; Barndorff-Nielsen et al., 2004), incorporating realized volatility measures can improve the predictive power of RECH. To account for the variation in realized volatility measures, we use a measurement equation as in the RealGARCH; in this sense, the RealRECH model can be viewed as an extension of RealGARCH using deep learning. Second, to improve the modeling flexibility of RECH, this paper uses a more powerful RNN architecture - the long short-term memory model (LSTM) of Hochreiter et al., 1997, rather than the basic RNN used in RECH. The LSTM architecture is one of the most advanced and sophisticated RNN techniques and has proven highly efficient for time series modeling. By incorporating LSTM into RECH, we unlock its modeling power and allow it to be able to capture complex underlying dynamics in financial volatility.
We compare the performance of the new model with several existing benchmark models on 31 stock market indices. We show that the new model substantially improves on previous approaches in both in-sample fit and out-of-sample forecasting. The code and examples reported in the paper can be found at [https://github.com/VBayesLab/RealRECH](https://github.com/VBayesLab/RealRECH).
The rest of the article is organized as follows. Section 2 briefly reviews the model components (the GARCH-type models, realized volatility measures and recurrent neural networks) before describing the proposed RealRECH model. Section 3 presents the Bayesian inference method for RealRECH and Section 4 presents the empirical analysis. Section 5 concludes. The Appendix provides further implementation and empirical details.
## 2 Model Formulation
### Conditional heteroscedastic models and realized volatility measures
Let \(\mathbf{y}=\left\{y_{t},t=1,\ldots,T\right\}\) be a time series of daily returns. The key quantity of interest in volatility modeling is the conditional variance, \(\sigma_{t}^{2}=\operatorname{var}\left(y_{t}\mid\mathcal{F}_{t-1}\right),\) where \(\mathcal{F}_{t-1}\) denotes the \(\sigma\)-field of information up to and including time \(t-1\). We assume here that \(\mathbb{E}(y_{t}|\mathcal{F}_{t-1})=0\), but the present method is easily extended to model the conditional mean \(\mathbb{E}(y_{t}|\mathcal{F}_{t-1})\). The GARCH model expresses
the conditional variance \(\sigma_{t}^{2}\) as a linear combination of the previous squared returns and conditional variances as an ARMA(\(p\), \(q\)) model:
\[y_{t} = \sigma_{t}\epsilon_{t},\quad\epsilon_{t}\stackrel{{\text{ i.i.d}}}{{\sim}}\mathcal{N}(0,1),\quad t=1,2,\ldots,T \tag{1}\] \[\sigma_{t}^{2} = \omega+\sum_{i=1}^{p}\alpha_{i}y_{t-i}^{2}+\sum_{j=1}^{q}\beta_{j }\sigma_{t-j}^{2},\quad t=p+1,\ldots,T. \tag{2}\]
The restriction \(\omega>0,\alpha_{i},\beta_{j}\geq 0,i=1,\ldots,p,j=1,\ldots,q\) is used to ensure positivity of \(\sigma_{t}^{2}\), and \(\sum_{i=1}^{p}\alpha_{i}+\sum_{j=1}^{q}\beta_{j}<1\) is needed to ensure the stationarity of the time series \(y_{t}\). The errors \(\epsilon_{t}\) are independently and identically distributed as normal distributions with zero mean and unit variance; other distributions for \(\epsilon_{t}\) such as Student's \(t\) are also considered in the literature, e.g., Gerlach et al., 2016. For other GARCH-type models, the reader is referred to Nelson, 1991, Glosten et al., 1993 and Bollerslev, 2008.
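As a concrete illustration, the following minimal sketch simulates a GARCH(1,1) path according to (1)-(2); the parameter values are placeholders chosen to satisfy the positivity and stationarity restrictions above, not estimates from any dataset.

```python
import numpy as np

def simulate_garch11(T, omega=0.05, alpha=0.1, beta=0.85, seed=0):
    """Simulate a GARCH(1,1) path following (1)-(2); parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    y, sigma2 = np.zeros(T), np.zeros(T)
    sigma2[0] = omega / (1.0 - alpha - beta)           # start at the unconditional variance
    y[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, T):
        sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
        y[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return y, sigma2

y, sigma2 = simulate_garch11(1000)
print(y[:3], sigma2[:3])
```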
The GARCH model relies on daily squared returns, which only contain a weak signal of the daily volatility \(\sigma_{t}^{2}\). It is widely known in the financial econometric literature that high-frequency return data, such as 5-minute data, can be used to estimate daily volatility with high accuracy. In the past twenty years, many estimators of daily volatility using high-frequency data were developed and are referred to as "realized volatility measures" (Andersen et al., 1998; Barndorff-Nielsen et al., 2008; Barndorff-Nielsen et al., 2004). As realized volatility measures are ex-post, they cannot be directly used for volatility forecasting but they are effective volatility proxies for volatility modeling. Engle, 2002 is among the first to explore this idea by incorporating the realized volatility measure of Andersen et al., 1998 into the GARCH model. Since then, many volatility models incorporating realized volatility measures have been developed, e.g., Forsberg et al., 2002, Engle et al., 2006, Corsi, 2009, Shephard et al., 2010. The realized GARCH model (RealGARCH) of Hansen et al., 2012
\[y_{t}=\sigma_{t}\epsilon_{t},\quad t=1,2,\ldots,T \tag{3a}\] \[\sigma_{t}^{2}=\omega+\gamma\text{rv}_{t-1}+\beta\sigma_{t-1}^{2}\] (3b) \[\text{rv}_{t}=\xi+\varphi\sigma_{t}^{2}+\tau\left(\epsilon_{t} \right)+u_{t} \tag{3c}\]
is an important development in this direction. Here \(\epsilon_{t}\overset{\text{i.i.d.}}{\sim}N(0,1)\), \(u_{t}\overset{\text{i.i.d.}}{\sim}N\left(0,\sigma_{u}^{2}\right)\), rv\({}_{t}\) is a realized volatility measure, and \(\tau(\epsilon)\) is regarded as the leverage function and used to capture the leverage effect often observed in volatility. Hansen et al., 2012 set \(\tau(\epsilon)=\tau_{1}\epsilon+\tau_{2}\left(\epsilon^{2}-1\right)\). An attractive feature of the RealGARCH model is that it contains the measurement equation (3c) that accounts for the variation in the realized volatility measure rv\({}_{t}\). It associates the observed realized volatility measure with the underlying latent volatility, in which the realized volatility measure rv\({}_{t}\) is explained as a linear function of \(\sigma_{t}^{2}\) plus a random innovation. Our article employs a simple yet effective realized volatility measure, the 5-minute realized variance (RV\({}_{5}\)) (Andersen et al., 1998). Suppose that we observe the asset price at \(n\) trading times within a trading day \(t\), \(t_{j}=t-1+j/n,j=1,\ldots,n\). Let \(\left\{P(t_{j}),j=1,...,n\right\}\) be the observed prices and \(r_{t_{j}}=\log P\left(t_{j}\right)-\log P\left(t_{j-1}\right)\) be the log-returns. The RV for the trading day \(t\) is defined as
\[\text{rv}_{t}:=\sum_{j=1}^{n}r_{t_{j}}^{2}.\]
It can be shown that (Andersen et al., 1998), as \(n\rightarrow\infty\), rv\({}_{t}\) converges in probability to the true latent variance \(\sigma_{t}^{2}\). For RV\({}_{5}\), the return \(r_{t_{j}}\) in the above equation is recorded at 5 minutes frequency. There are various definitions of realized volatility measures but there is little evidence that any outperform RV\({}_{5}\) as a volatility proxy; see Liu et al., 2015 for a detailed comparison of more than 400 realized volatility measures.
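To make the definition concrete, the sketch below computes the 5-minute realized variance for a single simulated trading day; the price path and the number of intraday intervals (78 five-minute returns in a 6.5-hour session) are illustrative assumptions rather than features of any particular market.

```python
import numpy as np

def realized_variance(prices):
    """rv_t = sum of squared intraday log-returns for one trading day."""
    log_ret = np.diff(np.log(prices))
    return np.sum(log_ret ** 2)

# Simulated 5-minute price path for one day: 78 intervals in a 6.5-hour session,
# with an assumed 1% daily volatility (both choices are purely illustrative).
rng = np.random.default_rng(0)
n, daily_vol = 78, 0.01
intraday_returns = daily_vol / np.sqrt(n) * rng.standard_normal(n)
prices = 100.0 * np.exp(np.concatenate(([0.0], np.cumsum(intraday_returns))))

print(realized_variance(prices), daily_vol ** 2)   # rv5 should be close to sigma_t^2
```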
### 2.2 Recurrent Neural Network
RNN is a special class of neural network designed for modeling sequential data. Let \(\left\{D_{t}=(x_{t},y_{t}),t=1,2,...\right\}\) be the data with \(x_{t}\) the input and \(y_{t}\) the output. We use \((x_{t},y_{t})\) in this section as generic notation, not necessarily applicable to the return data in other sections. The task is to model the conditional mean \(\widehat{y}_{t}=\mathbb{E}(y_{t}|x_{t},D_{1:t-1})\). The basic RNN framework is
\[h_{t}=g_{h}\left(W^{h}\left[h_{t-1},x_{t}\right]+b^{h}\right), \tag{4a}\] \[\widehat{y}_{t}=g_{y}\left(W^{y}h_{t}+b^{y}\right). \tag{4b}\]
The main feature of the RNN structure is its vector of hidden states \(h_{t}\) which is defined recurrently. At each time \(t\), two information sources are fed into \(h_{t}\): the historical information stored in \(h_{t-1}\) and the current information from the input \(x_{t}\). The functions \(g_{h}\) and \(g_{y}\) are activation functions such as \(\text{sigmf}(z):=1/(1+e^{-z})\), or \(\tanh(z):=(e^{z}-e^{-z})/(e^{z}+e^{-z})\). Finally, the \(W\) and \(b\) are trainable model parameters.
The basic RNN model in (4a)-(4b) has some limitations in terms of both modelling flexibility and training difficulty. Many sophisticated RNN structures are proposed to overcome these limitations, and the LSTM model of Hochreiter et al., 1997 stands out as one of the most successful methods. LSTM uses a gate structure to control the memory in the data. It is written as follows:
\[g_{t}^{i} =\sigma\left(W^{i}\left[h_{t-1},x_{t}\right]+b^{i}\right) \tag{5a}\] \[g_{t}^{f} =\sigma\left(W^{f}\left[h_{t-1},x_{t}\right]+b^{f}\right)\] (5b) \[g_{t}^{o} =\sigma\left(W^{o}\left[h_{t-1},x_{t}\right]+b^{o}\right)\] (5c) \[\tilde{c}_{t} =\tanh\left(W^{c}\left[h_{t-1},x_{t}\right]+b^{c}\right)\] (5d) \[c_{t} =g_{t}^{i}\cdot\tilde{c}_{t}+g_{t}^{f}\cdot c_{t-1}\] (5e) \[h_{t} =g_{t}^{o}\cdot\tanh\left(c_{t}\right)\] (5f) \[\widehat{y}_{t} =g_{y}\left(W^{y}h_{t}+b^{y}\right). \tag{5g}\]
Unlike the basic RNN that fully overwrites the memory stored in the hidden states at each step, LSTM can decide to keep, forget or update the memory via the memory cell \(c_{t}\) in (5e). This memory cell \(c_{t}\) is updated by partially forgetting the previous memory from \(c_{t-1}\) and adding new memory from \(\tilde{c}_{t}\). The extent of forgetting the history and adding new information is controlled by the forget gate \(g_{t}^{f}\) and input gate \(g_{t}^{i}\), respectively. Finally, the degree of current memory usage for final output is controlled by the output gate \(g_{t}^{o}\). See Goodfellow et al., 2016, for a book length discussion of RNN and deep learning methods in general.
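The following numpy sketch implements a single LSTM step, (5a)-(5f), to make the gate mechanics explicit; the weight matrices are random placeholders rather than trained parameters, and the dimensions are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following (5a)-(5f); W['i'], ..., W['c'] have shape (d_h, d_h + d_x)."""
    z = np.concatenate([h_prev, x_t])
    g_i = sigmoid(W['i'] @ z + b['i'])       # input gate,  (5a)
    g_f = sigmoid(W['f'] @ z + b['f'])       # forget gate, (5b)
    g_o = sigmoid(W['o'] @ z + b['o'])       # output gate, (5c)
    c_tilde = np.tanh(W['c'] @ z + b['c'])   # candidate memory, (5d)
    c_t = g_i * c_tilde + g_f * c_prev       # memory update,    (5e)
    h_t = g_o * np.tanh(c_t)                 # hidden state,     (5f)
    return h_t, c_t

d_x, d_h = 4, 8                              # arbitrary input/hidden dimensions
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((d_h, d_h + d_x)) for k in 'ifoc'}
b = {k: np.zeros(d_h) for k in 'ifoc'}
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.standard_normal(d_x), h, c, W, b)
print(h.shape, c.shape)
```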
The ability of LSTM to control its memory and quickly adapt to new data patterns makes it very suitable to model volatility dynamics. It is well-known that volatility dynamics has long memory as well as a clustering effect in which the volatility can be highly volatile from period to
period (Christensen et al., 2007; Ding et al., 1993; Mandelbrot, 1967). Section 2.4 describes how to incorporate LSTM into volatility modelling.
### 2.3 Recurrent conditional heteroskedasticity
The recurrent conditional heteroscedasticity (RECH) framework of Nguyen et al., 2022 is a class of volatility models that incorporate an RNN within a GARCH-type model for flexible volatility modeling. Its key motivation is using an additive component governed by an RNN to capture complex serial dependence structure in the volatility dynamics that may be overlooked by the GARCH component. The general RECH framework is written as
\[y_{t} =\sigma_{t}\epsilon_{t},\quad t=1,2,\ldots,T \tag{6a}\] \[\sigma_{t}^{2} =g(\omega_{t})+\sum_{i=1}^{p}\alpha_{i}y_{t-i}^{2}+\sum_{j=1}^{q }\beta_{j}\sigma_{t-j}^{2}\] (6b) \[\omega_{t} =\text{RNN}\left(x_{t}\right). \tag{6c}\]
Nguyen et al., 2022 refer to the term \(g(\omega_{t})\) in (6b) as the RNN component, as it is driven by an RNN. \(\sum_{i=1}^{p}\alpha_{i}y_{t-i}^{2}+\sum_{j=1}^{q}\beta_{j}\sigma_{t-j}^{2}\) is the GARCH component, which is inherited from the GARCH structure. We refer to the parameters in the RNN and GARCH components as the RNN and GARCH parameters, respectively. At each step, the RNN component takes information \(x_{t}\) as input to compute the new \(\omega_{t}\). A wide range of data can be fed to the RNN as additional inputs \(x_{t}\), whose choice is discussed shortly. Finally, the function \(g\) in (6b) is a non-negative activation function, applied to \(\omega_{t}\) to ensure a positive conditional variance. We adopt the ReLU function, \(g\left(\omega_{t}\right):=\max\left\{\omega_{t},0\right\}\), for this purpose. An attractive feature of RECH is that it is easy to incorporate input \(x_{t}\) into its RNN component whenever such input is available and useful in terms of modeling and predicting the return \(y_{t}\). For example, \(x_{t}\) can be a set of relevant exogenous variables if these are available. In their article, which does not consider additional information other than the return time series itself, Nguyen et al., 2022 suggest using \(x_{t}=\left(\omega_{t-1},y_{t-1},\sigma_{t-1}^{2}\right)\). With the availability of realized volatility measures \(\left\{\text{rv}_{t}\right\}\) as an effective volatility proxy, it is natural to include them in RECH. This motivates the introduction of the new realized RECH model
presented in the next section.
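For intuition, a minimal sketch of the RECH(1,1) recursion with a single-hidden-unit basic RNN is given below; the output map \(\omega_{t}=\beta_{0}+\beta_{1}h_{t}\) anticipates the specification (8d) used later, and all parameter names and initialisations are illustrative rather than the exact estimation code.

```python
import numpy as np

def rech_variance(y, params):
    """Conditional variances of a RECH(1,1) model, eqs (6a)-(6c), with a
    one-unit basic RNN (4a) driving omega_t; a sketch, not the exact estimation code."""
    alpha, beta = params["alpha"], params["beta"]
    beta0, beta1 = params["beta0"], params["beta1"]
    v, b = params["v"], params["b"]            # RNN weights acting on [h_{t-1}, x_t]
    T = len(y)
    sigma2 = np.empty(T)
    sigma2[0] = np.var(y)                      # illustrative initialisation
    h, omega = 0.0, 0.0
    for t in range(1, T):
        x = np.array([omega, y[t - 1], sigma2[t - 1]])        # inputs suggested by Nguyen et al., 2022
        h = np.tanh(v @ np.concatenate(([h], x)) + b)         # eq (4a)
        omega = beta0 + beta1 * h
        sigma2[t] = max(omega, 0.0) + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]  # eq (6b), ReLU g
    return sigma2
```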
### 2.4 Realized recurrent conditional heteroskedasticity
This section presents our proposed Realized Recurrent Conditional Heteroscedastic model (Real-RECH), which extends the RECH model in two important ways. First, realized volatility measures are incorporated into RECH. Second, we improve the modeling flexibility of RECH by using a more sophisticated RNN, the LSTM architecture. The RealRECH model of order \(p\) and \(q\), RealRECH\((p,q)\), is written as:
\[y_{t} =\sigma_{t}\epsilon_{t},\quad t=1,2,\ldots,T \tag{7a}\] \[\sigma_{t}^{2} =g(\omega_{t})+\sum_{i=1}^{p}\gamma_{i}\mathrm{rv}_{t-i}+\sum_{j= 1}^{q}\beta_{j}\sigma_{t-j}^{2}\] (7b) \[\mathrm{rv}_{t} =\xi+\varphi\sigma_{t}^{2}+\tau\left(\epsilon_{t}\right)+u_{t}\] (7c) \[\omega_{t} =\mathrm{LSTM}\left(x_{t}\right)\] (7d) \[x_{t} =\left(\omega_{t-1},y_{t-1},\sigma_{t-1}^{2},\mathrm{rv}_{t-1} \right). \tag{7e}\]
Compared to (6b), we replace the squared returns \(y_{t}^{2}\) with more effective volatility proxies, i.e., the realized volatility measures \(\mathrm{rv}_{t}\). We also add the realized volatility measure to the input vector \(x_{t}\) of the LSTM, as in (7e). Compared to the RealGARCH model, equation (3b), which only allows a linear and short-term dependence of the true latent conditional variance \(\sigma_{t}^{2}\) on the realized volatility measure, the RealRECH model is much more flexible. The latter, via its RNN component, allows for both non-linear and long-term dependence that the previous realized volatility measures might have on \(\sigma_{t}^{2}\). Similarly to RealGARCH, RealRECH also includes the measurement equation (7c) to account for the variation in the realized volatility measures. Rather than using the linear structure as in (7c), one could easily use an RNN model to explain \(\mathrm{rv}_{t}\) based on \(\sigma_{t}^{2}\); we tried this but did not observe any meaningful improvement. Another significant extension we make to RECH is that the RealRECH model replaces the basic RNN with the LSTM architecture; see Section 2.2. The ability of the LSTM to control its memory makes it naturally suitable for volatility modelling. Via the forget and input gates, the RNN component \(g(\omega_{t})\) can adapt quickly to changes
in the stock market. In highly volatile periods where the historical data patterns and the current data patterns are different enough, the forget gate will be activated to enable the RNN component to ignore irrelevant historical information, allowing the RNN component to quickly pick up the new patterns via the input gate. In periods with small changes in volatility, the forget gate will shut to allow for the high persistence in the volatility. We only include one realized volatility measure in the RealRECH model; however, it is easy to incorporate as many realized volatility measures as possible by using them as additional inputs into \(x_{t}\). We only consider RealRECH(1,1) in this paper, which, for ease of reading and later cross-reference, is expressed as
\[y_{t} =\sigma_{t}\epsilon_{t},\quad t=1,2,\ldots,T \tag{8a}\] \[\sigma_{t}^{2} =g(\omega_{t})+\gamma\mathrm{rv}_{t-1}+\beta\sigma_{t-1}^{2}\] (8b) \[\mathrm{rv}_{t} =\xi+\varphi\sigma_{t}^{2}+\tau\left(\epsilon_{t}\right)+u_{t}\] (8c) \[\omega_{t} =\beta_{0}+\beta_{1}h_{t}\] (8d) \[h_{t} =\mathrm{LSTM}\left(x_{t}\right)\] (8e) \[x_{t} =\left(\omega_{t-1},y_{t-1},\sigma_{t-1}^{2},\mathrm{rv}_{t-1} \right). \tag{8f}\]
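A corresponding sketch of the RealRECH(1,1) recursion (8a)-(8f) is given below; it reuses the `lstm_step` sketch of Section 2.2 with a single hidden unit, and the initialisations are again illustrative rather than the exact choices used in our implementation.

```python
import numpy as np  # lstm_step as sketched in Section 2.2

def realrech_variance(y, rv, params, W, b):
    """Conditional variances of RealRECH(1,1), eqs (8a)-(8f), with a
    single-hidden-unit LSTM driving omega_t."""
    beta, gamma = params["beta"], params["gamma"]
    beta0, beta1 = params["beta0"], params["beta1"]
    T = len(y)
    sigma2 = np.empty(T)
    sigma2[0] = np.var(y)                       # illustrative initialisation
    h, c, omega = np.zeros(1), np.zeros(1), 0.0
    for t in range(1, T):
        x = np.array([omega, y[t - 1], sigma2[t - 1], rv[t - 1]])   # eq (8f)
        h, c = lstm_step(h, c, x, W, b)                             # eq (8e)
        omega = beta0 + beta1 * h[0]                                # eq (8d)
        sigma2[t] = max(omega, 0.0) + gamma * rv[t - 1] + beta * sigma2[t - 1]  # eq (8b)
    return sigma2
```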
## 3 Bayesian inference for the RealRECH model
### The likelihood and prior
We adopt the Bayesian inference approach for the estimation of RealRECH. Following Hansen et al., 2012, we assume Gaussian errors \(\epsilon_{t}\overset{i.i.d.}{\sim}N(0,1)\) and \(u_{t}\overset{i.i.d.}{\sim}N(0,\sigma_{u}^{2})\). Hence, given the observed data \((\mathbf{y}=\{y_{1},...,y_{T}\},\mathbf{r}\mathbf{v}=(\mathrm{rv}_{1},...,\mathrm{rv}_{T}))\), the log-likelihood function of the RealRECH model is:
\[\ell(\mathbf{y},\mathbf{r}\mathbf{v}|\theta)=-\frac{1}{2}\sum_{t=1}^{T}\left[\log(2\pi)+\log\left(\sigma_{t}^{2}\right)+y_{t}^{2}/\sigma_{t}^{2}\right]-\frac{1}{2}\sum_{t=1}^{T}\left[\log(2\pi)+\log\left(\sigma_{u}^{2}\right)+u_{t}^{2}/\sigma_{u}^{2}\right], \tag{9}\]
where \(u_{t}=\mathrm{rv}_{t}-\xi-\varphi\sigma_{t}^{2}-\tau_{1}y_{t}/\sigma_{t}-\tau _{2}\left(y_{t}^{2}/\sigma_{t}^{2}-1\right)\). Recall that the vector of model parameters \(\theta\) consists of the GARCH and RNN parameters. For example, the RealRECH(1,1) model with a single hidden state LSTM has 7 GARCH parameters, \((\beta,\gamma,\xi,\varphi,\tau_{1},\tau_{2},\sigma_{u})\), and 26 RNN parameters
including \(\beta_{0}\), \(\beta_{1}\) and \(24\) parameters within the LSTM structure.
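Given the conditional variances produced by the recursion above, the log-likelihood (9) is straightforward to evaluate. The following sketch assumes the Gaussian errors stated above and uses \(\tau(\epsilon_{t})=\tau_{1}\epsilon_{t}+\tau_{2}(\epsilon_{t}^{2}-1)\) with \(\epsilon_{t}=y_{t}/\sigma_{t}\), as in the expression for \(u_{t}\).

```python
import numpy as np

def realrech_loglik(y, rv, sigma2, params):
    """Log-likelihood of eq (9) for RealRECH under Gaussian return and measurement errors."""
    xi, phi = params["xi"], params["phi"]
    tau1, tau2, sigma_u = params["tau1"], params["tau2"], params["sigma_u"]
    eps = y / np.sqrt(sigma2)
    u = rv - xi - phi * sigma2 - tau1 * eps - tau2 * (eps ** 2 - 1.0)   # measurement residual
    ll_return = -0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + y ** 2 / sigma2)
    ll_measure = -0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma_u ** 2) + u ** 2 / sigma_u ** 2)
    return ll_return + ll_measure
```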
For the prior distributions on the GARCH parameters, we use the commonly used priors in the literature; see, e.g., Gerlach et al., 2016. For the RNN parameters, we follow Nguyen et al., 2022 and use their prior set-up.
### Model estimation and prediction
For Bayesian inference and prediction in volatility modeling, the Sequential Monte Carlo (SMC) method (Chopin, 2002; Del Moral et al., 2012; Neal, 2001) is an effective approach: it samples efficiently from non-standard posteriors, is well suited to computing rolling-window volatility forecasts, and provides the marginal likelihood estimate as a by-product. The SMC technique uses a set of \(M\) weighted particles, initially sampled from an easy-to-sample distribution such as the prior \(p(\theta)\), which are then moved through a sequence of intermediate distributions that ends at the target distribution. See Gunawan et al., 2022 for a review of the SMC method.
For in-sample model estimation and inference, we use the likelihood annealing version of SMC that samples from the sequence of distributions
\[\pi_{k}(\theta)\propto p(\theta)p\left(\mathbf{y},\mathbf{r}\mathbf{v}\mid\theta\right)^{ \gamma_{k}},\ \ k=0,1,\ldots K; \tag{10}\]
here, \(0=\gamma_{0}<\gamma_{1}<\gamma_{2}<\ldots<\gamma_{K}=1\) are called the temperature levels. Reweighting, resampling, and a Markov transition are the three primary components of the SMC approach. Several methods exist to implement SMC in practice, and we briefly describe one of them now. The collection of weighted particles \(\left\{W_{k-1}^{j},\theta_{k-1}^{j}\right\}_{j=1}^{M}\) that approximate the intermediate distribution \(\pi_{k-1}(\theta)\) is reweighted at the start of iteration \(k\) to approximate the target \(\pi_{k}(\theta)\). The efficiency of these weighted particles is measured by the effective sample size (ESS) (Kass et al., 1998)
\[\text{ESS}=\frac{1}{\sum_{j=1}^{M}\left(W_{k}^{j}\right)^{2}}, \tag{11}\]
where \(W_{k}:=(W_{k}^{1},\ldots,W_{k}^{M})\) is the normalized weight vector of the \(M\) particles at iteration \(k\). If the ESS falls below a threshold, the particles are resampled to obtain equally weighted particles, and a Markov kernel
with the invariant distribution \(\pi_{k}(\theta)\) is then applied to refresh these equally weighted particles. Following Del Moral et al., 2012, we adaptively choose the tempering sequence \(\gamma_{k}\) in order to maintain an adequate particle diversity.
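The following schematic sketch summarizes the likelihood-annealing sampler. The adaptive choice of the tempering sequence, the particular Markov kernel `mh_move`, and the 80% ESS threshold are illustrative simplifications, not the exact procedure we use.

```python
import numpy as np
from scipy.special import logsumexp

def smc_likelihood_annealing(theta, log_lik, gammas, mh_move):
    """Schematic likelihood-annealing SMC for the sequence (10). theta is an
    (M, d) array of particles drawn from the prior; mh_move is a Markov kernel
    leaving pi_k invariant."""
    M = theta.shape[0]
    logw = np.full(M, -np.log(M))                   # normalized log-weights
    loglik = np.array([log_lik(th) for th in theta])
    log_marglik = 0.0                               # marginal likelihood by-product
    for k in range(1, len(gammas)):
        inc = (gammas[k] - gammas[k - 1]) * loglik  # reweighting increment
        log_marglik += logsumexp(logw + inc)
        logw = logw + inc
        logw -= logsumexp(logw)                     # renormalize
        ess = 1.0 / np.sum(np.exp(2 * logw))        # eq (11)
        if ess < 0.8 * M:                           # resample, then refresh with the Markov kernel
            idx = np.random.choice(M, size=M, p=np.exp(logw))
            theta = np.array([mh_move(theta[i], gammas[k]) for i in idx])
            loglik = np.array([log_lik(th) for th in theta])
            logw = np.full(M, -np.log(M))
    return theta, np.exp(logw), log_marglik
```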
For out-of-sample rolling-window volatility forecasting which updates the posterior each time a new data observation arrives, we employ the data annealing SMC method that samples from the sequence
\[\pi_{t}(\theta)\propto p(\theta)p(\mathbf{y}_{1:T+t},\mathbf{r}\mathbf{v}_{1:T+t}|\theta), \;\;t=1,2,... \tag{12}\]
## 4 Empirical Analysis
This section studies the performance of the RealRECH(1,1) model and compares it to GARCH, RealGARCH and RECH using 31 stock indices including the Amsterdam Exchange Index (AEX), the Dow Jones Index (DJI), the Frankfurt Stock Exchange (GDAXI) and the Standard and Poor's 500 Index (SP500). The datasets were downloaded from the Realized Library of The Oxford-Man Institute. In the main text, we present results for the above mentioned four representative indices and the average over 31 indices to conserve space; the detailed results for all 31 indices are shown in the appendix. Given the closing prices, we compute the demeaned close-to-close return process as
\[y_{t}=100\left(\log\frac{P_{t}}{P_{t-1}}-\frac{1}{n}\sum_{i=1}^{n}\log\frac{P _{i}}{P_{i-1}}\right),\quad t=1,2,\ldots,n. \tag{13}\]
We adopt the 5-minute realized variance (Andersen et al., 2003) as the realized volatility measure, \(\mathrm{rv}_{t}\), in RealGARCH and RealRECH. As realized volatility measures ignore the overnight variation of the prices, and sometimes also the variation in the first few minutes of the trading day when recorded prices may contain large errors (Shephard et al., 2010), we follow Hansen et al., 2005 and scale the realized volatility measure as
\[\widetilde{\sigma}_{t}^{2}=\hat{c}\cdot\mathrm{rv}_{t}\text{ where }\hat{c}= \frac{\sum_{i=1}^{T}y_{i}^{2}}{\sum_{i=1}^{T}\mathrm{rv}_{i}},\quad t=1,2,\ldots, \tag{14}\]
and use \(\widetilde{\sigma}_{t}^{2}\) as the estimate of the latent conditional variance \(\sigma_{t}^{2}\). Our sample runs from 1 January 2004 to 1 January 2022, including the COVID-19 pandemic period, and is divided into a first half for training and a second half for out-of-sample analysis. There are, on average, 2139 trading days in each half. See the appendix for a detailed data description.
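A minimal sketch of the data preparation in (13)-(14) is shown below (using pandas; the series names and alignment choices are illustrative).

```python
import numpy as np
import pandas as pd

def prepare_data(prices: pd.Series, rv: pd.Series):
    """Demeaned close-to-close returns, eq (13), and the rescaled realized
    measure of eq (14), used as the estimate of the latent conditional variance."""
    logret = 100 * np.log(prices / prices.shift(1)).dropna()
    y = logret - logret.mean()                 # eq (13)
    rv = rv.loc[y.index]                       # align the realized measure with the returns
    c_hat = (y ** 2).sum() / rv.sum()          # eq (14)
    sigma2_tilde = c_hat * rv                  # rescaled proxy for sigma_t^2
    return y, sigma2_tilde
```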
### Parameter estimation and in-sample fit
As mentioned in Section 3, the marginal likelihood estimate is a by-product of SMC, which is useful for in-sample model comparison using a Bayes factor. Given the training data (\(\mathbf{y}=y_{1:T},\mathbf{r}\mathbf{v}=\text{rv}_{1:T}\)), let \(\widehat{p}\left(\mathbf{y},\mathbf{r}\mathbf{v}|M_{i}\right)\) be the marginal likelihood estimate of model \(M_{i}\), \(i=1,2\). The Bayes factor of model \(M_{1}\) relative to \(M_{2}\) is
\[\text{BF}_{M_{1},M_{2}}=\frac{\widehat{p}\left(\mathbf{y},\mathbf{r}\mathbf{v}|M_{1}\right)}{\widehat{p}\left(\mathbf{y},\mathbf{r}\mathbf{v}|M_{2}\right)}. \tag{15}\]
The higher the Bayes factor, the more decisive is the evidence that model \(M_{1}\) is superior to \(M_{2}\)(Kass et al., 1995). Table 1 summarizes the parameter estimation (for the five main parameters) and the Bayes factor (with GARCH used as the baseline model) for the four models, GARCH, RealGARCH, RECH and RealRECH. The last panel reports the results for the average over the 31 indices.
We draw the following conclusions from the estimation results. First, the marginal likelihood estimates show that the RealRECH model fits the index datasets better than the competing models for 28 of 31 indices. On average, the Bayes factors of the RealRECH model compared with the GARCH, RECH, and RealGARCH models are roughly \(e^{58}\), \(e^{30}\) and \(e^{14}\), respectively, which, according to Jeffreys' scale for interpreting the Bayes factor (Jeffreys, 1998), decisively support the RealRECH model. Second, the estimated posterior mean of the parameter \(\beta_{1}\) in the RealRECH model is significantly different from zero in all cases, providing evidence of effects beyond the purely linear ones, most likely non-linear and long-memory effects, that the previous conditional variance \(\sigma_{t-1}^{2}\) and realized volatility measure \(\text{rv}_{t-1}\) have on \(\sigma_{t}^{2}\). This also suggests that the RNN component \(g(\omega_{t})\) in RealRECH can detect these effects effectively. Additionally, the estimated value of the parameter \(\gamma\) (the coefficient on the realized volatility measure) in RealRECH is always smaller than that in RealGARCH. This is perhaps because the effect of the realized volatility measure \(\text{rv}_{t-1}\) on the underlying volatility \(\sigma_{t}^{2}\) is well captured by the RNN component, which, unlike RealGARCH, allows non-linear dependence of \(\sigma_{t}^{2}\) on \(\text{rv}_{t-1}\).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline & & \(\alpha\) & \(\beta\) & \(\beta_{0}\) & \(\beta_{1}\) & \(\gamma\) & Mar.lik & BF \\ \hline \multirow{4}{*}{AEX} & \multirow{2}{*}{garch} & \(0.105\) & \(0.884\) & \multirow{4}{*}{-} & \multirow{4}{*}{-} & \(-\) & \(-3450.8\) & \multirow{4}{*}{1} \\ & & \((0.012)\) & \((0.013)\) & & & & \((0.183)\) & \\ & & \(0.038\) & \(0.741\) & \(0.049\) & \(0.907\) & & \(-3400.6\) & \(e^{50}\) \\ & & \((0.014)\) & \((0.061)\) & \((0.019)\) & \((0.182)\) & & \((0.280)\) & \\ & & \(0.549\) & & & & \(0.371\) & \(-3376.3\) & \\ & & & \((0.028)\) & & & \((0.028)\) & \((0.018)\) & \\ & & & \(0.527\) & \(0.112\) & \(0.801\) & \(0.341\) & \(-3370.5\) & \\ & & & \((0.021)\) & \((0.022)\) & \((0.081)\) & \((0.015)\) & \((0.220)\) & \\ \hline \multirow{4}{*}{DJI} & \multirow{2}{*}{garch} & \(0.094\) & \(0.890\) & \multirow{4}{*}{-} & \(-\) & \(-\) & \(-3027.2\) & \multirow{4}{*}{1} \\ & & \((0.011)\) & \((0.012)\) & & & & \((0.100)\) & \\ & & \(0.049\) & \(0.636\) & \(0.063\) & \(0.912\) & & \(-2991.0\) & \\ & & \((0.011)\) & \((0.063)\) & \((0.019)\) & \((0.182)\) & & \((0.785)\) & \(e^{36}\) \\ & & & \(0.634\) & & & \(0.318\) & \(-2968.4\) & \\ & & & \((0.024)\) & & & \((0.027)\) & \((0.075)\) & \\ & & & \(0.840\) & \(1.413\) & \(11.827\) & \(0.005\) & \(-2962.9\) & \\ & & - & \((0.014)\) & \((0.148)\) & \((0.577)\) & \((0.004)\) & \((0.421)\) & \\ \hline \multirow{4}{*}{GDAXI} & \multirow{2}{*}{garch} & \(0.098\) & \(0.889\) & \multirow{4}{*}{-} & \(-\) & \(-3615.5\) & \multirow{4}{*}{1} \\ & & \((0.013)\) & \((0.014)\) & - & - & \(-\) & \((0.107)\) & \\ & & \(0.049\) & \(0.666\) & \(0.079\) & \(1.000\) & & \(-3570.0\) & \\ & & \((0.013)\) & \((0.071)\) & \((0.036)\) & \((0.206)\) & & \((0.349)\) & \(e^{46}\) \\ & & realgarch & - & \(0.439\) & & & \(0.452\) & \(-3534.8\) & \\ & & & \((0.025)\) & & & \((0.031)\) & \((0.055)\) & \(e^{81}\) \\ & & & \(0.414\) & \(0.126\) & \(5.561\) & \(0.307\) & \(-3523.6\) & \\ & & & \((0.029)\) & \((0.084)\) & \((0.409)\) & \((0.022)\) & \((0.141)\) & \({\bf e^{92}}\) \\ \hline \multirow{4}{*}{SPX} & \multirow{2}{*}{garch} & \(0.091\) & \(0.895\) & \multirow{4}{*}{-} & \(-\) & \(-\) & \(-3177.1\) & \multirow{4}{*}{1} \\ & & \((0.010)\) & \((0.011)\) & & & & \((0.158)\) & \\ \cline{1-1} & & \(0.049\) & \(0.657\) & \(0.071\) & \(0.871\) & & \(-3150.4\) & \\ \cline{1-1} & & \((0.010)\) & \((0.059)\) & \((0.020)\) & \((0.193)\) & & \((0.491)\) & \\ \cline{1-1} & & & \(0.633\) & & & \(0.317\) & \(-3100.1\) & \\ \cline{1-1} & & & \((0.022)\) & & & \((0.025)\) & \((0.078)\) & \\ \cline{1-1} & & & \(0.831\) & \(1.549\) & \(11.432\) & \(0.007\) & \(-3104.7\) & \\ \cline{1-1} & & & \((0.016)\) & \((0.160)\) & \((0.660)\) & \((0.005)\) & \((0.205)\) & \(e^{72}\) \\ \hline \multirow{4}{*}{Mean} & \multirow{2}{*}{garch} & \(0.102\) & \(0.882\) & \multirow{4}{*}{-} & \(-\) & \(-3411.9\) & \multirow{4}{*}{1} \\ & & \((0.013)\) & \((0.015)\) & & & \((0.132)\) & \\ \cline{1-1} & & \(0.059\) & \(0.729\) & \(0.071\) & \(0.850\) & & \(-3384.3\) & \\ \cline{1-1} & & \((0.014)\) & \((0.057)\) & \((0.032)\) & \((0.193)\) & & \((0.226)\) & \\ \cline{1-1} & & & \(0.628\) & & & \(0.306\) & \(-3368.4\) & \\ \cline{1-1} & & & \((0.024)\) & & & \((0.025)\) & \((0.214)\) & \(e^{44}\) \\ \cline{1-1} & & & \(0.667\) & \(0.613\) & \(5.203\) & \(0.168\) & \(-3354.0\) & \\ \cline{1-1} & & & \((0.034)\) & \((0.118)\) & \((0.453)\) & \((0.021)\) & \((0.472)\) & \\ \hline \end{tabular} _Note:_ The last two columns show the natural logarithms of the estimated marginal 
likelihood (Mar.lik) with Monte Carlo standard errors (in parentheses) across 6 different runs of SMC and the Bayes factors (BF) that used GARCH as the baseline.
\end{table}
Table 1: In-sample analysis: Posterior means of the parameters with the posterior standard deviations (in parentheses).
### Volatility forecast error compared to ex-post realized volatility measures
We now test the predictive performance of the RealRECH model for volatility forecasting. The forecast performance is measured by mean squared error (MSE) and mean absolute deviation (MAD) computed on test data \(D_{test}\) of size \(T_{test}\)
\[\text{MSE}=T_{test}^{-1}\sum_{D_{test}}(\widehat{\sigma}_{t}-\widetilde{\sigma}_{t})^{2}, \tag{16}\]
\[\text{MAD}=T_{test}^{-1}\sum_{D_{test}}|\widehat{\sigma}_{t}-\widetilde{\sigma}_{t}|, \tag{17}\]
where \(\widehat{\sigma}_{t}\) is the one-step-ahead rolling-window forecast of the latent \(\sigma_{t}\) and \(\widetilde{\sigma}_{t}\) is the square root of an ex-post realized volatility measure after rescaling as in (14). We use five ex-post volatility proxies: the Realized Variance (RV), Bipower Variation (BV), Median Realized Volatility (MedRV), and the Realized Kernel Variance with the Non-flat Parzen kernel (RK-Parzen) and with the Tukey-Hanning kernel (RSV). See Shephard et al., 2010 for details about the Realized Library.
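The two criteria are computed as in the short sketch below, with `sigma_tilde` the square root of the rescaled ex-post measure.

```python
import numpy as np

def forecast_errors(sigma_hat, sigma_tilde):
    """MSE and MAD of eqs (16)-(17) over the test sample."""
    err = np.asarray(sigma_hat) - np.asarray(sigma_tilde)
    return {"MSE": np.mean(err ** 2), "MAD": np.mean(np.abs(err))}
```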
Tables 2 and 3 summarize the forecast performance of the four models. The last column in each table shows the number of times a model has the best predictive score across the five volatility proxies. The last panel in each table shows the average scores over the 31 indices. Using the five ex-post realized volatility measures as the ground truth, the results show that the RealRECH model performs the best in terms of volatility forecasting.
### Fitness to return series and tail risk forecast
An attractive feature of GARCH-type models including RealRECH is that they model both the volatility and return processes. This feature is crucial to risk management which is one of the most important applications of volatility modeling. For risk management, the key task is to forecast the Value at Risk (VaR) and Expected Shortfall (ES). An \(\alpha\)-level VaR is the \(\alpha\)-level quantile of the distribution of the return, and the \(\alpha\)-level ES is the conditional expectation of return values that exceed the corresponding \(\alpha\)-level VaR. Both VaR and ES are used as the two key risk measures in financial regulation and recommended by the Basel Accord.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & & RV5 & BV & MedRV & RK-Parzen & RSV & Count \\ \hline \multirow{4}{*}{AEX} & garch & 0.151 & 0.142 & 0.141 & 0.209 & 0.190 & 0.0 \\ & rech & 0.143 & 0.131 & 0.126 & 0.188 & 0.180 & 0.0 \\ & realgarch & 0.121 & 0.109 & 0.102 & 0.173 & 0.158 & 0.0 \\ & realrech & **0.116** & **0.103** & **0.094** & **0.157** & **0.152** & **5.0** \\ \hline \multirow{4}{*}{DJI} & garch & 0.203 & 0.174 & 0.177 & 0.202 & 0.274 & 0.0 \\ & rech & 0.194 & 0.176 & 0.176 & 0.193 & 0.266 & 0.0 \\ & realgarch & 0.121 & 0.124 & 0.127 & 0.119 & 0.196 & 0.0 \\ & realrech & **0.112** & **0.122** & **0.118** & **0.109** & **0.184** & **5.0** \\ \hline \multirow{4}{*}{GDAXI} & garch & 0.180 & 0.174 & 0.182 & 0.203 & 0.221 & 0.0 \\ & rech & 0.159 & 0.154 & 0.160 & 0.180 & 0.200 & 0.0 \\ & realgarch & 0.111 & 0.110 & 0.114 & 0.140 & 0.152 & 0.0 \\ & realrech & **0.106** & **0.104** & **0.105** & **0.132** & **0.146** & **5.0** \\ \hline \multirow{4}{*}{SPX} & garch & 0.183 & 0.166 & 0.180 & 0.182 & 0.243 & 0.0 \\ & rech & 0.174 & 0.163 & 0.172 & 0.172 & 0.237 & 0.0 \\ & realgarch & 0.117 & **0.119** & 0.129 & 0.119 & 0.182 & 1.0 \\ & realrech & **0.112** & 0.121 & **0.125** & **0.113** & **0.176** & **4.0** \\ \hline \multirow{4}{*}{Mean} & garch & 0.196 & 0.174 & 0.184 & 0.243 & 0.267 & 0.0 \\ & rech & 0.184 & 0.163 & 0.172 & 0.227 & 0.257 & 0.3 \\ \cline{1-1} & realgarch & 0.153 & 0.138 & 0.151 & 0.204 & 0.225 & 1.2 \\ \cline{1-1} & realrech & **0.147** & **0.133** & **0.146** & **0.192** & **0.219** & **3.5** \\ \hline \hline \end{tabular} _Note:_ The last column reports the number of times a model has the lowest MSE among the five realized volatility measures. The bottom panel reports the average across the 31 indices. The bold numbers indicate the best scores.
\end{table}
Table 2: Forecast performance: MSE for different realized volatility measures.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & & RV5 & BV & MedRV & RK-Parzen & RSV & Count \\ \hline \multirow{4}{*}{AEX} & garch & 0.259 & 0.256 & 0.266 & 0.332 & 0.298 & 0.0 \\ & rech & 0.237 & 0.233 & 0.242 & 0.300 & 0.276 & 0.0 \\ & realgarch & 0.218 & 0.211 & 0.219 & 0.294 & 0.260 & 0.0 \\ & realrech & **0.209** & **0.202** & **0.209** & **0.274** & **0.249** & **5.0** \\ \hline \multirow{4}{*}{DJI} & garch & 0.291 & 0.273 & 0.301 & 0.308 & 0.344 & 0.0 \\ & rech & 0.267 & 0.253 & 0.278 & 0.281 & 0.323 & 0.0 \\ & realgarch & 0.218 & 0.214 & 0.239 & 0.237 & 0.279 & 0.0 \\ & realrech & **0.208** & **0.204** & **0.226** & **0.223** & **0.272** & **5.0** \\ \hline \multirow{4}{*}{GDAXI} & garch & 0.317 & 0.310 & 0.320 & 0.343 & 0.356 & 0.0 \\ & rech & 0.297 & 0.290 & 0.297 & 0.324 & 0.337 & 0.0 \\ & realgarch & 0.237 & 0.233 & 0.242 & 0.274 & 0.284 & 0.0 \\ & realrech & **0.233** & **0.229** & **0.237** & **0.266** & **0.277** & **5.0** \\ \hline \multirow{4}{*}{SPX} & garch & 0.295 & 0.286 & 0.312 & 0.305 & 0.340 & 0.0 \\ & rech & 0.265 & 0.262 & 0.285 & 0.275 & 0.315 & 0.0 \\ & realgarch & 0.222 & 0.225 & 0.247 & 0.240 & 0.277 & 0.0 \\ & realrech & **0.211** & **0.217** & **0.237** & **0.226** & **0.270** & **5.0** \\ \hline \multirow{4}{*}{Mean} & garch & 0.289 & 0.278 & 0.293 & 0.338 & 0.337 & 0.0 \\ & rech & 0.278 & 0.267 & 0.282 & 0.323 & 0.327 & 0.2 \\ \cline{1-1} & realgarch & 0.248 & 0.238 & 0.258 & 0.305 & 0.302 & 0.7 \\ \cline{1-1} & realrech & **0.241** & **0.233** & **0.254** & **0.292** & **0.294** & **4.1** \\ \hline \end{tabular} _Note:_ The last column reports the number of times a model has the lowest MAD among the five realized volatility measures. The last panel reports the average across th 31 indices. The bold numbers indicate the best scores.
\end{table}
Table 3: Forecast performance: MAD for different realized volatility measures.
To evaluate the quality of the VaR forecast, we adopt the standard quantile loss function
\[\text{Qloss}:=\sum_{y_{t}\in D_{test}}\left(\alpha-I\left(y_{t}<Q_{t}^{\alpha} \right)\right)\left(y_{t}-Q_{t}^{\alpha}\right), \tag{18}\]
where \(Q_{t}^{\alpha}\) is the forecast of \(\alpha\)-level VaR of \(y_{t}\)(Koenker et al., 1978). We note that, given the volatility \(\sigma_{t}\) and the normality assumption of the random shock \(\epsilon_{t}\) in (7a), it is straightforward to compute \(Q_{t}^{\alpha}\). The quantile loss function is strictly consistent (Fissler et al., 2016), i.e., the expected loss is lowest at the true quantile series. The most accurate VaR forecasting model should therefore minimize the quantile loss function.
There is no strictly consistent loss function for ES; however, Fissler et al., 2016 found that ES and VaR are jointly elicitable, i.e., there is a class of strictly consistent loss functions for evaluating VaR and ES forecasts jointly. Taylor, 2019 showed that the negative logarithm of the likelihood function built from the Asymmetric Laplace (AL) distribution is strictly consistent for VaR and ES considered jointly and fits into the class developed by Fissler et al., 2016. This AL based joint loss function is given as
\[\text{JointLoss}:=\frac{1}{T_{\text{test}}}\sum_{D_{\text{test}}}\left(-\log \left(\frac{\alpha-1}{\text{ES}_{t}^{\alpha}}\right)-\frac{\left(y_{t}-Q_{t}^ {\alpha}\right)\left(\alpha-I\left(y_{t}\leq Q_{t}^{\alpha}\right)\right)}{ \alpha\text{ES}_{t}^{\alpha}}\right) \tag{19}\]
with \(\text{ES}_{t}^{\alpha}\) the forecast of \(\alpha\)-level ES of \(y_{t}\).
To measure the fit of a volatility model to the return series, we also consider the Partial Predictive Score (PPS), which is one of the most commonly used metrics for evaluating predictive performance in statistical modeling. It measures the negative log-likelihood of observing the return series based on our volatility forecast. The model with the smallest PPS is preferred. The PPS score is defined as
\[\text{PPS}:=-\frac{1}{T_{\text{test}}}\sum_{D_{\text{test}}}\log p\left(y_{t}\mid y_{1:t-1}\right). \tag{20}\]
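A sketch of the three criteria is given below; the closed-form VaR and ES forecasts follow from the Gaussian assumption on \(\epsilon_{t}\), and `dens` denotes the one-step-ahead predictive densities \(p(y_{t}\mid y_{1:t-1})\), which in our setting come from the SMC output (the inputs and names here are illustrative).

```python
import numpy as np
from scipy.stats import norm

def tail_risk_metrics(y, sigma_hat, dens, alpha=0.01):
    """Quantile loss (18), AL joint loss (19) and PPS (20) on the test sample."""
    z = norm.ppf(alpha)
    Q = sigma_hat * z                          # alpha-level VaR forecast under normality
    ES = -sigma_hat * norm.pdf(z) / alpha      # alpha-level ES forecast (a negative number)
    qloss = np.sum((alpha - (y < Q)) * (y - Q))                          # eq (18)
    jloss = np.mean(-np.log((alpha - 1) / ES)
                    - (y - Q) * (alpha - (y <= Q)) / (alpha * ES))       # eq (19)
    pps = -np.mean(np.log(dens))                                         # eq (20)
    return {"Qloss": qloss, "JointLoss": jloss, "PPS": pps}
```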
Table 4 reports the quantile loss, joint loss for 1% and 5% VaR and ES forecast, and PPS score. The Count panel reports the number of indices where a model achieves the best predictive score. The results show that for most of the indices, the RealRECH model produces the best VaR and ES forecasts. RealRECH is also superior to its competitors in terms of overall fit to the return
series. It reports the lowest PPS on most return series and on average. Additionally, we find that for most of the metrics, the RECH model performs best on more indices than RealGARCH. In contrast, in the previous section, RealGARCH dominates RECH in terms of forecast error relative to ex-post proxies. This suggests that ranking conditional volatility forecasts by ex-post proxies can sometimes lead to undesirable outcomes, and that predictive performance should also be evaluated by economic loss.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & & Qloss\_1\% & JointLoss\_1\% & Qloss\_5\% & JointLoss\_5\% & PPS \\ \hline \multirow{4}{*}{AEX} & garch & 92.549 & 2.462 & 282.507 & 1.909 & 1.342 \\ & rech & 88.654 & 2.404 & 270.841 & 1.851 & 1.307 \\ & realgarch & 86.838 & 2.317 & 273.663 & 1.847 & 1.298 \\ & realrech & **84.982** & **2.279** & **267.902** & **1.815** & **1.288** \\ \hline \multirow{4}{*}{DJI} & garch & **80.691** & 2.340 & 250.738 & 1.767 & 1.175 \\ & rech & 81.502 & 2.243 & 247.145 & 1.706 & 1.135 \\ & realgarch & 82.620 & 2.299 & 242.673 & 1.712 & 1.142 \\ & realrech & 81.193 & **2.236** & **242.049** & **1.689** & **1.134** \\ \hline \multirow{4}{*}{GDAXI} & garch & 100.503 & 2.514 & 316.005 & 2.034 & 1.484 \\ & rech & **97.312** & **2.480** & **305.339** & **1.991** & **1.453** \\ & realgarch & 103.330 & 2.577 & 314.835 & 2.039 & 1.460 \\ & realrech & 100.372 & 2.557 & 309.042 & 2.016 & 1.453 \\ \hline \multirow{4}{*}{SPX} & garch & 83.164 & 2.424 & 253.502 & 1.803 & 1.192 \\ & rech & 81.716 & 2.327 & 247.041 & 1.735 & 1.149 \\ \cline{1-1} & realgarch & **80.571** & 2.311 & 244.235 & 1.728 & 1.136 \\ \cline{1-1} & realrech & 81.334 & **2.287** & **243.364** & **1.714** & **1.135** \\ \hline \multirow{4}{*}{Mean} & garch & 78.545 & 2.262 & 260.099 & 1.859 & 1.355 \\ & rech & 76.549 & 2.224 & 253.029 & 1.825 & 1.339 \\ \cline{1-1} & realgarch & 79.518 & 2.266 & 258.329 & 1.853 & 1.348 \\ \cline{1-1} & realrech & **76.141** & **2.223** & **248.735** & **1.812** & **1.336** \\ \hline \multirow{4}{*}{Count} & garch & 1 & 3 & 0 & 0 & 0 \\ \cline{1-1} & rech & 9 & 7 & 6 & 7 & 11 \\ \cline{1-1} & realgarch & 4 & 3 & 3 & 1 & 4 \\ \cline{1-1} & realrech & **17** & **18** & **22** & **23** & **16** \\ \hline \hline \end{tabular} _Note:_ Qloss\_1\%, JointLoss\_1\%, Qloss\_5\%, JointLoss\_5\% are the quantile loss and jointloss at 1% and 5% respectively. The last two panels report the average scores and the number of times a model has the best predictive scores across the 31 indices.
\end{table}
Table 4: Forecast performance: Tail risk forecast and Partial Predictive Score.
### Simulated option trading
Apart from risk management, options trading is also one of the most attractive applications of volatility forecasting. In a theoretical pricing model, volatility is the most difficult input for traders to predict among all the inputs required for option evaluation. At the same time, volatility often plays the most crucial role in actual trading decisions. Consider the inputs of a Black-Scholes model for European options:
1. The current price of the underlying security
2. The option's exercise price
3. The expiration time
4. The risk-free interest rate of the life of the option
5. The volatility of the underlying contract.
Volatility is the only unknown input here; hence the profitability of an options trader is greatly affected by their ability to forecast volatility. This section examines model performance based on its ability to price options correctly. We follow Engle et al., 1990 to design a hypothetical option market where each agent uses their volatility forecast and the Black-Scholes model to price options and trade with competing agents. The experiment is organized as follows:
1. An agent in the experiment trades options on a $1 share of the S&P500 index with an at-the-money exercise price ($1) and a 1-day expiration. The risk-free interest rate is set to zero.
2. Each agent \(M\) determines their call options price \[P_{t,M}=2\Phi\left(\frac{1}{2}\sigma_{t,M}\right)-1\] (21) given the volatility forecast \(\sigma_{t,M}^{2}\) and the Black-Scholes formula, with \(\Phi\) the standard normal cumulative distribution function.
3. The pair-wise trading then takes place between agents \(M_{1}\) and \(M_{2}\) at their predicted mid-price \(P_{t}\), \[P_{t}=(P_{t,M_{1}}+P_{t,M_{2}})/2.\] (22) Each agent either buys or sells a straddle (a combination of put and call options) and uses its variance forecast to determine the hedge ratio, \(\delta\). In our case, \(\delta_{\text{straddle}}=1-2\Phi\left(\frac{1}{2}\sigma_{t}\right)\). The intuition is that the agent with the higher volatility forecast will believe the straddle is underpriced from \(P_{t}\), thus buying the straddle from its counterpart and vice versa.
4. For each pair-wise trade, the daily profit of buying a straddle is then calculated as \[|r_{t}|-2P_{t}+r_{t}\left(1-2\Phi\left(\frac{1}{2}\sigma_{t,M}\right)\right),\] (23) and the daily profit of selling a straddle is \[2P_{t}-|r_{t}|-r_{t}\left(1-2\Phi\left(\frac{1}{2}\sigma_{t,M}\right)\right).\] (24) With a total of \(k\) agents, each agent conducts \(k-1\) trades per day. The daily sum of the trading profit is then divided by \(k-1\) and averaged throughout the testing period.
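The pair-wise trade described in steps 2 to 4 can be sketched as follows (a simplified illustration in which the agent with the higher volatility forecast is taken to be the buyer; each agent hedges with the ratio implied by its own forecast).

```python
from scipy.stats import norm

def straddle_profits(sigma_buyer, sigma_seller, r_t):
    """Daily profits of a pair-wise straddle trade, eqs (21)-(24)."""
    def price(s):                                      # eq (21); call price = put price here
        return 2 * norm.cdf(0.5 * s) - 1
    P_mid = 0.5 * (price(sigma_buyer) + price(sigma_seller))   # eq (22)
    def delta(s):                                      # straddle hedge ratio
        return 1 - 2 * norm.cdf(0.5 * s)
    buy = abs(r_t) - 2 * P_mid + r_t * delta(sigma_buyer)      # eq (23), buyer's profit
    sell = 2 * P_mid - abs(r_t) - r_t * delta(sigma_seller)    # eq (24), seller's profit
    return buy, sell
```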
Table 5 reports the daily and annual profit (in cents) and the Sharpe ratio of the options agents that use GARCH, RealGARCH, RECH and RealRECH as their forecast models. The simulations are repeated on the 31 stock indices under the following four scenarios:
1. All agents trade in the market.
2. Only RealGARCH, RECH, and RealRECH agents trade in the market.
3. Only RealGARCH and RealRECH agents trade against each other.
4. Only RECH and RealRECH agents trade against each other.
In scenario (1), where all agents trade against each other, the RealRECH agent generates the highest profit and Sharpe ratio for most stock indices and on average. Scenarios (2) to (4) further illustrate the consistent profitability of the RealRECH agent against its two direct ancestors,
RealGARCH and RECH, in our hypothetical market. A surprising finding is that the RealGARCH performs almost as badly as the GARCH model.
### Statistical significance
The previous sections show that RealRECH outperforms the competing models in terms of in-sample fit, forecasting error, tail risk forecast and option pricing. This section tests whether these improvements are statistically significant using the Model Confidence Set (MCS) introduced by
\begin{table}
\end{table}
Table 5: Simulated option trading: daily and annual profit (Ret., in cents) and Sharpe ratio of the GARCH, RECH, RealGARCH and RealRECH agents under the four trading scenarios.
Hansen et al., 2011. Let \(\mathcal{M}\) be a set of competing models. A set of superior models (SSM) is established under the MCS procedure, which consists of a series of equal predictive accuracy tests given a specific confidence level. Let \(L_{i,t}\) be a performance loss, such as the MSE or quantile loss, incurred by model \(i\in\mathcal{M}\) at time \(t\). Define \(d_{i,j,t}=L_{i,t}-L_{j,t}\) to be the relative loss of model \(i\) compared to model \(j\) at time \(t\). The MCS test assumes that \(d_{i,j,t}\) is a stationary time series for all \(i,j\) in \(\mathcal{M}\), i.e., \(\mu_{i,j}=\mathbb{E}(d_{i,j,t})\) for all \(t\). By testing the equality of the expected loss difference \(\mu_{i,j}\), MCS determines if all models have the same level of predictive accuracy. The null hypothesis is
\[H_{0}:\mu_{i,j}=0,\quad\text{for all }i,j\in\mathcal{M}. \tag{25}\]
A model is eliminated when the null hypothesis \(H_{0}\) of equal forecasting ability is rejected. The collection of models for which the null hypothesis \(H_{0}\) is not rejected is then defined as the SSM. For each model \(i\in\mathcal{M}\), the MCS procedure produces a \(p\)-value \(p_{i}\). The lower the \(p\)-value of a model, the less likely it is to be included in the SSM. See Hansen et al., 2011 for more details.
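As a simplified illustration of the statistics underlying the procedure, the sketch below computes the pairwise relative-loss \(t\)-statistics; the bootstrap distribution of the test statistic and the sequential elimination rule of Hansen et al., 2011, which produce the MCS \(p\)-values, are omitted.

```python
import numpy as np

def mcs_t_statistics(losses):
    """Pairwise t-statistics of the relative losses d_{i,j,t} = L_{i,t} - L_{j,t};
    losses[t, i] is the loss of model i at time t."""
    T, m = losses.shape
    tstat = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j:
                d = losses[:, i] - losses[:, j]
                tstat[i, j] = d.mean() / (d.std(ddof=1) / np.sqrt(T))
    return tstat   # a large max_j tstat[i, j] flags model i as a candidate for elimination
```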
Table 6 reports the model confidence sets computed for all the predictive scores. For each model, we report the total number of times, across the 31 indices, that the model is included in the MCS and its average \(p\)-value (in parentheses). We note that a small \(p\)-value indicates that the model is unlikely to be the best model (Hansen et al., 2011). At a 75% confidence level, across the 8 predictive scores, the RealRECH model is included in the SSM for 24 indices on average and attains the highest \(p\)-value of 1 for 20 indices. The results show that RealRECH is the model most likely to be included in the SSM and to be statistically significantly superior to the other models in all the considered predictive metrics: forecasting error (MSE and MAD), fit to the return series (PPS), tail risk forecast (quantile loss and joint loss) and option pricing.
## 5 Conclusions
The paper proposes a new version of the RECH model with two novel features: incorporating a realized volatility measure that is directly linked to conditional volatility, and employing a more robust LSTM RNN architecture. Bayesian inference and forecasting are performed using Sequential
Monte Carlo with likelihood and data annealing. The RealRECH model is evaluated on 31 major world stock markets, demonstrating improved performance in statistical criteria (in-sample fit, forecast error to realized measures) and economic criteria (tail risk forecast and option pricing) compared to standard GARCH, RECH, and RealGARCH models.
Our proposed framework could be extended in several ways. First, it is possible to include multiple realized volatility measures since Hansen et al., 2016 discovered that including multiple realized volatility measures evidently improves both the in-sample and out-of-sample fit. The multiple measurement equations then could be replaced by a single LSTM with multiple outputs which would allow us to capture the non-linear relationship between conditional volatility and realized volatility measures as well as the interactions between the realized volatility measures. Second, incorporating financial news into the input of RealRECH is an interesting topic to study, as news has been shown to have a major influence on volatility movement (Atkins et al., 2018, Xing et al., 2019, Rahimikia et al., 2021). Lastly, incorporating transfer learning would be a natural extension. Transfer learning can help mitigate the problem of insufficient training data and enable
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & GARCH & RECH & RealGARCH & RealRECH \\ \hline \multirow{2}{*}{MSE} & 1 & 5 & 18 & 24 \\ & (0.01) & (0.10) & (0.48) & (0.67) \\ \hline \multirow{2}{*}{MAD} & 1 & 4 & 11 & 26 \\ & (0.02) & (0.09) & (0.27) & (0.84) \\ \hline \multirow{2}{*}{Qloss\_1\%} & 9 & 20 & 12 & 22 \\ & (0.16) & (0.51) & (0.25) & (0.65) \\ \hline \multirow{2}{*}{Qloss\_5\%} & 1 & 12 & 6 & 25 \\ & (0.01) & (0.30) & (0.15) & (0.80) \\ \hline \multirow{2}{*}{JointLoss\_1\%} & 12 & 16 & 14 & 24 \\ & (0.25) & (0.40) & (0.30) & (0.70) \\ \hline \multirow{2}{*}{JointLoss\_5\%} & 2 & 13 & 6 & 25 \\ & (0.03) & (0.31) & (0.12) & (0.80) \\ \hline \multirow{2}{*}{PPS} & 7 & 16 & 10 & 22 \\ & (0.13) & (0.43) & (0.24) & (0.63) \\ \hline \multirow{2}{*}{OptTrading} & 5 & 13 & 6 & 22 \\ & (0.10) & (0.39) & (0.14) & (0.70) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Statistical significance: The number of times a model is included in the set of superior models and the average \(p\)-value (in parentheses), across the 31 indices.
us to use a large number of pre-trained deep learning models and traditional financial econometric models for volatility modeling. |
2302.07331 | Shannon information entropy, soliton clusters and Bose-Einstein
condensation in log gravity | We give a probabilistic interpretation of the configurational partition
function of the logarithmic sector of critical cosmological topologically
massive gravity, in which the Hurwitz numbers considered in our previous works
assume the role of probabilities in a distribution on cycles of permutations.
In particular, it is shown that the permutations are distributed according to
the Ewens sampling formula which plays a major role in the theory of partition
structures and their applications to diffusive processes of fragmentation, and
in random trees. This new probabilistic result together with the previously
established evidence of solitons in the theory provide new insights on the
instability originally observed in the theory. We argue that the unstable
propagation of a seed soliton at single particle level induces the generation
of fragments of defect soliton clusters with rooted tree configuration at
multiparticle level, providing a disordered landscape. The Shannon information
entropy of the probability distribution is then introduced as a measure of the
evolution of the unstable soliton clusters generated. Finally, based on
Feynman's path integral formalism on permutation symmetry in the
$\lambda$-transition of liquid helium, we argue that the existence of
permutation cycles in the configurational log partition function indicates the
presence of Bose-Einstein condensates in log gravity. | Yannick Mvondo-She | 2023-02-14T20:24:31Z | http://arxiv.org/abs/2302.07331v2 | # Shannon information entropy, soliton clusters and Bose-Einstein condensation in log gravity
###### Abstract
We give a probabilistic interpretation of the configurational partition function of the logarithmic sector of critical cosmological topologically massive gravity, in which the Hurwitz numbers considered in our previous works assume the role of probabilities in a distribution on cycles of permutations. In particular, it is shown that the permutations are distributed according to the Ewens sampling formula which plays a major role in the theory of partition structures and their applications to diffusive processes of fragmentation, and in random trees. This new probabilistic result together with the previously established evidence of solitons in the theory provide new insights on the instability originally observed in the theory. We argue that the unstable propagation of a seed soliton at single particle level induces the generation of fragments of defect soliton clusters with rooted tree configuration at multiparticle level, providing a disordered landscape. The Shannon information entropy of the probability distribution is then introduced as a measure of the evolution of the unstable soliton clusters generated. Finally, based on Feynman's path integral formalism on permutation symmetry in the \(\lambda\)-transition of liquid helium, we argue that the existence of permutation cycles in the configurational log partition function indicates the presence of Bose-Einstein condensates in log gravity.
###### Contents
* 1 Introduction
* 2 The log partition function
* 3 Shannon information entropy as a measure of the evolution of the unstable defect soliton clusters
* 4 Bose-Einstein condensation in configuration space
* 5 Summary
## 1 Introduction
Cosmological topologically massive gravity at the critical point (CCTMG) was first considered by Grumiller and Johansson a decade and a half ago [1]. Under a more relaxed class of boundary conditions than the standard Brown-Henneaux ones [2], this three-dimensional gravity theory in anti-de Sitter (AdS) spacetime with a negative cosmological constant and a gravitational Chern-Simons term features a new mode, the logarithmic primary which spoils the unitarity of the theory. Rather than rendering it inconsistent,
the additional mode brings new and interesting perspectives into the theory. Firstly, within the AdS/CFT framework, the appearance of the log mode at the chiral point was used to provide a conjecture for a theory of gravity holographically dual to a logarithmic conformal field theory, a type of conformal field theory useful in describing various systems, such as two-dimensional turbulence, critical polymers, percolation, and systems with (quenched) disorder [3]. Two important characteristics of LCFTs are that on one hand they exhibit a non-diagonalizable Hamiltonian due to the degeneracy of certain operators that together with their so-called logarithmic partners decompose the operator spectrum into Jordan cells, and on the other hand they display logarithmic singularities in the correlation functions. The manifestation of these hallmarks of non-unitary conformal field theories in gravity theories at the chiral point led to the conjecture of a more recent version of a class of dualities called the AdS\({}_{3}\)/LCFT\({}_{2}\) correspondence, and to such gravity theories being coined log gravity [4].
A significant result was obtained in the calculation of the one-loop partition function, which was found to agree with the partition function of an LCFT up to the single-particle level [5]. Subsequently, the logarithmic contribution to the partition function (also called the log partition function) became a stimulating subject of study, and led to several interesting results [6, 7, 8, 9].
The interpretation of the partition function from the point of view of solitons of integrable systems offers new explorable avenues in our attempt to better understand the log sector. On one hand, the reformulation of the log partition function as a \(\tau\)-function of the KP I integrable solitonic hierarchy describing solitons which are unstable with respect to transverse perturbations echoes the unstable aspect of CCTMG originally considered in [1]. Precisely, it brings to light the instability of solitonic structures in the theory, manifested by the appearance of a disordered ensemble of solitons. On the other hand, the log partition function was also expressed in terms of the set of infinitely many symmetries of the Burgers equation, called the Burgers hierarchy. The Burgers equation is the simplest partial differential equation that combines nonlinear propagation effects with diffusive effects. Of considerable importance in the description of weak shock phenomena in gas dynamics, it can be linearized using a direct coordinate transformation to the diffusion equation. From this linearization, an infinite number of symmetries of the Burgers equation emerges, leading to the definition of the Burgers hierarchy, which is also related to the \(\tau\)-function of the KP hierarchy [10, 11]. The integrability properties of the aforementioned hierarchies displayed in the log partition function therefore suggest the occurrence of an unstable solitonic propagation of diffusive nature in the theory. Described as a generating function of polynomials for random mappings from \(\mathbb{CP}^{1}\) to \(\mathbb{CP}^{1}\) related to integrable hierarchies, the log partition function was also defined as a multinomial expansion over rooted trees where subtrees cluster around a root according to a well-constructed coalescent process. These results indicate a solitonic cluster structure of collective excitations in the theory. The evolution of the soliton clusters formed and the associated phenomena constitute the subject of this paper.
In section 2, we motivate our probabilistic study of the log partition function by showing how it naturally arises from its generation of cycle structures of permutations. The relation between permutations and integer partitions is characterized by the Hurwitz numbers, which now appear as probabilities in a probability distribution on permutations and partitions. We point out a remarkable relationship between the Hurwitz numbers and the Ewens sampling formula [12, 13], well known in population genetics as a one-parameter family of probability distributions on the set of all partitions of an integer \(n\). More precisely, it is the probability distribution of the number of different types of genes and their frequencies under neutral evolution, i.e. in the absence of selection. Also known among probabilists and statisticians as the multivariate Ewens distribution, the Ewens sampling formula finds applications in a very broad range of areas of physical and mathematical sciences such as the calculation of multiparticle Veneziano amplitudes [14], the unified neutral theory of biodiversity [15], nonparametric Bayesian inference [16, 17], combinatorial stochastic processes [18, 19], Macdonald polynomials [20], algebraic number theory [21, 22], and even Faa di Bruno's formula for the derivative of a composite function [23], whose Bell polynomial avatar became the premise of our work on the log partition function. A characteristic of the Ewens sampling formula is its description of theories in which dynamics are diffusive in nature. The evolution of the diffusive dynamics captured by the log partition function encoding the elegant Ewens distribution concurs with the representation of the partition function in terms of the Burgers hierarchy, and goes in the direction of the original observation of the unstable nature of log gravity. Very importantly, the Ewens sampling formula is also well known to govern the distribution on partitions in random processes of fragmentation and coagulation [24]. Furthermore, it is known that several models of random fragmentation involve structures such as random trees and forests [25]. From these findings, our combinatorial construction in [9] finds a natural interpretation as a tree representation of fragmentations. We argue that the unstable propagation of the soliton formed by the collection of fields at the single particle level \(n=1\) gives rise to fragments of soliton clusters at the multiparticle levels \(n\geq 2\). The
soliton clusters are rooted trees partitioned into a collection of subtrees, and the coalescing (coagulating) subtrees are solitons pinned by point defects represented by the roots as the pinning sites. The unstable defect soliton clusters thus provide a disordered landscape whose random distribution is used in the next section to study the evolution of the fragmentation.
In section 3, having established a connection between Hurwitz numbers and the (Ewens) probability distribution, we use the Shannon information entropy as a measure of the evolution of the solitonic fragmentation in the theory. Since the inception of information theory, the Shannon information entropy [26] has proved to be a considerably useful tool in quantifying the average information of a process in which outcomes have corresponding probabilities that add up to a distribution. At the junction between several fields [27], it has been used in various forms. For instance, a particular realization of Shannon information entropy that provides an information measure of spatially localized physical configurations has been largely adopted in high energy physics under the name of configuration entropy, and used in connection with entropic measures of nonlinear scalar field models with spatially-localized energy solutions that include solitons and bounces in one spatial dimension and critical bubbles in three spatial dimensions [28], AdS-Schwarzschild black holes [29], the graviton Bose-Einstein condensate [30], the AdS/QCD correspondence [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44], as well as in the analysis of Korteweg-de Vries solitons in the quark-gluon plasma [45]. For applications of Shannon information entropy in other areas of science, including the analysis of heavy-ion collisions, see for instance [46]. Here, we use Shannon information entropy as a measure of the disorder (or randomness) manifested in the generation of the unstable defect soliton clusters. Our information-theoretic formulation is based on describing the statistics of the fragmentation evolution by constructing a function that measures the difference between the axiomatic maximum entropy and the actual Shannon entropy at each level \(n\). The motivation for deriving such a function is that the axiomatic maximum entropy value and the actual Shannon entropy at each level \(n\) both increase with \(n\), but at different rates. It therefore becomes useful to track the gap between the two values by taking their difference. As the latter is shown to increase with \(n\), we find an interesting way of quantifying the decaying evolution of the unstable solitonic fragmentation.
In section 4, we take our probabilistic perspective of the log partition function a bit further, and discuss the appearance of a Bose-Einstein condensation in log gravity. Our motivation comes from Feynman's statistical model of collective excitations in systems of bosons, that originates from his study of the \(\lambda\)-transition in liquid helium [47]. Feynman's formulation of Bose-Einstein condensation hinges on the idea of adapting his path-integral approach to quantum mechanics based on the quantum partition function into a statistical mechanics problem, resulting in the description of Bose systems in terms of large ensembles of cycles of various lengths, within which a random number of particles is spatially arranged according to all possible permutations of \(n\)-particles. In the context of log gravity, the interpretation is that at the critical point, the dynamics of soliton propagation is reflected by the emergence of fractions (fragments) of the \(n\)-particles as condensates, arranged in terms of Feynman graphs. The random permutations take place in a rooted tree configuration space, and a dynamical pinning mechanism arises at the root which represents the pinning site, leading to the formation of defect solitonic states. These solitons pinned by a point defect (the root) are the structures that realize the Bose-Einstein condensates.
In section 5, we finally summarize our results.
## 2 The log partition function
The 1-loop partition function of log gravity originally calculated with the form [5]
\[Z_{\rm CCTMG}(q,\bar{q})=\prod_{n=2}^{\infty}\frac{1}{|1-q^{n}|^{2}}\prod_{m= 2}^{\infty}\prod_{\bar{m}=0}^{\infty}\frac{1}{1-q^{m}\bar{q}^{\bar{m}}}, \hskip 28.452756pt\mbox{with}\hskip 5.690551pt\mbox{q}={\rm e}^{2{\rm i}\pi \tau}.\bar{\mbox{q}}={\rm e}^{-2{\rm i}\pi\bar{\tau}}. \tag{1}\]
From the identification of the first product as the three-dimensional gravity partition function \(Z_{0,1}\) in [48], we have the convention
\[Z_{\rm CCTMG}(q,\bar{q})=Z_{\rm gravity}(q,\bar{q})\cdot Z_{\rm log}(q,\bar{q}), \tag{2}\]
with contributions
\[Z_{\text{gravity}}(q,\bar{q})=\prod_{n=2}^{\infty}\frac{1}{|1-q^{n}|^{2}},\quad \text{ and }\quad Z_{\text{log}}(q,\bar{q})=\prod_{m=2}^{\infty}\prod_{\bar{m}=0}^{ \infty}\frac{1}{1-q^{m}\bar{q}^{\bar{m}}}. \tag{3}\]
The log contribution of the CCTMG partition function is redefined in terms of permutation cycles by considering the relation between permutations and integer partitions as follows. We first recall that if a positive integer \(n\) is written as the sum of positive integers
\[n=\underbrace{1+\cdots+1}_{j_{1}\text{ times}}+\underbrace{2+\cdots+2}_{j_{2} \text{ times}}+\cdots+\underbrace{n}_{j_{n}\text{ times}}, \tag{4}\]
then any partition in the set of all partitions \(\mathcal{P}_{n}\) of \(n\) is determined by a sequence \(\left(j\right)_{k=1}^{n}\) of positive integers \(j_{k}\) called occupation numbers or cycle counts of the partition which satisfy the constraint
\[\sum_{k=1}^{n}kj_{k}=n. \tag{5}\]
Hence, a partition of a given integer \(n\) is simply an expression of \(n\) in terms of a sum of positive integers.
A permutation with exactly \(j_{k}\) cycles of length \(k\) turns out to be of the same type as the sequence \(\left(j\right)_{k=1}^{n}\) of cycle counts that determines a given partition. As a result, each partition corresponds to a conjugacy class of permutations whose number of elements is
\[N(n,j)=\frac{n!}{\prod_{k=1}^{n}j_{k}!(k)^{j_{k}}}. \tag{6}\]
Under the rescaling of variables with the coordinate sequence \(\left(\mathcal{G}_{k}\right)_{k=1}^{n}\) as
\[\mathcal{G}_{k}\left(q,\bar{q}\right)=\frac{1}{|1-q^{k}|^{2}}, \tag{7}\]
the cycle decomposition of any permutation \(\pi\) in the symmetric group \(S_{n}\) of all permutations on integers \(1,\ldots,n\) is generated by the partition function according to the (formal) power series expansion
\[Z_{log}\left(\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\right)=\exp\left(\sum_{k=1 }^{\infty}\frac{\mathcal{G}_{k}\left(q^{2}\right)^{k}}{k}\right)=1+\sum_{n=1} ^{\infty}\frac{1}{n!}\left(\sum_{\pi\in S_{n}}\prod_{k=1}^{n}\mathcal{G}_{k}^ {j_{k}}\right)\left(q^{2}\right)^{n}. \tag{8}\]
The log partition function is furthermore expressible as a generating function of Hurwitz numbers. The Hurwitz enumeration problem, which arose in the theory of Riemann surfaces [49], consists in counting the number of ways a given permutation can be written as a product of a minimal number of transpositions that generate the full symmetric group. The outcome of the counting is a numerical sequence known as Hurwitz numbers. Also expressible as a count of ramified coverings of Riemann surfaces, in our case, the Hurwitz numbers count disconnected \(n\)-fold coverings of \(\mathbb{CP}^{1}\) by itself. Starting from the multinomial coefficient \(N(n,j)\), the log partition function reads
\[Z_{log}\left(\mathcal{G}_{1},\ldots,\mathcal{G}_{n}\right) =1+\sum_{n=1}^{\infty}\frac{1}{n!}\left(\sum_{\begin{subarray}{ c}\sum_{k=1}^{n}kj_{k}=n\\ n\geq 1\\ j_{k}\geq 0\end{subarray}}N(n,j)\prod_{k=1}^{n}\mathcal{G}_{k}^{j_{k}} \right)\left(q^{2}\right)^{n} \tag{9a}\] \[=1+\sum_{n=1}^{\infty}\frac{1}{n!}\left(\sum_{\begin{subarray}{ c}\sum_{k=1}^{n}kj_{k}=n\\ n\geq 0\end{subarray}}\frac{n!}{\prod_{k=1}^{n}j_{k}!(k)^{j_{k}}}\prod_{k=1}^{n} \mathcal{G}_{k}^{j_{k}}\right)\left(q^{2}\right)^{n} \tag{9b}\]
\[=1+\sum_{n=1}^{\infty}\left(\sum_{\begin{subarray}{c}\sum_{k=1}^{n}k_{ jk}=n\\ n\geq 1\\ j_{k}\geq 0\end{subarray}}\frac{1}{\prod_{k=1}^{n}j_{k}!(k)^{j_{k}}}\prod_{k=1}^{n} \mathcal{G}_{k}^{j_{k}}\right)\left(q^{2}\right)^{n} \tag{10a}\] \[=1+\sum_{n=1}^{\infty}\left(\sum_{\begin{subarray}{c}\sum_{k=1}^{ n}k_{jk}=n\\ n\geq 1\\ j_{k}\geq 0\end{subarray}}\left[H_{0\xrightarrow{n}0}^{\bullet}\left(\left([1]^{j_{1 }},[2]^{j_{2}},\ldots\right),\left([1]^{j_{1}},[2]^{j_{2}},\ldots\right)\right) \right]\prod_{k=1}^{n}\mathcal{G}_{k}^{j_{k}}\right)\left(q^{2}\right)^{n}. \tag{10b}\]
and under the constraint (5) the disconnected Hurwitz number expression takes the form
\[H_{0\xrightarrow{n}0}^{\bullet}\left(\left([1]^{j_{1}},[2]^{j_{2}},\ldots \right),\left([1]^{j_{1}},[2]^{j_{2}},\ldots\right)\right)=\prod_{k=1}^{n} \frac{1}{j_{k}!(k)^{j_{k}}}, \tag{11}\]
where the \(\left([1]^{j_{1}},\ldots,[k]^{j_{k}}\right)\) associated to the monomials \(\prod_{k=1}^{n}\mathcal{G}_{k}^{j_{k}}\) are such that \([k]^{j_{k}}=\overbrace{k,\ldots k}^{j_{k}\text{ times}}\). The partition function can also be expressed in terms of rooted trees as
\[Z_{log}\left(l_{1},\ldots,l_{n}\right)=1+\sum_{n=1}^{\infty}\left(\sum_{ \begin{subarray}{c}\sum_{k=1}^{n}k_{jk}=n\\ n\geq 1\\ j_{k}\geq 0\end{subarray}}\left[H_{0\xrightarrow{n}0}^{\bullet}\left(\left([1]^ {j_{1}},[2]^{j_{2}},\ldots\right),\left([1]^{j_{1}},[2]^{j_{2}},\ldots\right) \right)\right]B_{+}\left(\prod_{k=1}^{n}l_{k}^{j_{k}}\right)\right)\left(q^{2 }\right)^{n}, \tag{12}\]
in terms of the same disconnected Hurwitz numbers \(H_{0\xrightarrow{n}0}^{\bullet}\left(\left([1]^{j_{1}},[2]^{j_{2}},\ldots \right),\left([1]^{j_{1}},[2]^{j_{2}},\ldots\right)\right)\) but this time, the \(\left([1]^{j_{1}},\ldots,[k]^{j_{k}}\right)\) are associated to the trees \(B_{+}\left(\prod_{k=1}^{n}l_{k}^{j_{k}}\right)\).
Let us look at an example and consider the level \(n=4\). The integer \(4\) can be written as
\[1*1*1*1, \tag{13a}\] \[1+1+2,\] (13b) \[1+3,\] (13c) \[2+2,\] (13d) \[4, \tag{13e}\]
with as corresponding partitions the sequences
\[(4,0,0,0), \tag{14a}\] \[(2,1,0,0),\] (14b) \[(1,0,1,0),\] (14c) \[(0,2,0,0),\] (14d) \[(0,0,0,1), \tag{14e}\]
respectively, and the Hurwitz numbers take the general form
\[\frac{1}{j_{1}!j_{2}!j_{3}!j_{4}!\,(1)^{j_{1}}\left(2\right)^{j_{2}}\left(3 \right)^{j_{3}}\left(4\right)^{j_{4}}}. \tag{15}\]
Table 1 gives the Hurwitz numbers together with their associated \(\mathcal{G}_{k}\)-monomial products and rooted trees.
We would like at this stage to point out an interesting probabilistic interpretation of the above results. Via the multinomial coefficient \(N(n,j)\), the log partition function actually shows an equivalence between random mappings specified by disconnected \(n\)-coverings and expansions over rooted trees. It turns out that these random combinatorial objects are built around probability distributions where Hurwitz numbers can play a central role. The probabilistic structure of these combinatorial objects associated to Hurwitz multinomial expansions have been addressed in Pitman's elegant work [50, 51, 52, 53], as well as in [54] in connection with integrable hierarchies. Related to our work, we highlight a remarkable connection between the Hurwits numbers and the Ewens sampling formula. In 1972, Ewens [12] derived a probability distribution for vectors of the type \(j=\left(j_{k}\right)_{k=1}^{n}\), where \(j_{k}\) counts the number of selectively neutral alleles represented \(k\) times in a sample of \(n\) genes taken from a large population. Equivalently, the Ewens sampling formula can be reformulated as the distribution of the cycle counts of a permutation \(\pi\in S_{n}\), decomposed as a product of cycles. According to [55], for the uniform and random choice of a given permutation \(\pi\in S_{n}\), the distribution of the cycle counts is given by
\[\mathbb{P}\left[j=\left(j_{k}\right)_{k=1}^{n}\right]=\mathbb{1}\left\{\sum_ {k=1}^{n}kj_{k}=n\right\}\prod_{k=1}^{n}\frac{1}{j_{k}!(k)^{j_{k}}} \tag{16}\]
where the indicator \(\mathbb{1}\left\{\cdot\right\}\) is defined as
\[\mathbb{1}\left\{A\right\}=\left\{\begin{array}{ll}1&\text{if $A$ is true,}\\ 0&\text{otherwise.}\end{array}\right. \tag{17}\]
The probability distribution (16) is referred to as the Ewens Sampling Formula with parameter \(\theta=1\). The Ewens Sampling formula with parameter \(\theta\geq 0\) reads
\[\mathbb{P}\left[j=\left(j_{1},\ldots,j_{n}\right)\right]=\frac{n!}{\theta \left(\theta+1\right)\cdots\left(\theta+n-1\right)}\mathbb{1}\left\{\sum_{k=1} ^{n}kj_{k}=n\right\}\prod_{k=1}^{n}\frac{\theta}{j_{k}!(k)^{j_{k}}}. \tag{18}\]
The Ewens sampling formula is known to govern the distribution on partitions of \(\left\{1,\ldots,n\right\}\) for certain partition-valued fragmentation processes [56, 24, 57]. It is also known that the models under which the Ewens sampling formula holds exhibit a diffusion process [58].
We argue that the unstable propagation of the seed soliton composed of the collection of fields at the single particle level \(n=1\) and represented by \(\mathcal{G}_{1}\) in the log partition function, gives rise to a physical process
\begin{table}
\begin{tabular}{|c|c|c|} \hline Hurwitz numbers & \(\mathcal{G}_{k}\)-monomial products & Rooted trees \\ \hline \hline \(H_{0\xrightarrow{\ast}0}^{\bullet}\left(\left(1,1,1,1\right),\left(1,1,1,1 \right)\right)=\frac{1}{24}\) & \(\mathcal{G}_{1}^{4}\) & \\ \hline \(H_{0\xrightarrow{\ast}0}^{\bullet}\left(\left(1,1,2\right),\left(1,1,2\right) \right)=\frac{1}{4}\) & \(\mathcal{G}_{1}^{2}\mathcal{G}_{2}\) & \\ \hline \(H_{0\xrightarrow{\ast}0}^{\bullet}\left(\left(1,3\right),\left(1,3\right) \right)=\frac{1}{3}\) & \(\mathcal{G}_{1}\mathcal{G}_{3}\) & \\ \hline \(H_{0\xrightarrow{\ast}0}^{\bullet}\left(\left(2,2\right),\left(2,2\right) \right)=\frac{1}{8}\) & \(\mathcal{G}_{2}^{2}\mathcal{G}_{2}^{2}\) & \\ \hline \(H_{0\xrightarrow{\ast}0}^{\bullet}\left(\left(4\right),\left(4\right)\right)= \frac{1}{4}\) & \(\mathcal{G}_{4}\) & \\ \hline \end{tabular}
\end{table}
Table 1: \(n=4\) Hurwitz numbers, associated \(\mathcal{G}_{k}\)-monomial products and rooted trees.
of fragmentation, in which soliton clusters are produced at the multiparticle levels \(n\geq 2\). The fragmented soliton clusters are realized as rooted trees partitioned into a collection of subtrees, and the juxtaposed subtrees are solitons pinned by point defects represented by the roots as the pinning sites. Ultimately, the unstable defect soliton clusters provide a disordered landscape whose distribution is used in the next section to study the evolution of the unstable fragmentation process.
## 3 Shannon information entropy as a measure of the evolution of the unstable defect soliton clusters
Shannon's logarithmic nature of the measure of information proceeds from his 1948 celebrated paper [26]. As a statistical entropy, Shannon information entropy is a logarithmic measure of the average information content in a collection of probabilities constrained in a distribution. In other words, it is a measure of disorder, or chaoticity, or ignorance about a certain data, in the form of uncertainty. Due to its applied flavor, Shannon information entropy finds relevance in the theory of dynamical systems, and can be used to study the evolution of a quantity or of a process. In our case, we use Shannon information entropy to assess the evolution of the disorder or randomness arising in the generation of defect soliton clusters as \(n\) grows.
Let
\[\mathbb{D}_{n}=\left\{\left(p_{1},p_{2},\ldots,p_{n}\right)|p_{k}\geq 0,0\leq k \leq n,\sum_{k=1}^{n}p_{k}=1\right\},\hskip 28.452756ptn\geq 1 \tag{19}\]
denote the set of \(n\)-component discrete probability distributions. The Shannon entropy \(\mathcal{H}_{n}:\mathbb{D}_{n}\mapsto\mathbb{R}\) is defined by
\[\mathcal{H}_{n}\left(p_{1},p_{2},\ldots,p_{n}\right)=-\sum_{k=1}^{n}p_{k}\log _{2}p_{k},\hskip 28.452756ptn\geq 1, \tag{20}\]
with by convention \(0\log_{2}0=0\). Two extreme scenarios are presented below.
Suppose first, that
\[p_{k}=\left\{\begin{array}{ll}1&\mbox{ for }\ k=1,\\ 0&\mbox{ for }\ 2\leq k\leq n.\end{array}\right. \tag{21}\]
In information-theoretic language, it means that there is complete certainty about the occurrence of an event. In such case, the Shannon entropy \(\mathcal{H}_{n}\left(1,0,0,0,\ldots\right)\) is just zero.
On the other hand, suppose that
\[p_{k}=\frac{1}{n}\hskip 14.226378pt\mbox{ for }\ 1\leq k\leq n. \tag{22}\]
In this case, the events \(1,\ldots,n\) have equal likelihood, and the entropy (uncertainty) \(\mathcal{H}\left(p_{1},\ldots,P_{n}\right)=\mathcal{H}\left(\frac{1}{n},\ldots,\frac{1}{n}\right)\) is maximized over the probability vectors \(p_{1},\ldots,p_{n}\) of length \(n\).
The above extreme cases arise at the \(n=1\) (single) particle and at the \(n=2\) multi particle levels, respectively. At the single particle level, there is complete certainty about the state of the system, and the entropy is null. At the \(n=2\) multi particle level, the amount of uncertainty about the state of the system is equally distributed among the microstates of the 2-particle system, and the entropy has reached its largest possible value. The latter case features the connection between entropy and symmetry. Indeed, up to permutation of \(p_{1}\) and \(p_{2}\), the symmetry of the distribution is high since the probabilities are identical, and indicates the indistinguishability of the system that corresponds to a total loss of information, hence to a state of maximum entropy in the \(n=2\) multi-particle system.
Between the two aforementioned extreme cases, the multiparticle levels \(n\geq 3\) have distributions for which not all probabilities \(p_{k}\) are identical. This is an indication that the Shannon entropy is no longer maximum from \(n\geq 3\). We would like to assess the change in entropy, in a way that tells us something about the evolution of the diffusive instability. A naive option would be to just calculate the Shannon entropy at
each level \(n\). In so doing, we immediately notice that the Shannon entropy of the distribution increases with \(n\). However, this would not teach us much, as the maximum entropy also grows with \(n\), but at a different rate than the entropy from the actual distribution in the system. To have a good indicator of the instability evolution, we rather derive a function that measures the difference between maximum and actual entropies at each level \(n\). We derive it as follows.
We first recall that given the set \(\mathcal{P}_{n}\) of all partitions of \(n\), the Ewens sampling Formula \(\mathbb{P}\) with parameter \(\theta=1\) is a probability distribution obtained by sampling uniformly over a partition \(1^{j_{1}}2^{j_{2}}\cdots n^{j_{n}}\) of \(n\). This means that the space of probabilities \(p_{k}\) we need to consider is restricted by
\[1\leq p_{k}\leq\mathsf{p}(n), \tag{23}\]
where \(\mathsf{p}(n)\) is the number of partitions of \(n\).
The expression of the maximum entropy at level \(n\) becomes
\[\mathcal{H}_{n}^{\max}\left[(p_{k})_{k=1}^{\mathsf{p}(n)}\right]=\mathcal{H} \left(\frac{1}{\mathsf{p}(n)},\ldots,\frac{1}{\mathsf{p}(n)}\right)=\log_{2} \left[\mathsf{p}(n)\right]. \tag{24}\]
The function \(\mathcal{H}_{n}^{\Delta}\left[(p_{k})_{k\in\mathcal{P}_{n}}\right]\) which measure the difference between maximum and actual entropies at each level \(n\geq 2\) takes the form
\[\mathcal{H}_{n}^{\Delta}\left[(p_{k})_{k\in\mathcal{P}_{n}}\right] =\mathcal{H}\left(\frac{1}{\mathsf{p}(n)},\ldots,\frac{1}{\mathsf{ p}(n)}\right)-\mathcal{H}_{n}\left[(p_{k})_{k=1}^{\mathsf{p}(n)}\ |\ p_{k}\in\mathbb{P}\right] \tag{25a}\] \[=\log_{2}\left[\mathsf{p}(n)\right]-\left(-\sum_{p_{k}\in\mathbb{ P}}p_{k}\log_{2}p_{k}\right)\] (25b) \[=\log_{2}\left[\mathsf{p}(n)\right]+\log_{2}\left(\prod_{p_{k}\in \mathbb{P}}p_{k}^{p_{k}}\right)\] (25c) \[=\log_{2}\left(\mathsf{p}(n)\cdot\prod_{p_{k}\in\mathbb{P}}p_{k}^{p_{k}} \right),\hskip 28.452756ptn\geq 2. \tag{25d}\]
Table 2 gives an example of the computations of \(\mathcal{H}_{n}^{\Delta}\left[(p_{k})_{k\in\mathcal{P}_{n}}\right]\) for \(n=2.3.4\).
The null value of \(\mathcal{H}_{n}^{\Delta}\left[(p_{k})_{k\in\mathcal{P}_{n}}\right]\) at \(n=2\) indicates that the entropy is maximum at that level. In other words, the disorder associated with the fragmentation of the seed soliton whose collective behavior is represented by \(\mathcal{G}_{1}\) in the partition function into unstable defect soliton clusters represented by \(\mathcal{G}_{2}\) and \(\mathcal{G}_{1}^{2}\) is maximum. From \(n\geq 3\), the positive and increasing values of the logarithmic function \(\mathcal{H}_{n}^{\Delta}\left[(p_{k})_{k\in\mathcal{P}_{n}}\right]\) indicate that it monotonically increases with \(n\). This shows that the Shannon entropy values with probabilities constrained by the Ewens distribution slowly move away from the maximum entropy values as \(n\) increases, and that the defect soliton clusters generated slowly become less unstable.
## 4 Bose-Einstein condensation in configuration space
The Bose-Einstein condensation (BEC) in an ideal gas of bosons was predicted in the middle of the 1920s based on ideas related to the statistical description of quanta of light [59, 60], and experimentally observed for cold atomic gases in magnetic traps about seventy years later [61, 62, 63, 64]. The main idea of the Bose-Einstein condensation is that if the density of particles exceeds a certain critical value, a fraction (fragment) of the whole amount of particles clusters (condenses) in the lowest eigenstate. The approach we consider to discuss Bose-Einstein condensation in log gravity is based on a probabilistic interpretation of the phase transition which goes back to Feynman's work on \(\lambda\)-transition in liquid helium, and his proposal to write the partition function as a path integral over trajec- tories of the helium atoms, showing that a Bose-Einstein phase transition appears because of the symmetry statistics of atoms that can be permuted with each other.
The historical starting point is Feynman's formulation of a path-integral treatment of quantum mechanics within which transition probability amplitudes can be computed in terms of superpositions of classical paths, accounting for all possible trajectories that contribute to the quantum process through constructive superposition [65]. This probabilistic formulation of a quantum observable can be extended to statistical mechanics by mapping quantum particles in a many-body system to paths in space and imaginary-time, in such a way that the quantum system is described in terms of classical trajectory configurations.
Shortly after his development of the path integral formalism, Feynman adapted it to statistical mechanics by introducing a concept of permutation cycles in the partition function of a boson system. The key point underpinning Feynman's approach is that, considering the ground state wavefunction of liquid Helium as non-degenerate, real positive and without nodes. and taking into account that the interatomic interaction of Helium are repulsive in nature, the wavefunction vanishes within an interatomic range [66]. Because the Helium atom has spin zero, the wavefunction is totally symmetric with respect to the positions of the atoms. From there, Feynman argued that at low energy, the excitations are only composed of density waves (phonons) with a linear dispersion law, and that as the Bose-Einstein statistics would not allow long wavelength (low energy) excitations, any long distance movement of Helium atoms consists of a permutation of the atoms that leaves the wavefunction unchanged.
Of geometrical importance in Feynman's approach is the configuration space of physical fields, which is a gauge invariant space with the symmetric group \(S_{n}\) as a gauge group. In a configuration of particles characterized by the particles' coordinates modulo the permutation symmetry, \(i.e\)\(\{x_{1},x_{2},\ldots,x_{n}/S_{n}\}\), the relationship between the Bose-Einstein condensation and the partition function of the ideal Bose gas in the \(\lambda\)-transition can be sketched by considering the symmetrized states of a system with \(n\) identical bosons in a volume \(V\), denoted
\[\left|x\right\rangle_{s}=\frac{1}{\sqrt{n!}}\sum_{\pi\in S_{n}}\left|x_{\pi_{1 }}\cdots x_{\pi_{n}}\right\rangle. \tag{26}\]
The partition function can then be expressed as
\[Z_{n} =\text{Tr}\big{\{}e^{-\beta H}\big{\}} \tag{27a}\] \[=\frac{1}{n!}\int_{V}dx_{1}^{3}\cdots\int_{n}dx_{n}^{3}\left\langle x \right|_{s}e^{-\beta H}\left|x\right\rangle_{s}\] (27b) \[=\frac{1}{n!}\sum_{\pi}Z\left(\pi\right), \tag{27c}\]
with
\[Z\left(\pi\right)=\int_{V}dx_{1}^{3}\cdots\int_{n}dx_{n}^{3}\left\langle x_{ \pi_{1}}\cdots x_{\pi_{n}}\right|e^{-\beta H}\left|x_{1}\cdots x_{n}\right\rangle. \tag{28}\]
Taking \(H\) as the Hamiltonian of the ideal gas
\[H=\frac{1}{2m}\sum_{i=1}^{n}p_{i}^{2}, \tag{29}\]
the matrix element in Eq. (28) is considered as the diffusion kernel of \(n\) noninteractJing particles, and \(Z\left(\pi\right)\) becomes
\[Z\left(\pi\right)=\int_{V}\frac{dx_{1}^{3}}{\lambda^{3}}\cdots\int_{n}\frac{dx_{ n}^{3}}{\lambda^{3}}\prod_{i=1}^{n}e^{-\frac{\pi}{2\beta}\left(x_{i}-x_{\pi_{i}} \right)^{2}}, \tag{30}\]
where \(\lambda\) is the thermal de Broglie wavelength. The partition function \(Z_{n}\) presented as a sum over permutations can equivalently be rewritten in terms of a cycle structure where the permutation \(\pi\in j_{1},j_{2},\ldots,j_{n}\) contains \(j_{1}\) 1-cycles, \(j_{2}\) 2-cycles, etc... Then, given the configuration set \(\mathcal{C}\left(j_{1},j_{2},\ldots\right)\) of any set of non-negative integers satisfying the constraint (5), the partition function becomes
\[Z_{n}=\frac{1}{n!}\sum_{\left\{j_{1},j_{2},\ldots\right\}}\mathcal{C}\left(j_{ 1},j_{2},\ldots\right)\prod_{k}Z\left(\pi\right)^{j_{k}}. \tag{31}\]
Feynman [67] and Matsubara [68] showed that the quantity \(\mathcal{C}\left(j_{1},j_{2},\ldots\right)\) is identical to \(N(n,j)\) in Eq. (6). This establishes a connection betweeen our interpretation of the collective excitations in the log sector as a grand-canonical ensemble of permutation trees, and Feynman's argument that Bose-Einstein condensation is understood as the occurrence of large cycles of bosons permuted in imaginary time, to infer the realization of Bose-einstein condensates as fragmentation trees in log gravity.
## 5 Summary
A probabilistic stance of the log partition function of CCTMG was given in this work, shedding light on certain phenomena occurring in log gravity. Previously studied as topological invariants, the Hurwitz numbers reappear in this paper as probabilities in a multinomial distribution that governs the permutation cycles present in the rooted tree configuration space described in our earlier works.
The multinomial distribution was shown to be directly related to the Ewens sampling formula with parameter \(\theta=1\), also called the Multivariate Ewens distribution in Bayesian statistics, which beyond its extensive use in mathematical biology and statistics, generally plays a major role in the theory of partition structures and their applications in processes of fragmentation and coagulation, and in random trees. The appearance of the Ewens distribution in the theory, as well as the description of the partition function in terms of the (potential) Burgers hierarchy clearly indicate a diffusive behavior that maps out the instability originally observed in [1].
The statistical formulation of Burgers turbulence and its relation to (elastic) manifolds pinned by quenched disorder (see for instance [69] for a recent review) have previously been studied from the perspective of directed polymers [70], disordered trees and traveling waves [71], contexts within which the quenched disorder appears to be provided by the pinning of a fraction of the particles. We infer that the original unstable aspect of the theory due to the appearance of the logarithmic mode at the critical point corresponds to the instability of the seed (\(n=1\)) soliton as collective fields. The unstable (diffusive) propagation of the seed soliton engenders soliton clusters that can be arranged as random (fragments of) rooted trees characterizing the disordered landscape in the theory. The rooted tree configuration space is realized by the pinning of randomly selected point-like and line-like solitonic particles at the root which constitutes the pinning site.
Given the probabilistic distribution it falls under, as \(n\) grows, the evolution of the process influenced by the quenched disorder, \(i.e\) the random pinning was studied using Shannon information entropy. At \(n=1\), the probability of formation of the log mode together with its descendants is 100%, and its probability is \(p_{1}=1\). In information-theoretic parlance, for such an event, there is zero surprise, or zero information, and the entropy of formation of the collective fields at \(n=1\) as the seed soliton is therefore zero. From \(n=2\), there is a growing range of possibilities with particular probabilities. With these possibilities enters the notion of symmetry, related to the similarity of the probabilities. At \(n=2\), the two possible outcomes having identical probabilities indicates that the disorder is maximum. From \(n\geq 3\), the greater the number of possible outcomes, the greater the value of the entropy, however, the actual values of the entropy at each \(n\)-level tend to grow away from the corresponding \(n\)-level maximum entropy values. This analysis gives a perspective of the evolution towards stability in the generation of the defect soliton clusters.
The model of random permutations in configuration space originating from Feynman's path-integral approach to study bosonic systems collective excitations was used to infer the presence of Bose-Einstein condensates in log gravity, from the occurrence of permutation cycles in the log partition function.
Away-from-equilibrium macroscopic phenomena naturally occur in various branches of basic and applied sciences, including fluid dynamics. Burgers turbulence appeared as one of the simplest instances of a non-linear system out of equilibrium, and describes a host of seemingly unrelated phenomena. Indeed, apart from modeling turbulence in fluid dynamics, it has also been used to describe large scale pattern formation in cosmology, vortex lines in random media, and growth phenomena [72, 73]. The results obtained in this work indicates an out-of-equilibrium dynamics which corresponds to a type of shock-wave damping of the instability in a diffusive process.
AcknowledgementsThe author is grateful for the time spent at the 12th Joburg Workshop on String Theory, Gravity and Cosmology where this work started, and at the 5th Mandelstam Theoretical Physics School and Workshop, where it approached its conclusion. This work is supported by the South African Research Chairs initiative of the Department of Science and Technology and the National Research Foundation. The support of the DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at, are those of the author and are not necessarily to be attributed to the CoE. |
2310.08213 | A Universal Scheme for Dynamic Partitioned Shortest Path Index | Shortest path (SP) computation is the fundamental operation in various
networks such as urban networks, logistic networks, communication networks,
social networks, etc. With the development of technology and societal
expansions, those networks tend to be massive. This, in turn, causes
deteriorated performance of SP computation, and graph partitioning is commonly
leveraged to scale up the SP algorithms. However, the partitioned shortest path
(PSP) index has never been systematically investigated and theoretically
analyzed, and there is a lack of experimental comparison among different PSP
indexes. Moreover, few studies have explored PSP index maintenance in dynamic
networks. Therefore, in this paper, we systematically analyze the dynamic PSP
index by proposing a universal scheme for it. Specifically, we first propose
two novel partitioned shortest path strategies (No-boundary and Post-boundary
strategies) to improve the performance of PSP indexes and design the
corresponding index maintenance approaches to deal with dynamic scenarios. Then
we categorize the partition methods from the perspective of partition structure
to facilitate the selection of partition methods in the PSP index. Furthermore,
we propose a universal scheme for designing the PSP index by coupling its three
dimensions (i.e. PSP strategy, partition structure, and SP algorithm). Based on
this scheme, we propose five new PSP indexes with prominent performance in
either query or update efficiency. Lastly, extensive experiments are
implemented to demonstrate the effectiveness of the proposed PSP scheme, with
valuable guidance provided on the PSP index design. | Mengxuan Zhang, Xinjie Zhou, Lei Li, Ziyi Liu, Goce Trajcevski, Yan Huang, Xiaofang Zhou | 2023-10-12T11:03:00Z | http://arxiv.org/abs/2310.08213v2 | # A Universal Scheme for Partitioned Dynamic Shortest Path Index
###### Abstract
Graph partitioning is a common solution to scale up the graph algorithms, and _shortest path (SP)_ computation is one of them. However, the existing solutions typically have a fixed partition method with a fixed path index and fixed partition structure, so it is unclear how the partition method and path index influence the pathfinding performance. Moreover, few studies have explored the index maintenance of _partitioned SP_ (_PSP_) on dynamic graphs. To provide a deeper insight into the dynamic _PSP indexes_, we systematically deliberate on the existing works and propose a _universal scheme_ to analyze this problem theoretically. Specifically, we first propose two novel _partitioned index strategies_ and one _optimization_ to improve index construction, query answering, or index maintenance of _PSP index_. Then we propose a _path-oriented_ graph partitioning classification criteria for easier partition method selection. After that, we re-couple the dimensions in our scheme (_partitioned index strategy_, _path index_, and _partition structure_) to propose five new _partitioned SP indexes_ that are more efficient either in the query or update on different networks. Finally, we demonstrate the effectiveness of our new indexes by comparing them with state-of-the-art _PSP indexes_ through comprehensive evaluations.
+
Footnote †: The first two authors contributed equally to this paper. Lei Li is the corresponding author.
## I Introduction
_Shortest Path (SP)_ query in dynamic networks is an essential building block for various applications that are ubiquitous in our daily life. Given a query pair \((s,t)\), depending on the context, it returns the minimum traveling time in a _road network_[1], the fastest connection in the _web graph_[2], or the most intimate relationships in the _social network_[3, 4, 5]. The development of urban traffic systems and the evolvement of online interactions yielded real-life networks that tend to be massive and dynamic, which brings great challenges to the scalability of the state-of-the-arts [6, 7, 8, 9, 10, 11] with either heavy memory or expensive space search overhead. Specifically, as the most efficient path algorithm, _2-Hop Labeling_[12] requires \(O(nm^{1/2})\) space (\(n\) and \(m\) are the vertex and edge numbers) to store the index, while the search-based method like _Dijkstra's_ is inefficient on large networks. Moreover, because the networks are dynamic in nature (evolving in terms of structures and edge weights), extra information and effort are required to capture (and exploit) the updates. Graph partitioning is the common solution that enables the more complicated problems like time-dependent [13, 14] and constraint paths [15, 16, 17] to construct their indexes in large network settings.
**Existing Works.** To improve the scalability, graph partitioning is introduced to decompose one large network into several smaller ones such that the index sizes and update workloads could be reduced. We can compare them briefly in terms of _index construction time_, _storage space_, _query time_, and _update time_. On one extreme are the _direct search_ algorithms [18, 19, 20, 21] that require no index-related processing and can work in the dynamic environment directly but are slow for query answering. Then the _partitioned search_ adds various information to guide and reduce search space [22, 23, 24] to improve query processing. On the other extreme are _Contraction Hierarchy (CH)_[25] and _Hub Labeling (HL)_[7, 26, 27] with larger index sizes, longer construction and maintenance time but faster query performance. Their partitioned versions [6, 13, 16, 28, 29, 30, 31] are all faster to construct with smaller sizes but longer query time. Since these works have better performance than their unpartitioned versions, they create the following misconceptions regarding the benefits of partitions on pathfinding-related problems: 1) Partition and path algorithms are irrelevant so they are two different research branches; 2) Partition-based methods are irrelevant and very different from each other; 3) Applying partition is always better.
**Motivations.** These misconceptions are not always true. Although the _partitioned shortest path (PSP) indexes_ have been widely applied in the past decade, they were not studied systematically, to the best of our knowledge. Specifically, the existing solutions just pick one partition method without discriminating their characteristics and figure out a way to make it work, and then claim benefits/superiority. Consequently, there lacks a generalized scheme to organize and compare the _PSP indexes_ insightfully and fairly. We aim to decouple the _PSP indexes_ into different influential dimensions and then propose a universal scheme to further guide the direction of designing new structures for new scenarios by re-coupling them again.
**Challenges.** However, achieving this goal brings several challenges. Firstly, it is unclear how the partition affects index construction, query processing, and index update, as nearly all the existing works only guarantee their unique structures can answer queries correctly. Therefore, we first decouple the
partitioned index strategy_ (the way to construct/query/update _PSP index_) from the _PSP index_ and then propose several _partitioned index strategies_ for different scenarios. Secondly, the graph partition methods are normally classified from different perspectives (like _vertex-cut/edge-cut_, _in-memory/streaming_, etc.), but these criteria are not path-oriented and thus cannot help to tell which partition method is suitable. Therefore, we identify the influential factors on the partitioned indexes and propose a _path-oriented_ partitioning classification. Thirdly, in terms of the _SP_ query, there is another deeper tight coupling in terms of _partition structures_ (not methods) and _SP_ algorithms such that the existing methods are fixed and limited by the initial choices. Therefore, we further decouple the _partition structure_ from the _path index_ and finally propose our universal scheme to compare the existing solutions and propose five new _PSP indexes_ for different scenarios. Lastly, as the index maintenance's efficiency was only achievable recently [8], there is little work on how the maintenance should be conducted correctly and efficiently in the partitioned environment, so we extend these maintenance methods to such environments.
**Contributions.** Our contributions are listed as follows:
* **[Dynamic _PSP Index_ Scheme]** We propose a universal scheme for _dynamic PSP index_ that decouples this problem into three crucial dimensions: _partition strategy_, _path index_, and _partition structure_ dimensions such that the _PSP index_ can be analyzed theoretically;
* **[Partitioned Index Strategies]** In addition to the traditional _Pre-Boundary_ strategy, we propose novel _No-Boundary_ and _Post-Boundary_ partitioned index strategies, and provide non-trivial correctness analysis for their index construction, query processing, and index maintenance. We also propose _Pruned-Boundary_ strategy to further optimize them;
* **[Partition Structures]** We prove the equivalence of _vertex-_ and _edge-cut_, analyze their influence factors on index performance, and propose a new _path-oriented_ partitioning classification and discuss how they affect the _SP_ index;
* **[_PSP Index_ Maintenance]** We propose the index maintenance algorithms for existing _PSP indexes_ according to different _partitioned index strategies_;
* **[Recoupled New Indexes]** We extract five new (_query-_ or _update-oriented_) _PSP indexes_ in the proposed scheme for different network structures that have better performance than the state-of-the-art.
## II Preliminary and Building Blocks
We now define the notations and formalize the problem, and then introduce the two building blocks of the partitioned indexes: _graph partition methods_ and _shortest path algorithms_.
### _Formal Definitions_
In this paper, we focus on the dynamic weighted network \(G(V,E,W)\) with \(V\) denoting the vertex set, \(E\) denoting the edge set and \(W\rightarrow\mathbf{R}^{+}\) assigning a non-negative weight \(e(u,v)\in W,\forall(u,v)\in E\). Note that \(e(u,v)\in W\) can increase or decrease in the range of \([0,\infty]\) in ad-hoc. We denote the number of vertices and edges as \(n=|V|\) and \(m=|E|\). For each \(v\in V\), we represent its neighbors as \(N(v)=\{u,|(u,v)\in E\}\) and we call the edges \((u,v),u\in N(v)\) as adjacent edge of \(v\). We associate each vertex \(v\in V\) with order \(r(v)\) indicating its importance in \(G\). A path \(p=\left\langle v_{0},v_{1},\ldots,v_{k}\right\rangle((v_{i},v_{i+1})\in E,0 \leq i<k)\) is a sequence of adjacent vertices with length of \(l(p)=\sum_{i=0}^{k-1}e(v_{i},v_{i+1})\). Given an origin vertex and destination vertex (OD pair), the shortest path between them \(p_{G}(s,t)\) is the path with the minimum length and we use \(d_{G}(s,t)=l(p_{G}(s,t))\) to denote the shortest distance. We use shortest distance index \(L(G)\) for efficient shortest distance query \(Q(s,t)\) that returns \(d_{G}(s,t)\). It should be noted the shortest path can be easily obtained so we only discuss the distance for simplicity. We aim to answer the following question:
**Partitioned Shortest Path Problem**: Given a dynamic graph \(G\), how to combine a partitioning method and index structure to make an efficient _PSP index_ in terms of index construction time, index size, query efficiency, or update efficiency?
### _Graph Partition Methods_
Graph partitioning is commonly categorized [32] by the type of graph cut: _edge-cut_ and _vertex-cut_.
**Definition 1** (Edge-Cut Partitioning).: _A graph \(G\) is decomposed into multiple disjoint subgraphs \(\{G_{i}\}\)\((i=1,2,\ldots,k)\) with \(V_{i}\cap V_{j}=\emptyset\) and \(\bigcup V_{i}=V\)._
_(**Vertex**) \(\forall v\in G_{i}\), we say \(v\) is a **boundary vertex** if there exists a neighbor of \(v\) in the another subgraph, that is \(\exists u\in N(v),u\in G_{j}(i\neq j)\). Otherwise, \(v\) is an **inner vertex**. We represent the boundary vertex set of \(G_{i}\) as \(B_{i}\) and the boundary vertex set of \(G\) as \(B=\bigcup B_{i}\)._
_(**Edge**) For \((u,v)\in E\), we say \((u,v)\) is an **inter-edge** if both its two endpoints \(u\) and \(v\) are boundary vertices from different subgraphs, i.e., \(u\in B_{i},v\in B_{j},i\neq j\). Otherwise, it is an **intra-edge**. The corresponding edges set are denoted as \(E_{inter}\) and \(E_{intra}\)._
**Definition 2** (Vertex-Cut Partitioning).: _A graph \(G\) is decomposed into multiple disjoint subgraphs \(\{G_{i}\}\)\((i=1,2,\ldots,k)\) with \(E_{i}\cap E_{j}=\emptyset\) and \(\bigcup E_{i}=E\). We denote the **cut vertex**\(X_{i}\) of subgraph \(G_{i}\) as the collection of vertex \(s_{p}\in G_{i}\) satisfying that \(s_{p}\) belongs to at least one another subgraph, i.e. \(s_{p}\in G_{j},i\neq j\)._
In terms of the shortest distance computation, both _boundary vertex_ and _cut vertex_ have the cut property:
**Definition 3**.: _(Cut Property). Given a shortest path \(p(s,t)\) with its endpoints in two different subgraphs \(s\in G_{i},t\notin G_{i}\), there is at least one vertex \(v\) which is in the boundary vertex set or cut vertex set of \(G_{i}\) on the path, that is \(\exists v\in p(s,t),v\in B_{i}\) or \(v\in X_{i}\)._
We briefly summarize existing partition methods as follows.
**1) Edge-cut Partitioning**. Firstly, the _Flow-based Partitioning_ (_KaHyPar_[33, 34], _PUNCH_[35]) utilizes _max-flow
min-cut_[36] to split a graph into two parts but cannot achieve balancedness. In particular, _PUNCH_ utilizes the _natural cut_ heuristics in road networks to achieve high partition quality. Secondly, _Multilevel Graph Partitioning_[37, 38] such as _SCOTCH_[39] and _METIS_[40] are widely used due to their high efficiency and good partition quality. Besides, it is used as the underlying partitioning algorithm of hierarchical partitions like _HiT_[41, 42], _SHARC_ and _G-Tree_[43, 44, 45]. Finally, we also investigated into many other categories of methods like _Spectral Partitioning_[37, 46], _Spectral Partitioning_[37, 46], _Graph Growing Partitioning_[47, 48]_Bubble_[49, 50, 51]), _Geometric Partitioning_[52, 53, 54, 55, 56], _Minimum k-cut_[57, 58, 59, 60, 61], _Kernighan-Lin (KL)_[62], and _Fiduccia-Mattheyses_[63]. However, their results perform poorly on the PSP algorithms.
**2) Vertex-cut Partitioning**. It is more effective for graphs with _power-law_ distribution [64, 65, 66]. The _Online Streaming Algorithms_ are widely used for its high efficiency and low memory in distributed systems such as _PowerGraph_[64], _GraphX_[65], _Chaos_[66], and _HDRF_[67]. They are super-fast but has low quality with high replication. Among them, _CLUGP_[68] performs best by transforming vertex streaming clustering to edge streaming partitioning. The _In-Memory Algorithms_ like _NE_[69, 70], _SMOG_[71], _ROAD_[72, 73] usually have much higher partition quality but slow to run. _HEP_[74] combines _NE++_ and the online _HDRF_ for both high quality and efficiency.
### _Shortest Path Algorithms_
We summarize the shortest path algorithms as follows:
**1) Direct Search** such as _Dijkstra_'s [18] and \(A^{*}\)[19] searches the graph when the index cannot be constructed due to the large graph size (_QbS_[29], _ParDist_[22]) or complicated problem (_COLA_[15]). It takes no time in index update, but queries are slowest;
**2) Contraction Hierarchy (CH)**[25] is a widely used lightweight index that contracts the vertices in a pre-defined order and preserves the shortest path information by adding shortcuts among the contracted vertex's neighbors. The _search-based CH_[25] has a small index size but takes longer to build and maintain [75, 8] while _concatenation-based CH_[76, 77] is much faster to build and maintain if the tree-width is small. _CH_'s query performance is generally around 10\(\times\) faster than the _direct search_ but much slower than the _hub labelings_. Surprisingly, no PSP index has used _CH_ as underlying index;
**3) Pruned Landmark Labeling (PLL)**. Although many _hub labeling_ methods have been proposed in the past decade, only two are widely used. Built by either _pruned search_[26] or _propagation_[78, 10, 7, 11], _PLL_ is the only index that can work on the graph with large treewidth. Therefore, graphs with this property use it as the underlying index (_e.g., QsB_[29, 30] for its landmarks, _CT-Index_[6, 79] for its core);
**4) Tree Decomposition (TD)**[80, 9, 27, 81, 82]. As another widely used hop labeling, it is much faster than _PLL_ for graphs with smaller treewidth but cannot scale to large ones. For instance, _CT-Index_[6, 79] uses it for its periphery, and _FHL_[16, 17] use it as a forest;
**5) All Pair Pre-computation** pre-computes all-pair distance with the fastest query, but incurs long index construction and large index storage. _G-Tree_[43, 44] and _ROAD_[72, 73] store the distance between boundary vertices in each partition.
## III Partitioned Index Strategies
In this section, we propose three theoretical strategies for _PSP index_ and analyze how the index could be used correctly. It should be noted that these strategies do not rely on any specific _partition method_ or _SP index_, but are general frameworks that any _PSP index_ has to be complied with.
We first define the following two types of index for _PSP index_\(L\): the _partition indexes_\(\{L_{i}\}\) for each subgraph (partition) \(G_{i}\), and the _overlay index_\(\tilde{L}\) for the overlay graph which is composed of the boundary vertices of all partitions, _i.e.,_\(L=\{L_{i}\}\cup\tilde{L}\). At first glance, the index \(L_{i}\) could be built using only the information of \(G_{i}\) itself. However, \(L_{i}\)'s correctness cannot be guaranteed because the shortest distance between boundary vertices within one partition could pass through another partition. For instance, it could be that \(p(b_{i1},b_{i2}),(b_{i1},b_{i2}\in B_{i})\) goes outside of \(G_{i}\) and passes through \(u\in G_{j},i\neq j\). As a result, \(d(b_{i1},b_{i2})\) cannot be answered correctly only with \(L_{i}\), and the error would propagate to other queries. Therefore, the traditional approach to build _PSP index_ involves precomputing the correct global distances between boundary vertices for each partition. These global distances are then utilized to construct the _partition indexes_\(\{L_{i}\}\) and _overlay index_\(\tilde{L}\). Such an approach is referred to as the _Pre-Boundary_ strategy.
### _Pre-Boundary Strategy_
In this section, we briefly introduce the index construction and query processing of conventional _pre-boundary_ strategy and then propose its index update method.
**Index Construction**. It consists of four steps (Figure 2-(a)):
_Step 1:_ Precompute the global distance between all boundary vertex pairs \((b_{i1},b_{i2}),(b_{i1},b_{i2}\in B_{i})\) for each partition \(G_{i}\) and insert shortcuts \(e(b_{i1},b_{i2})=d_{G}(b_{i1},b_{i2})\) into \(G_{i}\) to get \(G^{\prime}_{i}\);
_Step 2:_ Construct the _SP_ index \(L_{i}\) based on \(G^{\prime}_{i}\);
_Step 3:_ Construct the overlay graph \(\tilde{G}\) based on the precomputed shortcuts in _Step 1_. Specifically, as shown in Figure 1, \(\tilde{G}\) is composed of those boundary vertex pairs \(\{B_{i}\times B_{i}\}\) and the inter-edge set \(E_{inter}\), that is \(V_{\tilde{G}}=\{B_{i}\},E_{\tilde{G}}=\{(b_{i1},b_{i2})\}\cup E_{inter},(b_{i1},b _{i2}\in B_{i})\).
_Step 4:_ Construct the _SP_ index \(\tilde{L}\) for \(\tilde{G}\).
Fig. 1: Example Graph \(G\) and Overlay Graph \(\tilde{G}\)
Note that the construction of \(\tilde{L}\) (_Step 3, Step 4_) can be parallelized with \(L_{i}\) (_Step 2_) since they are independent and both rely on _Step 1_. All pairs of boundaries are independent of each other so we only need \(|B|\) times of _Dijkstra_'s so this part is \(O(nlogn+m)\). Then each partition's label \(L_{i}\) can be constructed in parallel, and \(\tilde{L}\)'s construction is also independent to them, so its complexity is the worst case of them: \(max\{O_{c}(G_{i}),O_{c}(\tilde{G})\}\), where \(O_{c}\) is the complexity of underlying index's construction time as our discussion is not fixed to any specific index structure.
**Query Processing.** To answer _SP_ query \(Q(s,t)\) with index \(L\), we divide the queries into two cases as follows:
_Case 1: Two vertices are in the same partition i.e., \(\forall s,t\in G_{i}\), \(Q(s,t)=d_{L_{i}}(s,t)\)._
_Case 2: Two vertices are in different partitions, i.e., \(\forall s\in G_{i},t\in G_{j}(i\neq j),Q(s,t)=\)_
\begin{tabular}{l l} \(\begin{cases}d_{\tilde{L}}(s,t)\\ \min\limits_{b_{\tilde{e}}\in B_{i}}\{d_{\tilde{L}}(s,b_{q})+d_{L_{j}}(b_{q}, t)\}&s\in B,t\notin B\\ \min\limits_{b_{\tilde{e}}\in B_{i}}\{d_{\tilde{L}}(s,b_{p})+d_{\tilde{L}}(b_{ p},t)\}&s\notin B,t\in B\end{cases}\) \\ \(\begin{cases}\min\limits_{b_{\tilde{e}}\in B_{i}}\{d_{\tilde{L}}(s,b_{p})+d_{ \tilde{L}}(b_{p},t)\}&s\notin B,t\in B\\ \min\limits_{b_{\tilde{e}}\in B_{i}}\{d_{\tilde{L}}(s,b_{p})+d_{\tilde{L}}(b_ {p},b_{q})+d_{L_{j}}(b_{q},t)\}&s\notin B,t\notin B\end{cases}\) \\ \end{tabular}
In summary, when \(s\) and \(t\) are in the same \(G_{i}\), we can use \(L_{i}\) to answer \(G_{i}(s,t)\); otherwise, we have to use \(L_{i},L_{j}\) and \(\tilde{L}\). The intra-query complexity is \(O_{q}(L_{i})\), where \(O_{q}\) is the index's query complexity. The inter query is made up of three parallel query sets, and the complexity is the worst of them: \(max\{B_{i}\times O_{q}(L_{i}),B_{i}\times B_{j}\times O_{q}(\tilde{L}),B_{j} \times O_{q}(L_{j})\}\).
**Index Update.** We divide index update into two scenarios:
_Scenario 1: Intra-edge weight change._ As shown in Figure 2-(a), when \(e\in E_{intra}\) (\(e\in E_{j}\)) changes, we first recalculate _Step 1_ and compare the old and new weights of \(e(b_{i1},b_{i2})\) between boundary in each partition \(G_{i}\). If there is any edge weight update, we need to update the corresponding \(L_{i}\) and \(\tilde{L}\); otherwise, we only need to update \(L_{j}\). Note that updates of \(L_{i}\) and \(\tilde{L}\) can be paralleled too.
_Scenario 2: Inter-edge weight change._ Same with _Scenario 1_, when \(e\in E_{inter}\) changes, we also recompute _Step 1_ first. If there is any \(d_{G}(b_{i1},b_{i2})\) changes, we update \(L_{i}\) and \(\tilde{L}\); otherwise, we only need to update \(\tilde{L}\).
**Lemma 1**.: _The indexes of Pre-Boundary Strategy can be correctly updated with the above strategy._
Proof.: First of all, we need to recalculate _Step 1_ to identify the affected edges \(e(b_{i1},b_{i2})\) between boundary vertex pairs in each partition. Even though this step is time-consuming, it cannot be skipped since it would be hard to identify the affected edges. For example, suppose the shortest path between the boundary pair \((b_{j1},b_{j2})\) in \(G_{j}\) passes through an edge \(e\in G_{i}\) with \(d_{\tilde{G}}(b_{j1},b_{j2})=d_{0}\). When \(e\) increases, we could update \(L_{i}\) and then \(\tilde{L}\). But \(\tilde{L}\) cannot be correctly updated since it could be that \(d_{\tilde{G}}(b_{j1},b_{j2})>d_{0}\), such that \(d_{\tilde{G}}(b_{j1},b_{j2})\) cannot be refreshed to the correct value. It is because \(d_{0}\) contains the old smaller edge weight while cannot be identified, since the shortest distance index always takes the smallest distance value. Then we could select those affected edges \(e(b_{i1},b_{i2})\) by comparing their old and new weights. Lastly, we update their corresponding partition index \(L_{i}\) and \(\tilde{L}\) in parallel.
The Step 1 boundary takes \(O(nlogn+m)\) time, while the partition and \(\tilde{G}\) index takes \(max\{O_{c}(G_{i}),O_{c}(\tilde{G})\}\) time to
Fig. 2: Illustration of Different Partitioned Index Strategies
update in parallel, where \(O_{u}\) is the update complexity for different indexes.
### **No-Boundary Strategy**
The first step of the _Pre-Boundary_ could be very time-consuming because only the index-free SP algorithms like _Dijkstra's_[18] or \(A^{*}\)[19] could be utilized. The index construction and maintenance efficiency suffer when the graph has numerous boundary vertex pairs. Nevertheless, as previously analyzed, it appears to be an essential requirement for constructing the "correct" _PSP_ index. To break such a misconception, we propose a novel _No-Boundary_ strategy which significantly reduces the index construction/maintenance time by skipping the boundary pre-computation step.
**Index Construction.** It contains three steps (Figure 2-(b)):
_Step 1:_ Construct the shortest distance index \(L_{i}\) for each \(G_{i}\);
_Step 2:_ Construct the overlay graph \(\tilde{G}\) based on \(L_{i}\);
_Step 3:_ Construct the shortest distance index \(\tilde{L}\) for \(\tilde{G}\).
Since the _partition indexes_\(\{L_{i}\}\) are constructed in parallel first and the \(\tilde{L}\) is constructed next, the _No-Boundary_ takes \(max\{O_{c}(G_{i})\}+O_{c}(\tilde{G})\) in index construction.
**Query Processing**. As discussed previously, \(L_{i}\) cannot answer \(G_{i}\)'s query correctly, so how can _No-Boundary_ answers correctly? Before giving the answer, we first prove that although \(\tilde{L}\) is built upon \(\{L_{i}\}\), its correctness for the shortest distance between any two boundary vertices still holds.
**Theorem 1**.: \(\forall s,t\in B,d_{G}(s,t)=d_{\tilde{L}}(s,t)\)_._
Proof.: We divide all the scenarios into three cases, as shown in Figure 3-(a), and prove them in the following:
Case 1: \(s\) and \(t\) belong to the same \(G_{i}\) and \(p_{G}(s,t)\) only passes through the interior of \(G_{i}\). Then \(d_{G}(s,t)=d_{G_{i}}(s,t)\) and \((s,t)\in\tilde{G}\). Since \(\tilde{w}(s,t)=d_{G_{i}}(s,t)\) was used in \(\tilde{G}\)'s construction, it is obvious that \(d_{G}(s,t)=d_{\tilde{G}}(s,t)=d_{\tilde{L}}(s,t)\);
Case 2: \(s\in G_{i}\) and \(t\in G_{i}\) but \(p_{G}(s,t)\) goes outside;
Case 3: \(s\in G_{i}\) and \(t\in G_{j}\) from different partitions.
For Cases 2 and 3, we take the concise form of \(p_{G}(s,t)\) by extracting only the boundary vertices as \(p_{c}=\langle s,b_{0},b_{1},\ldots,b_{n},t\rangle\) (\(b_{i}\in B,0\leq i\leq n\)). For two adjacent vertices \(b_{i},b_{j}\in p_{c}\), 1) if \(b_{i}\) and \(b_{j}\) are in the same partition, then its correctness is the same as the sub-case 1; 2) if \(b_{i}\) and \(b_{j}\) are in different partitions, then \((b_{i},b_{j})\) is an inter-edge with \((b_{i},b_{j})\in\tilde{G}\) and it is naturally correct. Therefore, the shortest distance can be correctly calculated by accumulating only the edge weights on \(\tilde{G}\), so \(\tilde{L}\) is correct.
Based on Theorem 1, we can answer the shortest distance queries of _No-Boundary_ by following strategies:
_Case 1: Two query nodes are in the same partition \(G_{i}\), i.e., \(s,t\in G_{i}\), we report \(Q(s,t)\) according to Lemma 2._
**Lemma 2**.: \(\forall s,t\in G_{i}\)_, \(d_{G}(s,t)=\min\{d_{L_{i}}(s,t),\ \min\{\)\(d_{L_{i}}(s,b_{i1})\)\(+d_{\tilde{L}}(b_{i1},b_{i2})+d_{L_{i}}(b_{i2},t)\}\}\), where \(b_{i1},b_{i2}\in B_{i}\)._
Proof.: We denote \(d_{L_{i}}(s,t)\) as \(d_{2}\), \(\min\{d_{L_{i}}(s,b_{i1})\)\(+d_{\tilde{L}}(b_{i1},b_{i2})\)\(+d_{L_{i}}(b_{i2},t)\}\) as \(d_{4}\), and divide all the scenarios into two cases:
Case 1: \(p_{G}(s,t)\) does not go outside of \(G_{i}\), as shown in the left side of Figure 3-(b). In fact, no matter \(s\) and \(t\) are boundary vertices or not, \(d_{L_{i}}(s,t)\) (_i.e.,_\(d_{2}\)) is enough to answer \(d_{G}(s,t)\) as \(L_{i}\) is built based on \(G_{i}\) and \(G_{i}\) contains all necessary information for finding the shortest path.
Case 2: \(p_{G}(s,t)\) passes outside of \(G_{i}\), as shown in the right side of Figure 3-(b). If \(s\) and \(t\) are both boundary vertices, \(d_{G}(s,t)=d_{\tilde{L}}(s,t)\) and thus Lemma 2 holds by referring to Theorem 1. If \(s\) and \(t\) are both non-boundary vertices, we take the concise form of \(p_{G}(s,t)\) by extracting only the boundary vertices as \(p_{c}=\langle s,b_{0},b_{1},\ldots,b_{n},t\rangle\) (\(b_{i}\in B,0\leq i\leq n\)). Therefore, \(d_{4}\) can correctly deal with this case as the \(d_{G}(b_{0},b_{n})\) can be answered by \(d_{\tilde{L}}(b_{0},b_{n})\) according to Theorem 1, while \(d_{G}(s,b_{0})\) and \(d_{G}(b_{n},t)\) can be answered by \(d_{L_{i}}(s,b_{0})\) and \(d_{L_{i}}(b_{n},t)\) by referring to Case 1. If either \(s\) or \(t\) are non-boundary vertex, its distance is the special case of \(d_{4}\) and can be easily proved.
_Case 2: Two query nodes are in different partitions, i.e., \(s\in G_{i},t\in G_{j},i\neq j\), we report \(Q(s,t)\) according to Lemma 3._
Fig. 3: Different Categories of OD Distribution
**Lemma 3**.: \(\forall s\in G_{i},t\in G_{j}(i\neq j)\),
\[d_{G}(s,t)=\begin{cases}d_{\tilde{L}}(s,t)&s,t\in B\\ \min\limits_{b_{q}\in B_{j}}\{d_{\tilde{L}}(s,b_{q})+d_{L_{j}}(b_{q},t)\}&s\in B,t\notin B\\ \min\limits_{b_{p}\in B_{i}}\{d_{L_{i}}(s,b_{p})+d_{\tilde{L}}(b_{p},t)\}&s\notin B,t\in B\\ \min\limits_{b_{p}\in B_{i},b_{q}\in B_{j}}\{d_{L_{i}}(s,b_{p})+d_{\tilde{L}}(b_{p},b_{q})+d_{L_{j}}(b_{q},t)\}&s\notin B,t\notin B\end{cases}\]
Proof.: We prove it according to the following cases:
Case 1: Both \(s\) and \(t\) are boundary vertices. It is correct according to Theorem 1.
Case 2: Either \(s\) or \(t\) is a boundary vertex. Suppose \(s\) is an inner vertex of \(G_{i}\) and \(t\) is a boundary vertex, as shown in Figure 3-(c). We take the concise \(p_{s,t}\) by extracting the boundary vertices as \(p_{c}=\langle s,b_{0},\ldots,b_{n},t\rangle\). Then \(b_{0}\in B_{i}\) and \(p_{s,t}\) can be treated as concatenated by two sub-paths \(p_{s,b_{0}}\oplus p_{b_{0},t}\). Specifically, \(d_{G}(s,b_{0})=d_{L_{i}}(s,b_{0})\) by referring to Case 1 here, and \(d_{G}(b_{0},t)=d_{\tilde{G}}(b_{0},t)=d_{\tilde{L}}(b_{0},t)\) by referring to Theorem 1.
Case 3: Neither \(s\) nor \(t\) is a boundary vertex. It can be proved by extending Case 2.
In summary, when \(s\) and \(t\) are both boundary vertices, we can use \(\tilde{L}\) alone to answer \(d_{G}(s,t)\); otherwise, we have to use \(L_{i},L_{j}\) and \(\tilde{L}\). The intra-query complexity is \(max\{B_{i}\times O_{q}(L_{i}),B_{i}\times B_{i}\times O_{q}(\tilde{L})\}\). The inter-query takes \(O_{q}(\tilde{L})\) time when \(s\) and \(t\) are both boundary vertices, while the complexity of the other cases is bounded by \(max\{B_{i}\times O_{q}(L_{i}),B_{i}\times B_{j}\times O_{q}(\tilde{L}),B_{j}\times O_{q}(L_{j})\}\).
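To make the dispatch above concrete, the sketch below implements the _No-Boundary_ query strategy of Lemmas 2 and 3 on a small toy graph. It is only an illustration: the Floyd-Warshall helper `all_pairs` stands in for whatever real SP index (_PLL_/_CH_/_TD_) would normally play the role of \(L_{i}\) and \(\tilde{L}\), and the toy graph, partition assignment, and variable names are assumptions made for this example.

```python
import itertools, math

# Toy graph: partitions {0,1,2} and {3,4,5}; boundary vertices are {1,2,3,4}.
edges = {(0, 1): 1, (1, 2): 1, (0, 2): 3, (3, 4): 2, (4, 5): 1, (3, 5): 4, (2, 3): 1, (1, 4): 10}
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
boundary = {1, 2, 3, 4}

def all_pairs(vertices, edge_subset):
    # Floyd-Warshall stand-in for a real SP index built on the given (sub)graph.
    d = {(a, b): (0 if a == b else math.inf) for a in vertices for b in vertices}
    for (a, b), w in edge_subset.items():
        d[a, b] = d[b, a] = min(d[a, b], w)
    for k, a, b in itertools.product(vertices, repeat=3):
        d[a, b] = min(d[a, b], d[a, k] + d[k, b])
    return d

# Steps 1-3 of No-Boundary: per-partition indexes L_i, then the overlay index on G~.
parts = {i: [v for v in part if part[v] == i] for i in set(part.values())}
L = {i: all_pairs(parts[i], {e: w for e, w in edges.items()
                             if part[e[0]] == i and part[e[1]] == i}) for i in parts}
overlay = {e: w for e, w in edges.items() if part[e[0]] != part[e[1]]}      # inter-edges
for i in parts:                                                             # shortcuts from L_i
    for b1, b2 in itertools.combinations([b for b in boundary if part[b] == i], 2):
        overlay[b1, b2] = L[i][b1, b2]
Lt = all_pairs(sorted(boundary), overlay)

def query(s, t):
    Bs = [b for b in boundary if part[b] == part[s]]
    Bt = [b for b in boundary if part[b] == part[t]]
    if part[s] == part[t]:     # Lemma 2: same-partition query
        return min([L[part[s]][s, t]] + [L[part[s]][s, b1] + Lt[b1, b2] + L[part[s]][b2, t]
                                         for b1 in Bs for b2 in Bs])
    # Lemma 3: cross-partition query (the boundary-endpoint cases reduce to this one)
    return min(L[part[s]][s, b1] + Lt[b1, b2] + L[part[t]][b2, t] for b1 in Bs for b2 in Bt)

print(query(0, 5), query(0, 2))   # prints "6 2", matching a direct search on the toy graph
```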
**Index Update.** We divide index update into two scenarios:
_Scenario 1: Intra-edge weight change._ As shown in Figure 2-(b), when \(e\in E_{intra}\) (\(e\in E_{j}\)) changes, we first update \(L_{j}\) and compare the old and new shortcut weights \(e(b_{j1},b_{j2})\) between the boundary vertices of \(G_{j}\). If any shortcut weight changes, we further update \(\tilde{L}\).
_Scenario 2: Inter-edge weight change._ When \(e\in E_{inter}\) changes, only \(\tilde{L}\) needs update.
**Lemma 4**.: _The indexes of No-Boundary can be correctly maintained with the above update strategy._
Proof.: In the _inter-edge update_ case, since \(e\in\tilde{G},e\notin G_{i},\forall e\in E_{inter}\), the weight change of \(e\) can only affect the correctness of \(\tilde{L}\). So only \(\tilde{L}\) needs to be checked and updated. In the _intra-edge update_ case, since \(e\in G_{i}\), its weight change first affects \(L_{i}\). It can then affect \(\tilde{G}\) because \(e_{\tilde{G}}(b_{i1},b_{i2})=d_{L_{i}}(b_{i1},b_{i2})\). So \(\tilde{L}\) also needs an update if the check finds that \(e_{\tilde{G}}(b_{i1},b_{i2})\) has changed.
The update complexity is \(max\{O_{u}(G_{i})\}+O_{u}(\tilde{G})\).
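The update logic of the two scenarios (and the propagation argument behind Lemma 4) can be sketched as follows. The wrapper objects and their helper methods (`set_weight`, `update`, `dist`, `boundary_pairs`) are hypothetical stand-ins for whatever SP index is actually deployed, so this is a sketch of the control flow rather than an implementation of a specific index.

```python
def update_no_boundary(edge, new_w, part, E_inter, G, L, G_tilde, L_tilde):
    """Propagate a single edge-weight change through the No-Boundary indexes (cf. Lemma 4)."""
    if edge in E_inter:                        # Scenario 2: inter-edge, only the overlay changes
        G_tilde.set_weight(edge, new_w)
        L_tilde.update(edge)
        return
    i = part[edge[0]]                          # Scenario 1: intra-edge inside partition i
    old = {p: L[i].dist(*p) for p in G[i].boundary_pairs()}
    G[i].set_weight(edge, new_w)
    L[i].update(edge)                          # refresh the partition index first
    for (b1, b2), d_old in old.items():
        d_new = L[i].dist(b1, b2)
        if d_new != d_old:                     # the shortcut e_G~(b1, b2) = d_Li(b1, b2) changed
            G_tilde.set_weight((b1, b2), d_new)
            L_tilde.update((b1, b2))
```

Only the boundary-pair distances that actually change are pushed into \(\tilde{G}\) and \(\tilde{L}\), which is what keeps the cost at \(max\{O_{u}(G_{i})\}+O_{u}(\tilde{G})\).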
### _Post-Boundary Strategy_
Since the query processing time of _No-Boundary_ is worse than that of _Pre-Boundary_, could we accelerate it? We address this question by proposing a novel _Post-Boundary_ strategy, which utilizes \(\tilde{L}\) to correct the boundary-pair distances of \(G_{i}\) and then repair \(L_{i}\), thus achieving efficient query processing.
**Index Construction.** There are five steps for index construction; the first three are identical to _No-Boundary_, followed by two additional post-processing steps (Figure 2-(b), yellow).
_Step 1-3:_ Same with _No-Boundary_ (see Section III-B).
_Step 4:_ Compute \(d_{\tilde{L}}(b_{i1},b_{i2})\) by \(\tilde{L}\) for each partition \(i\), and insert shortcuts \(e(b_{i1},b_{i2})=d_{\tilde{L}}(b_{i1},b_{i2})\) into \(G_{i}\) to get \(G^{\prime}_{i}\);
_Step 5:_ Fix \(L_{i}\) using the updated partition \(G^{\prime}_{i}\), denoted as \(L^{\prime}_{i}\).
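A minimal sketch of Steps 4-5 is given below, again assuming hypothetical index/graph wrappers (`insert_shortcut`, `dist`, `refit`); the helper names are illustrative and not an existing API.

```python
import copy

def post_boundary_fix(G, L, L_tilde, boundary, part):
    """Steps 4-5 of Post-Boundary: patch each partition with the true boundary-pair
    distances taken from L~ (Theorem 1), then refit the partition labels."""
    G_prime, L_prime = {}, {}
    for i in G:
        G_prime[i] = copy.deepcopy(G[i])
        B_i = sorted(b for b in boundary if part[b] == i)
        for p in range(len(B_i)):
            for q in range(p + 1, len(B_i)):
                b1, b2 = B_i[p], B_i[q]
                # Step 4: insert e(b1, b2) = d_L~(b1, b2) into the partition copy.
                G_prime[i].insert_shortcut((b1, b2), L_tilde.dist(b1, b2))
        # Step 5: repair the partition index on the patched partition G'_i.
        L_prime[i] = L[i].refit(G_prime[i])
    return G_prime, L_prime
```

The key design choice is that the partition copies \(G^{\prime}_{i}\) are patched with distances taken from \(\tilde{L}\), whose boundary-pair correctness is guaranteed by Theorem 1, while the originals \(G_{i},L_{i}\) are kept untouched for later updates.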
**Query Processing.** Since \(L^{\prime}_{i}\) is correct, the query processing of _Post-Boundary_ is the same as the _Pre-Boundary_ by using \(\tilde{L}\) together with \(\{L^{\prime}_{i}\}\).
**Index Update.** It is similar to _No-Boundary_, with an additional judgment and processing as shown in Figure 2-(b).
_Scenario 1: Intra-edge weight change_. Suppose \(e\in E_{i}\) changes. We update \(G_{i},L_{i}\), and then update \(\tilde{G},\tilde{L}\) if any \(d_{L_{i}}(b_{i1},b_{i2})\) changes. After that, we may need to further update \(\{G^{\prime}_{i}\},\{L^{\prime}_{i}\}\) if \(d_{G^{\prime}_{i}}(b_{i1},b_{i2})\) and \(d_{\tilde{L}}(b_{i1},b_{i2})\) differ.
_Scenario 2: Inter-edge weight change._ Suppose \(e\in E_{inter}\) changes, we update \(\tilde{G},\tilde{L}\) and then update \(\{G^{\prime}_{i}\},\{L^{\prime}_{i}\}\).
**Lemma 5**.: _The indexes of Post-Boundary Strategy can be correctly maintained with the above update strategy._
Proof.: Firstly, we prove the necessity of keeping both \(\{G_{i}\},\{L_{i}\}\) and \(\{G^{\prime}_{i}\},\{L^{\prime}_{i}\}\). As in the _Pre-Boundary Strategy_, the boundary shortcuts in \(\{G^{\prime}_{i}\}\) would keep their old, smaller values, so the index could not be correctly updated from them alone, as explained in Lemma 1. Keeping \(\{G_{i}\},\{L_{i}\}\) thus allows us to update \(\tilde{L},\{L_{i}\}\) correctly, as proved in Lemma 4. Then, following the _No-Boundary Strategy_ update, we recompute the shortest distances between all boundary pairs leveraging \(\tilde{L}\), compare them with the values in \(L^{\prime}_{i}\), and update \(G^{\prime}_{i},L^{\prime}_{i}\) where they differ.
Therefore, the update time complexity is the sum of the costs of the above procedures: \(max\{O_{u}(G_{i})\}+O_{u}(\tilde{G})+O_{q}(\tilde{G})+max\{O_{u}(G_{i})\}\). The time complexities of the three _partitioned index strategies_ are summarized in Table I.
As shown in the index construction part of Figure 2, _Pre-Boundary_ precomputes the all-pair shortest distances among the boundary vertices of each partition, and thus the newly added shortcuts of \(G_{i}\) and \(\tilde{G}\) (red edges) have correct edge weights. For example, the edge weight of \(e(v_{5},v_{12})\) is 6, which is the length of the shortest path \(p_{G}(v_{5},v_{12})=\langle v_{5},v_{3},v_{10},v_{12}\rangle\). By contrast, _No-Boundary_ and _Post-Boundary_ only leverage \(L_{i}\) to construct the shortcuts for the overlay graph \(\tilde{G}\), and thus the edge weight of shortcut \(e(v_{5},v_{12})\) is 7, which preserves the distance of the path \(p_{G_{3}}(v_{5},v_{12})=\langle v_{5},v_{4},v_{12}\rangle\). Moreover, _Post-Boundary_ utilizes \(\tilde{L}\) to obtain the shortest distances of the boundary pairs of each partition and inserts the corresponding shortcuts to get \(G^{\prime}_{i}\) (_e.g.,_ insert \(e(v_{5},v_{12})=6\) into \(G^{\prime}_{3}\)). Therefore, _Post-Boundary_ has the same efficiency as _Pre-Boundary_ in processing query pairs within the same partition, which is faster than _No-Boundary_.
_Balance Discussion:_ There exists a trade-off in index performance among the three variants mentioned above. In terms of query processing, _No-Boundary_ is slower than both _Pre-Boundary_ and _Post-Boundary_, since boundary vertices and path concatenation must be considered even for _OD_ pairs within one partition. In terms of index update, _No-Boundary_ and _Post-Boundary_ are faster since the all-pair boundary distance computation can be skipped. In terms of index size, _Post-Boundary_ needs twice the storage space of _No-Boundary_ and _Pre-Boundary_, even though it has advantages in both query processing and index update. Therefore, no single strategy is better than the others in all aspects, and the choice should be based on the application scenario.
### _Pruned-Boundary Optimization_
As analyzed previously, the density of \(\tilde{G}\) increases dramatically because all boundary-vertex pairs in each partition are connected during construction. Moreover, high density hurts index performance [8] by slowing down index construction, query processing, and index update. Is it then possible to decrease the density of \(\tilde{G}\) by deleting some edges in \(\{B_{i}\times B_{i}\}\)? As proved in Theorem 1, the shortest distance between boundary vertices is preserved in \(\tilde{G}\); could it still be preserved if some of these edges are removed? In the following, we start by classifying boundary vertices according to their connectivity:
**Definition 4** (Half/Full-Connected Boundary Vertex).: _In \(G_{i}\), \(b_{p}\in B_{i}\) is a half-connected boundary vertex if \(\exists u\in N(b_{p}),u\in G_{i},u\notin B_{i}\) and we denote them as \(B_{i}^{H}\); otherwise, \(b_{p}\) is a full-connected boundary vertex and denoted as \(B_{i}^{F}\)._
Intuitively, \(\forall b_{p}\in B_{i}^{F}\), the first edge on its shortest path to any other boundary vertex in \(B_{i}\) leads to another boundary vertex according to Definition 4. Thus the edges between \(b_{p}\) and its neighbors suffice to preserve \(b_{p}\)'s connections to the other boundary vertices \(B_{i}\backslash b_{p}\). Whereas for \(b_{q}\in B_{i}^{H}\), it is insufficient to only include the edges between \(b_{q}\) and its boundary neighbors in \(\tilde{G}\), since its shortest path to other boundary vertices could also pass through its non-boundary neighbors. Therefore, we propose to shrink \(\tilde{G}\) to \(\tilde{G}^{\prime}\) by keeping the edges between half-connected boundary vertices \(B_{i}^{H}\times B_{i}^{H}\) and the adjacent edges of full-connected boundary vertices.
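The classification of Definition 4 and the resulting pruned edge set can be sketched as follows; the adjacency-dict representation, the `d_Li` stand-in for the partition index, and the toy numbers are assumptions made only for illustration.

```python
def classify_and_prune(G_i, B_i, d_Li):
    """Classify boundary vertices of partition i (Definition 4) and return the pruned
    within-partition overlay edges. G_i is an adjacency dict {v: {u: w}}, B_i the boundary
    set, and d_Li(a, b) a stand-in for the partition-index distance of L_i."""
    half = {b for b in B_i if any(u not in B_i for u in G_i[b])}   # B_i^H
    full = set(B_i) - half                                         # B_i^F
    kept = {}
    for b1 in half:                               # B_i^H x B_i^H keeps index distances
        for b2 in half:
            if b1 < b2:
                kept[(b1, b2)] = d_Li(b1, b2)
    for b in full:                                # full-connected: only its adjacent edges
        for u, w in G_i[b].items():
            kept[tuple(sorted((b, u)))] = w       # u is a boundary vertex by Definition 4
    return half, full, kept

# Toy partition: vertex 1 is interior, so its boundary neighbours 0 and 2 are half-connected.
G_i = {0: {1: 1, 3: 2}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {0: 2, 2: 1}}
d_Li = lambda a, b: {(0, 2): 2, (0, 3): 2, (2, 3): 1}[tuple(sorted((a, b)))]
print(classify_and_prune(G_i, {0, 2, 3}, d_Li))
```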
**Lemma 6**.: \(d_{G_{i}}(b_{s},b_{t})=d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\)_, \(\forall(b_{s},b_{t})\in B_{i}\times B_{i}\)._
Proof.: Since \(d_{G_{i}}(b_{s},b_{t})=d_{\tilde{G}_{i}}(b_{s},b_{t})\) holds with \(e_{\tilde{G}_{i}}(b_{s},b_{t})=d_{G_{i}}(b_{s},b_{t})\) and \(E_{\tilde{G}_{i}^{\prime}}\subset E_{\tilde{G}_{i}}\), it must be that \(d_{\tilde{G}_{i}}(b_{s},b_{t})\leq d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\), that is, \(d_{G_{i}}(b_{s},b_{t})\leq d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\). Our next step is to prove that \(d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\leq d_{G_{i}}(b_{s},b_{t})\). We categorize the combinations of \(b_{s}\) and \(b_{t}\) into three types:
Case 1: \(b_{s},b_{t}\in B_{i}^{H}\). Since \(e_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})=d_{G_{i}}(b_{s},b_{t})\) with \((b_{s},b_{t})\in\tilde{G_{i}^{\prime}}\), it is clear that \(d_{G_{i}}(b_{s},b_{t})=d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\).
Before analyzing Case 2 and Case 3, we suppose that the concise form of \(p_{G_{i}}(b_{s},b_{t})\), taking only the boundary vertices, is \(\langle b_{s},b_{0},\ldots,b_{k},b_{t}\rangle\) (\(k\geq 0\)).
Case 2: \(b_{s}\in B_{i}^{F},b_{t}\in B_{i}^{H}\). In the case of \(k=0\), it means that \(b_{s}\in N(b_{t})\) and \(d_{G_{i}}(b_{s},b_{t})=e_{G_{i}}(b_{s},b_{t})\). Since \((b_{s},b_{t})\in\tilde{G}_{i}^{\prime}\) with \(e_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})=e_{G_{i}}(b_{s},b_{t})\), it follows that \(d_{G_{i}}(b_{s},b_{t})=d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\). In the case of \(k>0\), we suppose that the first half-connected boundary vertex on \(p_{G_{i}}(b_{s},b_{t})\) is \(b_{l}\), that is, \(b_{j}\in B_{i}^{F}\) for \(0\leq j<l\) and \(b_{l}\in B_{i}^{H}\). Since every vertex before \(b_{l}\) is full-connected, consecutive boundary vertices on this prefix are adjacent, and the shortest distance can be accumulated as \(d_{G_{i}}(b_{s},b_{t})=e_{G_{i}}(b_{s},b_{0})+\sum_{j=1}^{l}e_{G_{i}}(b_{j-1},b_{j})+d_{G_{i}}(b_{l},b_{t})\). As \(b_{s}\in B_{i}^{F}\) and \(b_{j}\in B_{i}^{F}\) for \(0\leq j<l\), each of these edges is kept in \(\tilde{G}_{i}^{\prime}\), i.e., \(e_{G_{i}}(b_{j-1},b_{j})=e_{\tilde{G}_{i}^{\prime}}(b_{j-1},b_{j})\), according to Definition 4. And \(d_{G_{i}}(b_{l},b_{t})=d_{\tilde{G}_{i}^{\prime}}(b_{l},b_{t})\) by referring to Case 1 above, as \(b_{l},b_{t}\in B_{i}^{H}\). So \(d_{G_{i}}(b_{s},b_{t})=e_{\tilde{G}_{i}^{\prime}}(b_{s},b_{0})+\sum_{j=1}^{l}e_{\tilde{G}_{i}^{\prime}}(b_{j-1},b_{j})+d_{\tilde{G}_{i}^{\prime}}(b_{l},b_{t})\) holds, which indicates that \(d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\leq d_{G_{i}}(b_{s},b_{t})\).
Case 3: \(b_{s},b_{t}\in B_{i}^{F}\). Similarly, we suppose that the first half-connected vertex on \(p_{G_{i}}(b_{s},b_{t})\) is \(b_{l}\). Then \(d_{G_{i}}(b_{s},b_{t})=d_{G_{i}}(b_{s},b_{l})+d_{G_{i}}(b_{l},b_{t})\geq d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{l})+d_{\tilde{G}_{i}^{\prime}}(b_{l},b_{t})\geq d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\) by referring to Case 2 above, as \(b_{s}\in B_{i}^{F},b_{l}\in B_{i}^{H},b_{t}\in B_{i}^{F}\). So it also holds that \(d_{\tilde{G}_{i}^{\prime}}(b_{s},b_{t})\leq d_{G_{i}}(b_{s},b_{t})\).
By summarizing the above three cases, \(d_{G_{i}}(b_{s},b_{t})=d_{\tilde{G_{i}^{\prime}}}(b_{s},b_{t})\), and the \(\tilde{L}^{\prime}\) is also correct.
Therefore, Lemma 6 shows that our index correctness can still be guaranteed after _pruned-boundary optimization_.
## IV Partitioned SP Index Scheme
Having figured out how to deal with boundaries for better index performance (efficient query or quick update), we now turn to the other critical components of a _PSP index_: the partition method and the _SP_ index. Among them, the _SP_ index is well studied and was analyzed in Section II-C. This leaves a problem: how should we choose a partition method, and which partition method contributes to our preferred performance? This is not trivial because: 1) dozens of partition methods (as seen in Section II-B) with different characteristics have been proposed and applied in the past decades; 2) they are classified under different criteria, and it is not clear which criterion benefits the _SP_ index; 3) their relationship with (or effect on) the _SP_ index is unknown and has never been studied. Motivated by these questions, we first identify the partition equivalence under existing classification criteria and propose a novel, SP-oriented categorization of partition methods. Then we establish the _PSP index scheme_ by coupling the _partitioned index strategy_, _partition method_, and _SP index_, and analyze the index complexity under different partition categories.
### _Partition Equivalence_
According to [48], the existing partition algorithms can be classified from the perspectives of _partition manner_ (spectral, flow, graph growing, contraction and multi-level), _partition objective_ (balance and minimal cut), _computation manner_ (in-memory, distributed and streaming), and _cut category_ (edge-cut and vertex-cut), etc. Among them, the ones with _minimal cut_ as the objective are promising. However, they can be further categorized into _edge-cut_ and _vertex-cut_.
We generalize _vertex-cut_ to _edge-cut_ by duplicating the vertices that are cut into different partitions and connecting each duplicate to its original vertex through an edge of zero weight. Suppose in a graph \(G\), a vertex \(x\) is cut into \(l+1\) different partitions \(\{G_{0},G_{1},\ldots,G_{l}\}\) by a _vertex-cut partition_, with \(x\) together with its neighbors \(\{x_{n_{i}}\}\) (\(0\leq i\leq l\)) belonging to subgraph \(G_{i}\). To obtain its equivalent _edge-cut partition_, we transform \(G\) to \(G^{\prime}\): \(\forall\ x\in X\) (the _cut vertex_ set), keep the connection between \(x\) and its partial neighbors \(\{x_{n_{0}}\}\), duplicate \(x\) as \(x_{i}\) connected to the neighbors \(x_{n_{i}}\) (\(1\leq i\leq l\)), and connect \(x\) with each \(x_{i}\) by an edge with \(e(x,x_{i})=0\). Then we partition \(G^{\prime}\) with an _edge-cut partition_ that cuts each added edge \((x,x_{i})\), which leads to the following lemma:
**Lemma 7**.: _The edge-cut partition of \(G^{\prime}\) by cutting those added edges \((x,x_{i})\) is equivalent to the vertex-cut partition of \(G\) by cutting the vertices in \(X\)._
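The transformation behind Lemma 7 can be sketched as follows; the adjacency-dict representation, the naming scheme for duplicated vertices, and the toy example are illustrative assumptions.

```python
def vertex_cut_to_edge_cut(G, cut_assign):
    """Build G' from G by duplicating each cut vertex per extra partition and linking the
    copies to the original with zero-weight edges; also return the edges to cut.

    G: adjacency dict {v: {u: w}}; cut_assign: {x: {partition_id: set_of_neighbors}}."""
    Gp = {v: dict(nbrs) for v, nbrs in G.items()}
    added = []
    for x, groups in cut_assign.items():
        for k, pid in enumerate(sorted(groups)[1:], start=1):   # keep the first group on x itself
            xi = f"{x}#{k}"
            Gp[xi] = {}
            for u in groups[pid]:                                # move these neighbors to the copy
                w = Gp[x].pop(u)
                Gp[u].pop(x)
                Gp[xi][u] = w
                Gp[u][xi] = w
            Gp[x][xi] = Gp[xi][x] = 0                            # zero-weight edge to the original
            added.append((x, xi))
    return Gp, added

# Example: x is shared by partitions {a, b} and {c}; the duplicate x#1 takes neighbor c.
G = {"x": {"a": 1, "b": 2, "c": 3}, "a": {"x": 1}, "b": {"x": 2}, "c": {"x": 3}}
Gp, cut_edges = vertex_cut_to_edge_cut(G, {"x": {0: {"a", "b"}, 1: {"c"}}})
print(cut_edges, Gp["x"], Gp["x#1"])
```

Cutting exactly the returned zero-weight edges reproduces the original vertex-cut, and since the added edges have weight zero, path lengths through the duplicated vertices are unchanged, which is the content of Theorem 2 below.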
Next, we verify that this partition equivalence has no effect on shortest distance computation.
**Theorem 2**.: \(\forall s,t\in V,d_{G}(s,t)=d_{G^{\prime}}(s,t)\)_._
Proof.: We classify all the scenarios into two cases:
Case 1: \(p_{G}(s,t)\) (\(p_{s,t}\) for short) does not pass through any \(x\in X\). It indicates that \(p_{s,t}\) is totally within one partition \(G_{i}(1\leq i\leq k)\). Since the vertex-cut partition of \(G\) and the edge-cut partition of \(G^{\prime}\) are equivalent, \(d_{G_{i}}(s,t)=d_{G}(s,t)=d_{G^{\prime}}(s,t)\) holds.
Case 2: \(p_{s,t}\) passes through some \(x\in X\). We suppose that \(p_{s,t}=\langle s,\ldots,x_{f},x,x_{b},\ldots,t\rangle\) with \(x_{f}\in G_{i},x_{b}\in G_{j}\); then it corresponds to \(p^{\prime}_{s,t}=\langle s,\ldots,x_{f},x_{i},x,x_{j},x_{b},\ldots,t\rangle\) in \(G^{\prime}\). Since \(e(x_{i},x)=e(x,x_{j})=0\), \(l(p_{s,t})=l(p^{\prime}_{s,t})\) holds, which proves that \(d_{G}(s,t)=d_{G^{\prime}}(s,t)\) in this case as well.
Even though they are equivalent, their influence on the partitioned index is different in the following two aspects:
1. _Boundary Inter-Connection:_ Firstly, within partitions, they have the same complexity even though the vertex-cut's partition sizes are slightly larger due to the sharing boundaries. However, in _Pre-Boundary_ and _Post-Boundary_ of the overlay graph, the boundaries of the edge-cut are only fully connected to vertices in one partition, while the vertex-cuts' boundaries are fully connected to multiple partitions, which makes the graph much denser. Further, due to the equivalence between _PLL_ and _tree-decomposition_[26], we can use the decomposed tree to estimate the label size. Specifically, due to the ancestor-descendant relations between a vertex and its neighbors (also the ones connected through shortcuts), they will appear on a single branch of the tree. When we use edge-cut, the boundaries can be organized in the unit of partitions and reduce the tree height effectively [17]. However, when we use the vertex cut, the larger neighbor number first prolongs the branch length. What's worse, the _one-boundary-multiple-partition_ structure prohibits partition-oriented processing. Therefore, the vertex-cut's label size is more likely to reach the worst-case label size \(O(|B|^{2})\), where the tree decays to a stick.
2. _Boundary Vertex Number:_ The second and most important issue is the huge boundary vertex number of the vertex-cut. As discussed previously, the overlay graph construction requires \(B_{i}\times B_{i}\) new edges, and the cross-partition query requires \(B_{i}\times B_{j}\) combinations. If a vertex-cut creates 10k boundary vertices for each partition, which is common, then these two numbers grow to the scale of billions and dramatically deteriorate efficiency.
### _SP-Oriented Partitioning Classification_
Based on how network structures affect the partitioned shortest path index, we propose a new classification of graph partition methods, from the perspective of the shortest distance application, with three categories: 1) _Planar_ (_spectral partitioning_[37, 46], _graph growing_[49], _flow-based partitioning_[34, 35], _node-swapping_[62, 63]), 2) _Core-Periphery_ (_core-tree_[6, 79, 83], _sketch_[30, 84]), and 3) _Hierarchical_ (_HiTi_[41], _SHARC_[24], _G-Tree_[44], _SMOG_[71]). Firstly, _Planar_ treats partitions equally on one level:
**Definition 5**.: _(Planar Partitioning). Planar partitioning decomposes a graph \(G\) into multiple equally-important subgraphs \(\{G_{i}|1\leq i\leq k\}(k\in[2,\infty])\) with 1) \(\bigcup_{i\in[1,k]}V(G_{i})=V\), \(V(G_{i})\cap V(G_{j})=\emptyset\), or 2) \(\bigcup_{i\in[1,k]}E(G_{i})=E\), \(E(G_{i})\cap E(G_{j})=\emptyset,\forall i,j\in[1,k],i\neq j\)._
The second _Core-Periphery Partition_[85, 86, 87, 88] treats the partitions discriminatively by taking some important vertices as "Core" and the remaining ones as "Peripheries":
**Definition 6**.: _(Core-Periphery Partitioning). Core-periphery partitioning decomposes a graph \(G\) into two distinct parts: a core subgraph \(G_{c}\) and \(k-1\) peripheral subgraphs \(\{G_{i}|1\leq i\leq k-1\}(k\in[2,\infty])\), satisfying \(\bigcup_{i\in[1,k-1]}V(G_{i})\cup V(G_{c})=V,V(G_{c})\cap V(G_{i})=\emptyset( \forall 1\leq i\leq k-1)\) and \(V(G_{i})\cap V(G_{j})=\emptyset(\forall i,j\in[1,k-1],i\neq j)\)._
There are two big streams depending on how the core is formed: 1) _Core-Tree Decomposition_[6, 79, 83] forms the core through tree decomposition [89], and the resulting periphery part is a set of small-width trees. Specifically, it leverages _minimum degree elimination_[90] to contract vertices of lower degree first such that the graph is contracted from the edge towards the center, generating a set of growing trees ("Peripheries") around a shrinking while denser graph ("Core") formed by those non-contracted vertices. The contraction terminates when the width of one tree reaches the previously set threshold; 2) _Sketch_[91, 92, 93, 84, 94, 95, 96, 97, 98, 99].
Fig. 4: Partition Results on Example Graph \(G\)
selects a set of vertices as _landmarks_, which can be regarded as the core and treats the remaining parts as periphery. It generally works on extremely large graphs where the above tree decomposition is impossible. The landmarks are used for distance estimation (precomputing landmarks to peripheries) or highways (precomputing index between landmarks).
The third _Hierarchical Partitioning_ organizes the network partitions hierarchically and each level is equivalent to _Planar Partitioning_ (_Core-Periphery_ can also be organized hierarchically [97] but it is essentially organizing the landmarks).
**Definition 7**.: _(Hierarchical Partitioning). Hierarchical partitioning organizes a graph \(G\) hierarchically within \(H\) levels, where each level \(h\) is a planar partitioning \(\{G_{i}^{h}\}\) of \(G\) and \(\exists G_{j}^{h-1}\supset G_{i}^{h},\forall G_{i}^{h}(1\leq h<H)\)._
To simplify the presentation, Figure 4 shows the partition results of an example graph \(G\) with respect to different partitioning methods, where the red vertices represent the corresponding boundary vertices. Specifically, _planar partitioning_ generates four partitions with equivalent vertex size while _core-periphery partitioning_ produces both "Core" and "Periphery" (Note that the periphery of Core-Tree decomposition is a set of small-width trees while Sketch treats the non-core part as periphery. The yellow vertex in each periphery is the "root vertex"). _Hierarchical partitioning_ organizes partitions hierarchically, and the leaf nodes (the lowest level partition result) may have the same partition result as planar partitioning if the same partitioning method is used.
As analyzed in Section III, the boundary number of each partition is crucial to _PSP index_. Therefore, we propose the following remark to clarify the relations of these partition methods and how they influence path-finding.
_Remark: Boundary Constraint on Partition Choice_. The planar and hierarchical partitions place no limit on the boundary number even though reducing the cut size is one of their optimization goals, so the path index may suffer from large boundary numbers. On the other hand, core-tree decomposition limits the boundary number to be no larger than the pre-defined bandwidth, so its performance is not degraded by boundaries if we set the bandwidth wisely. Therefore, the planar and hierarchical partitions are better suited to small-treewidth networks, where balanced partitions and few boundaries can be achieved together, while core-periphery is suitable for large-treewidth networks as it deliberately limits the boundary vertex number by the bandwidth.
### _Coupling Partition Strategy, Index, and Partition Structure_
After identifying the three dimensions of partitioned shortest path indexes (_partitioned index strategy_, _path index_, _partition structure_), we are now ready to couple them together to assemble various indexes for different scenarios. We organize and illustrate them in Figure 5-(a). Specifically, we order the strategies and indexes based on their query and update efficiency (other factors like construction time and space consumption are omitted), so we can obtain a _PSP index_ by choosing one from each category and get a rough idea of its performance. It should be noted that all the existing _PSP indexes_ can find their position in it, as discussed in Section VI, and most of them are query-oriented, while a vast space of combinations remains for generating "new" indexes. Although we could enumerate them all, in the following we choose to introduce representative ones, from the query-oriented and update-oriented perspectives for each partition structure, as examples of how to couple the strategies and indexes. Although they are all new structures proposed in this work, we only analyze their components and performance and do not present the details due to the page limit.
### _Coupled Planar Partition Index_
Given a planar partition, the partition strategies can apply directly to it. Next, we introduce how to construct the _PSP_ indexes that are efficient for query or index update.
#### V-D1 Query-Oriented Planar SP Index
Firstly in terms of partitioned index strategy, _Pre-Boundary_ and _Post-Boundary_ are efficient in query answering, with all the existing works relying on the _Pre-Boundary_. Since our proposed _Post-Boundary_ is faster than _Pre-Boundary_ in index construction, we used it as the partitioned index strategy. Secondly, in terms of path index, we could choose _TD_ as the index for both partitions and overlay graph. Although _all-pair_ is faster, its space consumption is intolerable. Its structure is shown in Figure 5-(b), and we elaborate its procedures below:
_Construction._ It takes \(O(V_{max}\cdot(\log V_{max}+h_{max}\cdot w_{max}))\) time for the partition _TD_, \(O(V_{\tilde{G}}(\log V_{\tilde{G}}+h_{\tilde{G}}\cdot w_{\tilde{G}}))\) for the overlay _TD_, \(O(B_{max}^{2}w_{\tilde{G}})\) for boundary correction, and \(O(w_{\tilde{G}}+w_{max}^{2}\cdot\delta)\) for the partition _TD_ update, where \(\delta\) is the number of affected shortcuts and the subscript \(max\) denotes the corresponding maximum value over the partitions.
_Query._ The intra-query takes \(O(w_{max})\) as the partition index is correct, and the inter-query takes \(O(max\{B_{max}w_{max},B_{max}^{2}w_{\tilde{G}}\})\) for the partition and overlay queries and \(O(B_{max}^{2})\) for the combinations.
_Update._ It takes \(O(w_{max}^{2}\delta)\) to update partition _TD_, \(O(w_{\tilde{G}}^{2}\delta)\) to update overlay _TD_, and \(O(B_{max}^{2}w_{\tilde{G}})\) to check boundaries.
#### V-D2 Update-Oriented Planar SP Index
In terms of the partitioned index strategy, we use _No-Boundary_ as it requires the least effort to update. In terms of indexes, we can choose
Fig. 5: Dimensions of _PSP_ Index Scheme and Representative Coupling
_CH_ as the underlying index as it is fast to update while the query processing is better than direct search (Figure 5-(c)).
_Construction._ It is faster with \(O(V_{max}\cdot w_{max}^{2}\cdot\log V_{max})\) for partition _CH_ and \(O(V_{\tilde{G}}\cdot w_{\tilde{G}}^{2}\cdot\log V_{\tilde{G}})\) for overlay _CH_.
_Query._ The query time is longer, with \(O(max\{B_{max}\cdot w_{max}\log V_{max},\,B_{max}^{2}\cdot w_{\tilde{G}}\log V_{\tilde{G}}\})\) for the intra-partition and overlay searching, and \(O(B_{max}^{2})\) for the combinations.
_Update._ It is faster with \(O(\delta w_{max})\) for partition _CH_ maintenance and \(O(\delta w_{\tilde{G}})\) for overlay _CH_ maintenance.
### _Coupled Core-Periphery Partition Index_
The core-periphery partition index comprises the core index \(L_{c}\) and the periphery index \(\{L_{i}\}\). It seems that \(L_{c}\) and \(L_{i}\) can be constructed by their corresponding subgraphs \(G_{c}\) and \(G_{i}\), respectively. Although the core does not belong to any partition, they are connected to the partitions, and we treat the core as the overlay graph. As for the two kinds, _core-tree_ is suitable for general-size high-degree networks where indexes are still constructible, while _sketch_ is used on networks with billions of vertices. Different from the previous planar partition, the core here usually has a large degree so its index is limited to _PLL_. As for sketch, we omit it here because its _PLL_ core + pruned direct search seems to be the only solution for huge networks. Next, we discuss the remaining parts.
#### Iv-E1 Query-Oriented Core-Tree SP Index
For the strategy, the query-oriented still uses the _Post-Boundary_ as the queries within the periphery can be handled without the core index. The periphery index uses _TD_ because the periphery usually has a small degree. This structure is shown in Figure 5-(d).
_Construction._ Because periphery is constructed through contraction, we regard it as a by-product of the partition phase and do not construct their labels in the first step. Then it takes \(O(w_{c}E_{c}\log V_{c}+w_{c}^{2}V_{c}\log^{3}V_{c})\) for the core _PLL_, \(O(B_{max}^{2}w_{c}\log V_{c})\) for boundary correction, and \(O(V_{max}\cdot w_{max}^{2}\cdot\log V_{max})\) for periphery label.
_Query._ The intra-query takes \(O(w_{max})\) time. The inter-query takes \(O(max\{B_{max}w_{max},B_{max}^{2}w_{c}\log V_{c}\})\) for the periphery and core, and \(O(B_{max}^{2})\) for the combinations.
_Update._ It takes \(O(w_{max}^{2}\cdot\delta)\) for the periphery _TD_, \(O(w_{c}E_{c}\log V_{c})\) for the core _PLL_, and \(O(B_{max}^{2}w_{c}\log V_{c})\) for boundary correction.
#### Iv-E2 Update-Oriented Core-Tree SP Index
As shown in Figure 5-(e), _No-Boundary_ is still used as the boundary strategy, while _CH_ is used for faster periphery update.
_Construction._ Only core needs \(O(w_{c}E_{c}\log V_{c}+w_{c}^{2}V_{c}\log^{3}V_{c})\) time.
_Query._ The query time is longer with \(O(max\{B_{max}w_{max}\log V_{max},\,B_{max}^{2}w_{c}\log V_{c}\})\) for intra and core, and \(O(B_{max}^{2})\) for combinations.
_Update._ It is faster with \(O(\delta w_{max})\) for periphery shortcuts and \(O(w_{c}E_{c}\log V_{c})\) for the core _PLL_.
### _Coupled Hierarchical Partition Index_
This category organizes \(L\) levels of partitions hierarchically with several lower partitions forming a larger partition on the higher level. We use \(L_{i}^{l}\) to denote the index of partition \(G_{i}^{l}\) on level \(l\). Different from the state-of-the-art _G-Tree_ stream of indexes which uses all-pair in their hierarchical overlay graph, we replace it with the hierarchical labels for better query and update performance. Specifically, for vertices in each layer, we store their distance to vertices in their upper layers. As this is essentially 2-hop labeling, we use _TD_ to implement it with orderings corresponding to the boundary vertex hierarchy. Such a replacement in the overlay index could answer queries and update much faster than the original dynamic programming-based layer all-pair index.
#### Iv-E1 Update / Query-Oriented Hierarchical Index
As for the partitions, this structure tends to generate partitions of small size, so we inherit the original search for fast query processing. Consequently, having no partition index makes boundary all-pair searches inevitable. Fortunately, our _No-Boundary_ restricts the search space to the small partition, in contrast to _G-Tree_'s whole-graph _Pre-Boundary_ (Figure 5-(f) and (g)).
_Construction._ The partition boundary all-pair computation takes \(O(B_{max}\cdot V_{max}(\log V_{max}+E_{max}))\), and its by-product boundary-to-partition distances can be cached for faster inter-queries. Then the overlay _TD_ takes \(O(V_{\tilde{G}}(\log V_{\tilde{G}}+h_{\tilde{G}}w_{\tilde{G}}))\) time.
_Query._ The intra-query takes \(O(B_{max}V_{max}(\log V_{max}+E_{max}))\) for the direct search (very rare as the partitions are small), while the inter-query takes \(O(B_{max}w_{\tilde{G}})\) for the intra-partition part (constant time with the cache) and the hierarchical query, and \(O(B_{max}^{2})\) for the combinations.
_Update._ It takes \(O(B_{max}\cdot V_{max}(logV_{max}+E_{max}))\) to update the partition all-pairs and \(O(w_{\tilde{G}}^{2}\cdot\delta)\) to update the overlay graph.
## V Experimental Evaluation
In this section, we evaluate the proposed methods. All the algorithms are implemented in C++ with full optimization on a server with 4 Xeon Gold 6248 2.6GHz CPUs (total 80 cores / 160 threads) and 1.5TB memory.
### _Experimental Settings_
**Datasets and Queries.** We test on 3 weighted road networks and 3 complex networks (Table II). In particular, we follow
[10] to generate the edge weights for unweighted complex networks in a manner that is inversely proportional to the highest degree of the endpoints. We randomly generate 10,000 queries and 1,000 update instances for each dataset to assess the query processing and index maintenance efficiency, respectively.
**Algorithms.** We compare with three state-of-the-art baselines from the three partition structures: 1) _Forest Label (FHL)_[16, 17] belonging to _Pre-Boundary + TD/TD + Planar_, 2) _Core-Tree (CT)_[6, 79] belonging to _No-Boundary + TD/PLL_ + Core, and 3) _G-Tree_[43, 45] belonging to _Pre-Boundary_ + All-Pair + Hierarchy. We implement the five new couplings (Section IV) and name them: _Q-Planar_, _U-Planar_, _Q-Core_, _U-Core_, and _UQ-Hier_, with \(Q\) and \(U\) denoting the Query- and Update-orient. We also implement the partitioned index for TD, CH, and PLL (denoted as _P-TD_, _P-CH_, and _P-PLL_). The default thread number of all methods is set to 150.
**Parameter Setting.** According to our preliminary results, we set the partition number to \(64\) for the planar methods, while the bandwidth of the core-periphery methods is set to \(20\). As for the hierarchical index, we follow the setting of _G-Tree_, setting the fan-out \(f\) to \(4\) and the maximum leaf node size \(\tau\) to 128 (_NY_), 256 (_FL_), and 512 (_W_). In addition, for each graph, we fix one vertex order for the methods with the same partition structure and apply it in all experiments to guarantee fairness.
**Performance Metrics.** We measure the performance of shortest path method from four aspects: index construction time \(t_{c}\), query time \(t_{q}\), update efficiency \(t_{u}\), and index size \(s\).
### _Performance Comparison_
**Exp 1: Effect of Partition Method.** We first test the five best partition methods for the planar index structure (_KaHyPar_[33], _PUNCH_[35], _METIS_[40], _HEP_[74] and _CLUGP_[68]). Because _P-TD_ requires each partition to be connected [27], only _PUNCH_ can be used for it. As shown in Figure 6, _PUNCH_ outperforms the others in query and update because it reduces the boundary number spatially, whilst the vertex-cuts (_HEP_ and _CLUGP_) perform worse because they generate many more boundary vertices. As evidenced by both the theoretical analysis and the experimental results, vertex-cut is worse than edge-cut for partition-based pathfinding. As for the hierarchical partitioning, we follow _G-Tree_ and use its integrated _METIS_.
**Exp 2: Effect of Partitioned Index Strategy.** We evaluate different partitioned index strategies on _NY_ and _FL_. As shown in Figure 7 (a)-(d), _No-Boundary_ has the lowest construction time (\(6\times\) speed-up for _P-TD_ and _P-CH_), the smallest index size, and the lowest update time (\(73\times\) and \(5420\times\) speed-up for _P-TD_ and _P-CH_), with slightly worse query time. _Post-Boundary_ has the same query time as _Pre-Boundary_, and a shorter construction and update time than _Pre-Boundary_ in most cases. We next compare the efficiency of different query types (the OD pair are _both boundaries (B-B)_, in the _same partition (S-P)_, in _different partitions (P-P)_, or _one boundary one partition (P-B)_) for the _No-Boundary_ and _Post-Boundary_ strategies. Figure 7 (e)-(f) shows that _Post-Boundary_ can greatly improve the S-P queries (\(37\times\) and \(19\times\) speed-up for _P-TD_) while it has almost the same performance as _No-Boundary_ for the other types. The performance gain comes from _Post-Boundary_'s correct shortcuts. Finally, we evaluate the effectiveness of the _Pruned-Boundary_ optimization by measuring the average vertex degree of the overlay graph. We use _PUNCH_ and _KaHyPar_ as the representatives of edge-cut on road networks and complex networks, respectively, and _HEP_ for vertex-cut. As shown in Table III, the degree decreases with _Pruned-Boundary_ in most cases. However, it cannot improve _PUNCH_ as _PUNCH_ already has a very good partition result.
**Exp 3: Comparison of Partitioned SP Index Methods.** We compare our proposed _PSP_ indexes with the state-of-the-art and the non-partitioned indexes in Figure 8. We first analyze the performance of our proposed indexes against their counterparts: 1) _Q-Planar_ has the same query performance and index size with _FHL_, but it is faster to construct and update (\(84\times\) in W); 2) _U-Planar_ is slower to query, but it is much faster to construct (\(8.6\times\) in FL) and update (more than \(1000\times\) in all) with smaller index size; 3) _UQ-Hier_ has faster construction, smaller index, faster query and update than _G-Tree_; 4) _Q-Core_ has longer construction and update time than _CT_, and it seems to have similar query performance. This is because the query improvement lies in the _S-P_ query type but not the others, and it is the number of _S-P_ query that determines the overall improvement; 5) _U-Core_ has faster construction time, smaller index and faster update than _CT_, but its query is slower; 6) _Sketch_ can scale up to very large graphs, but its query and update efficiency are slower than _Q-Core_ and _U-Core_. Note that _CT_, _Q-Core_ and _U-Core_ cannot finish computation on _WI_ due to insufficient memory, which is caused by "_Curse of Pruning Power_" of _PLL_[10] and the denser contracted core graph.
Secondly, we have the following observations: 1) Partition-based methods construct index faster with a smaller index, slower query, and longer update compared with their non-partitioned counterparts; 2) Road networks can achieve the best performance with _TD_ index since it is best for small treewidth networks; 3) Another suitable index for the road network is _CH_ as _P-CH_ could achieve faster query and update than that of _P-PLL_ in medium networks. It is because its partitioned query processing does not involve distance concatenation; 4) Complex networks work best under _core-periphery_ since the auxiliary information needed for index update could be easily out of memory under other partition
Fig. 6: Effect of Graph Partition Methods. Bar: Query, Ball: Update
methods; 5) _Sketch_ can be used widely in terms of both network type and size due to the small core-bounded partition.
**Exp 4: Influence on Complicated Path Problems (CSP).** To better demonstrate the power of our partition strategies, we conduct experiments on 2-dimensional constrained shortest paths by optimizing _FHL_[16]. Apart from NY, we also test on the Colorado (COL) network with 435,666 vertices and 1,057,066 edges. As shown in Table IV, the post-boundary strategy is the fastest to construct and has the smallest label size. Although the _no partition label_ is the fastest in query, it is slow to construct and has a huge index size. More importantly, it fails to construct in higher dimensional _CSP_ scenarios [31]. As for _Pre-Boundary_, it suffers from the slow boundary-pair skyline path search, which _Post-Boundary_ avoids.
## VI Related Work
We categorize the existing methods based on our scheme: **1) _Pre-Boundary+Search/All-Pair+Hierarchy/Planar_**: These methods (_HiTi_[42], _Graph Separators_, _Customizable Route Planning_[28][71], _ParDiSP_[22, 98]) pre-compute the shortest distances between boundaries (some hierarchically) to guide the search. In a broad sense, _Arc-flag_[23] and _SHARC_[24] also belong to this category. _G-tree_[43, 45] uses dynamic programming to compute each layer's all-pair distances to replace searching, and it is widely used in _kNN_[44], _ride-sharing_[99], _time-dependent routing_[100], and _machine learning-based_ path finding [101]. These methods are slow to construct due to _Pre-B_ and slow to query due to direct search; **2) _Pre-Boundary+Search/PLL+Planar_**: _COLA_[15] builds the labels for the _skyline shortest path_ on the overlay graph to answer the constrained shortest path query. This structure's construction suffers from _Pre-B_ and its query suffers from searching; **3) _Pre-Boundary+TD/TD+Planar_**: _FHL_[16, 17, 31] builds the _TD_ both within and between partitions for multi-dimensional skyline paths. As validated in Exp 4, our _Post-B_ can speed up _Pre-B_'s construction dramatically; **4) _Pre-Boundary+PLL/PLL+Planar_**: _T2Hop_[13, 14] utilizes two layers of _PLL_ to reduce the complexity of long-range time-dependent paths, and this structure's performance is limited by _PLL_; **5) _No-Boundary+PLL/TD+Core_**: _Core-Tree_[6, 79] is the baseline and we have discussed it; **6) _Pre-Boundary+Search/All-Pair/PLL+Sketch_**: This category works on huge graphs where an index is nearly impossible, so only a small number of landmarks are selected to either help prune the search [29, 30, 84] or approximate the result [92, 94].
## VII Conclusions
In this work, we decouple the partitioned shortest path indexes and propose a universal scheme with three dimensions: partitioned index strategy, path index, and partition structure. For partitioned index strategies, we propose two new strategies and pruned-boundary optimization for better index construction and update performance. For partition structure, we propose a new path-oriented classification and identify the factors influencing the _PSP index_ performance. We also provide index maintenance solutions for classic _PSP_ indexes. To demonstrate the usefulness of this scheme, we further recouple these dimensions and propose five new indexes that are either more efficient in query or update than the current state-of-the-arts.
Fig. 8: Shortest Path Index Comparison. (Bar: Query and Update Time; Ball: Construction Time and Index Size)
Fig. 7: Effect of Partitioned Index Strategy. (For (a)-(d), Bar: Construction Time and Query; Ball: Index Size and Update) |
2303.10564 | A Controlled Mean Field Model for Chiplet Population Dynamics | In micro-assembly applications, ensemble of chiplets immersed in a dielectric
fluid are steered using dielectrophoretic forces induced by an array of
electrode population. Generalizing the finite population deterministic models
proposed in prior works for individual chiplet position dynamics, we derive a
controlled mean field model for a continuum of chiplet population in the form
of a nonlocal, nonlinear partial differential equation. The proposed model
accounts for the stochastic forces as well as two different types of nonlocal
interactions, viz. chiplet-to-chiplet and chiplet-to-electrode interactions.
Both of these interactions are nonlinear functions of the electrode voltage
input. We prove that the deduced mean field evolution can be expressed as the
Wasserstein gradient flow of a Lyapunov-like energy functional. With respect to
this functional, the resulting dynamics is a gradient descent on the manifold
of joint population density functions with finite second moments that are
supported on the position coordinates. | Iman Nodozi, Abhishek Halder, Ion Matei | 2023-03-19T04:42:45Z | http://arxiv.org/abs/2303.10564v2 | # A Controlled Mean Field Model for Chiplet Population Dynamics
###### Abstract
In micro-assembly applications, ensemble of chiplets immersed in a dielectric fluid are steered using dielectrophoretic forces induced by an array of electrode population. Generalizing the finite population deterministic models proposed in prior works for individual chiplet position dynamics, we derive a controlled mean field model for a continuum of chiplet population in the form of a nonlocal, nonlinear partial differential equation. The proposed model accounts for the stochastic forces as well as two different types of nonlocal interactions, viz. chiplet-to-chiplet and chiplet-to-electrode interactions. Both of these interactions are nonlinear functions of the electrode voltage input. We prove that the deduced mean field evolution can be expressed as the Wasserstein gradient flow of a Lyapunov-like energy functional. With respect to this functional, the resulting dynamics is a gradient descent on the manifold of joint population density functions with finite second moments that are supported on the position coordinates.
## I Introduction
This work is motivated by micro-assembly applications, such as printer systems [1, 2] and manufacturing of photovoltaic solar cells, where an array of electrodes can be used to generate spatio-temporally non-homogeneous electric potential landscapes for dynamically assembling the "chiplets"-micron sized particles immersed in dielectric fluid-into desired patterns. In such applications, the electric potentials generated by the array of electrodes induce non-uniform dielectrophoretic forces on the chiplets, thereby resulting in a population-level chiplet dynamics. The purpose of the present work is to propose a controlled mean field model for the same.
There have been several works [3, 4, 5, 6, 7] on the modeling and dielectrophoretic control of chiplet population. However, a continuum limit macroscopic dynamics that accounts for both chiplet-to-chiplet and chiplet-to-electrode nonlocal interactions, as considered herein, has not appeared before.
The mean field limit pursued here involves considering the number of chiplets and electrodes as infinity, i.e., to think both of them as continuum population. There are two reasons why this could be of interest. _First,_ the continuum limit helps approximate and better understand the dynamics for large but finitely many chiplets and electrodes, which is indeed the situation in the engineering applications mentioned before. _Second_, the distributed control synthesis problem for large but finite population becomes computationally intractable, as noted in recent works [8, 6, 9]. A controlled mean field model opens up the possibility of designing a controller in the continuum limit with optimality guarantees. Such a controller can then be applied to a large but finite population with sub-optimality bounds. We clarify here that in this work, we only present the mean field model and its properties. We leave the control synthesis problem for our follow up work.
As in prior works such as [6], we consider the chiplet dynamics in two dimensional position coordinate. Specifically, let \(\mathbf{x}(t)\in\mathbb{R}^{2}\) denote the position vector of a chiplet at any fixed time \(t\in[0,\infty)\), and let
\[u:\mathbb{R}^{2}\times[0,\infty)\mapsto[u_{\min},u_{\max}]\subset\mathbb{R}\]
denote a causal deterministic control policy, i.e., \(u=u(\mathbf{x},t)\). The control \(u\) represents the electrode voltage input, and in practice, the typical voltage range \([u_{\min},u_{\max}]=[-400,400]\) Volt. We denote the collection of admissible control policies as \(\mathcal{U}\). For a typical experimental set up detailing the sensing-control architecture, see [6, Sec. II].
A viscous drag force balances the controlled force vector field \(\mathbf{f}^{u}\) induced by the joint effect of the chiplet-to-chiplet and chiplet-to-electrode interactions. At the low Reynolds number context relevant here, the viscous drag force is proportional to \(\dot{\mathbf{x}}\), where the proportionality constant \(\mu\) denotes the viscous coefficient of the dielectric fluid. Ignoring the acceleration due to negligible mass of a chiplet, the dynamics then takes a form
\[\underbrace{\mu\dot{\mathbf{x}}}_{\text{viscous drag force}}=\underbrace{\mathbf{f}^{u}}_{ \text{controlled interaction force}}+\text{ noise} \tag{1}\]
where the noise may result from stochastic forcing due to environmental fluctuations (e.g., dielectric fluid impurities) and/or unmodeled dynamics.
_Contributions:_ In this paper, we make the following two specific contributions.
* We derive a controlled mean field dynamics (Sec. III) for the macroscopic motion of the chiplet population. The derived model is non-affine in control, and rather non-standard compared to the existing nonlocal dynamics models available in the literature.
* We establish that the derived mean field dynamics model can be understood as the Wasserstein gradient flow (Sec. IV) of a free energy functional over the manifold of chiplet population density functions.
## II Notations and Preliminaries
**Wasserstein distance.** The Wasserstein distance \(W\) between a pair of probability density functions \(\rho_{1}(\mathbf{x}),\rho_{2}(\mathbf{y})\) (or between corresponding probability measures in general) with
finite second moments, respectively supported on \(\mathcal{X},\mathcal{Y}\subseteq\mathbb{R}^{d}\), is defined as
\[W(\rho_{1},\rho_{2}):=\!\!\left(\!\!\inf_{\rho\in\Pi_{2}(\rho_{1},\rho_{2})}\! \!\!\!\int_{\mathcal{X}\times\mathcal{Y}}\!\!\!\|\mathbf{x}-\mathbf{y}\|_{2}^{2}\;\rho( \mathbf{x},\mathbf{y})\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{y}\right)^{\frac{1}{2}} \tag{2}\]
where \(\Pi_{2}\left(\rho_{1},\rho_{2}\right)\) is the collection of all joint probability density functions \(\rho(\mathbf{x},\mathbf{y})\) supported on the product space \(\mathcal{X}\times\mathcal{Y}\) having finite second moments, \(\mathbf{x}\) marginal \(\rho_{1}\), and \(\mathbf{y}\) marginal \(\rho_{2}\). As such, (2) involves an infinite dimensional linear program that goes back to the work of Kantorovich [10]. It is well-known [11, p. 208] that \(W\) is a metric on the space of probability density functions (more generally, on the space of probability measures). Under mild assumptions, the minimizing measure \(\rho^{\mathrm{opt}}(\mathbf{x},\mathbf{y})\mathrm{d}\mathbf{x}\mathrm{d}\mathbf{y}\) is supported on the graph of the optimal transport map \(T^{\mathrm{opt}}:\mathcal{X}\mapsto\mathcal{Y}\) pushing the measure \(\rho_{1}(\mathbf{x})\mathrm{d}\mathbf{x}\) forward to \(\rho_{2}(\mathbf{y})\mathrm{d}\mathbf{y}\). For many connections between the Wasserstein metric and theory of optimal mass transport, we refer the readers to [11, 12].
**Wasserstein gradient of a functional.** Let \(\mathcal{P}\left(\mathbb{R}^{d}\right)\) denote the space of all probability density functions supported over the subsets of \(\mathbb{R}^{d}\), and denote the collection of probability density functions with finite second moments as \(\mathcal{P}_{2}\left(\mathbb{R}^{d}\right)\subset\mathcal{P}\left(\mathbb{R} ^{d}\right)\). The _Wasserstein gradient_ of a functional \(\Phi:\mathcal{P}_{2}\left(\mathbb{R}^{d}\right)\mapsto\mathbb{R}\), denoted as \(\nabla^{W}\Phi\), evaluated at \(\rho\in\mathcal{P}_{2}\left(\mathbb{R}^{d}\right)\), is given by [13, Ch. 8]
\[\nabla^{W}\Phi\left(\rho\right):=-\nabla\cdot\left(\rho\nabla\frac{\delta \Phi}{\delta\rho}\right) \tag{3}\]
where \(\nabla\) denotes the standard Euclidean gradient, and \(\frac{\delta}{\delta\rho}\) denotes the functional derivative w.r.t. \(\rho\).
To exemplify the definition (3), consider the functional \(\Phi(\rho)=\int\rho\log\rho\) (negative entropy) for \(\rho\in\mathcal{P}_{2}\left(\mathbb{R}^{d}\right)\). Then \(\frac{\delta\Phi}{\delta\rho}=1+\log\rho\), \(\nabla(1+\log\rho)=\nabla\rho/\rho\), and we get \(\nabla^{W}\Phi\left(\rho\right)=-\nabla\cdot\nabla\rho=-\Delta\rho\), where \(\Delta:=\nabla\cdot\nabla\) denotes the Euclidean Laplacian operator.
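The computation in this example can be checked symbolically in one spatial dimension; the sketch below uses `sympy` and is only a verification of the stated identity \(\nabla^{W}\Phi(\rho)=-\Delta\rho\) for \(\Phi(\rho)=\int\rho\log\rho\).

```python
import sympy as sp

x = sp.symbols('x')
rho = sp.Function('rho')(x)                 # a generic density on the real line
variational_derivative = 1 + sp.log(rho)    # delta Phi / delta rho for Phi(rho) = int rho log rho
wasserstein_grad = -sp.diff(rho * sp.diff(variational_derivative, x), x)   # definition (3) in 1D
print(sp.simplify(wasserstein_grad + sp.diff(rho, x, 2)))                  # prints 0, i.e. equals -rho''
```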
**Other notations.** The notation \(\langle\cdot,\cdot\rangle\) is used to denote either the standard Euclidean inner product of vectors, or the \(L^{2}\) inner product of functions, as evident from the context. For any natural number \(n\), we use the finite set notation \([\![n]\!]:=\{1,2,\ldots,n\}\). The symbols \(\operatorname{ess\,sup}\), \(\mathbb{E}\), \(\mathbb{P}\), \(\mathbf{I}_{2}\), \(\|\cdot\|_{2}\) and \(\|\cdot\|_{\infty}\) denote the essential supremum, the expectation, the probability measure, the \(2\times 2\) identity matrix, the vector 2 and \(\infty\) norms, respectively. The symbol \(\sim\) is used as a shorthand for "follows the statistical distribution density".
Given probability measures \(\mu_{0},\mu_{1}\) on \(\mathbb{R}^{d}\), the _total variation distance_ \(\operatorname{dist}_{\mathrm{TV}}(\mu_{0},\mu_{1}):=\frac{1}{2}\sup_{f}\left|\int f\;\mathrm{d}(\mu_{0}-\mu_{1})\right|\) where the supremum is over all measurable \(f:\mathbb{R}^{d}\to\mathbb{R}\), \(\|f\|_{\infty}\leq 1\). For \(f:\mathbb{R}^{d}\to\mathbb{R}\), we define its _Lipschitz constant_ \(\|f\|_{\mathrm{Lip}}:=\sup_{\mathbf{x}\neq\mathbf{y}}\frac{\left|f(\mathbf{x})-f(\mathbf{y})\right|}{\|\mathbf{x}-\mathbf{y}\|_{2}}\), and its _bounded Lipschitz constant_ \(\|f\|_{\mathrm{BL}}:=\max\{\|f\|_{\infty},\|f\|_{\mathrm{Lip}}\}\). The _bounded Lipschitz distance_ [14, Ch. 11.3] between probability measures \(\mu_{0},\mu_{1}\) is \(\operatorname{dist}_{\mathrm{BL}}(\mu_{0},\mu_{1}):=\sup_{\|f\|_{\mathrm{BL}}\leq 1}\left|\int f\;\mathrm{d}(\mu_{0}-\mu_{1})\right|\). Notice that \(\operatorname{dist}_{\mathrm{BL}}(\mu_{0},\mu_{1})\leq 2\operatorname{dist}_{\mathrm{TV}}(\mu_{0},\mu_{1})\).
For \(\mathcal{X}\subseteq\mathbb{R}^{d}\), we use \(C_{b}(\mathcal{X})\) to denote the space of all bounded continuous functions \(\varphi:\mathcal{X}\mapsto\mathbb{R}\), and \(C_{b}^{k}(\mathcal{X})\) comprises those which are also \(k\) times continuously differentiable (in the sense of mixed partial derivatives of order \(k\)). We say that a function sequence \(\{g_{n}\}_{n\in\mathbb{N}}\) where \(g_{n}\in L^{1}(\mathcal{X})\), converges weakly to a function \(g\in L^{1}(\mathcal{X})\), if \(\lim_{n\to\infty}\int_{\mathcal{X}}\left(g_{n}-g\right)\psi=0\) for all \(\psi\in C_{b}(\mathcal{X})\). We symbolically denote the weak convergence as \(g_{n}\rightharpoonup g\).
## III Controlled Mean Field Model
In this Section, we introduce the chiplet population dynamics. Such model has its origin in the physical processes enabling silicon microchips to be manipulated by both electrophoretic and dielectrophoretic forces when they are placed in dielectric carriers such as Isopar-M [15]. These carriers have low conductivity which allows long-range Coulomb interactions. In general, the dielectrophoretic forces dominate, and they are induced by the potential energy generated by electrostatic potentials created in electrodes. The electrodes are formed by depositing nm-scale Molybdenum-Chromium (MoCr) onto a glass substrate via vapor deposition and then directly patterning them with a laser ablation tool. The electrodes are then insulated from the chiplets and dielectric fluid by thermally laminating a micrometer-scale thick perfluoroalkoxy (PFA) film. The dielectric forces act on the chiplets, while viscous drag forces proportional to their velocities oppose their motion. Due to the negligible mass of the chiplets, their acceleration can be ignored.
Let us denote the _normalized chiplet population density function_ (PDF) at time \(t\) as \(\rho(\mathbf{x},t)\). By definition, \(\rho\geq 0\) and \(\int_{\mathbb{R}^{2}}\rho\;\mathrm{d}\mathbf{x}=1\) for all \(t\).
We make the following assumptions.
1. Under an admissible control policy \(u\in\mathcal{U}\), the chiplet normalized population distribution over the two dimensional Euclidean configuration space remains absolutely continuous w.r.t. the Lebesgue measure \(\mathrm{d}\mathbf{x}\) for all \(t\in[0,\infty)\). In other words, the corresponding PDFs \(\rho(\mathbf{x},t)\) exist for all \(t\in[0,\infty)\).
2. Under an admissible control policy \(u\in\mathcal{U}\), we have \(\rho\in\mathcal{P}_{2}(\mathbb{R}^{2})\) for all \(t\).
The sample path dynamics of a chiplet position is governed by a controlled nonlocal vector field
\[\mathbf{f}^{u}:\mathbb{R}^{2}\times[0,\infty)\times\mathcal{U}\times\mathcal{P}_{2}( \mathbb{R}^{2})\mapsto\mathbb{R}^{2}\]
induced by a controlled _interaction potential_\(\phi^{u}:\mathbb{R}^{2}\times\mathbb{R}^{2}\times[0,\infty)\mapsto\mathbb{R}\), i.e.,
\[\mathbf{f}^{u}(\mathbf{x},t,u,\rho):=-\nabla\left(\rho*\phi^{u}\right), \tag{4}\]
where \(*\) denotes _generalized convolution_ in the sense
\[\left(\rho*\phi^{u}\right)(\mathbf{x},t):=\int_{\mathbb{R}^{2}}\phi^{u}(\mathbf{x},\mathbf{y},t)\rho(\mathbf{y},t)\mathrm{d}\mathbf{y}.\]
The superscript \(u\) in \(\phi^{u}\) emphasizes that the potential depends on the choice of control policy. In particular,
\[\phi^{u}(\mathbf{x},\mathbf{y},t) :=\phi^{u}_{\mathrm{cc}}(\mathbf{x},\mathbf{y},t)+\phi^{u}_{\mathrm{ce}}(\mathbf{x},\mathbf{y},t), \tag{5a}\] \[\phi^{u}_{\mathrm{cc}}(\mathbf{x},\mathbf{y},t) :=\frac{1}{2}C_{\mathrm{cc}}\left(\|\mathbf{x}-\mathbf{y}\|_{2}\right)\left(\bar{u}(\mathbf{y},t)-\bar{u}(\mathbf{x},t)\right)^{2},\] (5b) \[\phi^{u}_{\mathrm{ce}}(\mathbf{x},\mathbf{y},t) :=\frac{1}{2}C_{\mathrm{ce}}\left(\|\mathbf{x}-\mathbf{y}\|_{2}\right)\left(u(\mathbf{y},t)-\bar{u}(\mathbf{x},t)\right)^{2}, \tag{5c}\]
for \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{2}\) and
\[\bar{u}(\mathbf{x},t):=\frac{\int_{\mathbb{R}^{2}}C_{\text{cc}}\left(\left\|\mathbf{x}- \mathbf{y}\right\|_{2}\right)u(\mathbf{y},t)\rho(\mathbf{y},t)\mathrm{d}\mathbf{y}}{\int_{ \mathbb{R}^{2}}C_{\text{cc}}\left(\left\|\mathbf{x}-\mathbf{y}\right\|_{2}\right)\rho( \mathbf{y},t)\mathrm{d}\mathbf{y}}. \tag{6}\]
The subscripts cc and ce denote the chiplet-to-chiplet and chiplet-to-electrode interactions, respectively. As before, the superscript \(u\) highlights the dependence on the choice of control policy. In (5b)-(5c), \(C_{\text{cc}}\) and \(C_{\text{ce}}\) respectively denote the chiplet-to-chiplet and chiplet-to-electrode capacitances. These capacitances can be determined using two dimensional electrostatic COMSOL [16] simulations for a symmetric chiplet geometry. Such a simulation model comprises two metal plates with dimensions defined by the chiplet and electrode geometry, surrounded by a dielectric with properties identical to those of the Isopar-M solution. The capacitances are computed from the charges that result on each conductor when an electric potential is applied to one and the other is grounded. Once the capacitances between chiplets and electrodes at different distances are computed, differentiable parameterized capacitance function approximations (e.g., linear combinations of error functions) can be fitted to that data.
In words, (5a) says that the total controlled interaction potential \(\phi^{u}\) is the sum of the chiplet-to-chiplet interaction potential \(\phi^{u}_{\text{cc}}\) given by (5b) and the chiplet-to-electrode interaction potential \(\phi^{u}_{\text{ce}}\) given by (5c).
The expressions for (5b), (5c), (6) arise from a capacitive electrical circuit abstraction that lumps the interaction between the electrodes and the chiplets. In [6, Sec. III], such an abstraction was detailed for a finite population of \(n\) chiplets and \(m\) electrodes. The expressions (5b), (5c), (6) generalize those in the limit \(n,m\to\infty\). On the other hand, specializing (5b), (5c), (6) for a finite population \(\{\mathbf{x}_{i}\}_{i\in[n]}\) with \(\rho\equiv\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{x}_{i}}\), where \(\delta_{\mathbf{x}_{i}}\) denotes the Dirac delta at \(\mathbf{x}_{i}\in\mathbb{R}^{2}\), indeed recovers the development in [6, Sec. III].
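To make this finite-population specialization concrete, the following NumPy sketch evaluates the discrete analogues of (5b), (5c) and (6) for an empirical measure \(\rho\equiv\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{x}_{i}}\). The Gaussian-shaped `capacitance` curve is a placeholder standing in for the COMSOL-fitted \(C_{\text{cc}}\), \(C_{\text{ce}}\) profiles (a single curve is reused for both terms, and for the weights in (6), purely for illustration); the function names are ours, not the paper's.

```python
import numpy as np

def capacitance(r, scale=1.0, width=0.5):
    # Hypothetical smooth, decaying capacitance curve; a single placeholder stands
    # in for the fitted C_cc and C_ce profiles (error-function fits in the paper).
    return scale * np.exp(-(r / width) ** 2)

def chiplet_potentials(x, u_vals):
    """Discrete analogue of (6): capacitance-weighted average of the control field
    u evaluated at the chiplet locations. x has shape (n, 2), u_vals shape (n,)."""
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)   # pairwise distances
    w = capacitance(r)                                           # capacitance weights as in (6)
    return (w @ u_vals) / w.sum(axis=1)

def interaction_potentials(x, u_vals):
    """Discrete analogues of (5b)-(5c) for the empirical measure rho = (1/n) sum_i delta_{x_i}."""
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    u_bar = chiplet_potentials(x, u_vals)
    phi_cc = 0.5 * capacitance(r) * (u_bar[None, :] - u_bar[:, None]) ** 2   # chiplet-to-chiplet
    phi_ce = 0.5 * capacitance(r) * (u_vals[None, :] - u_bar[:, None]) ** 2  # chiplet-to-electrode
    return phi_cc, phi_ce

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))          # 50 chiplet positions
u_vals = np.sin(x[:, 0])              # an arbitrary control field sampled at the chiplets
phi_cc, phi_ce = interaction_potentials(x, u_vals)
print(phi_cc.shape, phi_ce.shape)     # (50, 50) (50, 50)
```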
**Remark 1**.: _An immediate observation from (5) is that even though the potential \(\phi^{u}_{\text{cc}}\) is symmetric in \(\mathbf{x},\mathbf{y}\), the potential \(\phi^{u}_{\text{ce}}\) is not. Therefore, the overall controlled interaction potential \(\phi^{u}\) is not symmetric in \(\mathbf{x},\mathbf{y}\)._
Without loss of generality, we assume unity viscous coefficient in (1), i.e., \(\mu=1\) (since otherwise we can re-scale \(\mathbf{f}^{u}\)). In addition, assuming the chiplet velocity is perturbed by additive standard Gaussian white noise, the sample path dynamics of the \(i\)th chiplet position \(\mathbf{x}_{i}(t)\) evolves as a controlled interacting diffusion, i.e., as an Itô stochastic differential equation (SDE) with _nonlocal_ nonlinear drift:
\[\mathrm{d}\mathbf{x}_{i}=\mathbf{f}^{u}(\mathbf{x}_{i},t,u,\rho)\;\mathrm{d}t+\sqrt{2 \beta^{-1}}\;\mathrm{d}\mathbf{w}_{i}(t),\quad i\in[\![n]\!], \tag{7}\]
where \(\mathbf{f}^{u}\) is given by (4), \(\beta>0\) denotes inverse temperature, and \(\mathbf{w}_{i}(t)\in\mathbb{R}^{2}\) denote i.i.d. realizations of a standard Wiener process that is \(\mathcal{F}_{t}\)-adapted on a complete filtered probability space with sigma-algebra \(\mathcal{F}\) and associated filtration \(\left(\mathcal{F}_{t}\right)_{t\geq 0}\). In particular, \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets and \(\mathcal{F}_{t}\) is right continuous.
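As an illustration of how (7) can be simulated, the sketch below integrates the finite-population interacting SDE with the Euler-Maruyama scheme. The pairwise potential `phi` is a placeholder for \(\phi^{u}\) in (5), and the drift \(\mathbf{f}^{u}=-\nabla(\rho*\phi^{u})\) is approximated by central finite differences of the empirical convolution; self-interaction terms are retained and step sizes are illustrative.

```python
import numpy as np

def phi(x, y):
    # Placeholder pairwise interaction potential standing in for phi^u in (5);
    # a Gaussian bump is used purely for illustration.
    return np.exp(-np.sum((x - y) ** 2, axis=-1))

def drift(x_all):
    """Approximate f^u(x_i) = -grad_x (rho^n * phi)(x_i) for the empirical measure,
    using central finite differences of the convolved potential."""
    n, d = x_all.shape
    eps = 1e-4
    f = np.zeros_like(x_all)
    for k in range(d):
        e = np.zeros(d); e[k] = eps
        up = np.mean(phi((x_all + e)[:, None, :], x_all[None, :, :]), axis=1)
        dn = np.mean(phi((x_all - e)[:, None, :], x_all[None, :, :]), axis=1)
        f[:, k] = -(up - dn) / (2 * eps)
    return f

def euler_maruyama(x0, beta=1.0, dt=1e-2, steps=500, seed=0):
    """Simulate the interacting diffusion (7) for all n chiplets simultaneously."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x + drift(x) * dt + np.sqrt(2.0 / beta) * np.sqrt(dt) * noise
    return x

rng = np.random.default_rng(1)
x0 = rng.normal(size=(100, 2))        # initial positions sampled from rho_0
xT = euler_maruyama(x0)
print(xT.mean(axis=0), xT.std(axis=0))
```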
The study of SDEs with nonlocal nonlinear drift originated in [17] and has grown into a substantial literature; see, e.g., [18, 19]. In statistical physics, such models are often referred to under the banner of "propagation of chaos", a terminology due to Kac [20]. A novel aspect of the model (7) w.r.t. the existing literature is that the interaction potential \(\phi^{u}\) has a nonlinear dependence on the control policy \(u(\mathbf{x},t)\), as evident from (5).
### _Existence-Uniqueness of Solution for (7)_
For a given causal control policy \(u\in\mathcal{U}\), it is known [21, Thm. 2.4] that an interacting diffusion of the form (7) with initial condition \(\mathbf{x}_{i0}\sim\rho_{0}\) admits unique weak solution provided the following four conditions hold:
(i) the drift \(\mathbf{f}^{u}\) is jointly Borel measurable w.r.t. \(\mathbb{R}^{2}\times[0,\infty)\times\mathcal{P}\left(\mathbb{R}^{2}\right)\),
(ii) the diffusion coefficient \(\sqrt{2\beta^{-1}}\mathbf{I}_{2}\) is invertible, and the driftless SDE \(\mathrm{d}\mathbf{z}(t)=\sqrt{2\beta^{-1}}\mathrm{d}\mathbf{w}(t)\) admits unique strong solution,
(iii) the drift \(\mathbf{f}^{u}\) is uniformly bounded,
(iv) there exists \(\kappa>0\) such that
\[\|\mathbf{f}^{u}\left(\mathbf{x},t,u(\mathbf{x},t),\rho\right)-\mathbf{f}^{u}\left( \mathbf{x},t,u(\mathbf{x},t),\widehat{\rho}\right)\|_{2}\] \[\quad\leq\kappa\;\mathrm{dist}_{\mathrm{TV}}\left(\rho,\widehat{ \rho}\right)\quad\text{uniformly in }(\mathbf{x},t)\in\mathbb{R}^{2}\times[0,\infty).\]
We assume that the capacitances \(C_{\text{cc}},C_{\text{ce}}\) in (5)-(6) are sufficiently smooth, and that the control \(u\) can be parameterized to ensure enough smoothness so that \(\nabla_{\mathbf{x}}\phi^{u}_{\text{cc}},\nabla_{\mathbf{x}}\phi^{u}_{\text{ce}}\) (and thus \(\nabla_{\mathbf{x}}\phi^{u}\)) are \(\|\cdot\|_{2}\) Lipschitz and uniformly bounded.
Since \(\nabla_{\mathbf{x}}\phi^{u}\) is Lipschitz and bounded, \(\mathbf{f}^{u}=-\int_{\mathbb{R}^{2}}\nabla_{\mathbf{x}}\phi^{u}(\mathbf{x},\mathbf{y},t)\rho(\mathbf{y})\,\mathrm{d}\mathbf{y}\), being a \(\rho\)-average of Lipschitz functions, is itself Lipschitz and thus continuous. Since \(\mathbf{f}^{u}\) is continuous, the preimage of any Borel set in \(\mathbb{R}^{2}\) under \(\mathbf{f}^{u}\) is a measurable set in \(\mathbb{R}^{2}\times[0,\infty)\times\mathcal{U}\times\mathcal{P}_{2}(\mathbb{R}^{2})\). Thus, condition (i) holds.
Condition (ii) holds for any \(\beta>0\) since \(\mathbf{z}(t)\) is a Wiener process with variance \(2\beta^{-1}\).
For (iii), we find
\[\underset{(\mathbf{x},t)\in\mathbb{R}^{2}\times[0,\infty)}{\operatorname{esssup}}\|\mathbf{f}^{u}\left(\mathbf{x},t,u(\mathbf{x},t),\rho\right)\|_{\infty}\] \[=\underset{(\mathbf{x},t)\in\mathbb{R}^{2}\times[0,\infty)}{\operatorname{esssup}}\|\int_{\mathbb{R}^{2}}\nabla_{\mathbf{x}}\phi^{u}(\mathbf{x},\mathbf{y},t)\rho(\mathbf{y})\mathrm{d}\mathbf{y}\|_{\infty}\] \[\leq\underset{(\mathbf{x},t)\in\mathbb{R}^{2}\times[0,\infty)}{\operatorname{esssup}}\int_{\mathbb{R}^{2}}\|\nabla_{\mathbf{x}}\phi^{u}(\mathbf{x},\mathbf{y},t)\rho(\mathbf{y})\|_{\infty}\mathrm{d}\mathbf{y}\] \[\leq\int_{\mathbb{R}^{2}}\underset{(\mathbf{x},t)\in\mathbb{R}^{2}\times[0,\infty)}{\operatorname{esssup}}\|\nabla_{\mathbf{x}}\phi^{u}(\mathbf{x},\mathbf{y},t)\rho(\mathbf{y})\|_{\infty}\mathrm{d}\mathbf{y}\] \[=\int_{\mathbb{R}^{2}}\underset{(\mathbf{x},t)\in\mathbb{R}^{2}\times[0,\infty)}{\operatorname{esssup}}\|\nabla_{\mathbf{x}}\phi^{u}(\mathbf{x},\mathbf{y},t)\|_{\infty}\,\rho(\mathbf{y})\mathrm{d}\mathbf{y} \tag{8}\]
where we used the Leibniz rule, triangle inequality, and that \(\rho\geq 0\). Per assumption, \(\nabla_{\mathbf{x}}\phi^{u}\) is uniformly bounded, and we have: (8) \(\leq M\int_{\mathbb{R}^{2}}\rho(\mathbf{y})\mathrm{d}\mathbf{y}=M\) for some constant \(M>0\).
Condition (iv) holds because
\[\|\mathbf{f}^{u}\left(\mathbf{x},t,u(\mathbf{x},t),\rho\right)-\mathbf{f}^{u}\left(\mathbf{x},t,u(\mathbf{x},t),\widetilde{\rho}\right)\|_{2}\] \[=\|\nabla_{\mathbf{x}}\int_{\mathbb{R}^{2}}\phi^{u}(\mathbf{x},\mathbf{y},t)\left(\rho(\mathbf{y})-\widetilde{\rho}(\mathbf{y})\right)\mathrm{d}\mathbf{y}\|_{2}\] \[=\|\int_{\mathbb{R}^{2}}\nabla_{\mathbf{x}}\phi^{u}(\mathbf{x},\mathbf{y},t)\left(\rho(\mathbf{y})-\widetilde{\rho}(\mathbf{y})\right)\mathrm{d}\mathbf{y}\|_{2}\] \[\leq c\;\mathrm{dist}_{\mathrm{BL}}(\rho,\widetilde{\rho})\leq\kappa\;\mathrm{dist}_{\mathrm{TV}}(\rho,\widetilde{\rho}),\]
where the last two inequalities use that \(\nabla_{\mathbf{x}}\phi^{u}\) is uniformly bounded and Lipschitz, and that the bounded Lipschitz distance is dominated, up to a constant, by the total variation distance.
### _Derivation of the Controlled Mean Field Model_
Our next result (Theorem 1) derives the macroscopic mean field dynamics as a _nonlinear_ Fokker-Planck-Kolmogorov partial differential equation (PDE), and establishes the consistency of the mean field dynamics in the continuum limit vis-a-vis the finite population dynamics.
**Theorem 1**.: _Supposing **A1**, consider a population of \(n\) interacting chiplets, where the \(i\)th chiplet position \(\mathbf{x}_{i}\in\mathbb{R}^{2}\), \(i\in\llbracket n\rrbracket\), evolves via (7). Denote the Dirac measure concentrated at \(\mathbf{x}_{i}\) as \(\delta_{\mathbf{x}_{i}}\) and let the random empirical measure \(\rho^{n}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{x}_{i}}\). Consider the empirical version of the dynamics (7) given by_
\[\mathrm{d}\mathbf{x}_{i}=\mathbf{f}^{u}\left(\mathbf{x}_{i},t,u,\rho^{n}\right)\,\mathrm{d }t+\sqrt{2\beta^{-1}}\,\mathrm{d}\mathbf{w}_{i}(t),\]
_with respective initial conditions \(\mathbf{x}_{0i}\in\mathbb{R}^{2}\), \(i\in\llbracket n\rrbracket\), which are independently sampled from a given PDF \(\rho_{0}\) supported on a subset of \(\mathbb{R}^{2}\). Then, as \(n\to\infty\), almost surely \(\rho^{n}\rightharpoonup\rho\) where the deterministic function \(\rho\) is a PDF that evolves as per the macroscopic dynamics_
\[\frac{\partial\rho}{\partial t} =-\nabla\cdot(\rho\mathbf{f}^{u})+\beta^{-1}\Delta\rho\] \[=\nabla\cdot\left(\rho\nabla\left(\rho*\phi^{u}+\beta^{-1}(1+ \log\rho)\right)\right), \tag{9}\]
_with the initial condition_
\[\rho(\cdot,t=0)=\rho_{0}\in\mathcal{P}\left(\mathbb{R}^{2}\right)\,\,(\text{ given}). \tag{10}\]
Proof.: To describe the dynamics of \(\rho^{n}\) as \(n\to\infty\), we start with investigating the time evolution of the quantity
\[\left\langle\varphi,\rho^{n}\right\rangle:=\frac{1}{n}\sum_{i=1}^{n}\varphi \left(\mathbf{x}_{i}\right) \tag{11}\]
for any compactly supported test function \(\varphi\in C_{b}^{2}(\mathbb{R}^{2})\).
Using Ito's rule, we have
\[\mathrm{d}\varphi\left(\mathbf{x}_{i}\right)=L_{\rho^{n}}\varphi\left(\mathbf{x}_{i} \right)\mathrm{d}t+\nabla\varphi^{\top}\left(\mathbf{x}_{i}\right)\sqrt{2\beta^{-1 }}\mathrm{d}\mathbf{w}_{i} \tag{12}\]
wherein the infinitesimal generator
\[L_{\rho}\varphi(\mathbf{x}):=\left\langle\mathbf{f}^{u}(\mathbf{x},t,u,\rho),\nabla_{\mathbf{x}}\varphi(\mathbf{x})\right\rangle+\beta^{-1}\Delta\varphi. \tag{13}\]
Thus,
\[\mathrm{d}\left\langle\varphi,\rho^{n}\right\rangle =\frac{1}{n}\sum_{i=1}^{n}\mathrm{d}\varphi\left(\mathbf{x}_{i}\right)\] \[=\left\langle L_{\rho^{n}}\varphi,\rho^{n}\right\rangle\mathrm{d }t+\frac{1}{n}\sum_{i=1}^{n}\sqrt{2\beta^{-1}}\nabla\varphi^{\top}\left(\mathbf{x }_{i}\right)\mathrm{d}\mathbf{w}_{i}\] \[:=\left\langle L_{\rho^{n}}\varphi,\rho^{n}\right\rangle\mathrm{d }t+\mathrm{d}M_{t}^{n} \tag{14}\]
where \(M_{t}^{n}\) is a local martingale.
Because \(\varphi\in C_{b}^{2}(\mathbb{R}^{2})\), we have \(\left|\sqrt{2\beta^{-1}}\nabla\varphi^{\top}\left(\mathbf{x}_{i}\right)\right|\leq C\) uniformly for some \(C>0\). Notice that the quadratic variation of the noise term in (14) is
\[\left[M_{t}^{n}\right]:=\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{t}\left|\sqrt{2 \beta^{-1}}\nabla\varphi^{\top}\left(\mathbf{x}_{i}(s)\right)\right|^{2}\,\,\mathrm{ d}s\leq\frac{tC^{2}}{n},\]
and using Doob's martingale inequality [22, Ch. 14.11],
\[\mathbb{E}\left(\sup_{t\leq T}M_{t}^{n}\right)^{2}\leq\mathbb{E}\left(\sup_{t\leq T}\left(M_{t}^{n}\right)^{2}\right)\leq 4\,\mathbb{E}\left(\left(M_{T}^{n}\right)^{2}\right)\leq 4\,\mathbb{E}\left(\left[M_{T}^{n}\right]\right)\leq\frac{4TC^{2}}{n}.\]
Hence in the limit \(n\to\infty\), the noise term in (14) vanishes, resulting in a deterministic evolution equation.
For any \(t>0\), we take \(\{\rho^{n}\}_{n=1}^{\infty}\) to be the (random) elements of \(\Omega=C([0,\infty),\mathcal{P}(\mathbb{R}^{2}))\), the set of continuous functions from \([0,\infty)\) into \(\mathcal{P}(\mathbb{R}^{2})\) endowed with the topology of weak convergence. Following the argument of Oelschlager [23, Proposition 3.1], the sequence \(\mathbb{P}_{n}\) of laws on \(\Omega\) induced by the processes \(\{\rho^{n}\}_{n=1}^{\infty}\) is relatively compact in \(\mathcal{P}\left(\Omega\right)\), the space of probability measures on \(\Omega\). Oelschlager's proof makes use of Prohorov's theorem [24, Ch. 5]. The relative compactness implies that the sequence \(\mathbb{P}_{n}\) weakly converges (along a subsequence) to some \(\mathbb{P}\), where \(\mathbb{P}\) is the law induced by the limiting process \(\rho\). By the Skorohod representation theorem [24, Theorem 6.7], the sequence \(\{\rho^{n}\}_{n=1}^{\infty}\) converges \(\mathbb{P}\)-almost surely to \(\rho\). Since the martingale term in (14) vanishes as \(n\to\infty\), we obtain
\[\mathrm{d}\left\langle\varphi,\rho\right\rangle=\left\langle L_{\rho}\varphi, \rho\right\rangle\,\mathrm{d}t=\left\langle\varphi,L_{\rho}^{*}\rho\right\rangle \mathrm{d}t \tag{15}\]
where \(L^{*}\) is the adjoint (see e.g., [25, Ch. 2.3, 2.5], [26, p. 278]) of the generator \(L\) given by (13), and is defined as
\[L_{m}^{*}\rho(x,t): =-\nabla\cdot\left(\rho\mathbf{f}^{u}(\mathbf{x},t,u,m)\right)+\beta^{-1} \Delta\rho\] \[=\nabla\cdot\left(\rho\nabla\left(m*\phi^{u}+\beta^{-1}(1+\log\rho) \right)\right)\]
where \(m\in\mathcal{P}\left(\mathbb{R}^{2}\right)\). For any test function \(\varphi\in C_{b}^{2}(\mathbb{R}^{2})\), (15) is valid almost everywhere, and therefore, \(\rho\) is almost surely a weak solution to the nonlinear Fokker-Planck-Kolmogorov PDE initial value problem (9)-(10).
Notice that the Cauchy problem (9)-(10) involves a _nonlinear nonlocal PDE_ which in turn depends on control policy \(u\).
The solution \(\rho(\mathbf{x},t)\), \(\mathbf{x}\in\mathbb{R}^{2}\), \(t\in[0,\infty)\), for the Cauchy problem (9)-(10) is understood in weak sense. In other words, for all compactly supported smooth test functions \(\theta\in C_{c}^{\infty}\left(\mathbb{R}^{2},[0,\infty)\right)\), the solution \(\rho(\mathbf{x},t)\) satisfies
\[\int_{0}^{\infty}\!\!\!\int_{\mathbb{R}^{2}}\!\!\left(\frac{\partial\theta}{ \partial t}\!+\!L_{\rho}\theta\!\right)\!\rho\,\mathrm{d}\mathbf{x}\,\,\mathrm{d}t +\!\int_{\mathbb{R}^{2}}\!\rho_{0}(\mathbf{x})\theta(\mathbf{x},0)\,\mathrm{d}\mathbf{x}=0 \tag{16}\]
where \(L_{\rho}\) is defined as in (13). A \(\rho\) satisfying (16) for all \(\theta\in C_{c}^{\infty}\left(\mathbb{R}^{2},[0,\infty)\right)\) is called a "weak solution" of (9)-(10) because such a \(\rho\) may not be sufficiently smooth to satisfy (9) pointwise. In the next Section, we provide a variational interpretation of the solution for problem (9)-(10).
## IV Chiplet Population Dynamics as Wasserstein Gradient Flow
The structure of the PDE in (9) motivates defining an _energy functional_
\[\Phi(\rho) :=\Phi_{\mathrm{cc}}(\rho)+\Phi_{\mathrm{ce}}(\rho)+\mathbb{E}_{ \rho}\left[\beta^{-1}\log\rho\right]\] \[=\mathbb{E}_{\rho}\left[\rho*\phi^{u}+\beta^{-1}\log\rho\right] \tag{17}\]
where \(\mathbb{E}_{\rho}\) denotes the expectation w.r.t. the PDF \(\rho\), and
\[\Phi_{\mathrm{cc}}(\rho) :=\int_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\phi_{\mathrm{cc}}^{u}(\mathbf{x},\mathbf{y})\rho(\mathbf{x})\rho(\mathbf{y})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{y}, \tag{18a}\] \[\Phi_{\mathrm{ce}}(\rho) :=\int_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\phi_{\mathrm{ce}}^{u}(\mathbf{x},\mathbf{y})\rho(\mathbf{x})\rho(\mathbf{y})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{y}. \tag{18b}\]
In (17), the term \(\mathbb{E}_{\rho}\left[\rho*\phi^{u}\right]\) quantifies the _interaction energy_ while the term \(\beta^{-1}\mathbb{E}_{\rho}\left[\log\rho\right]\) (scaled negative entropy) quantifies the _internal energy_. We have the following result.
**Theorem 2**.: _Let \(\Phi:\mathcal{P}_{2}\left(\mathbb{R}^{2}\right)\mapsto\mathbb{R}\) be the energy functional given in (17). Then, (i) the chiplet population dynamics given by (4), (5), (9) is Wasserstein gradient flow of the functional \(\Phi\), i.e.,_
\[\frac{\partial\rho}{\partial t}=-\nabla^{W}\Phi(\rho). \tag{19}\]
_(ii) \(\Phi\) is a Lyapunov functional that is decreasing along the flow generated by (9), i.e., \(\frac{\mathrm{d}}{\mathrm{d}t}\Phi\leq 0\)._
Proof.: (i) We start by noticing that the functional derivative
\[\frac{\delta\Phi}{\delta\rho}=\rho*\phi^{u}+\beta^{-1}(1+\log\rho). \tag{20}\]
Next, we rewrite (9) as
\[\frac{\partial\rho}{\partial t}=\nabla\cdot\left(\rho\nabla\frac{\delta\Phi}{ \delta\rho}\right), \tag{21}\]
which by definition (3), yields (19).
(ii) To show that \(\Phi\) is decreasing along the flow generated by (9), we find
\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\Phi&=\int\frac{\delta\Phi}{\delta\rho}\;\frac{\partial\rho}{\partial t}\,\mathrm{d}\boldsymbol{x}\\ &\stackrel{{(21)}}{{=}}\int\frac{\delta\Phi}{\delta\rho}\,\nabla\cdot\left(\rho\nabla\frac{\delta\Phi}{\delta\rho}\right)\mathrm{d}\boldsymbol{x}\\ &=-\int\rho\left\|\nabla\frac{\delta\Phi}{\delta\rho}\right\|_{2}^{2}\mathrm{d}\boldsymbol{x}\leq 0,\end{split}\]
where the last equality follows from integration by parts, with the boundary term vanishing since \(\rho\in\mathcal{P}_{2}(\mathbb{R}^{2})\) decays at infinity.
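Part (ii) can also be checked numerically on a finite population: estimate \(\Phi\) in (17) from particle samples (the interaction term via the empirical double sum, the entropic term via a kernel density estimate) and evaluate it along simulated trajectories such as those produced by the Euler-Maruyama sketch above; the resulting sequence should be approximately non-increasing. A minimal sketch follows, with the same placeholder potential and a KDE-based entropy estimate (both our assumptions, not part of the paper).

```python
import numpy as np
from scipy.stats import gaussian_kde

def phi(x, y):
    # Placeholder pairwise interaction potential (same stand-in as before).
    return np.exp(-np.sum((x - y) ** 2, axis=-1))

def energy(particles, beta=1.0):
    """Monte Carlo estimate of Phi in (17): the interaction term is the empirical
    double sum of phi over particle pairs; the internal (entropic) term uses a
    Gaussian KDE to approximate log(rho) at the particle locations."""
    interaction = np.mean(phi(particles[:, None, :], particles[None, :, :]))
    rho_hat = gaussian_kde(particles.T)(particles.T)
    internal = np.mean(np.log(rho_hat + 1e-12)) / beta
    return interaction + internal

rng = np.random.default_rng(2)
samples = rng.normal(size=(200, 2))   # particle approximation of rho
print(energy(samples))                # track this quantity along the simulated flow
```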
In summary, we developed a controlled mean field model for the chiplet population dynamics in which the microscopic interactions (viz. chiplet-to-chiplet and chiplet-to-electrode) jointly induce a macroscopic dynamics in terms of the joint PDF evolution of the chiplet ensemble. Our results establish consistency of the model in a limiting sense, and demonstrate that the resulting PDF evolution can be seen as an infinite dimensional gradient descent of a Lyapunov-like energy functional w.r.t. the Wasserstein metric.
While we focused our development on the derivation of the controlled mean field model, our future work will investigate the synthesis of optimal control of the chiplet joint PDF w.r.t. a suitable performance objective that allows steering an initial joint PDF to a desired terminal joint PDF. Such feedback steering problems are generalized variants of the so-called Schrödinger bridge problem [33]. We note that feedback synthesis for density steering subject to a controlled mean field nonlocal PDE is relatively less explored but has started appearing in recent works; see, e.g., [34, 35, 36].
|
2308.04663 | Classification of lung cancer subtypes on CT images with synthetic
pathological priors | The accurate diagnosis on pathological subtypes for lung cancer is of
significant importance for the follow-up treatments and prognosis managements.
In this paper, we propose self-generating hybrid feature network (SGHF-Net) for
accurately classifying lung cancer subtypes on computed tomography (CT) images.
Inspired by studies stating that cross-scale associations exist in the image
patterns between the same case's CT images and its pathological images, we
innovatively developed a pathological feature synthetic module (PFSM), which
quantitatively maps cross-modality associations through deep neural networks,
to derive the "gold standard" information contained in the corresponding
pathological images from CT images. Additionally, we designed a radiological
feature extraction module (RFEM) to directly acquire CT image information and
integrated it with the pathological priors under an effective feature fusion
framework, enabling the entire classification model to generate more indicative
and specific pathologically related features and eventually output more
accurate predictions. The superiority of the proposed model lies in its ability
to self-generate hybrid features that contain multi-modality image information
based on a single-modality input. To evaluate the effectiveness, adaptability,
and generalization ability of our model, we performed extensive experiments on
a large-scale multi-center dataset (i.e., 829 cases from three hospitals) to
compare our model and a series of state-of-the-art (SOTA) classification
models. The experimental results demonstrated the superiority of our model for
lung cancer subtypes classification with significant accuracy improvements in
terms of accuracy (ACC), area under the curve (AUC), and F1 score. | Wentao Zhu, Yuan Jin, Gege Ma, Geng Chen, Jan Egger, Shaoting Zhang, Dimitris N. Metaxas | 2023-08-09T02:04:05Z | http://arxiv.org/abs/2308.04663v1 | # Classification of Lung cancer subtypes on CT images with synthetic pathological priors
###### Abstract
The accurate diagnosis on pathological subtypes for lung cancer is of significant importance for the follow-up treatments and prognosis managements. In this paper, we propose self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on computed tomography (CT) images. Inspired by studies stating that cross-scale associations exist in the image patterns between the same case's CT images and its pathological images, we innovatively developed a pathological feature synthetic module (PFSM), which quantitatively maps cross-modality associations through deep neural networks, to derive the "gold standard" information contained in the corresponding pathological images from CT images. Additionally, we designed a radiological feature extraction module (RFEM) to directly acquire CT image information and integrated it with the pathological priors under an effective feature fusion framework, enabling the entire classification model to generate more indicative and specific pathologically related features and eventually output more accurate predictions. The superiority of the proposed model lies in its ability to self-generate hybrid features that contain multi-modality image information based on a single-modality input. To evaluate the effectiveness, adaptability, and generalization ability of our model, we performed extensive experiments on a large-scale multi-center dataset (i.e., 829 cases from three hospitals) to compare our model and a series of state-of-the-art (SOTA) classification models. The experimental results demonstrated the superiority of our model for lung cancer subtypes classification with significant accuracy improvements in terms of accuracy (ACC), area under the curve (AUC), and F1 score.
## 1 Introduction
Lung cancer is one of the most prevalent malignant tumors and is the leading cause of cancer-related mortality globally [1]. It has been estimated that 2.2 million new cases of lung cancer and 1.8 million deaths occurred worldwide due to this disease in 2020 [2]. Lung cancer is a complex and diverse disease. According to the origins of tissues and the biologic behaviors of tumors, lung cancer can be classified into small-cell lung carcinoma (SCLC) and non-small-cell lung carcinoma (NSCLC) [3]. Approximately 85% of lung cancers are NSCLC, of which lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the histological subtypes with the highest clinical incidence rates [4, 5]. Studies have shown that the vast diversity between LUAD and LUSC can be revealed at the molecular, pathological, and clinical levels [6, 7]. Consistent with their diversity, the responses of them to the same therapeutic strategy may also be distinct [7, 8]. In [9], the activity of immune checkpoint inhibitors was found to be different in LUAD than that in LUSC. In addition, Scagliotti et al. reported varying outcomes for LUAD and LUSC patients treated with the chemotherapy drug pemetrexed [10]. More recently, targeted anti-vascular endothelial growth factor (VEGF) bevacizumab therapy was found to be effective in treating patients with LUAD, whereas it was listed as a contraindication to LUSC [11]. Therefore, as lung cancer is a disease with rapid development and a poor prognosis, performing accurate pathological subtypes diagnosis of lung cancer at an early stage is critical for effective and personalized therapeutic managements.
A variety of clinical techniques have been developed to assist physicians in diagnosing lung cancer, such as chest radiography, computed tomography (CT), bronchoscopy, pathological examination and so forth [12]. Among these modalities, fast imaging and non-invasive CT scan has become the most commonly used method for early cancer detection and diagnosis [13]. By offering the three-dimensional anatomical image, CT is informative in terms of revealing the sizes, shapes, positions, metastasis statuses and heterogeneity of tumors. For certain pathological subtypes of lung cancer, several radiological manifestations can be utilized as diagnostic indicators for preliminary assessments [14, 15, 16, 17]. However, as the interpretations of radiographic results depend heavily on clinical experience, diagnostic opinions may vary among physicians. Moreover, tumors at the early stage may lack typical clinical manifestations, which makes it difficult to detect subtle pathological changes through conventional visual assessments. As a result, a highly accurate computer-aided CT analysis system for diagnosing lung cancer subtypes is in high demand.
Deep learning (DL), as an automatic quantification method, is regarded as an effective and promising approach for addressing the aforementioned problems [18]. By automatically extracting and analyzing high-throughput features from radiographic images with specific end-to-end deep neural networks, DL enables the quantitative identification of the image variations exhibited by different lesions and subsequently yields diagnostic and prediction accuracy improvements. In recent years, milestones have been achieved with DL in the field of automatic classifications on CT images through various proposed convolutional neural networks (CNNs) and learning strategies [19, 20, 21, 22]. These approaches hold advantages over visual assessments by providing physicians with more rapid and accurate diagnostic assistance to some extent. However, for the complex task of automatically classifying cancer subtypes on CT images, the classification accuracies and robustness levels of existing models may still not be satisfactory. Likewise, the limitations of these classification models can be attributed to the atypical radiological manifestations of certain cases. In addition, the original CT images usually carry a great deal of redundant information, which is also a non-negligible obstacle that prevents DL algorithms from reaching satisfactory precision rates.
Figure 1: The detailed image patterns in the paired CT images and pathological images of LUAD case (a) \(\&\) (b), and LUSC case (c) \(\&\) (d).
In clinical practice, to obtain the most accurate diagnoses of cancer subtypes, pathological examination is highly recommended as a further complementary test. This modality is regarded as the "gold standard" in cancer diagnosis since the pathological images obtained during the examination contain information of cell morphology, cell differentiation degree, cell density and other information. Hence, providing the information contained in pathological images as additional knowledge for a conventional radiological feature (RF)-based model may promote it to output a more accurate prediction. However, it is worth noting that pathological examinations are invasive, as they require tissue specimens to be obtained through needle biopsy or surgical resection [12]. In some cases, the physical conditions of patients or the potential risks of complications may limit the possibility of utilizing CT-guided needle biopsy [23, 24]. Thus, pathological examination may not always be readily available in the early diagnosis phase.
In nature, the CT and pathological images of the same lesion are the visual expressions of pathological tissues at different spatial scales and in different resolutions. Recently, there have been increasing interests in identifying the cross-scale associations between pathological images and radiographic images through high-level pathological and radiological feature approaches. [25] found that the radiological features that capture the heterogeneity of NSCLC from contrast-enhanced CT images have correlations with certain histopathologic markers generated due to hypoxia and angiogenesis. [26] reported distinct associations between the radiological and histopathological patterns of LUAD, where the tumor margin configurations and solidity glass opacity levels on CT images correspond to certain cell growth patterns. In [27], Alvarez-Jimenez further identified the cross-scale associations between the radiological and pathological features of NSCLC by showing the relationships between CT intensity values and matched cell density statistics. Moreover, Khorrami confirmed the effectiveness of using the radiological features associated with lymphocyte distributions to predict the therapy outcomes and survival rates of NSCLC patients [28]. Based on these cross-modality correlation findings, optimal strategies can be developed to obtain the underlying pathological information of tumors from CT images.
In this study, we propose a novel self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on CT images. Inspired by studies on the cross-modality associations between CT images and pathological images, we propose to exploit these correlations with DL techniques to acquire the gold-standard pathological image features from CT images. More importantly, we develop an effective feature fusion framework to integrate these synthetic pathological features (SPFs) into a benchmark RF-based model, guiding the entire classification model to be more inclined to extract pathologically relevant features from CT inputs. The main property of our model is that it takes paired CT and pathological images as training data while requiring only CT images in the subsequent validation phase. In this way, our model enables the self-generation of multi-modality hybrid features while relying on a single image source (i.e., CT images) in clinical applications. Such an approach not only breaks the information deficiency limitations of a single diagnostic modality but also compensates for the absence of synergy between different modalities in the traditional one-way diagnostic process. Moreover, the proposed pathological priors guided strategy can be adopted by many state-of-the-art (SOTA) classification networks without incurring extra costs. To summarize, the key contributions and novelties of this work are listed as follows.
Figure 2: Pipeline of the proposed novel CT-based classification model, SGHF-Net: \(P=C(F(f_{\text{p}},f_{\text{r}}))\), where \(C(\cdot)\) denotes the classification component; \(F(\cdot)\) denotes the fusion module; \(f_{\text{p}}\) and \(f_{\text{r}}\) are the pathological and radiological features, respectively.
* We design a pathological feature synthetic module (PFSM), which quantitatively maps the cross-scale associations between dual medical imaging modalities, to generate the high-level deep features of pathological images from CT images.
* We design a radiological feature extraction module (RFEM) to directly acquire the radiographic information contained in CT images and integrate it with the PFSM under an effective feature fusion framework, forming more indicative and robust hybrid features for cancer subtypes classification.
* We build a large-scale multi-center dataset with data from three different tertiary hospitals. Through a series of experiments, the results all confirm the superiority of our approach for lung cancer subtypes classification.
## 2 Method
In this section, the proposed novel lung cancer subtypes classification model, SGHF-Net, is introduced. The inherent challenge of applying DL approaches in complex CT-based classification scenarios is that high accuracy is hard to achieve. Restricted by the finiteness of valid manifestations and the interference caused by redundant information, it is highly challenging to achieve satisfactory performance with a model that relies on features from a single image modality (i.e., CT images). Therefore, as depicted in Fig. 2, we designed two key components for the proposed model, namely, the pathological feature synthetic module (PFSM) and the radiological feature extraction module (RFEM), to utilize CT images to self-generate hybrid features containing information from two image modalities. In the following subsections, the functions and training details of each module are elaborated.
### Pathological Feature Synthetic Module
The PFSM was designed to quantitatively map the correlations between the dual medical image modalities, so as to derive the "gold standard" information contained in the corresponding pathological images from CT images. Throughout the training phase, the PFSM was first trained with paired pathological and CT images, and then the well-trained PFSM was integrated into the benchmark RF-based model, working as pathological priors to co-supervise the RFEM parameter training process together with the ground truth. Fig. 3 illustrates the training procedure of the PFSM, which involves the procedures of extracting the pathological features from the pathological image patches and synthesizing
Figure 3: The training procedure of the **PFSM** (notably, the pathological images, as the gold-standard reference images, are only required during the training process of the PFSM.)
the corresponding pathological features with CT images. Notably, the pathological images involved in this work are whole-slide images (WSIs), which are the digitized versions of glass slides that have been processed by a dedicated slide scanner [29].
#### 2.1.1 Pathological Feature Extraction from WSIs
The high-level pathological features extracted from pathological images are the crucial cornerstones during the PFSM training process. Here, a subtype classification model with pathological images as inputs was pretrained, and once the training process was completed, its feature extraction blocks were taken out separately to extract the most representative high-level features for the subsequent PFSM training step. Notably, the desired high-level features in this work could be represented as a \(512\times 1\) feature vector. We adopted the well-known Vision Transformer (ViT) [30] as the pathology-based classification model and carried out the following operations.
First, the original pathological image obtained after preprocessing was cropped into patches with identical sizes of 560\(\times\)560. Then, the cropped patches with more than 80% cancer coverage were selected as the model inputs. To make the model jointly attend to the information contained in these patches from different sub-spaces, we applied the multi-head self-attention mechanism in the network architecture,
\[\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(head_{1},...,head_{h})W^{O} \tag{1}\]
where
\[head_{t}=\mathrm{Attention}(QW_{t}^{Q},KW_{t}^{K},VW_{t}^{V}) \tag{2}\]
\[\mathrm{Attention}(Q_{t},K_{t},V_{t})=\mathrm{softmax}\left(\frac{Q_{t}K_{t} ^{T}}{\sqrt{d_{k}}}\right)V_{t} \tag{3}\]
where \(Q\), \(K\), and \(V\) refer to the query, key, and value parameters of pathological patches, respectively, \(t\) is the \(t^{th}\) head, and \(W_{t}^{Q}\in\mathbb{R}^{d_{model}\times d_{k}},W_{t}^{K}\in\mathbb{R}^{d_{ model}\times d_{k}},W_{t}^{V}\in\mathbb{R}^{d_{model}\times d_{v}}\) and \(W^{O}\in\mathbb{R}^{hd_{v}\times d_{model}}\) are weighted parameters. Here, \(h=12\), and \(d_{model}/h=d_{k}=d_{v}=64\). Following the architecture of the Transformer network and relying on self-attention, the model then output the weighted sum of the values of all input pathological patches.
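For concreteness, a minimal PyTorch transcription of (1)-(3) is given below. The embedding width \(d_{model}=768\) follows from \(h=12\) and \(d_{model}/h=64\); the random weights and the toy sequence length are placeholders, and the snippet is a sketch rather than the actual ViT implementation used in the paper.

```python
import torch
import torch.nn.functional as F

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo, h=12):
    """Direct transcription of (1)-(3): Q, K, V have shape (batch, seq, d_model);
    Wq, Wk, Wv have shape (h, d_model, d_k); Wo has shape (h*d_v, d_model)."""
    heads = []
    for t in range(h):
        Qt, Kt, Vt = Q @ Wq[t], K @ Wk[t], V @ Wv[t]          # per-head projections
        scores = Qt @ Kt.transpose(-2, -1) / Kt.shape[-1] ** 0.5
        heads.append(F.softmax(scores, dim=-1) @ Vt)          # Attention(Q_t, K_t, V_t) in (3)
    return torch.cat(heads, dim=-1) @ Wo                      # Concat(head_1..head_h) W^O in (1)

# Shapes matching the paper's setting: h = 12, d_k = d_v = 64, hence d_model = 768.
d_model, h, d_k = 768, 12, 64
x = torch.randn(2, 16, d_model)                               # 16 patch tokens per sample (toy)
Wq = torch.randn(h, d_model, d_k); Wk = torch.randn(h, d_model, d_k)
Wv = torch.randn(h, d_model, d_k); Wo = torch.randn(h * d_k, d_model)
out = multi_head_attention(x, x, x, Wq, Wk, Wv, Wo)           # self-attention: Q = K = V = x
print(out.shape)                                              # torch.Size([2, 16, 768])
```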
During training, the whole dataset was divided into a training dataset and a testing dataset on a pro-rata basis (i.e., 80%: 20%). Following five-fold cross-validation, the patches belonging to different subjects and their corresponding pathological labels were crossly applied for model training. To update the network parameters, we applied the cross-entropy loss, as defined in Eq. (4), as the loss function:
\[\mathcal{L}_{\text{PI}}=-\frac{1}{N}\sum_{i}[y_{i}\cdot\log{(p_{i})}+(1-y_{i}) \cdot\log{(1-p_{i})}], \tag{4}\]
where \(\mathcal{L}_{\text{PI}}\) refers to the loss, \(N\) refers to the total number of training samples, and \(i\) refers to the \(i^{th}\) sample. \(y_{i}\) is the label of the \(i^{th}\) sample, and it is a sign function, which equals one if the label is the same as LUSC and zero otherwise. \(p_{i}\) is the probability that the prediction of the \(i^{th}\) sample belongs to the LUSC class.
#### 2.1.2 Pathological Feature Synthesis with CT Images
Synthesizing the pathological features with CT images was a critical step of this work, which was achieved with a conditional generative adversarial network (CGAN). First introduced by Goodfellow et al. in 2014 [31], the generative adversarial network (GAN) has produced remarkable outcomes in a variety of applications [32, 33], including image super-resolution reconstruction [34], image classification [35], texture transferring [36], cross-modality synthesis [37], and etc. The GAN, whose framework design was inspired by the two-player minimax game, is a special type of generative models. It is composed of two neural networks, a generator \(\mathcal{G}\) and a discriminator \(\mathcal{D}\). The generator aims to capture the distribution of an input dataset, while the discriminator seeks to determine the probability that a sample is from the real data distribution rather than \(\mathcal{G}\)[31]. By simultaneously training both networks in an adversarial manner, the generator and discriminator eventually reach a dynamic equilibrium, where the data distribution generated by \(\mathcal{G}\) is close to the real data distribution, and the probability predicted by \(\mathcal{D}\) equals 0.5. Due to the successful applications of GAN in many fields, much effort has been dedicated to improving the performance of GAN. For instance, plenty of studies have suggested that adding additional tasks to a GAN is beneficial for improving the performance achieved on the original REAL/FAKE discrimination task [38, 39]. Inspired by these studies, we utilized \(\mathcal{G}\) to generate high-level feature vectors of CT images with a specific class label (denoted as \(c\)). Furthermore, \(\mathcal{D}\) contained two functions, including (i) discriminating whether a feature vector was generated with the pathological images or the CT images, and (ii) predicting the class label for this feature vector. The ultimate goal of our CGAN training was for the discriminator
to fail to identify which imaging modality a feature vector was generated from, while still providing the correct subtype classification for this feature vector.
The networks of our proposed \(\mathcal{G}\) and \(\mathcal{D}\) were implemented with multiple convolutional blocks and fully-connected (FC) layers. Specifically, ResNet and DenseNet were evaluated in comparison as the backbone of the \(\mathcal{G}\) network. The discriminator \(\mathcal{D}\) was composed of three FC layers, where each of the first two FC layers was followed by a rectified linear unit (ReLU) activation layer and a batch normalization (BN) layer. Besides, there were two output predictions, i.e., REAL/FAKE (task one) and LUAD/LUSC (task two). During the training process, the network parameters of \(\mathcal{G}\) and \(\mathcal{D}\) were defined as
\[\left\{\begin{array}{l}\theta_{\mathcal{G}}=W_{\mathcal{G}};b_{\mathcal{G}} \\ \theta_{\mathcal{D}}=W_{\mathcal{D}};b_{\mathcal{D}}\end{array}\right. \tag{5}\]
where \(W\). and \(b\). refer to the weights and bias of the network. \(\theta_{\mathcal{G}}\) and \(\theta_{\mathcal{D}}\) were updated through iterative optimization by solving a min-max problem. The input of \(\mathcal{G}\) (i.e., the CT image with a corresponding class label \(c\)) was denoted as \(z\), and the output of \(\mathcal{G}\) (i.e., the \(512\times 1\) feature vector generated from CT image) was denoted as \(x_{\mathcal{G}}\). We defined the corresponding probability distributions of \(z\), \(c\) and \(x_{g}\) as \(p_{z}\), \(p_{c}\), and \(p_{g}\). The nonlinear mapping from \(z\) to \(x_{\mathcal{G}}\) can be defined as
\[x_{\mathcal{G}}=\mathcal{G}(z;\theta_{\mathcal{G}}) \tag{6}\]
In practice, we expected to maximize the similarity between \(x_{\mathcal{G}}\) (fake sample) and the target vector \(x_{r}\) (real sample), which was a \(512\times 1\) feature vector extracted from the pathological image. For \(\mathcal{D}\), its input \(x\) was either the generated sample \(x_{\mathcal{G}}\) or the real sample \(x_{r}\). We then formulated the learning process of \(\mathcal{D}\) as
\[(y_{1},y_{2})=\mathcal{D}(x;\theta_{\mathcal{D}}) \tag{7}\]
where \(y_{1}\rightarrow(0,1)\), and \(y_{2}\rightarrow(0,1)\) are the two probability distributions for task one and task two, respectively. \(y_{1}\) reflects the probability of \(\mathcal{D}\)'s input coming from the real sample group. It equals one if the input \(x\) comes from \(x_{r}\), and zero if \(x\) comes from \(x_{g}\). \(y_{2}\) reflects the probability of different cancer subtypes. It equals one if the label is LUSC and zero if the label is LUAD.
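For concreteness, a PyTorch sketch of the two-task discriminator described above is shown next: three FC layers, the first two each followed by ReLU and BatchNorm, with the last layer producing the two sigmoid outputs \(y_{1}\) (REAL/FAKE source) and \(y_{2}\) (LUAD/LUSC class). The hidden widths 256 and 128 are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch of the two-task discriminator: three FC layers, the first two followed
    by ReLU and BatchNorm, and a final layer whose two sigmoid outputs give
    y1 (REAL/FAKE source) and y2 (LUAD/LUSC class)."""
    def __init__(self, in_dim=512, h1=256, h2=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, h1), nn.ReLU(), nn.BatchNorm1d(h1),
            nn.Linear(h1, h2), nn.ReLU(), nn.BatchNorm1d(h2),
        )
        self.out = nn.Linear(h2, 2)              # third FC layer: [source logit, class logit]

    def forward(self, x):
        logits = self.out(self.backbone(x))
        y1 = torch.sigmoid(logits[:, :1])        # probability the input is a real pathology feature
        y2 = torch.sigmoid(logits[:, 1:])        # probability the input belongs to the LUSC class
        return y1, y2

feat = torch.randn(4, 512)                       # a batch of 512-d feature vectors
y1, y2 = Discriminator()(feat)
print(y1.shape, y2.shape)                        # torch.Size([4, 1]) torch.Size([4, 1])
```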
During the training process of the CGAN, the overall objective function, which was designed to maximize the log likelihood of the correct source and the correct class label, was composed of two parts, \((\mathcal{L}_{\mathcal{D}1},\mathcal{L}_{\mathcal{G}1})\) and \((\mathcal{L}_{\mathcal{D}2},\mathcal{L}_{\mathcal{G}2})\). For task one, the mathematical expressions of the min-max problem for \(\mathcal{D}\) and \(\mathcal{G}\) can be defined as:
\[\mathcal{L}_{\mathcal{D}1},\mathcal{L}_{\mathcal{G}1}=\max_{\mathcal{D}}\min_{\mathcal{G}}\ \mathbb{E}_{x_{r}\sim p_{r}(x)}\left[\log\mathcal{D}\left(x_{r}\right)\right] \tag{8}\] \[+\mathbb{E}_{x_{\mathcal{G}}\sim p_{g}(x)}\left[\log\left(1-\mathcal{D}\left(x_{\mathcal{G}}\right)\right)\right] \tag{9}\]
where \(p_{r}\) is the distribution of the real data. Eq. (8) and Eq. (9) train the generator to minimize the difference between the generated sample and the real sample, while encouraging the discriminator to maximize the log likelihood of the estimations for the correct input source. For task two, the loss function used for \(\mathcal{D}\) and \(\mathcal{G}\) to maximize the estimation accuracy is formulated as:
\[\mathcal{L}_{\mathcal{D}2},\mathcal{L}_{\mathcal{G}2}= \max_{\mathcal{D},\mathcal{G}}\mathbb{E}_{x\sim p_{c}(c)}\left[ \log y_{2}\right] \tag{10}\] \[+\mathbb{E}_{x_{r}\sim p_{c}(1-c)}\left[\log\left(1-y_{2}\right)\right]\]
Eventually, \(\mathcal{D}\) was trained with \(\mathcal{L}_{\mathcal{D}1}+\mathcal{L}_{\mathcal{D}2}\), and \(\mathcal{G}\) was trained with \(\mathcal{L}_{\mathcal{G}1}+\mathcal{L}_{\mathcal{G}2}\). During the training process, we saturated \(\mathcal{D}\) before updating the parameters of \(\mathcal{G}\), as suggested by [31]. More specifically, in each iteration, \(\theta_{\mathcal{G}}\) was optimized only after the discriminator was trained to its optimality, i.e., after \(\theta_{\mathcal{D}}\) completed its updates through stochastic gradient steps, which is consistent with minimizing the Jensen-Shannon divergence between \(p_{g}\) and \(p_{r}\). The global minimum of the training criterion is achieved when \(p_{g}=p_{r}\), implying that the high-level feature vector synthesized with CT images can approximate the pre-extracted high-level pathological features for the specific class.
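The alternating update schedule can be sketched as follows, reusing the `Discriminator` from the previous snippet and a toy stand-in generator. Updating \(\mathcal{D}\) several times per \(\mathcal{G}\) update approximates the "train \(\mathcal{D}\) first" strategy; the exact number of inner steps, the use of binary cross-entropy for all four loss terms, and applying the class loss to both real and synthesized features are our assumptions for illustration.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def cgan_step(gen, disc, ct, path_feat, labels, opt_d, opt_g, d_steps=3):
    """One training iteration: the discriminator is updated d_steps times with the
    combined source + class losses (L_D1 + L_D2), then the generator is updated
    once (L_G1 + L_G2). `ct` is a batch of CT inputs, `path_feat` the matched real
    512-d pathology features, `labels` the LUAD(0)/LUSC(1) labels of shape (B, 1)."""
    real_tgt, fake_tgt = torch.ones_like(labels), torch.zeros_like(labels)

    for _ in range(d_steps):                                  # discriminator updates
        opt_d.zero_grad()
        y1_r, y2_r = disc(path_feat)                          # real pathology features
        y1_f, y2_f = disc(gen(ct).detach())                   # CT-synthesized features
        loss_d = (bce(y1_r, real_tgt) + bce(y1_f, fake_tgt)   # source (REAL/FAKE) head
                  + bce(y2_r, labels) + bce(y2_f, labels))    # class (LUAD/LUSC) head
        loss_d.backward()
        opt_d.step()

    opt_g.zero_grad()                                         # generator update
    y1_f, y2_f = disc(gen(ct))
    loss_g = bce(y1_f, real_tgt) + bce(y2_f, labels)          # fool the source head, keep class
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy usage: `Discriminator` is from the previous sketch; the real generator is a CNN on 3D CT.
gen = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 512))
disc = Discriminator()
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
ct, path_feat = torch.randn(4, 1, 32, 32), torch.randn(4, 512)
labels = torch.randint(0, 2, (4, 1)).float()
print(cgan_step(gen, disc, ct, path_feat, labels, opt_d, opt_g))
```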
### Radiological Feature Extraction Module
The RFEM is another major module of our classification model, which is used to acquire radiographic features from CT images. In this work, we trained the RFEM under the guidance of both the ground truth and the pathological prior knowledge. To employ the synthetic pathological features as the guidance prior, we developed an effective feature fusion framework, where the high-level \(512\times 1\) pathological feature vector synthesized with CT images and the high-level \(512\times 1\) radiological feature vector extracted from CT images were concatenated together before being fed to the final
FC layer, which was followed by an output layer. Here, we optimized the RFEM with a loss function \(\mathcal{L}_{\text{r}}\) defined as follows:
\[\mathcal{L}_{\text{r}}=-\frac{1}{M}\sum_{j}[y_{j}\cdot\log{(p_{j})}+(1-y_{j}) \cdot\log{(1-p_{j})}], \tag{11}\]
where \(j\) refers to the \(j^{th}\) sample and \(M\) refers to the total number of training samples. \(y_{j}\) is the label of the \(j^{th}\) sample, and it equals one if the label is the same as LUSC and zero otherwise. \(p_{j}\) is the probability that the prediction of the \(j^{th}\) sample belongs to LUSC class. It is worth noting that during the training process, the parameters of the well-trained PFSM were fixed.
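A minimal sketch of the fusion framework follows: the frozen PFSM output and the RFEM output, each a \(512\times 1\) vector, are concatenated and passed through an FC layer and an output layer producing the LUSC probability used in (11). The intermediate width of 256 and the single sigmoid output unit are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Sketch of the feature fusion framework: concatenate the 512-d synthetic
    pathological feature (from the frozen PFSM) with the 512-d radiological
    feature (from the RFEM), then apply an FC layer and the output layer."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.fc = nn.Linear(2 * feat_dim, hidden)
        self.out = nn.Linear(hidden, 1)            # probability of LUSC

    def forward(self, f_p, f_r):
        fused = torch.cat([f_p, f_r], dim=1)       # hybrid feature vector
        return torch.sigmoid(self.out(torch.relu(self.fc(fused))))

# During RFEM training the PFSM is frozen, e.g.:
#   for p in pfsm.parameters(): p.requires_grad = False
head = FusionHead()
f_p, f_r = torch.randn(4, 512), torch.randn(4, 512)
p_lusc = head(f_p, f_r)                            # fed to the loss (11) against the labels
print(p_lusc.shape)                                # torch.Size([4, 1])
```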
### The Overall Loss Function
The overall objective function \(\mathcal{L}_{total}\) of our SGHF-Net classification model is defined as:
\[\mathcal{L}_{\text{total}}=\lambda_{r}\mathcal{L}_{\text{r}}+\lambda_{p} \mathcal{L}_{\text{GAN}} \tag{12}\]
where \(\lambda_{r}\) and \(\lambda_{p}\) are the tuning parameters used to balance the contributions of the RFEM and the PFSM, respectively; \(\mathcal{L}_{\text{GAN}}\) is the representative sign of all objective functions of the GAN.
### Evaluation Metrics
We evaluated the performance of our method with three widely-adopted metrics, i.e., the accuracy (ACC), the area under the curve (AUC), and the F1 score. ACC is the ratio of the number of accurate predictions to the number of total model inputs. It is the basic metric for evaluating a classification model and is defined as
\[ACC=\frac{TP+TN}{TP+TN+FP+FN}, \tag{13}\]
where TP, TN, FP and FN represent True Positives, True Negatives, False Positives, and False Negatives, respectively. The AUC, which is a helpful addition for evaluating a classification model, refers to the area under the receiver operating characteristic (ROC) curve. The ROC curve is a graphical plot illustrating the true-positive rate (TPR) and false-positive rate (FPR) parameters, which are defined as follows:
\[TPR=\frac{TP}{TP+FN} \tag{14}\]
\[FPR=\frac{FP}{FP+TN} \tag{15}\]
The F1 score, which is the harmonic mean between precision and recall, is also an efficient criterion for reflecting the performance of a binary classifier. It is defined as
\[F_{1}=\frac{TP}{TP+1/2(FP+FN)} \tag{16}\]
All of these metrics have values ranging from 0 to 1, and a higher value indicates better model performance.
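Assuming predicted LUSC probabilities and binary labels are available, the three metrics of (13)-(16) can be computed with scikit-learn as in the short sketch below; the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

def evaluate(y_true, y_prob, threshold=0.5):
    """Return (ACC, AUC, F1) as in (13)-(16); y_prob are predicted LUSC probabilities."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return (accuracy_score(y_true, y_pred),
            roc_auc_score(y_true, y_prob),
            f1_score(y_true, y_pred))

print(evaluate([1, 0, 1, 1, 0], [0.9, 0.2, 0.7, 0.4, 0.1]))
```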
\begin{table}
\begin{tabular}{c c c||c c c||c c} \hline \hline \multicolumn{6}{c}{Multi-Center Dataset} \\ \hline \hline \multicolumn{2}{c||}{Hospital A} & \multicolumn{4}{c||}{Hospital B} & \multicolumn{4}{c}{Hospital C} \\ \multicolumn{2}{c||}{(Affiliated Dongyang Hospital} & \multicolumn{4}{c}{(Sir Run Run Shaw Hospital,} \\ \multicolumn{2}{c||}{of Wenzhou Medical University)} & Zhejiang University School of Medicine ) & \multicolumn{4}{c}{of Chinese Academy of Sciences)} \\ \hline Patient & LUAD & LUSC & Patient & LUAD & LUSC & Patient & LUAD & LUSC \\ (n=191) & (n=106) & (n=85) & (n=388) & (n=212) & (n=176) & (n=250) & (n=150) & (n=100) \\ \hline Age (year) & 60.43\(\pm\)11.03 & 66.71\(\pm\)6.83 & Age (year) & 60.77\(\pm\)9.39 & 64.13\(\pm\)7.26 & Age (year) & 61.76\(\pm\)10.15 & 64.52\(\pm\)8.02 \\ Male & 35 & 85 & Male & 88 & 170 & Male & 61 & 97 \\ Female & 71 & 0 & Female & 124 & 6 & Female & 89 & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Clinical information of the multi-center patient cohort.
### Implementation Details
The architectures of our proposed pathological priors guided classification model were deployed with the PyTorch framework on the Ubuntu 18.04 operating system. The CUDA 10.2 toolkit and CUDA Deep Neural Network (cuDNN) 8.0.2 were utilized to accelerate the model training, and all experiments were conducted on a Tesla V100-SXM2 graphic card. A number of SOTA networks were evaluated as the backbone of our proposed model, including ResNet-18, ResNet-34, ResNet-50, DenseNet-121, DenseNet-169, and the Transformer. All networks were trained from scratch with batch sizes of 2 and 16 for 3D CT images and 2D pathological images, respectively. In addition, a total of 400 epochs were run for each approach, and adaptive moment estimation (Adam) with an initial learning rate of 0.0001 was adopted as the optimizer to update the model parameters.
## 3 Dataset
This study was conducted on a large-scale multi-center dataset formed by three hospitals, the Affiliated Dongyang Hospital of Wenzhou Medical University (hospital A), Sir Run Run Shaw Hospital, Zhejiang University School of Medicine (hospital B), and the Cancer Hospital of The University of Chinese Academy of Sciences (hospital C). Notably, the data retrospectively collected from hospital A, including CT images and their corresponding pathological images, were used for training and testing the proposed model. The data from hospital B and hospital C containing only CT images were applied to evaluate the stability and generalization ability of the well-trained network. Table 1 demonstrates the demographic characteristics of all patients in this study.
A total of 191 patients from hospital A who had undergone both CT scanning and biopsy/surgical specimen examinations were enrolled in this study. All patients were histopathologically diagnosed with lung cancer with either LUAD or
Figure 4: The workflow of pathological images and CT images preprocessing: (a) obtaining the original WSI (b) conducting color normalization (c) processing ROI delineation (d) generating patches from WSI with constant size (e) obtaining the original CT image (f) segmenting masks (g) generating patch from CT with constant size.
LUSC subtype. To ensure the consistency of the clinical data, we only collected data with time intervals between the pathological examinations and CT examinations that were within two weeks. In addition, the numbers of enrolled patients with final diagnoses of their LUAD/LUSC cancer subtypes from hospital B and hospital C was 388 and 250, respectively. Before training and testing our proposed approach, preprocessing the original pathological images and the multi-center CT images was essential. Brief explanations of the related operations are presented in the following subsections.
### Pathological Image Preprocessing
The pathological images involved in this work were WSIs, where the sizes of all WSIs were 77460\(\pm\)14662 (mean\(\pm\)standard deviation) pixels wide and 59317\(\pm\)11014 pixels high. With their high resolution, WSIs offered the ultra-fine details of the examined tissue samples, while their excessive file sizes made them a challenge for the proposed learning model to read. Thus, the original WSIs needed to be converted into readable patches with a uniform size. In addition, to ensure the validity of the feature information, color normalization for all WSIs and delineation of the region of interest (ROI) were compulsory before performing patch cropping. Fig. 4 (a)-(d) illustrates the whole WSI preprocessing procedure.
The color information contained in tissue slices is critical for pathological histology diagnosis. Although the slices were stained with the same reagents (hematoxylin and eosin, H&E), differences among the chemical manufacturers or the scanners used for image acquisition may have led to unwanted appearance variations. To avoid the influence of color variations, the technique called structure-preserving color normalization (SPCN) was adopted in this study [40, 41], which properly preserves the stain densities and biological structure.
With respect to ROI delineation, the suspected cancerous regions of all WSIs were delineated by three experienced pathologists on the Automated Slide Analysis Platform (ASAP) [42]. The patches were cropped only within the ROI to ensure that the cancer-related features were contained. After completing the preparations mentioned above, patches with sizes of 560\(\times\)560 pixels were eventually cropped for each WSI.
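A simplified sketch of the patch selection step is given below: given the binary cancer ROI mask on the WSI grid, it keeps 560\(\times\)560 windows whose overlap with the ROI is at least 80%. Non-overlapping tiling is assumed for brevity; the actual cropping strategy may differ.

```python
import numpy as np

def select_patches(mask, patch=560, min_coverage=0.8):
    """Return top-left coordinates of patch-sized windows whose overlap with the
    annotated cancer ROI mask (binary array on the WSI grid) is at least
    min_coverage; windows are tiled without overlap for simplicity."""
    coords = []
    h, w = mask.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            if mask[i:i + patch, j:j + patch].mean() >= min_coverage:
                coords.append((i, j))
    return coords
```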
### CT Image Preprocessing
A high-quality set of CT volumes from hospital A was acquired on a PHILIPS Brilliance 64-slice CT system, where the tube voltage and current were 120 kV and 250 mA, respectively. Additionally, CT data from hospital B and hospital C were obtained with a SIEMENS SOMATOM Definition AS 64 CT scanner and a SIEMENS Biograph 16 PET/CT scanner, respectively. To ensure that the malignancies in various patient cases were all captured within the scanning region, the CT data were reconstructed with the same settings: a 0.7 mm in-plane resolution and a 2.0 mm slice thickness. The tumors of the multi-center CT images were segmented by a radiology specialist with the ITK-SNAP 3.8 medical image processing software [43]. To cover all cancer-related features, the cancer masks were automatically dilated by three voxels during the region delineation procedure. A final volume of interest (VOI) with a constant size of 256\(\times\)256\(\times\)128 was cropped for each CT image. Furthermore, the values of all generated patches were normalized between zero and one. The workflow of CT image preprocessing can be viewed in Fig. 4 (e)-(g).
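The CT preprocessing steps described above can be summarized in the following sketch, which dilates the tumor mask by three voxels, crops a 256\(\times\)256\(\times\)128 VOI, and min-max normalizes it to \([0,1]\). Centring the VOI on the mask centroid and omitting boundary handling are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def preprocess_ct(volume, mask, voi=(256, 256, 128)):
    """Dilate the tumor mask by three voxels, crop a fixed-size VOI around it,
    and min-max normalize the result to [0, 1]. Assumes the VOI fits inside
    the volume; edge cases near the scan boundary are not handled here."""
    mask = binary_dilation(mask, iterations=3)
    centre = np.round(np.argwhere(mask).mean(axis=0)).astype(int)
    slices = tuple(slice(c - s // 2, c - s // 2 + s) for c, s in zip(centre, voi))
    patch = volume[slices].astype(np.float32)
    return (patch - patch.min()) / (patch.max() - patch.min() + 1e-8)
```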
## 4 Experiments
To study the proposed classification model SGHF-Net and evaluate its performance, a series of experiments were carried out in this section, consisting of a comparison study with the SOTAs, a ablation study, an impact analysis regarding the number of parameters, a Transformer embedding test, and a multi-center data external validation study. The metrics ACC, AUC, and F1 score were utilized to evaluate the performance of the tested methods. For a better evaluation, the demonstrated results were the average of the five-fold cross-validation.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Metrics & ResNet-18 & ResNet-34 & ResNet-50 & DenseNet-121 & DenseNet-169 & Ours \\ \hline ACC & 83.13\(\pm\)3.25 & 83.30\(\pm\)4.64 & **83.45\(\pm\)7.15** & 82.95\(\pm\)2.99 & 83.13\(\pm\)4.19 & 87.68\(\pm\)6.81 \\ AUC & 88.58\(\pm\)6.11 & 88.48\(\pm\)4.18 & **89.88\(\pm\)7.14** & 88.55\(\pm\)4.90 & 88.70\(\pm\)4.32 & 91.18\(\pm\)6.35 \\ F1 score & 82.03\(\pm\)5.22 & 81.95\(\pm\)3.52 & **82.70\(\pm\)6.65** & 82.58\(\pm\)4.38 & 82.15\(\pm\)3.88 & 86.73\(\pm\)6.97 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The comparisons between the proposed model (marked in red font) and other SOTAs (marked in black font) in terms of ACC(%), AUC(%) and F1 score(%).
### Comparison with the SOTAs
To evaluate the feasibility of the proposed SGHF-Net classification model, we conducted a comparison study between the proposed model and five other SOTA classification models, i.e., ResNet-18, ResNet-34, ResNet-50, DenseNet-121, and DenseNet-169. Here, we built the backbone architectures of our model, i.e., the RFEM and the PFSM, by adopting ResNet-18. The SOTA methods were trained from scratch with only the CT images of hospital A as their inputs, while our model was trained with the paired pathological and CT images of hospital A. During the validation phase for each fold, all comparison methods were evaluated on the CT testing dataset.
Ahead of the comparison, the effectiveness of the pathological feature extractor, which exists in our model only during the training phase, was first evaluated. This feature extractor was part of a pre-trained ViT. Taking the pathological images as inputs, this ViT yielded satisfactory results with 93.84%\(\pm\)2.23% ACC and 93.18%\(\pm\)1.92% AUC values for lung cancer subtypes classification, which proved the reliability of its feature extractor.
In Table 2, the quantitative evaluation results obtained by our model and the five SOTA methods are summarized. The highest value of each metric for all SOTA methods is highlighted in bold. Through the comparisons, we can make the following observations. Overall, our proposed classification model (SGHF-Net) outperformed all of the other classification methods with respect to every metric. For ACC, our approach achieved a result of 87.68%\(\pm\)6.81%, while the highest value among the SOTA methods was provided by ResNet-50 at 83.45%\(\pm\)7.15%. The significant improvements yielded by our model can be confirmed through this 4.23% difference. For the AUC, the best and the second-best results were 91.18%\(\pm\)6.35% and 89.88%\(\pm\)7.14%, respectively, which were reached by our model and ResNet-50. For the F1 score metric, we realized a 4.03% improvement, yielding quantitative results of 86.73%\(\pm\)6.97%. These numerical results reveal that the CT-based classification model did improve its performance by incorporating the relevant pathological prior knowledge contained in the pathological images into the RF-based benchmark network.
Figure 5: The ablation test of the PFSM in terms of ACC, AUC and F1 score.
Figure 6: The ablation test of the RFEM in terms of ACC, AUC and F1 score.
### Ablation Study
In this work, two key components are contained in the entire design of the proposed model, i.e., the PFSM and the RFEM. To determine the contributions of each module to the achieved performance improvements, an ablation study was carried out.
#### 4.2.1 The Effectiveness of Synthetic Pathological Priors
In the above subsection, we demonstrated the superiority of our model through comparisons with other SOTA methods, where we adopted ResNet-18 as the backbone to construct the network. To further investigate the contributions of the PFSM, we also integrated it into the other benchmark models, i.e., ResNet-34, ResNet-50, DenseNet-121, and DenseNet-169, for lung cancer subtypes classification. Correspondingly, the structure of the PFSM was kept consistent with that of the RFEM. To conduct a fair comparison, the weight ratio of the two modules remained 1:1 regardless of which backbone was utilized.
As illustrated in Fig. 5 (a), (b) and (c), the ACC, AUC and F1 score were used to compare the benchmark classification models with the proposed approach guided by the pathological priors. The effectiveness of the PFSM was evident. Beyond ResNet-18, taking ResNet-34 as the backbone, the metrics increased from 83.30% to 88.45% for the ACC, 88.48% to 92.01% for the AUC, and 81.95% to 85.95% for the F1 score. Taking ResNet-50 as the backbone, the ACC, AUC and F1 score increased from 83.45% to 88.15%, 89.88% to 92.20%, and 82.70% to 86.92%, respectively. Taking DenseNet-121 as the backbone, the ACC, AUC and F1 score increased from 82.95% to 86.51%, 88.55% to 91.16%, and 82.58% to 84.90%, respectively. For DenseNet-169, increases from 83.13% to 86.69%, 88.70% to 92.66%, and 82.15% to 84.29% were observed for the ACC, AUC and F1 score, respectively. In general, the integration of synthetic pathological priors into the benchmark RF-based network yielded significant improvements in its classification performance, confirming the contribution of the PFSM.
#### 4.2.2 The Effectiveness of Radiological Features
In our design, the final output layer produces classification predictions from the hybrid/fused features, which are formed by concatenating the high-level radiological features with the synthetic pathological features. The previous experiments studied the effect of the synthetic pathological features. To analyze the contribution of the generated radiological features, we therefore built a model that relied solely on the synthetic pathological features for lung cancer subtypes classification and compared it with the proposed approach. The construction of this model was the same as that of the benchmark radiology-based classification model. To facilitate the following discussion, we refer to this model as the synthetic-pathological-feature (SPF)-based model.
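To make the fusion concrete, here is a minimal PyTorch sketch of the two-branch, concatenation-plus-FC design described above, assuming ResNet-18 encoders for both branches; all class, function and parameter names are illustrative, not the authors' implementation.

```python
# Minimal sketch of the hybrid-feature fusion design (illustrative; not the authors' code).
import torch
import torch.nn as nn
from torchvision.models import resnet18

def resnet18_encoder() -> nn.Module:
    """ResNet-18 trunk with the classification head removed, producing 512-d features."""
    m = resnet18(weights=None)
    m.fc = nn.Identity()
    return m

class FusionClassifierSketch(nn.Module):
    def __init__(self, num_classes: int = 2, feat_dim: int = 512):
        super().__init__()
        self.rfem = resnet18_encoder()   # radiological feature extraction branch
        self.pfsm = resnet18_encoder()   # pathological feature synthetic branch (same structure, 1:1)
        self.fc = nn.Linear(2 * feat_dim, num_classes)  # FC layer over the fused features

    def forward(self, ct: torch.Tensor) -> torch.Tensor:
        # ct: (B, 3, H, W); single-channel CT slices are assumed replicated to 3 channels.
        f_rad = self.rfem(ct)                       # high-level radiological features
        f_path = self.pfsm(ct)                      # synthetic pathological features, also from CT
        fused = torch.cat([f_rad, f_path], dim=1)   # concatenation of the two branches
        return self.fc(fused)
```

Under this reading, the SPF-based model corresponds to classifying from `f_path` alone, and the benchmark RF-based model to classifying from `f_rad` alone.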
In Fig. 6 (a), (b) and (c), the comparisons between the SPF-based model and our proposed model are shown with the same set of metrics. Likewise, we tested the same five networks as the backbone of the SPF-based model. Comparing the numerical results, it can be observed that the model based only on the SPFs suffered from performance degradation. Taking ResNet-18 as the backbone, the ACC, AUC and F1 score decreased from 87.68% to 82.74%, 91.18% to 89.80%, and 86.73% to 78.96%, respectively, from our proposed model to the SPF-based model. For ResNet-34, the metrics dropped from 88.45% to 82.65% for the ACC, 92.01% to 89.68% for the AUC, and 85.95% to 80.33% for the F1 score. Taking ResNet-50 as the backbone, the ACC, AUC and F1 score decreased from 88.15% to 82.84%, 92.20% to 88.66%, and 86.92% to 79.76%, respectively. Taking DenseNet-121 as the backbone, decreases from 86.51% to 77.19%, 91.16% to 81.09%, and 84.90% to 73.02% for the ACC, AUC and F1 score, respectively, are evident. For DenseNet-169, the ACC, AUC and F1 score decreased from 86.69% to 76.66%, 92.66% to 81.51%, and 84.29% to 73.67%, respectively. These results imply that the RFEM also plays a critical role in the proposed model.
Figure 7: Impact analysis regarding the varying number of parameters in terms of ACC, AUC and F1 score.
### Impact Analysis Regarding the Number of Parameters
Compared with the benchmark RF-based classification model, our approach exhibits significant accuracy improvements. However, it is worth noting that the number of parameters in our model is twice that of the benchmark model. To study the impact of the number of parameters on the model's performance, we trained a model that had the same structure as the proposed method but with its PFSM replaced by a second RFEM. More specifically, this model, namely the double-parameters RF-based model, consisted of two parallel RFEMs and an FC layer under the same feature fusion framework as the proposed method. As shown in Fig. 7, the corresponding ACC, AUC and F1 score values were compared among the benchmark RF-based model, the double-parameters RF-based model, and our proposed model. When adopting ResNet-18, ResNet-34, ResNet-50, DenseNet-121, and DenseNet-169 as the backbone, the numbers of parameters in the benchmark model were 10.80**m**, 18.38**m**, 26.88**m**, 1.96**m**, and 3.01**m**, while those of the double-parameters model and our model were 21.60**m**, 36.76**m**, 53.76**m**, 3.92**m**, and 6.02**m**, where **m** denotes million. Comparing the metrics in Fig. 7, it can be concluded that simply doubling the number of parameters was not the main driver of the model's improved performance.
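The parameter counts quoted above can be checked with a one-line PyTorch utility (a generic helper, not the authors' tooling):

```python
# Generic trainable-parameter counter (not the authors' tooling).
import torch.nn as nn

def count_parameters_m(model: nn.Module) -> float:
    """Number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# A two-branch model built from two identical encoders reports roughly twice the
# count of its single-branch benchmark, matching the doubling reported above.
```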
### Transformer Embedding Test
In addition to the above-mentioned SOTA methods, we further extended our proposed model by replacing the ResNet backbone with a powerful Transformer network. Similarly, comparison studies between the benchmark RF-based model and our proposed model were conducted. For the benchmark model equipped with the Transformer, the ACC, AUC and F1 score were 83.56%\(\pm\)5.84%, 90.06%\(\pm\)5.99% and 81.42%\(\pm\)7.29%, respectively, which were slightly better than the best performance of the previously tested SOTA methods but still well below the results of our approach with the ResNet backbone. Furthermore, by adapting the network of our proposed method to the Transformer, ACC, AUC and F1 score values of 88.04%\(\pm\)5.29%, 92.48%\(\pm\)6.10%, and 85.53%\(\pm\)7.05% were obtained, demonstrating remarkable improvements. These results confirm the great value of the additional pathological prior information for accurately performing medical classification tasks.
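Since the text does not specify which Transformer variant was embedded, the sketch below simply shows how a ViT-B/16 trunk from torchvision could replace the ResNet encoders in the earlier fusion sketch; this is an illustrative assumption, not the authors' configuration.

```python
# Illustrative backbone swap: ResNet encoder -> ViT encoder (variant choice is an assumption).
import torch.nn as nn
from torchvision.models import vit_b_16

def vit_encoder() -> nn.Module:
    """ViT-B/16 trunk with its classification head removed, producing 768-d class-token features."""
    m = vit_b_16(weights=None)
    m.heads = nn.Identity()
    return m

# Using this encoder in both branches of the fusion sketch (and widening the final FC layer
# to accept 2 * 768 inputs) gives a Transformer-based configuration analogous to the one
# evaluated in this subsection; inputs are assumed to be resized to 224 x 224.
```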
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Metrics & \multicolumn{2}{c}{ACC} & \multicolumn{2}{c}{AUC} \\ \hline Model & Benchmark Model & Proposed Model & Benchmark Model & Proposed Model \\ \hline ResNet-18 & 71.11\(\pm\)4.32 & 74.89\(\pm\)2.12 & 75.24\(\pm\)2.33 & 79.15\(\pm\)4.49 \\ ResNet-34 & 70.44\(\pm\)5.90 & 75.58\(\pm\)2.9 & 75.08\(\pm\)2.90 & 79.72\(\pm\)4.76 \\ ResNet-50 & 72.08\(\pm\)2.88 & 74.62\(\pm\)1.61 & 76.02\(\pm\)2.92 & 78.91\(\pm\)4.84 \\ DenseNet-121 & 72.27\(\pm\)2.42 & 74.29\(\pm\)3.6 & 74.01\(\pm\)4.32 & 77.60\(\pm\)2.49 \\ DenseNet-169 & 71.21\(\pm\)3.55 & 73.67\(\pm\)2.72 & 75.04\(\pm\)3.55 & 76.50\(\pm\)4.40 \\ ViT & 72.76\(\pm\)2.79 & 74.97\(\pm\)3.07 & 76.47\(\pm\)3.54 & 79.69\(\pm\)4.31 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The validation tests of benchmark model (marked in black font) and proposed model (marked in red font) on dataset from hospital C in terms of ACC(%) and AUC(%).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Metrics & \multicolumn{2}{c}{ACC} & \multicolumn{2}{c}{AUC} \\ \hline Model & Benchmark Model & Proposed Model & Benchmark Model & Proposed Model \\ \hline ResNet-18 & 72.64\(\pm\)2.48 & 76.00\(\pm\)2.3 & 79.17\(\pm\)3.13 & 82.07\(\pm\)3.30 \\ ResNet-34 & 69.60\(\pm\)6.11 & 76.05\(\pm\)2.18 & 78.28\(\pm\)3.99 & 83.44\(\pm\)2.6 \\ ResNet-50 & 72.88\(\pm\)2.71 & 76.16\(\pm\)3.37 & 79.23\(\pm\)3.08 & 83.30\(\pm\)3.36 \\ DenseNet-121 & 72.60\(\pm\)2.65 & 75.20\(\pm\)2.04 & 77.82\(\pm\)5.18 & 81.20\(\pm\)2.08 \\ DenseNet-169 & 73.04\(\pm\)1.54 & 75.92\(\pm\)2.56 & 79.68\(\pm\)2.37 & 82.12\(\pm\)1.44 \\ ViT & 73.77\(\pm\)3.13 & 76.89\(\pm\)3.91 & 80.07\(\pm\)4.33 & 84.29\(\pm\)4.12 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The validation tests of benchmark model (marked in black font) and proposed model (marked in red font) on dataset from hospital B in terms of ACC(%) and AUC(%).
### External Validation
This study was conducted on a large-scale multi-center dataset formed with data acquired from three hospitals. With the training dataset from hospital A, the advantages of our proposed model over the benchmarks were demonstrated through a number of experiments, and the achieved improvements can be attributed to the PFSM. In this subsection, we applied the well-trained model to the CT datasets from hospitals B and C to validate its generalization ability and robustness. Tables 3 and 4 present the external validation results obtained on the data acquired from these two hospitals, comparing the benchmark RF-based model with our proposed model. Although both the benchmark model and our model performed worse on the data from hospitals B and C than on the data from hospital A, they still provided acceptable ACC and AUC values. More importantly, on the same datasets, the proposed classification model SGHF-Net attained superior performance to the benchmark network, achieving improvements in both ACC and AUC.
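The external validation can be summarized by a short evaluation loop that feeds only CT images through the trained model; the loader and device names below are illustrative, not taken from the authors' code.

```python
# Sketch of the CT-only external validation loop (loader/device names are illustrative).
import torch

@torch.no_grad()
def collect_predictions(model: torch.nn.Module, loader, device: str = "cuda"):
    """Run the trained model over an external CT dataset and gather labels and probabilities."""
    model.eval()
    labels, probs = [], []
    for ct, y in loader:                                  # only CT inputs are needed at test time
        logits = model(ct.to(device))
        probs.append(torch.softmax(logits, dim=1).cpu())
        labels.append(y)
    return torch.cat(labels), torch.cat(probs)

# The same routine is applied to the hospital-B and hospital-C loaders; ACC and AUC are then
# computed from the collected labels/probabilities as in the earlier metric sketch.
```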
## 5 Discussion
The key challenge in deploying DL algorithms for clinical applications is to achieve high and stable diagnostic accuracy. Previous studies have improved model performance by designing more complex structures or increasing the number of parameters. In this study, we proposed an original strategy and framework that applies multi-modality hybrid features to enhance the network training process. The model was trained with paired pathological and CT images, while it relied only on CT images during the validation and testing phases. Hence, the novelty of the proposed model lies in its ability to self-generate hybrid features from single-modality inputs. As shown in Tables 2, 3 and 4, our framework yielded clearly higher accuracies than the other SOTA methods on both the local-center and multi-center validation datasets.
In section 4.2, we further proved the effectiveness of the PFSM and the RFEM through an ablation study. By separately removing the PFSM and the RFEM from the complete network, we confirmed the significance of both parts of the whole model. Regarding the PFSM, the synthetic pathological features not only enhanced the specificity of the pathology-related information, but also suppressed the interference caused by irrelevant information in the original CT images. Moreover, as demonstrated in Fig. 5, the pathological priors guided strategy can easily be adopted by other classification models to improve their performance. In addition to the PFSM, the RFEM also played an important role in the entire framework, as shown in Fig. 6. From a comprehensive perspective, the radiological features obtained by the RFEM provided constraints that reduced the network degeneration of the PFSM. Therefore, the two modules complement each other for more accurate cancer subtype classification.
During the training of the PFSM, we designed an improved strategy for extracting pathological features from WSIs. Given the GPU memory limitations in this work, it was impossible to directly extract pathological features from entire WSIs at their original large image sizes. We therefore applied the ViT network to approximate the feature extraction. Patches of constant size were randomly collected from the original WSIs and embedded into tokens, so that the pathological feature of one WSI could be obtained from all of its patches based on their attention values. Alternative strategies that convert patches into features and then select the most suitable features to represent the original WSI proved less efficient for this task than the ViT strategy. However, because the patches are extracted from WSIs at a fixed size, this approach still has limitations in representing the global information contained in WSIs. In future research, a more comprehensive multi-scale feature extraction strategy may be developed to acquire richer pathological information.
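As a rough rendering of the aggregation idea described above (a generic attention-pooling head, stated as our reading of the text rather than the authors' exact design), patch embeddings can be combined into one WSI-level feature as follows:

```python
# Generic attention pooling over patch embeddings (our illustration, not the authors' exact design).
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Aggregate the N patch embeddings of one WSI into a single feature vector."""
    def __init__(self, feat_dim: int = 768, hidden_dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (N, feat_dim) embeddings of N randomly sampled patches of one WSI.
        weights = torch.softmax(self.score(patch_feats), dim=0)   # (N, 1) attention values
        return (weights * patch_feats).sum(dim=0)                 # (feat_dim,) WSI-level feature
```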
In section 4.5, external datasets were used to validate the multi-center performance of our network. According to Tables 3 and 4, our proposed classification model remained superior to the other SOTA methods. However, compared with their validation results on hospital A, all models exhibited non-negligible degradation on the multi-center datasets, whether they included our pathological priors guided strategy or not. Under multi-center circumstances, differences in CT scanners or in the operating habits of radiologists may cause inevitable deviations in image resolution or CT values for the same lesion, which poses a major challenge to the model's generalization ability. Comparing the results obtained on different datasets, our framework had no significant advantage in dealing with multi-center effects. Introducing clinical information in future studies may be one solution to this multi-center problem.
In future work, we will further explore the feasibility of implementing the proposed approach in more complex applications, e.g. multi-focal cancer/combined cancer diagnosis with CT images, cancer invasion degree assessment with CT images, and other cancer-related clinical problems. In addition, we can also extend this framework to wider areas and solve more multi-modal tasks.
## 6 Conclusion
In this paper, we proposed a novel deep learning network, SGHF-Net, for accurately classifying lung cancer subtypes on CT images. The proposed model was constructed under a feature fusion framework with two key components, a pathological feature synthetic module (PFSM) and a radiological feature extraction module (RFEM). These two modules assisted the network in acquiring both high-level pathological features and high-level radiological features from CT images alone, where the pathological features contained information about the corresponding pathological images. Following the feature fusion framework, the hybrid features covering both pathological priors and radiographic information could then be obtained by integrating the two modules together. Through a series of strategy comparisons, we demonstrated the superiority of the proposed framework and confirmed that the complementarity of synthetic pathological priors (i.e., pathological features) could significantly improve the accuracy of CT-based lung cancer subtypes classification approaches. Additionally, the proposed pathological priors guided strategy has considerable potential to be extended to more complicated applications.
## Acknowledgments
This work was supported by the National Natural Science Foundation of China (No. 62001425) and the Key Research and Development Program of Zhejiang Province (No. 2021C03029).